Peer review innovations at Researcher to Reader 2024 – insights and ideas
Authors and workshop facilitators
Tony Alves - HighWire Press
Jason De Boer - De Boer Consultancy and Kriyadocs
Alice Ellingham - Editorial Office Limited
Elizabeth Hay - Editorial Office Limited
Christopher Leonard - Cactus Communications
Introduction
The Peer Review Innovations workshop at this year’s Researcher to Reader Conference in London brought together 30 colleagues from various facets of scholarly communications, including publishers, institutional librarians, open research advocates, consultants and service providers. In keeping with the overall ethos of this popular annual industry event, our collective goal was to share insights from across the scholarly community, and to explore innovative ideas that could help improve peer review for all the stakeholders engaged in this process. This review provides an overview of the discussions and main outputs from the workshop.
Setting the scene
Before discussing ideas to improve peer review, the workshop agreed the parameters for our discussions – defining peer review as the timeframe between the submission of research to a journal or other platform for publication, and the editorial decision by that journal or platform to publish the work. Whilst not a perfect or all-encompassing definition, this was intended to give a workable frame of reference to the workshop participants for the three one-hour sessions during the two days of the conference.
We also discussed the current state of peer review, asking the participants to vote for one of four options. The vast majority of the group (93%) felt that peer review is in need of major improvements to meet the needs of its various stakeholders.
Nobody in the group felt that peer review in its current state is working well most of the time – perhaps unsurprising in a workshop dedicated to discussing peer review innovations!
This exercise not only illustrated that the participants were aligned in their perception that peer review requires a major rethink, it also created a sense of urgency and purpose in the room.
Threats, pain points and successes
In session one, the workshop participants, working in five groups, were asked to list the threats, pain points and successes (described during the session as “things that work well”) in peer review as it currently operates. Collectively, they prioritised the following:
Threats
- Generative AI – fake papers, fake people, fake everything
- Integrity – fake journals, fake papers, fake reviews
- Publish or perish culture – institutional incentives
- Deficiencies in scholarly rigor and ethics
- Misconduct
The consensus from the group was that the ‘publish or perish’ culture is incentivising misconduct, and that generative AI is providing readily available tools that make misconduct easier.
Pain points
- Proof of identity and lack of industry collaboration for identity management
- Finding reviewers due to lack of training and a lack of standards/consistency
- Overloaded teams and systems – crumbling under pressure, working with small reviewer pools and too many papers
- Finding suitably qualified peer reviewers
- Time pressure for researchers and peer reviewers
The overall sense was that the difficulty in finding qualified reviewers is exacerbated by the inability to fully trust reviewer identity and, relatedly, by a lack of knowledge about potential reviewers outside mainstream Western institutions.
Successes
- Concept of peer review – cumulative trust indicator
- Open peer review – can this help build trust?
- Longevity – peer review has been around for over 100 years, and such a long-established behaviour is hard to change
- Improves science
- Using AI tools to find peer reviewers
Perhaps unsurprisingly, the participants found it easier to identify specific threats and pain points than specific successes, which were more thematic in nature. Fundamentally, peer review is a strong and valued concept: it signals trust and improves science, but its mechanics need major attention to cope with the numerous threats and pain points.
Gaps and innovations
Session two asked the participants to think about what would enable “perfect peer review”, focusing on gaps which need addressing, current successes which can be enhanced or extended, and areas for innovation and new thinking.
The session used an adapted version of the ‘1-2-4-All’ framework for generating, discussing and refining ideas – in this case ‘1-2-Table’ for each of the five groups. Starting with individual ideas, participants then discussed them in pairs, before coming together as a table to prioritise their top five.
Key themes from this session were: changing culture and incentives upstream from the peer review process; embracing technology which can be trusted as being critical and reliable, rather than purely generative; creating and adopting industry standards; and a push towards prompt, effective and constructive peer review.
Practical solutions and blue-sky thinking
At the start of session three, the various ideas shared by each workshop group in session two were categorised for a Slido poll in which participants were asked to rank their top five ideas in order of preference:
Researcher-focused
- Formal training and mentoring for early career researchers
- Recognition (e.g. continuing professional development; research assessment)
- Monetary incentives (e.g. paid-for peer review; APC discounts)
- Foster a community of peer reviewers to share experiences and best practice
Institution-focused
- Provide researchers with the space, time and resources to undertake peer review
- Formal training for researchers
- Disincentivise malpractice (e.g. stop ‘publish or perish’)
- Recognise peer review in researcher career development
Metadata and infrastructure-focused
- Widely adopted PIDs / user authentication (e.g. ORCID; something else?)
- Easier, more efficient metadata capture throughout the workflow
- Infrastructure to support portable peer review
Technology-focused
- Reduce friction and delays in the peer review workflow
- Automated tools to reduce admin burden on journals when triaging submissions
- Automated integrity checks upstream, pre-submission
- Automated reviewer finding and matching tools (see the sketch after this list)
- Embrace AI as part of undertaking the review process
- Enable collaborative peer review for greater transparency and engagement
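To make the “reviewer finding and matching” idea above concrete, here is a minimal sketch of one common approach: ranking candidate reviewers by the textual similarity between a submission’s abstract and each reviewer’s recent publication abstracts. All ORCID iDs and abstracts below are invented for illustration; production tools typically combine such signals with citation analysis, conflict-of-interest checks and reviewer workload data.

```python
# Illustrative sketch: rank candidate reviewers by TF-IDF cosine similarity
# between a submission's abstract and reviewers' publication abstracts.
# All identifiers and texts here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submission_abstract = "We study incentives and recognition in scholarly peer review..."

# In practice these profiles would come from a bibliographic database keyed by PIDs.
candidate_reviewers = {
    "0000-0001-0000-0001": "Peer review incentives, reviewer recognition and training...",
    "0000-0002-0000-0002": "Machine learning methods for protein structure prediction...",
    "0000-0003-0000-0003": "Research integrity, misconduct and publication ethics policy...",
}

vectorizer = TfidfVectorizer(stop_words="english")
corpus = [submission_abstract] + list(candidate_reviewers.values())
tfidf = vectorizer.fit_transform(corpus)

# Similarity of the submission (row 0) to each reviewer profile (rows 1..n).
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).flatten()
ranked = sorted(zip(candidate_reviewers, scores), key=lambda pair: -pair[1])

for orcid_id, score in ranked:
    print(f"{orcid_id}: {score:.2f}")
```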
Culture-focused
- Recognise and engage with differences in the culture of peer review globally
- Prioritise quality over quantity in submissions and published research
- Take a longer-term view on the research lifecycle
Top five areas for innovation
The workshop participants voted for the following top five areas – disincentivise malpractice; recognition; widely adopted PIDs and user authentication; collaborative peer review; and prioritise quality over quantity – and each group was assigned one of these ideas to discuss: (i) practical actions which individuals and their organizations can take immediately; (ii) realistic medium-term actions for adoption by the scholarly communications community; and (iii) blue-sky ideas if money, resources and time are no obstacle. Each area is covered in turn below.
It is interesting that the bottom-ranked idea (#20 of 20) was monetary payment for peer review. Whether this reflects wider industry sentiment, or just the collective view of the 30 participants voting on these 20 specific ideas, remains open to debate.
Disincentivise malpractice
Immediate practical actions
- Greater awareness and education across the scholarly community
- Campaigning by industry bodies
- Greater volume and consistency of resources between publishers
Realistic medium-term actions
- Greater consequences for malpractice at an institutional level – monetarily and reputationally
Blue-sky thinking
- Stop publish or perish!
- That being said, there will always be a push for some form of relative measurement of researchers and institutional research performance – will bad actors simply find ways to game the alternatives?
These solutions point to a consensus that researcher malpractice stems from institutional incentives, and that it is the researcher’s organization that is ultimately responsible for monitoring and punishing bad actors. Publishers can police the process, and tools can be developed to aid publishers in doing so, but it is the researcher’s employer who has the greatest sway over researcher behaviour.
Recognition
Immediate practical actions
- Extend current initiatives such as CoARA, ORCID Peer Review Deposit and ReviewerCredits
- Improve the level and consistency of feedback provided to peer reviewers
Realistic medium-term actions
- Devise more effective measures for quality control in peer review – reviewer rankings?
- Funder-driven initiatives for recognising peer review contributions
- Enable readers to provide feedback – affirmative or critical – on open peer review and post-publication peer review
Blue-sky thinking
- Extend peer review quality measures upstream to institutions, to encourage institutions to value and recognise the time spent by their researchers in undertaking peer review
Professionalization of peer review appears to be the solution across all categories. Today peer review is seen as a volunteer activity, done on one’s own time. Implementing training, carving out time during working hours, and institutionalizing recognition for researchers who engage in peer review might go a long way towards increasing the willingness of researchers to perform this important service. Interestingly, as noted above, there was little support for actually paying peer reviewers for their efforts. This may be a result of the make-up of the participants, which was largely publisher-centric.
Widely adopted PIDs and User Authentication
Immediate practical actions
- More widespread (ideally near universal) adoption of ORCID by scholarly publishers
Realistic medium-term actions
- Funders to contribute to the financing of widespread / standardised PID adoption, as part of their investment in safeguarding the version of record
- Use PIDs such as ORCID to track and share aggregated data on reviewer numbers and activity
- Use PIDs such as ORCID to disambiguate reviewer identities and to engage with previous reviewers
Blue-sky thinking
- Globally standardised metadata – for instance on article types, institutions
- Interoperable peer review standards
In many ways the solutions are already in place but are clearly underutilized. For example, ORCID was enthusiastically embraced early on, and usage continues to grow. However, adoption needs to be accelerated with the help of funders and institutions, the forces that wield the real incentives: money and career prospects. If ORCID could be enhanced to support user verification, as a step on from user disambiguation, and utilised in the same way as ‘know your customer’ (KYC) protocols in financial services, this could provide additional benefits and value for scholarly publishers and funders.
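As a starting point, reviewer disambiguation against ORCID is already possible today with a simple public API lookup; KYC-style verification would require capabilities well beyond this. The sketch below is illustrative only, assuming the v3.0 public API and its JSON layout, and uses ORCID’s own published example iD rather than a real reviewer.

```python
# Minimal sketch: looking up a public ORCID record for reviewer
# disambiguation. Assumes the ORCID v3.0 public API JSON layout;
# the iD below is ORCID's published example record, not a real reviewer.
import requests

orcid_id = "0000-0002-1825-0097"
url = f"https://pub.orcid.org/v3.0/{orcid_id}/record"

response = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
response.raise_for_status()
record = response.json()

name = record["person"]["name"]
print(name["given-names"]["value"], name["family-name"]["value"])
```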
Similarly, NISO, Crossref and other organizations maintain and promote standards for metadata, and recently a Peer Review Terminology Standard was developed jointly by STM and NISO. The challenge is getting the entire ecosystem, especially funders and institutions, to use these standards.
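As an illustration of what using such a standard could look like in practice, a journal might expose its peer review model in machine-readable form. The sketch below is hypothetical: the field names and values are illustrative paraphrases of the kind of dimensions the STM/NISO terminology covers, not the standard’s normative identifiers.

```python
# Hypothetical sketch: declaring a journal's peer review model using
# terms along the lines of the STM/NISO peer review terminology.
# Field names and values are illustrative paraphrases, not the
# standard's normative identifiers.
journal_peer_review_model = {
    "identity_transparency": "double anonymized",  # who can see whose identity
    "reviewer_interacts_with": ["editor"],         # permitted interactions
    "review_information_published": [],            # nothing published openly
    "post_publication_commenting": "none",
}
```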
Collaborative peer review
Immediate practical actions
- Consider for adoption on specific journal titles
Realistic medium-term actions
- Create a taxonomy of reviewer contributions – ideally as an extension of CRediT for author contributions for a more holistic view of researcher activity in scholarly communications
Blue-sky thinking
- Develop an industry platform (or platform-standard) which supports collaboration, transparency, engagement and equity between all stakeholders in the peer review process.
Similar to recognition, collaboration focuses on the peer reviewer as the central character. Because peer review is usually a solo endeavour and tends to take place in the dark, the activity is seen as a burden and there is increasing mistrust in the process. Finding ways to open up the evaluation process, introducing more collaboration, and allowing (acknowledged) early career researcher participation might be solutions to these issues.
Prioritise quality over quantity
Immediate practical actions
- Improved submission software systems
- Ensure researchers are choosing the right journal or platform for their research
- Reduce the focus on publishing at volume
Realistic medium-term actions
- Quality is subjective – we need a consistent or standardised definition of what constitutes a high-quality submission
- Train scholars on writing effective abstracts
- Move away from a seemingly endless cascading process used by some publishers to retain submissions within their portfolio
- Look at the motivations behind researchers submitting too many papers to too many journals
Blue-sky thinking
- Change the incentives driving ‘publish or perish’
- Develop standardised abstract analysis tools for faster and more accurate triage and peer reviewer identification
- Adopt a ‘two strikes and out’ industry rule for submitted research – if a submission is rejected by two journals, irrespective of publisher, it cannot be considered by any other journal – how workable is this idea? (See the sketch after this list.)
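Purely as blue-sky illustration of the technological capability such a rule would demand, the sketch below imagines a shared cross-publisher rejection registry keyed by a manuscript fingerprint. Everything here is hypothetical – no such registry or API exists – and the naive fingerprint shown is trivially defeated by rewording a manuscript, which is exactly the fraud-proofing question raised in the summary below.

```python
# Hypothetical sketch of a shared cross-publisher rejection registry
# that could, in principle, enforce a 'two strikes and out' rule.
# No such registry exists; the fingerprint is deliberately naive and
# would not survive even light rewording of a manuscript.
import hashlib

rejection_registry: dict[str, int] = {}  # fingerprint -> rejection count

def fingerprint(title: str, abstract: str) -> str:
    """Crude content fingerprint: lower-cased, whitespace-normalised, hashed."""
    normalised = " ".join((title + " " + abstract).lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def may_be_considered(title: str, abstract: str, strikes: int = 2) -> bool:
    """True while the manuscript has fewer than `strikes` recorded rejections."""
    return rejection_registry.get(fingerprint(title, abstract), 0) < strikes

def record_rejection(title: str, abstract: str) -> None:
    key = fingerprint(title, abstract)
    rejection_registry[key] = rejection_registry.get(key, 0) + 1
```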
The final area for innovation is perhaps the hardest to achieve, since it involves an overall cultural change in how research is published and the incentives that drive researchers to publish. Reducing “salami-slicing” and limiting cascading are tactics, but the larger solution again lies with funders and research institutions who reward quantity.
There are vast differences in how research is conducted and reported across disciplines, with unique cultural idiosyncrasies and discipline-specific traditions, making standardization challenging. There is also an indication that technical systems need to modernize to deploy more AI-type tools to identify and fix quality issues, but, as with the previously mentioned challenge, there are many players, many different technologies, and many varied requirements from discipline to discipline.
Summary
The most potentially contentious innovation from the workshop is the final one discussed by the participants and shared here: the ‘two strikes and out’ idea. It will be interesting to see whether restricting the number of times a paper can be submitted to any journal, before it effectively becomes void, is desirable, workable or fraud-proof. It would certainly require industry collaboration and technological capabilities to support such a move. And would it be regarded as in some way prejudicial towards certain authors or global regions? These considerations notwithstanding, it was certainly fun to end the workshop with such a hot topic for discussion and further debate!
Having established our parameters – defining peer review as the period between the submission of research to a journal or other platform for publication, and the editorial decision by that journal or platform to publish the work – and having reached consensus during the first exercise that peer review needs major improvements, the participants were highly productive in identifying threats and pain points, while struggling to name specific successes. Settling on the concept of peer review itself as a success motivated the room to find solutions that secure its foundations and build a better infrastructure. Many specific innovations were proposed, including better use of PIDs such as ORCID, reviewer recognition, and quality assessment tools.
However, there was overwhelming recognition that the biggest agents of change are the institutions and funders who control the purse strings and manage the reputations of researchers. Publishers, who manage the peer review process, can create rules and utilize technology to improve peer review, but the innovation that might make the real difference lies in changing the publish or perish culture that drives researchers to overwhelm the system, creates peer reviewer shortages, and fosters mistrust of science.
We hope this workshop provided food for thought for the participants, and for the wider scholarly communications community. We look forward to ongoing collaboration with colleagues as we take these themes forward in future discussions.
Acknowledgements
The authors would like to thank all of the workshop participants for their wholehearted contributions to the conversations, debates and outputs. We are also extremely grateful to Mark Carden, Jayne Marks and the Researcher to Reader 2024 Conference organizing committee for the opportunity to develop and deliver this workshop.