Peer Review: How to Cheat
And how to fix the system that lets you
The consequences of our actions
Let me open today’s blog post with a quote from Haug (2015), from “Peer-review fraud — hacking the scientific publication process”: “The problem is the perverse incentive systems in scientific publishing. As long as authors are (mostly) rewarded for publishing many articles and editors are (mostly) rewarded for publishing them rapidly, new ways of gaming the traditional publication models will be invented more quickly than new control measures can be put in place” (p. 2394).
You may be wondering why I started with this quote. Well, it sets the stage for one of the major issues with peer-reviewed journals: the incentivisation of publication, the “publish or die” mentality. What does publishing in these peer-reviewed journals get you?
● Money
● Recognition
● Better ‘things’ (jobs, grants, positions in a project, etc.)
And maybe even your degree, if you’re required to publish a paper before finishing your stint at university. There is immense pressure to publish. So it should come as no surprise that gaming the system isn’t as unusual as you might think.
How to win at peer review
The biggest problem with the necessity to publish is that the peer review process will weed out your trash papers quicker than you can write them. So what do you do? Well, as Haug (2015) so handily describes, you become your own peer reviewer. By volunteering to review under a fake name, an author can be assigned their own paper and approve it, over and over, until it has more ‘approve’ than ‘reject’ votes (pp. 2393-2394). It’s far from a perfect ‘game the system’ win, though. Eventually, people who try this get caught, with lawsuits and other problems to follow, so don’t get any funny ideas. But the fact that this can happen at all exposes a fault in the system. Not only is it possible to game the process, it’s also possible for junk within an article to slip through even when the reviewers have the best of intentions. Peer review is primarily volunteer based, which means reviewers are often under-trained in how to actually review well, or may not be fully versed in the specific aspects of the field they’re reviewing (Gropp et al., 2017, p. 409). What that means is that any number of problems can end up in a peer-reviewed article, simply because a mistake got through.
Under-education hurts everyone
In case you were wondering, there is credible backing for why the lack of a proper ‘Peer Reviewing 101’ education might be harming the system: “Based on this sample of BMC-series medical journals, we found that peer reviewers failed to detect important deficiencies in the reporting of the methods and results in randomised trials. The number of changes requested by peer reviewers was relatively small; however, most authors did comply with recommendations or requests by peer reviewers in their revised manuscript” (Hopewell et al., 2014, p. 4). So, while the suggested changes were usually implemented when they were made at all, often to the research’s benefit, a surprising number of reporting deficiencies still made it through peer review. Which, I hopefully don’t have to explain, is more than just a bit bad. Hopewell et al. (2014) suggest that this could have been avoided had reviewers followed established reporting standards: “Better use and adherence of reporting checklists (www.equator-network.org) by journal editors, peer reviewers, and authors could be one important step towards improving the reporting of published articles” (p. 4). Limited training harms the researcher, the reader of the research, and the peer reviewer who is putting in the effort to give a correct and accurate review. With better education, peer reviewers might be able to let better, more accurate research through peer review. But at the same time, such education would demand even more unpaid effort from reviewers, making the lack of reward for the job they perform all the more glaring. Self-improvement only goes so far.
How to fix the ‘bugs’ in peer review
There might not be a way to fix all the bugs in the peer review process; it involves humans, after all. (And while there could be arguments for AI or other such tools, that’s another batch of issues beyond my pay grade.) But I’d like to draw you back to how I started this post, where Haug (2015) said, “The problem is the perverse incentive systems in scientific publishing” (p. 2394). That doesn’t point to a simple solution. In fact, it opens up a new issue: are we just supposed to stop paying people? There’s no one-size-fits-all answer, unfortunately, but there should be a movement within scientific and academic communities to distance ourselves from the cultural necessity to constantly publish, particularly when publication is tied to incentives. Otherwise, the gamers of the peer review world will keep trying to create new streams of income by constantly publishing trash in the guise of “science.” Additionally, single-blind screening of papers (where the reviewer knows the identity of the researcher but not the other way around) makes it easier for those so inclined to game the system and create their own fake reviews. Double-blind journals might be able to mitigate this particular aspect of the problem, but as Haug (2015) said: “new ways of gaming the traditional publication models will be invented more quickly than new control measures can be put in place” (p. 2394).
Works Cited:
Gropp, R. E., Glisson, S., Gallo, S., & Thompson, L. (2017). Peer review: A system under stress. Bioscience, 67(5), 407-410. https://doi.org/10.1093/biosci/bix034
Haug, C. J. (2015). Peer-review fraud — hacking the scientific publication process. The New England Journal of Medicine, 373(25), 2393-2395. https://doi.org/10.1056/NEJMp1512330
Hopewell, S., Collins, G. S., Boutron, I., Yu, L., Cook, J., Shanyinde, M., Wharton, R., Shamseer, L., & Altman, D. G. (2014). Impact of peer review on reports of randomised trials published in open peer review journals: Retrospective before and after study. BMJ, 349, g4145. https://doi.org/10.1136/bmj.g4145