Niko Kriegeskorte from the MRC Cognition and Brain Sciences Unit at Cambridge recently discussed the future of scientific publishing (see his blog post on the topic). He considered three questions: what is good and bad about the current system, what features should define the future system, and how can we transition to it?
Good things about the current system include that it provides a signal of what to read, through peer review and journal prestige. Whether journal prestige is a good indicator of what should be read is a separate question, considered below.
Another good thing about the current system is that it provides an appealing layout for papers. Niko mentioned that this is desirable for, but not critical to, scientific progress. I think of this more broadly than just layout: publishers implement frameworks for disseminating the content. They put it in the right place, provide it in different formats, keep track of downloads, and so on. They also enforce a quality threshold on these things. I would agree that this is desirable, but not essential. In some sense, things like bad layout are self-correcting: in the absence of enforced layout, people would likely gravitate towards the papers, labs, and people with good layout.
Something bad about the current system is that most journals are not open access. He makes the point that scientific papers benefit society only to the extent that they are accessible. If the public pays for scientific research, it should demand that the results be openly accessible. To me, this is a weird straw man and almost certainly not true. It seems to assume that all research that would be published is publicly supported, or at least that publications from all research should be open because that is in the public interest, even though the public only pays for a subset of the research.
But what is the public paying for, and why? Is there even a single answer? People may pay for things that benefit themselves or their society in the future; these are the assumptions behind the "they pay for it, so they should have access to it" line. But wanting to benefit does not mean they need access to the papers. In fact, most of those in the public who are paying may not care at all about accessing the papers. Some may care even less in the face of evidence that giving them access meant a worse system overall. In addition, other forms of communication might be better suited to benefiting them. One could say that the public may benefit from having the scribblings and whiteboard content of the labs they support. Should there likewise be a system for providing them with such content?
The technologies or knowledge that the research generates, however remotely connected, may be what actually matters to them. They wouldn't necessarily benefit from having access to all of the literature that contributes to the design of an A5 chip. They benefit from a faster, more useful iPhone.
Continuing with the bad things about the current publishing system: the main evaluative signal provided to readers is a journal's prestige, its reputation. The point here is that although journal prestige is correlated with the quality of the papers, the signal it provides is one-dimensional, and therefore greatly impoverished. The content of the reviews and the ratings that reviewers provide to the journal are kept secret.
My view here is that it is unclear that this can only be fixed by doing away with the middleman or by going fully open. What's to stop someone from creating a place for rating and reviewing papers and charging for it? In fact, don't social networking reference management tools, among other things, already do that? They already track things like which people are reading which papers. They could also collect reviews; nothing is stopping them from doing that right now.
A further point on journal prestige as an evaluative signal: it's compromised by circularity. Prestige is related to impact factors, which in turn depend on citation frequencies. Because something is published in a high-profile journal it will be cited more, which in turn makes the journal more prestigious, regardless of whether the papers are actually any good. Impact factors give us a quality index that is distorted to an unknown degree by this self-fulfilling prophecy of citation frequency. Also, prestige reflects the journal, not the individual papers, and journal prestige is only weakly correlated with the quality of any individual paper. On a related point, having only two to four reviews provides too noisy an evaluation signal to justify the influence high-impact publications have on the attention of the scientific community, publication policy, science funding, and an individual scientist's career.
Another bad aspect of the current system has to do with the review process, which Niko says lacks transparency. It relies on secret reviews that are visible only to editors and authors. As above, selection is based on two to four peer reviewers. Niko's point is that the quality and originality of a paper cannot be reliably assessed by such a small number of reviewers, even in the ideal case where they are neutral experts. He also points out that reviewers are rarely objective: they may be invested in the theory supported by the paper, for example, or in some other theory that the paper challenges. The current publication system also comes with long publication delays. Pre-publication review can carry on for over a year, and this delay slows the progress of science.
Moving towards the future of scientific publishing, Niko points to some developments that are in the right direction: arXiv.org, an open-access paper repository; the PLoS journals, which are open access and invite post-publication commentary; Faculty of 1000, a commercial source for alternative paper evaluations; ResearchBlogging.org, which collects blog responses to peer-reviewed papers; and Frontiers, which combines open access with democratic post-publication selection for greater visibility.
So what about the system of the future? In Niko’s view, it includes open access and open post-publication peer review. The idea is to immediately publish the paper and then allow open peer review and reception.
Some details about open post-publication review: anyone can instantly publish a review, and anyone can instantly access it. Every review is permanently linked to the paper. Reviews are digitally authenticated at different levels: signed reviews, in which the author is authenticated and publicly identified; unsigned reviews by authenticated group members (e.g., a member of a professional organization such as the Society for Neuroscience); and unauthenticated reviews.
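As a concrete sketch of what such a review record might look like, here is a minimal data model in Python. All class and field names here are my own illustration of the three authentication levels and the permanent paper link, not part of any actual system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Authentication(Enum):
    SIGNED = "signed"                  # author authenticated and publicly identified
    GROUP = "group"                    # authenticated member of a group, name withheld
    UNAUTHENTICATED = "unauthenticated"  # no authentication at all

@dataclass(frozen=True)  # frozen: a published review is permanent
class Review:
    paper_doi: str                     # permanent link back to the paper
    rating: int                        # e.g., overall quality on a 0-10 scale
    text: str
    auth_level: Authentication
    reviewer: Optional[str] = None     # public name only for SIGNED reviews
    group: Optional[str] = None        # e.g., "Society for Neuroscience" for GROUP
    posted: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

A group-authenticated review, for instance, would carry the group's name but no reviewer name, while a signed review would carry both.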
Further points about this kind of review: in order for reviewing to be open, it has to be post-publication; review and reception are an integrated, ongoing process that takes place after publication; reviews do not decide about or delay publication; peer review is not perfect, but it is the best evaluation mechanism we have; and the most serious drawbacks of peer review derive from the fact that it is currently a secret process.
A comparison between the current and future systems. In the current system, a review is a secret communication to authors and editors; in the future, it's an open letter to the community. In the current system, a review decides the fate of a paper; in the future, it evaluates published work. In the current system, reviewers' motivations include selfless qualities, such as scientific objectivity, and selfish ones, such as scientific politics; in the future, the selfless qualities remain, but the selfish ones are replaced by looking smart and objective in public. In the current system, a weak argument can kill a paper; in the future, an argument is only as powerful as it is compelling.
In the future system, authors may ask a senior scientist to edit a paper, at which point the senior scientist would choose three reviewers. The editor asks them to openly review the paper. The editor is named on the paper.
There is also a paper evaluation function, which quantifies papers based on the available meta-information. The simplest metric might be a weighted average of review ratings. Ratings could be weighted by dimension or by reviewer information, such as expertise, time investment, or independence from the authors. Scores can also optionally be adjusted by their error margin, so that papers with more reviews, and therefore more reliable ratings, score higher. Individuals or groups can define their own paper evaluation functions to prioritize the literature according to their needs. Papers with very high scores could be showcased in Science or Nature.
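To make this concrete, here is a minimal sketch of one such evaluation function in Python. The rating scale, the `expertise` weighting field, and the error-margin adjustment (subtracting one standard error of the weighted mean) are all my own assumptions, not a specification; a community-defined function could weight and normalize quite differently:

```python
import math

def paper_score(reviews, penalize_uncertainty=True):
    """Score a paper from its reviews.

    Each review is a dict with a 'rating' (assumed 0-10) and an optional
    'expertise' weight in [0, 1]. Returns a weighted mean rating,
    optionally shrunk by one standard error, so that papers with many
    concordant reviews score higher than papers with few or noisy ones.
    """
    if not reviews:
        return 0.0
    weights = [r.get("expertise", 1.0) for r in reviews]
    total_w = sum(weights)
    mean = sum(w * r["rating"] for w, r in zip(weights, reviews)) / total_w
    if penalize_uncertainty:
        # Fewer reviews -> larger error margin -> lower adjusted score.
        n = len(reviews)
        var = sum(w * (r["rating"] - mean) ** 2
                  for w, r in zip(weights, reviews)) / total_w
        mean -= math.sqrt(var / n)  # subtract one standard error
    return mean
```

With this choice, two papers whose reviews average the same rating are ranked by how many reviews they have and how much those reviews agree, which is one way to realize the "normalized by error margin" idea above.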
How do we make this happen? A public website for open posting of digitally authenticated post-publication reviews. A PubMed-scale investment to develop collaborative software and install the system, possibly involving public funding and involvement from Google. Papers published in the current system can be reviewed using the new system, and original reviewers can publish the reviews they wrote for a traditional journal. This provides a platform for continual online evaluation of the scientific literature. A tipping point is reached when the evaluative signal becomes more reliable than journal prestige. At that point, papers can be published instantly without journals, as authenticated digital documents, like the reviews.
What can we do now? Publish the reviews we write and receive online; this is useful activism, though not the solution. View the problem as a grand challenge for cognitive and brain science: how do we organize the collective cognition of the scientific community? Imagine how we want it to work, then talk and write about it.