The idea for a journal “eLife-A”

The process of publishing and reviewing scientific results is often a strange and difficult-to-navigate landscape.

It would take too long to mention all the problems and pitfalls related to publishing. For example: overly high publication charges; long turnaround times for reviews and editorial decisions; incomprehensible reviews; incomprehensible manuscripts; careers delayed by rejections and revision processes; lack of transparency in manuscript evaluation; lack of open access. Just to name what comes to mind immediately.

Some of these issues have been partly addressed during the last decade by preprints (arXiv and bioRxiv), open reviews (check out my previous blog post on public peer reviews), non-legal mirrors of paywalled material (sci-hub), and experiments with new publication models (most prominently, the new eLife publication model).

Since the inception of eLife’s new publication model at the end of 2022, the first papers have gone through this process, and my overall impression is that the model is mostly well-received and definitely worth the experiment. The published papers now include not only the public peer reviews, but also an assessment by the editors, which summarizes and evaluates the claims of the paper. Manuel Schottdorf just pointed out to me that this digest can also be an outright negative assessment, for example concluding that “the evidence” for the claims is “considered inadequate” (full article plus assessment)! That’s impressive! Even though such an assessment might appear harsh from the authors’ side, such an open process makes science itself and its progress more transparent. The specific paper is outside my area of expertise, but I like the idea in general. The conclusions and claims of a paper should not be evaluated based on its association with a specific journal, but based on its assessment by expert reviewers.

In a post on Twitter, Aaron Milstein put forward a suggestion to implement this assessment in a way that is also readable by evaluation and grant committees: by giving each paper a grade between “A” and “F”.

For example, “eLife-A” would correspond to “Nature/Science/Cell” in terms of broad relevance, “eLife-B” to “Nature Neuroscience” or “Neuron”, “eLife-C” maybe to the current version of eLife, and so on.

Personally, I do not like the US grading scheme (from A to F), and I’m also a bit skeptical about mixing impact/relevance with methodological rigor/correctness – I really dislike high-impact papers with strong claims that are not supported, or only weakly supported, by data, but I know that many people think differently. In any case, a one-dimensional metric (from A to F) would certainly be easier to interpret than what we have right now.

I could also imagine, as a further step, a retrospective re-evaluation of specific papers, with this secondary, retrospective evaluation reflecting the impact a paper had on its field. It is easier to tag an existing paper that already has a specific “rating” with an additional “post-publication review rating” than to say that “in retrospect, this Scientific Reports paper should have been published in Nature”. But this is just a side-note.

In an ideal scenario, such a publication model would be set up by an independent entity. eLife would actually be a good choice, because its publication model is already very close to this one. Alternatively, the EU funding agencies or the NIH could set up such a journal and oblige all projects funded by the EU or NIH to publish there.

One of the several positive and less obvious side-effects would be to prevent journal hopping: the practice of sequentially submitting a paper to several high-impact journals in the hope of being lucky with one of them. This process not only wastes a lot of resources from both journals and volunteer reviewers, but also unnecessarily delays the careers of junior researchers.

I really hope that a publication scheme that gets rid of the journal tags and replaces them with grades (=subjournals) becomes a reality soon. I think it is a good idea.


2 Responses to The idea for a journal “eLife-A”

  1. Do we need to rank papers at the point of review/publication? If so, how would we evaluate if the rankings are accurate?

    I like reading reviews that discuss a group of recent papers and how they fit in the field, their strengths and weaknesses (e.g., TINS). That’s a type of post-publication evaluation, and it seems constructive and useful.

    • Whether the rankings are accurate would still be up to the evaluation committee to decide. Ideally, they would have time to read some of the papers themselves. If not, they could use the ranking as a guideline (as they use journal names nowadays). Certainly not a perfect system, but maybe an improvement?

      Agreed, formats such as TINS reviews are very useful! However, these reviews mostly cover the strengths of published papers, and not so openly the weaknesses (understandably, because the review authors still want to keep friends in their fields!).

      Some years ago, I read a discussion in Nature Neuroscience among hippocampus researchers about what the hippocampus actually does, and I’m still delighted by the obviously different and conflicting views that the piece laid open: https://www.nature.com/articles/nn.4661

      Sometimes I get a similar feeling of diving deeper into the conflicts of a field when reading public peer review files, where different views of what is important or true clash with each other.

      However, while these aspects are interesting from a scientific point of view, they are probably not so interesting from the point of view of a committee deciding about grants or tenure based on publication records. Probably no committee will read a TINS review just to be able to better judge a researcher…
