How do we fix the publishing system? Three (doable?) solutions.

I’ve been playing for a while with some ideas that are both potential solutions and, to some extent, doable. But I am aware that some are highly unlikely to happen due to social dynamics. They revolve around reducing the number of papers we publish and changing the evaluation and discovery systems in place.

  1. The Synthesis Journal: This would be an ideal not-for-profit journal that only publishes anonymous papers. There are two types of papers: a) Wikipedia-style consensus method papers that aim to create standard methods. The beauty is that the metadata of newly collected data would clearly indicate which method was used, e.g. ISO-345, which has an associated data format, so combining datasets is easy programmatically. Bots can even crawl the web looking for studies using standard methods, provided metadata is in EML format. Methods have no public authors and are reached by consensus. b) The second type is synthesis papers. These are dynamic papers that collate data collected with standard methods to answer general ecological questions using modern programmatic workflows. As new data is created following a), the model outputs are updated, as well as the main results. Versioning can do its magic here. To avoid a split between field workers who create data and synthesizers who get the credit, anonymous teams donate their time to this synthesis endeavor. Hence the anonymity. This will also limit the number of synthesis papers published.
  2. The Cooperative of Ecologists: This is something I really like. Cooperatives have a long tradition of enabling the development of common interests in a non-capitalistic way. Entering the cooperative would be voluntary (some references or formal approval may be necessary). Duties could include adhering to a decalog of good practices, publishing in a non-selective repository, giving feedback on twice the number of manuscripts you sign as first author, and evaluating a random peer per year with a short statement (no numerical scores). The benefits are getting feedback on your papers (which you can use to update your results as you see fit) and having yearly public evaluations you can use for funding/promotion. With one evaluation per year, you can quickly see how your peers judge your contributions to the field. One of the core problems of the publishing system is the need to be evaluated. This moves the focus of evaluation away from where you publish your papers, and these evaluations can better highlight aspects such as creativity of ideas, service, etc.
  3. Crowd-sourced paper evaluation plug-in: As stated in previous posts, one of the main problems is that where papers are published serves not only to discover what we should read, but also to evaluate our performance. I know a single index will never do the evaluation job; that is why we need to diversify the options for evaluators (grant agencies, hiring committees, … ). Right now, in addition to the number of papers and the journal prestige / IF, metrics like citations received, F1000-type evaluations, or alt-metrics are already available. DORA-style narrative CVs are also great, but hard to evaluate when candidate lists grow dramatically. So, what if a plug-in existed for internet browsers where you can log in with your ORCID? Each time you visit the webpage of a scientific paper (including archives), a simple three-axis evaluation pops up. With three simple clicks you can rate its 1) robustness (sample size, methods, reproducibility), 2) novelty (confirmatory, new hypothesis, controversial), and 3) overall quality. I am sure these axes can be better thought out, and reproducibility could be an automatic tag (yes/no) depending on data/code statements. You can also view the evaluations received so far. With enough users, this can be a powerful democratic tool that creates one more option for being evaluated. Plus, recommendation services could be built upon it. I would love to read a robust controversial paper liked by many of my peers. I believe this is not technologically complex, and if done in a user-friendly way, it can help the transition to publishing in non-selective free journals or archives. This also selects for quality, not quantity. I know cheating is possible, but with verified ORCID accounts, some internal checks to identify serial haters/unconditional fans, and the power of big numbers, this may work.
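To make the plug-in idea concrete, here is a minimal sketch of how the three-axis ratings could be stored, aggregated per paper, and screened for serial haters/unconditional fans. Everything in it (field names, the 1-5 scale, the flagging threshold) is hypothetical; no such plug-in exists yet.

```python
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class Evaluation:
    """One three-axis rating left by a verified ORCID user."""
    orcid: str       # rater's verified ORCID iD
    doi: str         # paper being rated
    robustness: int  # 1-5: sample size, methods, reproducibility
    novelty: int     # 1-5: confirmatory ... controversial
    quality: int     # 1-5: overall quality

def paper_scores(evals):
    """Average each axis per paper across all raters."""
    by_doi = defaultdict(list)
    for e in evals:
        by_doi[e.doi].append(e)
    return {
        doi: {
            "robustness": mean(e.robustness for e in es),
            "novelty": mean(e.novelty for e in es),
            "quality": mean(e.quality for e in es),
            "n": len(es),  # number of raters behind the score
        }
        for doi, es in by_doi.items()
    }

def flag_extremists(evals, min_ratings=5):
    """Flag ORCIDs whose overall-quality scores are always 1 or always 5
    (the serial haters / unconditional fans mentioned above)."""
    by_orcid = defaultdict(list)
    for e in evals:
        by_orcid[e.orcid].append(e.quality)
    return {
        orcid for orcid, qs in by_orcid.items()
        if len(qs) >= min_ratings and len(set(qs)) == 1 and qs[0] in (1, 5)
    }
```

A real system would of course need sturdier checks (rating velocity, co-author networks), but even this simple rule illustrates why verified identities plus big numbers make gaming harder.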

This is it. If it wasn’t clear, the aim of this post is to think outside the box and lay out a few ideas, not a detailed bulletproof plan.

Where the hell do I publish now?

The scientific publishing system is hindering scientific progress. This is well known, and I won’t repeat myself or other, more detailed analyses dissecting the problem of publishers making massive profits off our work while adding (almost) no value (e.g. Edwards and Roy 2017, Racimo et al. 2022).

In recent years, cost-effective alternatives for publishing our results have emerged, and I don’t think technical aspects are an issue anymore. I think the problem is that when I publish something, I want to be read. I know that if I publish in certain journals, the day the paper comes out almost all researchers interested in that topic will see it. I also want to be evaluated. Most funding agencies still use where you publish as a quality indicator of your contribution (consciously or unconsciously), not to mention that the same paper published in a given journal will receive many more citations than if published elsewhere, which matters if citations are what funding agencies look at, bypassing the infamous IF.

My approach so far has been to publish in Society Based Journals. Although most of these journals still partner with big publishers, I heard that most have a decent deal with them (but I also heard some got terrible ones). The advantages are obvious: they are well read and well regarded, have no APC, and the money they make reverts to the societies. The drawbacks are that not all my papers are top papers that can find a home there, and that the papers are not open access (you pay to read). This is secondary for me in a world with SciHub, but still important. In addition, this model is slowly getting outdated, and some of those journals are already changing to a pay-to-publish model. Paying high APCs (anything > 200 EUR by EU standards) is a bad replacement for the current system, in my opinion.

I made a quick tally and in the last 5 years (2017-2021) I published:

  • 32 papers in Society Based Journals with no APC. Wow! These include journals from the BES, ESA, Nordic SE, AAAS, Am Nat, and other conservation and behavioural societies.
  • 6 in Selective Journals that require an APC (though about half the time my co-authors paid it), such as PNAS, Nature Communications, or Science Advances, but also less fancy ones. I try to minimize those because, despite their visibility, I prefer to invest money in salaries rather than in publishers; but let’s be honest, if I (or my lab group) can publish in e.g. PNAS, that is money well invested for career advancement.
  • 5 in Non-Selective Journals with an APC, such as Plos, Open Science B, Sci Reports, PeerJ… Not always my decision, and while I support non-selective journals, especially if not-for-profit or with sustainable policies, their APCs are increasing in an unsustainable way.
  • 3 in For-Profit Selective Journals without an APC. Despite trying to convince my co-authors to avoid those, I do not always succeed. Yes, I had 1 paper published with Elsevier last year (sorry). The other two are in high-impact journals whose visibility might tip the balance (TREE and Nature E&E). Everybody has a price.
  • 2 in Free-to-publish, Free-to-read Journals. This is the way to go! One is in a journal I did not know until recently. The other is the newly created Peer Community Journal, which I support. Other journals in this list are Web Ecology, Journal of Pollination Biology, Ecologia Austral, and, to be honest, not many more that I know of (and Ecosistemas, although it publishes mostly in Spanish). I am also looking forward to the new EU-Horizon journal, but it is restricted to EU-Horizon projects, and I think each article still costs the EU quite a lot, so indirectly, we are still paying for it.

While I am happy with where I published over the last 5 years, I think this is not enough. I want to publish more in free-to-publish, free-to-read journals, especially when I am the first author (I already have tenure). But I also want those papers to be read. Our Peer Community Journal paper has almost no citations despite being quite good (IMHO). I am sure that if it had been published in Ecology Letters, it would by now have several more citations.

So how do we fix this? I have some ideas, but nothing clear. The next posts will explore those ideas.

How to respond to reviewers

This is another aspect of doing science that nobody explicitly teaches you. The basics are pretty simple to explain (just respond to everything, point by point). You start by mimicking what your mentor does, how other co-authors respond, and how other people respond to your reviews. But after seeing different co-authors at work, and especially now that, as an editor, I have seen a lot of responses from different people, I can tell there are bad responses, good responses, and responses so good that they make your paper fly to the publication stage. Why? The little differences.

1) Be concise (i.e. give a lot of information clearly and in few words). You can spend some time on formalities, and a “thank you” part and a “highlighting the strong points” part are important, but make your case quickly and personally. Don’t thank reviewers for “enhancing the paper” because you have to. Thank them for pointing out A or B, which really made a difference. If comments were minor, it’s not necessary to make a big deal of them with empty words, because you want to be concise. Being personal and not using pre-established “thank you” phrases helps you connect with the reviewer and sets their mood for reading the rest. Also, always briefly highlight the positive things. Editors are busy people: if a reviewer is supportive or partially supportive, bring that up in the response to the editor to put them back in context.

2) Following on conciseness, show that you care about the science. If you did good work, reviewers do not know your data/analysis as well as you do, so make them trust you by providing details on the decisions you made, and back up all your claims with data and references, not only in the Response to Reviewers but also in the edited paper. This seems obvious, but I’ve seen several “we don’t agree with this change” responses without a proper justification.

3) Number your responses. That allows you to refer to previous responses and avoids repetition. Nobody wants to hear the same justification twice. If your reviewer is not tidy (e.g. does not separate main concerns from small comments), you should be. Your responses should always flow; for example, you can summarize the main changes first, and then refer back to that summary when the reviewer brings them up among other in-line comments that deal with smaller wording issues.

4) Put the focus of the review on the ms, not on the R to R. That means that, except in particular cases, you don’t quote the changes in the response, but refer to the lines where the changes are. BUT the real pro tip is to highlight the changes in the new ms. Track changes are burdensome and software-specific, but using a different color (I personally like blue font because red is too contrasting) for the changed sentences in the new ms is a big help for reviewers. This allows a smooth read of the full paper and makes it easier to find the new passages.

Any other tips you use?

Using Twitter to streamline the review process

[Update: Peerage of Science (and I) have started using the hashtag #ProRev]

You know I am worried about the current status of the review process, mainly because it is one of the pillars of science. There are tons of things we can do in the long run to enhance it, but I came up with a little thing we can do right now to complement the current process. The basic idea is to give reviewers the opportunity to be proactive and state their interest in reviewing a paper on a given topic when they have the time. How? Via Twitter. Editors (like I will be doing from @iBartomeus) can ask for it using a hashtag (#ProactiveReviewers). For example:

“Anyone interested in reviewing a paper on this cool subject for whatever awesome journal? #ProactiveReviewers”

If you are interested and have the time, just reply to the tweet, or send me an email/DM if you are concerned about privacy.

The rules: this is not binding. 1) I can choose not to send the paper to you, for example if there are conflicts of interest. 2) You can choose not to accept it once you read the full abstract.

Why the hell should I, as a reviewer, want to volunteer? I already got 100 invitations that I had to decline!

Well, fair enough; here is the main reason:

Because you believe being proactive helps speed up the process and you are interested in making publishing as fast as possible. Matching reviewers’ interests and availability is faster this way than sending invitations one by one to people the editor thinks may be interested (and availability cannot even be guessed).

Some extra reasons:

– Timing: Because you received 10 invitations to review last month, when you had that grant deadline and couldn’t accept any, and now that you have “time” you want to review, but invitations don’t come.

– Interests: Because you only receive invitations to review work related to your past research, but you actually want to review things related to your current interests.

– Get in the loop: Because you are finishing your PhD and want to gain reviewing experience, but you don’t get invitations yet.

– Because you want the “token” that some journals give in appreciation (e.g. Ecology Letters gives you a free subscription for reviewing for them).

– Because you want to submit your work to a given journal and want to see first-hand how its review process works.

So, is this going to work? I don’t know, but if a few editors start using it, the hashtag #ProactiveReviewers can become one more tool. Small changes can be powerful.

Peer-Review, making the numbers

We know it: the system is saturated. But what are we doing about it? Here are some numbers from 4 journals where I recently reviewed and published (or tried to publish).

Journal      Time given to me to complete the review    Time to first decision on my ms
PNAS         10 days                                    > 3 months
PLoS ONE     10 days                                    2 months
Ecol Lett    20 days                                    2 months
GCB          30 days                                    2 months

I think most reviewers handle the ms on time (or almost on time), and that editors handle mss as fast as possible, so where are we losing time? In finding the reviewers! In my limited experience at J Ecol, I have to invite 6-10 reviewers to get two to accept, and that implies at least a 15-day delay at best. And note that all the above are leading journals, so I don’t want to know how long it takes at a low-tier journal.

However, on the positive side: there are people willing to review all these papers. Seriously, there are a lot of potential reviewers who would like to read an interesting paper on their topic, especially if they get some reward other than being the first to know about that paper. So I see two problems: which rewards can we offer, and how do we efficiently find the people interested in reviewing a given paper?

1) Rewards: Yes, I love reviewing; I learn and I feel engaged with the community, but it also takes a lot of time. However, a spoonful of sugar helps the medicine go down. I don’t want money, I want to feel appreciated. For example, Ecol Lett offers you a free subscription for 3 months, and GCB a free color figure in your next accepted ms (provided you manage to get one accepted). I am sure other options are out there, including some fun rewards, like “the person with the most reviews in a year wins a dinner at the next ESA/Intecol meeting with the chief editor”, to give a silly example. Recognition is another powerful reward, but more on that in the next item.

2) Interest matching: Rather than a blind guess by the editor about who will be interested in reviewing a paper, we should be able to maximize interest matching. Can we adapt an internet dating system, designed for finding a suitable partner, to find a suitable reviewer? As an editor, I would love to see which reviewers with “my interests” are “single” (i.e. available) at this moment. Why sign up as a reviewer? Maybe because you want the free subscription to Ecol Lett, or you are dying for that dinner with Dr. X. Also, by making your profile and activity public, it is easy to track your reputation as a reviewer (and of course you can put your reputation score in your CV). Identifying cheaters in the system (those who submit papers but don’t review) will also be easy, and new PhD students can enter the game faster. Any entrepreneur want to develop it?
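The matching itself is technically trivial. Here is a minimal sketch, with made-up reviewer names and keywords, using simple keyword overlap (Jaccard similarity) to rank the reviewers who are currently “single”; a real service would want richer profiles, but the core logic fits in a few lines.

```python
def jaccard(a, b):
    """Overlap between two keyword sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_reviewers(paper_keywords, reviewers):
    """Rank available reviewers by keyword overlap with the paper.

    `reviewers` maps a name to (interest_keywords, available_now).
    Only available reviewers are considered, mirroring the dating
    analogy of showing the editor who is "single" right now.
    """
    scored = [
        (name, jaccard(paper_keywords, keywords))
        for name, (keywords, available) in reviewers.items()
        if available
    ]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

For example, with hypothetical profiles like `{"alice": ({"pollination", "networks", "bees"}, True)}`, an editor submitting a pollination paper would see alice at the top of the list, while unavailable reviewers never appear as options.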

While there is still a lot of bad advice out there that contributes to saturating the system, other models to de-saturate it are possible (PubCreds are another awesome idea). I am looking forward to seeing how it all evolves.