How do we fix the publishing system? Three (doable?) solutions.

I’ve been playing for a while with some ideas that are both potential solutions and, to some extent, doable. But I am aware that some are highly unlikely to happen due to social dynamics. They revolve around reducing the number of papers we publish and changing the evaluation and discovery systems in place.

  1. The Synthesis Journal: This would be a not-for-profit ideal journal that only publishes anonymous papers. There are two types of papers: a) Wikipedia-style consensus method papers, with the aim of creating standard methods. The beauty is that the metadata of newly collected data would clearly indicate which method was used, e.g. ISO-345, which has an associated data format, and hence combining datasets is easy programmatically. If metadata is in EML format, bots can even crawl the web looking for studies that use standard methods. Methods have no public authors and are reached by consensus. b) The second type are synthesis papers. These are dynamic papers that collate data collected with standard methods to answer general ecological questions using modern programmatic workflows. As new data is created following a), the model outputs are updated, as well as the main results. Versioning can do its magic here. To avoid a split between field workers who create data and synthesizers who get the credit, anonymous teams donate their time to this synthesis endeavor. Hence the anonymity. This will also limit the number of synthesis papers published.
  2. The Cooperative of Ecologists: This is something I really like. Cooperatives have a long tradition of allowing the development of common interests in a non-capitalistic way. Entering the cooperative would be voluntary (some references or formal approval may be necessary). Duties can involve adhering to a decalogue of good practices, publishing in a non-selective-style repository, giving feedback on twice the number of manuscripts you sign as first author, and evaluating one random peer per year with a short statement (no numerical values). The benefits are getting feedback on your papers (which you can use to update your results as you see fit) and having yearly public evaluations you can use for funding/promotion. With one evaluation per year, you can quickly see how your peers judge your contributions to the field. One of the core problems of the publishing system is the need to be evaluated. This moves the focus of evaluation away from where you publish your papers, and these evaluations can better highlight aspects such as creativity of ideas, service, etc.
  3. Crowd-sourced paper evaluation plug-in: As stated in previous posts, one of the main problems is that where papers are published serves not only to help us discover what we should read, but also to evaluate our performance. I know that a single index will never do the evaluation job; this is why we need to diversify the options for evaluators (grant agencies, hiring committees, …). Right now, in addition to the number of papers and the journal prestige / IF, metrics like citations received, F1000-type evaluations, or altmetrics are already available. DORA-style narrative CVs are also great, but hard to evaluate when the candidate lists grow dramatically. So, what if a plug-in existed for internet browsers where you can log in with your ORCID? Each time you visit the webpage of a scientific paper (including archives), a simple three-axis evaluation pops up. With three simple clicks you can rate its 1) robustness (sample size, methods, reproducibility), 2) novelty (confirmatory, new hypothesis, controversial), and 3) overall quality. I am sure these axes can be better thought out, and reproducibility may be an automatic tag (yes/no) depending on data/code statements. You can also view the evaluations received so far. With enough users, this can be a powerful democratic tool to create one more option for being evaluated. Plus, recommendation services may be built upon it. I would love to read a robust controversial paper liked by many of my peers. I believe this is not technologically complex, and if done in a user-friendly way, it can help the transition to publishing in non-selective free journals or archives. This also selects for quality, not quantity. I know cheating is possible, but with verified ORCID accounts, some internal checks to identify serial haters/unconditional fans, and the power of big numbers, this may work.
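To show how cheap the bot in (1a) could be, here is a minimal sketch in Python. Everything specific is an assumption: ISO-345 is the made-up identifier from above, the EML snippet is heavily simplified (no namespaces, no schema validation), and a real crawler would fetch files over the web rather than parse a string.

```python
# Minimal sketch: scan EML metadata for a standardised method identifier.
# "ISO-345" and the element layout are illustrative, not a real standard.
import xml.etree.ElementTree as ET

EML = """<eml>
  <dataset>
    <methods>
      <methodStep>
        <description>Pan traps following ISO-345</description>
      </methodStep>
    </methods>
  </dataset>
</eml>"""

def standard_methods(eml_text, prefix="ISO-"):
    """Return method identifiers found in EML method descriptions."""
    root = ET.fromstring(eml_text)
    found = []
    for desc in root.iter("description"):
        for token in (desc.text or "").split():
            if token.startswith(prefix):
                found.append(token)
    return found

print(standard_methods(EML))  # ['ISO-345']
```

With an agreed-upon identifier scheme, combining datasets programmatically becomes a matter of grouping files by the method tag found this way.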

This is it. If it was not clear, the aim of this post is to think outside the box and lay out a few ideas, not a detailed, bulletproof plan.

Where the hell do I publish now?

The scientific publishing system is hindering scientific progress. This is well known, and I won’t repeat myself or other, more detailed analyses dissecting the problem of publishers making massive profits at our expense with (almost) no added value (e.g. Edwards and Roy 2017, Racimo et al. 2022).

In recent years, cost-effective alternatives for publishing our results have emerged, and I don’t think technical aspects are an issue anymore. I think the problem is that when I publish something, I want to be read. I know that if I publish in certain journals, the day the paper comes out almost all researchers interested in that topic will see it. I also want to be evaluated. Most funding agencies still use where you publish as a quality indicator of your contribution (consciously or unconsciously), not to mention that the same paper published in a given journal will receive many more citations than if published elsewhere, in case citations are what funding agencies look at, bypassing the infamous IF.

My approach so far has been trying to publish in Society Based Journals. Although most of these journals still partner with big publishers, I have heard that most have a decent deal with them (but I have also heard some got terrible deals). The advantages are obvious: these journals are widely read and well regarded, have no APCs, and the money they make reverts to the societies. The drawbacks are that not all my papers are top papers that can find a home there, and that the papers are not open access (you pay to read). This is secondary for me in a world with SciHub, but still important. In addition, this model is slowly becoming outdated, and some of these journals are already changing to a pay-to-publish model. Paying high APCs (anything > 200 EUR by EU standards) is a bad replacement for the current system, in my opinion.

I made a quick tally, and in the last 5 years (2017-2021) I published:

  • 32 papers in Society Based Journals with no APC. Wow! These include BES, ESA, Nordic SE, AAAS, Am Nat, and other conservation and Behavioural societies.
  • 6 in Selective Journals that require an APC (though about half the time my co-authors paid it), such as PNAS, Nature Communications, or Science Advances, but also other less fancy ones. I try to minimize those because, despite their visibility, I prefer to invest money in salaries rather than in publishers; but if I (or my lab group) can publish in e.g. PNAS, this is money well invested regarding career advancement. Let’s be honest.
  • 5 in Non-Selective Journals with APCs, such as PLOS, Open Science B, Sci Reports, PeerJ… Not always my decision, and while I support non-selective journals, especially if not-for-profit or with sustainable policies, their APCs are increasing in an unsustainable way.
  • 3 in For Profit Selective Journals without APC. Despite trying to convince my co-authors to avoid those, I do not always succeed. Yes, I had 1 paper published with Elsevier last year (sorry). The other two are high-impact journals whose visibility might tip the balance (TREE and Nature E&E). Everybody has a price.
  • 2 in Free to publish – Free to read Journals. This is the way to go! One is in a journal I did not know until recently. The other is the newly created Peer Community Journal, which I support. Other journals on this list are Web Ecology, Journal of Pollination Biology, Ecologia Austral, and, to be honest, not many more that I know of (and Ecosistemas, although it publishes mostly in Spanish). I am also looking forward to the new EU-Horizon journal, but it is restricted to EU-Horizon projects, and I think it still costs the EU quite a lot per article, so indirectly we are still paying for it.

While I am happy with where I published over the last 5 years, I think this is not enough. I want to publish more in free-to-publish – free-to-read journals, especially when I am the first author (I already have tenure). But I also want those papers to be read. Our Peer Community Journal paper has almost no citations despite being quite good (IMHO). I am sure that had it been published in Ecology Letters, it would have several more citations by now.

So how do we fix this? I have some ideas, but nothing clear. The next posts will explore those ideas.


Making a sustainable lab

At this point, there is no need for an introduction on why we need to make our lives more sustainable. We just need to do it, and do it now. This includes being more sustainable at work.
Last week, all lab members gathered and brainstormed a bit about which actions we are already taking or can start taking. By no means will these “light” measures fix a global problem, but we think all contributions are welcome.
  1. Traveling: Select carefully which trips you need to make and arrange Skype meetings when possible. Go by train for trips < 1000 km. Limit international conferences to max. 1 EU conference a year and 1 intercontinental conference every 3-5 years. When flying, as a personal decision, we encourage lab members to offset CO2 emissions*. Most projects do not allow paying for CO2 offsets directly from the project budget, but we can use the leftovers of the stipulated per diem reimbursement to that end.
  2. Daily life: Most of us bike to work. We bring our own food (but we can do better). We use natural air conditioning/heating when possible (open windows in the morning in summer, wear a pullover in winter, etc.)
  3. Lab work: We do kill pollinators, but we do not kill any pollinator without a clear purpose, and every specimen is properly curated and databased. We don’t use much single-use plastic, but we plan to change our marking method for plants (from single-use plastic to re-usable wire). Killing jars are re-used as much as possible. We try to optimize fieldwork by carpooling or visiting sequential sites on the same day. We re-use (and re-assemble) electronic material and computers (Thanks David!).

Comment on “Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition”

I just read this worrying paper summarizing a big problem we should all be aware of: “Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition”.

I don’t have a perfect solution to change the system for good, but I have an easy patch to help your integrity and the integrity of your group. And I say this because I am very conscious that I am (we all are) weak, and when under pressure, the easiest person to fool is yourself. This means that even if you don’t want to cheat consciously, behaviors like p-hacking, ad hoc interpretations, and not double-checking results that fit your expectations are hard to avoid if you are on your own. So this is the patch: don’t do things alone. You can fool yourself, but it’s harder to fool your team-mates. And as a corollary, don’t let your students do fieldwork, data cleaning, analysis, etc. alone. Some days I may be tired and tempted to be sloppier during fieldwork, for example, but if I have a team-mate with me, it’s easier to overcome the situation as a team. In our lab, one way to do this is using git collaboratively. Git tracks all steps of your research from data entry onwards. The first thing we do when we have raw data is upload it to git and check it, ideally between at least two of us. This creates a permanent record and removes the temptation of editing anything there if results are not what you expected later on. The same goes for data cleaning and analysis. When those steps are shared and your actions are tracked, it’s easier to be honest. Just to be clear, this mechanism doesn’t work as the threat of a “big brother” watching you; it’s more a feeling of teamwork, where you want to live up to the team’s expectations.

Marie Curie, concessions, and pressure to publish.

I have to admit I didn’t know much about Marie Curie until a few days ago (other than the “trivia” facts, such as that she discovered radioactivity and was the first woman to win a Nobel Prize). But I just read a book* about her and I really loved it. Oh my god, she was unique in a thousand ways. The book, written by Rosa Montero, uses Marie Curie’s diary, written after Pierre Curie’s death, to talk about very personal things including death, gender balance, societal pressures, self-esteem, and many other main topics in life. So it’s not a typical biography, but an excuse to reflect on important things. I won’t go into details, but I highly recommend it.

And while reading the book I found a quote by Pierre Curie that perfectly reflects my current feelings about science.

“Besides, we must make a living, and this forces us to become a wheel in the machine. The most painful are the concessions we are forced to make to the prejudices of the society in which we live. We must make more or fewer compromises according as we feel ourselves feebler or stronger. If one does not make enough concessions he is crushed; if he makes too many he is ignoble and despises himself”

I do think finding this balance is what keeps you (and your science) alive in this world.

Which brings us to the last point. I just discussed a result with my PhD student. It is not significant (p = 0.08), but the effect size is quite big (the probability of something happening goes from 0.6 to 0.2), and the sample size is small (n < 20). The unavoidable questions arose: “Is 0.08 marginally significant?”, “Can we say there is an effect?” My reply was that in a perfect world we would use this data to frame a hypothesis. Then, we would collect 30 more independent data points and test it for real. But the project is almost over, he needs to defend his PhD soon, and we are not in a perfect world. So we make concessions. We will try to publish what we have and cross our fingers, hoping that someone else will validate our finding. But we don’t concede too much either, and we will make sure to discuss the result appropriately: a potentially large effect size, but very variable and based on a limited sample size. Or in other words, we will try to avoid the p-value dichotomy once more.
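One way to discuss the result without the significant/non-significant dichotomy is to report the effect size together with its uncertainty. A minimal sketch, using the probabilities from the text (0.6 vs 0.2) and assumed group sizes of 10 each (the post only says n < 20 in total):

```python
# Sketch: report a difference in proportions with a confidence interval
# instead of a significance verdict. Sample sizes are illustrative.
import math

def prop_diff_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% confidence interval for a difference in proportions."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = prop_diff_ci(0.6, 10, 0.2, 10)
print(f"effect = {diff:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
# effect = 0.40, 95% CI = [0.01, 0.79]
```

The interval is very wide, which is exactly the honest message: a potentially large but very uncertain effect, to be confirmed with more data.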

*The book is edited only in Spanish, French, Dutch, and Portuguese… for once, sorry English speakers!

Scrum-like lab meetings

UPDATE: @naupakaz shared a paper explaining a very similar idea via Twitter.

Organising successful lab meetings is not an easy task, especially when your group does not have a critical mass of PhD students and postdocs. The main aims of the meetings for most labs I’ve been in include 1) staying up to date on what people are doing, 2) discussing relevant literature, 3) giving early feedback on people’s projects, and 4) discussing techniques, methods, or academic culture. And above all, the most important thing for me is that they have the potential to build team spirit.

Although the above points are all well-intentioned, and most times more or less accomplished, I have also seen some lab meetings fail, with people feeling that attending is an obligation/waste of time rather than an opportunity. The main reason for that, I think, is that meetings tend not to be prepared in advance. This is usually associated with overly long meetings and people not being engaged with the topic.

This will be my approach starting next Monday: SCRUM-like stand-up meetings every Monday and Wednesday at 9:30 am (sharp!) in my office. Maximum duration 15 minutes (strict!). During the meeting, each person will answer these three questions.

  • What I did / learn last week.
  • What I plan to do this week.
  • What help do I need to accomplish the plan.

At that moment, there is no discussion around those questions. If anything needs to be discussed at length, we will schedule a specific meeting for that with just the interested actors. Moreover, we will do this in English, not in Spanish, to force new PhD students to get into the habit of speaking English.

This structure has several advantages. It keeps everyone updated and fosters interaction among the different members of the group. It helps you think about what you accomplished/learnt, which should be a positive reinforcement. And finally, it helps you plan ahead and be goal-oriented. As a mentor, it allows me to stay in the loop on all projects in a fast way, and to act as a facilitator rather than an old-school leader. SCRUM people do this every day, but in my lab each person works on a quite different/independent project, so maybe two days a week is fine. However, my IT friends maintain that the way to go is a meeting every morning, so I’ll consider it.

Regarding the other aims of lab meetings, I’ll take advantage of other labs to gain critical mass. EBD runs a monthly journal club we will attend to discuss papers. This will also make us read more broadly. We will also join Montse Vilà’s group to get feedback on projects or discuss methods. These longer meetings will be scheduled when needed and will have two requisites: a one-hour maximum duration, and people have to come prepared in advance.

The post is getting lengthy, so I won’t go into implementing formal retrospectives every time we publish a paper, but if you are interested in “Agile” development, follow the Wikipedia links on SCRUM and Agile.

How to respond to reviewers

This is another aspect of doing science that nobody explicitly teaches you. The basics are pretty simple to explain (just respond to everything, point by point). You start by mimicking what your mentor does, how other co-authors respond, and how other people respond to their reviewers. But after seeing different co-authors at work, and especially now that I have seen a lot of responses from different people as an editor, I can say there are bad responses, good responses, and those so good that they make your paper fly to the publication stage. Why? The little differences.

1) Be concise (i.e. give a lot of information clearly and in few words). You can spend some time on formalities, and a “thank you” part and a “highlighting the strong points” part are important, but make your case quick and personal. Don’t thank reviewers for “enhancing the paper” because you have to. Thank them for pointing out A or B, which really made a difference. If comments were minor, it’s not necessary to make a big deal with empty words, because you want to be concise. Being personal and not using pre-established “thank you” phrases helps you connect with the reviewer and sets his/her mood for reading the rest. Also, always briefly highlight the positive things. Editors are busy people; if a reviewer is supportive or partially supportive, bring that up in the response to the editor to put him/her back in context.

2) Following on from conciseness, show that you care about the science. If you did a good job, reviewers do not know your data/analysis as well as you do, so make them trust you by providing details on the decisions you made, and back up all your claims with data and references, not only in the Response to Reviewers, but also in the edited paper. This seems obvious, but I’ve seen several “we don’t agree with this change” responses without a proper justification.

3) Number your responses. That allows you to refer to previous responses and avoids repetition. Nobody wants to hear the same justification twice. If your reviewer is not tidy (e.g. does not separate main concerns from small comments), you should be. Your responses should always flow; for example, you can summarize the main changes first, and then refer to them when brought up by the reviewer in the middle of other in-line comments that deal with smaller wording issues.

4) Put the focus of the review on the ms, not on the R to R. That means that, other than in particular cases, you don’t quote the changes in the response, but refer to the lines where the changes are. BUT the real pro tip is to highlight the changes in the new ms. Track changes are burdensome and require specific software, but using a different color for the changed sentences in the new ms (I personally like blue font, because red is too contrasting) is a big help for reviewers. This allows a smooth read of the full paper and makes it easier to find the new passages.

Any other tips you use?

Get credit for your reviews: Publons

This post looks like an advertisement, but it is not. I don’t even know who is behind this initiative, but reviewing is an important part of my job, and I think we should get better credit for it*.

Now I can get some credit more easily. Basically, instead of listing my contributions as a reviewer in my CV, I can show them verified at Publons. I can also decide how much information I show about each one. Initially I thought I wouldn’t feel like entering this data manually every time I do a review, but they allow you to forward the review “thank you” email to them and they add it to your profile, which makes it quite easy to keep an updated profile. It would be cool if journals did that automatically via ORCID, but that is another battle, I suspect. And with this tool, you can even calculate pubcreds, if you want!

In any case, the web is easy to use and the team is very responsive, so I hope this takes off.

*In a nutshell: I advocate for a double-blind process, with full disclosure of all names (authors and reviewers) once a decision is made. Until this happens, I sign my reviews.

Short guideline on multi-authored papers.

After being on both sides of the story (first author, and one more among dozens of co-authors), I have already made a few errors others may find useful to know about, especially since multi-author papers (more than 10-15 authors from different institutes) are becoming normal (and I am not judging whether this is good or bad*, it’s just happening).

1. Talk about co-authorship early on, but with conditions. These things should be discussed at the beginning of the collaboration, because there is nothing more awkward than someone thinking that he/she is coauthoring a paper while the lead author thinks that he/she is a data provider. However, do not grant co-authorship before even starting the project. Make clear that someone will be a co-author if his/her contribution is [fill in here your expectations] (e.g. the data provided ends up being critical for the paper, you are engaged in ms writing, etc.). By clear I mean very clear.

2. Establish feedback points. This one is very tricky, because first authors (or the core team leading the paper) do not want 50 people commenting on every decision, but they also don’t want coauthors to end up not contributing much. On the other hand, some coauthors want to be more involved than others, but they need to be offered the opportunity to contribute in order to do so. I would recommend fixing at least three points for providing feedback. First, a draft of the questions, the hypotheses to be tested, and the approach to be used. Second, a draft with the main results/figures. And third, a first draft of the paper. Even though this seems like a bare minimum, I have myself made the error of sending almost nothing to some coauthors before the complete draft of the paper was ready.

3. Make all correspondence open. Always include all coauthors in the emails with drafts or results to discuss. All coauthors should be able to see other people’s comments. This is especially important when two coauthors disagree on something. The lead author has the final word, but the coauthors in question should discuss the disagreement between them (and hopefully agree on something) in front of all the other coauthors.

4. Be clear on what you want. This applies to both sides. As a first author, it is very useful to tell people what you want from them. Instead of letting people comment on whatever they want (they will do that anyway), ask specific questions. Can some native speaker check my grammar? Can you go through the mathy part and make sure it is correct? With several coauthors, you run the risk that everyone hopes someone else will go over the three pages of equations, and no one ends up doing it. As a co-author, it is also nice to state what you want to contribute. Even if you think you will be “near the end of the list”, if you want to be more engaged and have clear ideas on how to redo an analysis or enhance a figure, say it! (Author order is/should be flexible, so you may end up among the first authors if you contribute.)

Lastly, these are just suggestions, and all of them boil down to one basic idea: enhance communication.

*I do think more than 10 authors are rarely useful…

I have a guest post in Practical Management blog

Quick note to say I am very glad to have a guest post on an awesome blog about data management, a neglected topic that affects all scientists. The blog is also quite funny, bringing some glamour to the art of data processing. Thanks Christie for inviting me to contribute!

The post is about style, check it out here: