How do we fix the publishing system? Three (doable?) solutions.

I’ve been playing for a while with some ideas that are both potential solutions and, to some extent, doable, though I am aware some are highly unlikely to happen due to social dynamics. They revolve around reducing the number of papers we publish and changing the evaluation and discovery systems in place.

  1. The Synthesis Journal: This would be a not-for-profit, ideal journal that only publishes anonymous papers. There are two types of papers: a) Wikipedia-style consensus method papers that aim to create standard methods. The beauty is that the metadata of newly collected data would clearly indicate which method was used, e.g. ISO-345, which has an associated data format, so combining datasets is easy programmatically. If the metadata is in EML format, bots can even crawl the web looking for studies that use standard methods. Methods have no public authors and are reached by consensus. b) The second type is synthesis papers. These are dynamic papers that collate data collected with standard methods to answer general ecological questions using modern programmatic workflows. As new data is created following a), the model outputs are updated, as are the main results; versioning can do its magic here. To avoid a split between field workers who create the data and synthesizers who get the credit, anonymous teams donate their time to the synthesis endeavor; hence the anonymity. This would also limit the number of synthesis papers published.
  2. The Cooperative of Ecologists: This is something I really like. Cooperatives have a long tradition of allowing the development of common interests in a non-capitalistic way. Entering the cooperative would be voluntary (some references or formal approval may be necessary). Duties could involve adhering to a decalogue of good practices, publishing in a non-selective repository, giving feedback on twice the number of manuscripts you sign as first author, and evaluating one random peer per year with a short statement (no numerical values). The benefits are getting feedback on your papers (which you can use to update your results as you see fit) and having yearly public evaluations you can use for funding or promotion. With one evaluation per year, you can quickly see how your peers judge your contributions to the field. One of the core problems of the publishing system is the need to be evaluated; this moves the focus of evaluation away from where you publish your papers, and these evaluations can better highlight aspects such as creativity of ideas, service, etc.
  3. Crowd-sourced paper evaluation plug-in: As stated in previous posts, one of the main problems is that where papers are published serves not only to discover what we should read, but also to evaluate our performance. I know that a single index will never do the evaluation job; this is why we need to diversify the options for evaluators (grant agencies, hiring committees, … ). Right now, in addition to the number of papers and the journal prestige / IF, metrics like citations received, F1000-type evaluations, or altmetrics are already available. DORA-style narrative CVs are also great, but hard to evaluate when candidate lists grow dramatically. So, what if a plug-in existed for web browsers where you can log in with your ORCID? Each time you visit the webpage of a scientific paper (including archives), a simple three-axis evaluation pops up. With three simple clicks you can rate its 1) robustness (sample size, methods, reproducibility), 2) novelty (confirmatory, new hypothesis, controversial), and 3) overall quality. I am sure these axes can be thought out better, and reproducibility may be an automatic tag (yes/no) depending on data/code statements. You can also view the evaluations received so far. With enough users, this could be a powerful, democratic tool that creates one more option for being evaluated. Plus, recommendation services could be built upon it: I would love to read a robust, controversial paper liked by many of my peers. I believe this is not technologically complex, and if done in a user-friendly way, it can help the transition to publishing in non-selective free journals or archives. It also selects for quality, not quantity. I know cheating is possible, but with verified ORCID accounts, some internal checks to identify serial haters/unconditional fans, and the power of large numbers, this may work.
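To make the metadata idea in point 1 concrete, here is a minimal sketch of how a crawler might check whether a dataset used a standard method. Everything here is hypothetical: ISO-345 is the made-up identifier from the text, and the embedded snippet only mimics the shape of an EML record, not the full schema.

```python
# Sketch: how a bot might detect a standard-method ID in EML-like metadata.
# The XML below is a minimal made-up example, not a complete valid EML record.
import xml.etree.ElementTree as ET

EML_SNIPPET = """
<eml>
  <dataset>
    <title>Pollinator transect counts, site A</title>
    <methods>
      <methodStep>
        <description>Counts following standard method ISO-345</description>
      </methodStep>
    </methods>
  </dataset>
</eml>
"""

def uses_standard_method(eml_xml: str, method_id: str) -> bool:
    """Return True if any methodStep description mentions method_id."""
    root = ET.fromstring(eml_xml)
    for step in root.iter("methodStep"):
        desc = step.findtext("description", default="")
        if method_id in desc:
            return True
    return False

print(uses_standard_method(EML_SNIPPET, "ISO-345"))  # True
```

In a real crawler the method ID would ideally live in a structured field (with its associated data format) rather than in free-text descriptions, which is exactly what the Synthesis Journal's consensus methods would standardise.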
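For point 3, the plug-in's server side could be little more than a store of votes keyed by verified ORCID. This is a minimal sketch under my own assumptions: the names (`Rating`, `PaperRatings`), the 1–3 click scale, and the one-vote-per-ORCID rule are all invented here, not an existing API.

```python
# Hypothetical aggregation for the three-axis paper evaluation plug-in.
from dataclasses import dataclass, field

@dataclass
class Rating:
    robustness: int  # assumed 1-3 scale, one click per axis
    novelty: int
    overall: int

@dataclass
class PaperRatings:
    doi: str
    votes: dict = field(default_factory=dict)  # ORCID -> Rating

    def rate(self, orcid: str, rating: Rating) -> None:
        # One vote per verified ORCID; re-rating overwrites the old vote,
        # a crude first defence against ballot stuffing.
        self.votes[orcid] = rating

    def summary(self) -> dict:
        n = len(self.votes)
        if n == 0:
            return {"n": 0}
        return {
            "n": n,
            "robustness": sum(v.robustness for v in self.votes.values()) / n,
            "novelty": sum(v.novelty for v in self.votes.values()) / n,
            "overall": sum(v.overall for v in self.votes.values()) / n,
        }

paper = PaperRatings(doi="10.7717/peerj.328")
paper.rate("0000-0001-0000-0001", Rating(3, 2, 3))
paper.rate("0000-0001-0000-0002", Rating(2, 3, 3))
print(paper.summary())  # {'n': 2, 'robustness': 2.5, 'novelty': 2.5, 'overall': 3.0}
```

Detecting serial haters or unconditional fans would then be a matter of flagging ORCIDs whose votes deviate systematically from the per-paper averages, which this per-ORCID storage makes straightforward.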

This is it. If it wasn't clear, the aim of this post is to think outside the box and lay out a few ideas, not a detailed, bulletproof plan.

New paper out: Pollinators, pests and soil properties interactively shape oilseed rape yield

We have a new paper showing that processes like pollination and pest attack are not independent; one process affects the other. But the title and abstract are quite self-explanatory, so I’ll explain a few other things here.

First, this is an example of a cost-opportunity paper. Vesna was already planning to collect data on pests, so she had already selected the fields, contacted the farmers, etc., meaning that adding the pollinator (and soil) surveys was really cost-effective and allowed us to tackle an important question (in addition to her studies on pest control).

Second, I posted a pre-print of this paper 11 months ago. This is how long it took to submit it to a couple of journals (which rejected it quickly without review), go through the review process (three reviewers, two rounds!), and finish editorial typesetting. During this time, not only could I share a citable pre-print with a couple of interested colleagues, but I also got > 500 views and > 200 downloads on bioRxiv. Moreover, the preprint allows you to compare the submitted version with the accepted version. We removed one analysis and added a couple more; the main conclusions are unaltered, but it's nice to see the process from a historical point of view.

Are exotic plants good for pollinators?

Answer quickly: do you think most pollinators can use exotic plants, and hence will probably benefit from them? My gut feeling was to answer yes, but after seriously reviewing the available evidence, I am not convinced.

A while ago I agreed to write a book chapter on the interface between behaviour and invasive species. I really like the idea that pollinator behaviour mediates pollinators' responses to environmental changes, including plant invasions. Hence, the main point of the book chapter is that “not all pollinators respond equally”. Yes, the idea of winners and losers of global change is becoming a leitmotif in my research.

Writing a book chapter allowed me to do a review, an opinion paper, and some re-analysis of old data to support my claims, all in one. I am pretty happy with the result because it crystallises a lot of thoughts I have had since my PhD and identifies important knowledge gaps.

If you want to read a draft before the book gets published, you can find a pre-print here: Invasive plants as novel food resources, the pollinators’ perspective.

Pollinator contribution to yield quality (and my preprint experience)

I already shared a preprint and a post about this paper some time ago, but now it is officially peer-reviewed and online. You can download the final version here: https://peerj.com/articles/328/

My experience with preprints? The publication process at PeerJ was super fast (~ 3 months from submission to publication). In those 3 months, 84 people visited the preprint and only 52 downloaded the PDF. Nobody commented on it. Taking into account that we are 11 authors (who should account for some of these downloads), you may think the paper's visibility didn't increase much from being out there in advance, but I can prove you wrong. Maybe not many people read it, but I was contacted by one PhD student with a question about the paper. She was working on the topic and preparing her next field season, so for her, reading it in January instead of April was useful. Plus, she found it by googling the topic, proving that preprints are discoverable. So, publishing preprints doesn't always reach more people or bring amazing feedback, but at least you can reach the right people, and that's important enough for me.

One more paper showing pollinators matter

We have a new preprint up at PeerJ (note that it is not peer-reviewed yet, but it is already citable) showing that pollinators increase not only the yield, but also the quality of four European crops. While the evidence that pollinators are important for crop production is quite strong now, especially after the Klein et al. 2007 review and the Garibaldi et al. 2013 synthesis, I think our paper still contributes to the field by quantifying the contribution to yield (and quality!) experimentally along a landscape gradient. Moreover, I think the introduction and discussion are well crafted and point out some aspects that are difficult to cover in short, high-impact papers (like our “Garibaldi” Science paper). Which points? You will need to read the paper.

As you can see, the data were collected in 2005, so they have a long, long story I prefer not to dig into. In any case, they ended up on my table, and I experienced the pains (and joys) of working with someone else's data. That's why, after the data sat for 8 years in a messy Excel file, I felt they deserved to see the light as fast as possible, and I pushed to publish them as a preprint. This is an awesome way to make them public probably ~ 6 months earlier than the final reviewed version. I am also happy to try a new journal that is doing very nice and innovative things. Taking this preprint together with my F1000Research experience, I really think it makes no sense to hide a paper that is ready to be read until its final version; that can only slow down science. Read more about preprints here.

PS: Also read the Klatt et al. 2014 paper on strawberries, which scooped our findings a bit, but is really good.