I’ve been playing for a while with some ideas that are both potential solutions and, to some extent, doable. I am aware some are highly unlikely to happen due to social dynamics. They revolve around reducing the number of papers we publish and changing the evaluation and discovery systems in place.
- The Synthesis Journal: This would be a not-for-profit, ideal journal that only publishes anonymous papers. There are two types of papers: a) Wikipedia-style consensus method papers, with the aim of creating standard methods. The beauty is that the metadata of newly collected data would clearly indicate which method was used, e.g. ISO-345, which has an associated data format, and hence combining datasets is easy programmatically. Bots could even crawl the web looking for studies using standard methods if the metadata is in EML format. Methods have no public authors and are reached by consensus. b) The second type of paper is the synthesis paper. These are dynamic papers that collate data collected with standard methods to answer general ecological questions using modern programmatic workflows. As new data is created following a), the model outputs are updated, as well as the main results. Versioning can do its magic here. To avoid a divide between field workers who create the data and synthesizers who get the credit, anonymous teams donate their time to this synthesis endeavor. Hence the anonymity. This will also limit the number of synthesis papers published.
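To make the machine-readability point concrete, here is a minimal Python sketch of what a crawler could do: scan an EML record for a declared standard-method ID. The `ISO-345` ID scheme is the post's own hypothetical example, and the XML below is heavily simplified (real EML uses namespaces and a richer `methods` structure).

```python
# Minimal sketch: scan (simplified) EML metadata for datasets that
# declare a standard method ID, so their data can later be combined
# programmatically. The "ISO-345" scheme is hypothetical.
import re
import xml.etree.ElementTree as ET

STANDARD_METHOD = re.compile(r"\bISO-\d+\b")  # hypothetical ID pattern

EML_DOC = """<eml>
  <dataset>
    <title>Grassland pollinator counts 2024</title>
    <methods>
      <methodStep>
        <description>Transect walks following ISO-345.</description>
      </methodStep>
    </methods>
  </dataset>
</eml>"""

def standard_methods_used(eml_xml: str) -> set[str]:
    """Return the set of standard method IDs declared in an EML record."""
    root = ET.fromstring(eml_xml)
    ids: set[str] = set()
    for step in root.iter("description"):
        ids.update(STANDARD_METHOD.findall(step.text or ""))
    return ids

print(standard_methods_used(EML_DOC))  # {'ISO-345'}
```

A real harvester would resolve the method ID to its associated data format and validate the attached data against it before merging, but the key idea is just this: a single unambiguous tag in the metadata makes discovery and combination trivial.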
- The Cooperative of Ecologists: This is something I really like. Cooperatives have a long tradition of allowing the development of common interests in a non-capitalistic way. Entering the cooperative would be voluntary (some references or formal approval may be necessary). Duties could include adhering to a decalogue of good practices, publishing in a non-selective repository, giving feedback on twice the number of manuscripts you sign as first author, and evaluating one random peer per year with a short statement (no numerical values). The benefits are getting feedback on your papers (which you can use to update your results as you see fit) and having yearly public evaluations you can use for funding/promotion. With one evaluation per year, you can quickly see how your peers judge your contributions to the field. One of the core problems of the publishing system is the need to be evaluated. This moves the focus of evaluation away from where you publish your papers, and these evaluations can better highlight aspects such as creativity of ideas, service, etc.
- Crowd-sourced paper evaluation plug-in: As stated in previous posts, one of the main problems is that where papers are published serves not only to discover what we should read, but also to evaluate our performance. I know that a single index will never do the evaluation job; this is why we need to diversify the options for evaluators (grant agencies, hiring committees, …). Right now, in addition to the number of papers and journal prestige / IF, metrics like citations received, F1000-type evaluations, or altmetrics are already available. DORA-style narrative CVs are also great, but hard to evaluate when the candidate lists grow dramatically. So, what if a plug-in existed for internet browsers where you can log in with your ORCID? Each time you visit the webpage of a scientific paper (including archives), a simple three-axis evaluation pops up. With three simple clicks you can rate its 1) robustness (sample size, methods, reproducibility), 2) novelty (confirmatory, new hypothesis, controversial), and 3) overall quality. I am sure these axes can be thought through better, and reproducibility may be an automatic tag (yes/no) depending on data/code statements. You can also view the evaluations received so far. With enough users, this could be a powerful, democratic tool that creates one more option for being evaluated. Plus, recommendation services could be built on top of it. I would love to read a robust, controversial paper liked by many of my peers. I believe this is not technologically complex, and if done in a user-friendly way, it could help the transition to publishing in non-selective free journals or archives. It also selects for quality, not quantity. I know cheating is possible, but with verified ORCID accounts, some internal checks to identify serial haters/unconditional fans, and the power of big numbers, this may work.
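To show how little machinery the plug-in idea actually needs, here is a Python sketch of a possible server-side data model and a naive aggregate. All names are hypothetical; the anti-gaming checks (serial haters, unconditional fans) would plug in where noted, and here I only deduplicate repeat votes per ORCID.

```python
# Sketch of a hypothetical data model for the crowd-sourced evaluation
# plug-in: one record per ORCID-verified rating, plus a naive aggregate.
# Anti-gaming checks are deliberately omitted; only repeat votes from
# the same ORCID are deduplicated (last vote wins).
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass(frozen=True)
class Rating:
    orcid: str       # verified reviewer ID
    doi: str         # paper being rated
    robustness: int  # 1-5: sample size, methods, reproducibility
    novelty: int     # 1-5: confirmatory ... controversial
    quality: int     # 1-5: overall

def aggregate(ratings: list[Rating]) -> dict[str, dict[str, float]]:
    """Mean scores per paper, one vote per ORCID per paper."""
    latest: dict[tuple[str, str], Rating] = {}
    for r in ratings:
        latest[(r.doi, r.orcid)] = r  # dedupe: last vote wins
    by_doi: dict[str, list[Rating]] = defaultdict(list)
    for r in latest.values():
        by_doi[r.doi].append(r)
    return {
        doi: {
            "robustness": mean(r.robustness for r in rs),
            "novelty": mean(r.novelty for r in rs),
            "quality": mean(r.quality for r in rs),
            "n": len(rs),
        }
        for doi, rs in by_doi.items()
    }

votes = [
    Rating("0000-0001-0000-0001", "10.1/abc", 4, 3, 4),
    Rating("0000-0001-0000-0002", "10.1/abc", 5, 2, 4),
]
print(aggregate(votes)["10.1/abc"]["robustness"])  # 4.5
```

A recommendation service of the kind mentioned above would then just be a query over these aggregates filtered by the raters in your own network, e.g. "high robustness, high controversy, liked by my peers".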
That’s it. In case it wasn’t clear, the aim of this post is to think outside the box and lay out a few ideas, not a detailed, bulletproof plan.