When complex ecological rules are indistinguishable from chance


I am increasingly interested in measuring “chance” as an ecological process. Beyond improbable events with strong influence (e.g., the arrival of monkeys in the New World), at finer scales the probabilistic view of ecology can help explain why complex systems emerge and are stable. This is some quick rambling to write down some half-baked ideas before I forget them. Proceed at your own risk.

Measuring chance

My first approach was to look at how people measure chance in other fields, for example, in board games. A common metric seems to be the spread of ELO values. ELO is a robust metric for ranking players who compete against each other multiple times, without the need for every player to face every other player.

The beauty of this is that a pure chance game will have a low spread of ELO values. In fact, we can easily create this expectation for a given number of players. This is our null model.

library(elo) 
#https://cran.r-project.org/web/packages/elo/vignettes/running_elos.html

#Create a set of 100 species that overall play 9999 games and assign a random probability of winning each game
null <- data.frame(sp1 = round(runif(n = 9999, min = 1, max = 100)),
                   sp2 = round(runif(n = 9999, min = 1, max = 100)),
                   performance_sp1 = rnorm(n = 9999, mean = 1, sd = 0.5),
                   performance_sp2 = rnorm(n = 9999, mean = 1, sd = 0.5))

#run elo rankings
er <- elo.run(score(performance_sp1, performance_sp2) ~ as.character(sp1) + as.character(sp2), data = null, k = 20)
hist(final.elos(er))

As we can see, from the initial 1500 ELO that all species start with, some (by chance) end up winning more games than others, and the spread goes from 1400 to 1600. I am aware this null model can be enhanced, for example by simply using a Bernoulli draw with a 0.5 probability of winning each game, but I think it works to make my point.
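A minimal sketch of that simpler Bernoulli null, reusing the same elo.run() interface (the variable names here are mine):

```r
library(elo)

# Bernoulli null: every game is a fair coin flip, so any spread in the
# final ELO values is generated purely by chance.
set.seed(1)
null_b <- data.frame(sp1 = round(runif(n = 9999, min = 1, max = 100)),
                     sp2 = round(runif(n = 9999, min = 1, max = 100)),
                     coin = runif(n = 9999)) # sp1 wins when coin > 0.5
er_b <- elo.run(score(coin, 0.5) ~ as.character(sp1) + as.character(sp2),
                data = null_b, k = 20)
hist(final.elos(er_b))
```

The spread should look very similar to the rnorm() version above, because in both cases each species wins with probability 0.5 regardless of its identity.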

Let’s now create a game where skill is the only thing that matters. The most skilled species always wins. The second most skilled species beats all others but the first one, and so on.

#100 species over 9999 games
skil <- data.frame(sp1 = round(runif(n = 9999, min = 1, max = 100)),
                   sp2 = round(runif(n = 9999, min = 1, max = 100)))

#fix skill level
skil$performance_sp1 <- skil$sp1
skil$performance_sp2 <- skil$sp2

#run elos
er <- elo.run(score(performance_sp1, performance_sp2) ~ as.character(sp1) + as.character(sp2), 
              data = skil, k = 20)
hist(final.elos(er))

Now the spread goes from 800 to 2200. This is (probably) among the widest spreads you can expect for any set of games with no chance involved (pure skill). So, to measure how far a given game is from pure chance, we can compare the sd() of these distributions.

#Create null
sd_null <- rep(NA, 100)
for(i in 1:100){
  null <- data.frame(sp1 = round(runif(n = 999, min = 1, max = 30)),
                     sp2 = round(runif(n = 999, min = 1, max = 30)),
                     performance_sp1 = rnorm(n = 999, mean = 1, sd = 0.5),
                     performance_sp2 = rnorm(n = 999, mean = 1, sd = 0.5))
  er_null <- elo.run(score(performance_sp1, performance_sp2) ~ as.character(sp1) + as.character(sp2), data = null, k = 20)
  sd_null[i] <- sd(final.elos(er_null))
}

#A game with performance fixed
skil <- data.frame(sp1 = round(runif(n = 999, min = 1, max = 30)),
                   sp2 = round(runif(n = 999, min = 1, max = 30)))
skil$performance_sp1 <- skil$sp1
skil$performance_sp2 <- skil$sp2
er_skil <- elo.run(score(performance_sp1, performance_sp2) ~ as.character(sp1) + as.character(sp2), 
              data = skil, k = 20)
sd_skil <- sd(final.elos(er_skil))
#sd_skil # repeated runs give an sd very close to 200; it varies slightly with the random species pairings, but the differences are minimal

hist(sd_null, xlim = c(1,250))
abline(v = sd_skil, col = "red")

And if we were to calculate Z-scores, or other, better measures of the distance from observed to null, we would quantify how far from the null expectation our pure skill game is.
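A Z-score here is just the distance of the observed ELO spread from the mean of the null spreads, in units of the null standard deviation (a minimal sketch; the helper name is mine):

```r
# Z-score of an observed value against a vector of null values:
# how many null standard deviations it sits away from the null mean.
z_score <- function(observed, null_values) {
  (observed - mean(null_values)) / sd(null_values)
}

# With sd_null and sd_skil from the chunks above:
# z_score(sd_skil, sd_null) # large and positive: far from pure chance
```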

What happens when performance depends on the context?

However, in nature, we often observe species A outcompeting species B in context X (e.g., high precipitation), but losing in context Y (e.g., low precipitation). This is not chance, but an ecological mechanism. Let’s consider two extreme situations where the hierarchies are reversed, depending on the context.

real <- data.frame(sp1 = round(runif(n = 999, min = 1, max = 30)),
                   sp2 = round(runif(n = 999, min = 1, max = 30)),
                   situation = round(runif(n = 999, min = 1, max = 2)))
situation_performance <- data.frame(species = 1:30,
                                    situation1sp1 = 1:30,
                                    situation1sp2 = 30:1,
                                    situation2sp1 = 30:1,
                                    situation2sp2 = 1:30)
real <- merge(real, situation_performance[,c(1,2,4)], by.x = "sp1", by.y = "species")
real <- merge(real, situation_performance[,c(1,3,5)], by.x = "sp2", by.y = "species")
real$performance1 <- ifelse(real$situation == 1, real$situation1sp1, real$situation2sp1)
real$performance2 <- ifelse(real$situation == 1, real$situation1sp2, real$situation2sp2)

er_real <- elo.run(score(performance1, performance2) ~ as.character(sp1) + as.character(sp2), 
              data = real, k = 20)
sd_real <- sd(final.elos(er_real))

hist(sd_null)
abline(v = sd_real, col = "red")

Our observed competition is not distinguishable from chance. This is an extreme scenario, but a single context-dependent reversal of the dominance hierarchy makes a fully deterministic mechanism indistinguishable from chance.

I find this quite interesting. In nature, we have plenty of situations where interactions among species depend on the context. For example, some species have good years when it rains plenty, others when it is dry. Some species win in open habitats, others in closed ones. All this variability breaks the hierarchies, making the study of coexistence similar to a game of chance.

But ecology does not work like board games…

I know, species (or populations, or even individuals) do not simply win or lose, but there are already tools to calculate ELO quantitatively from differences in performance. Also, species do not compete only in pairs, but again, there are options to measure ELO in games with more than two players.

Conclusions?

I am not sure where this is going. I am probably just reinventing the wheel. Gause already showed that coexistence among paramecia was possible only as long as conditions fluctuated. On the other hand, it is very nice to see how deterministic mechanisms can be compatible with the neutral hypothesis (which is probably also something known?).

In the end I was not able to measure chance, but this is research: you know where you start, but not where you will end up.

On Leadership

“Wise is the one who chooses the questions well, not the one who tries to solve them all”

Securing a permanent position in science often means investing an increasing amount of time in managing people (and projects, and budgets) rather than doing science. Unfortunately, scientists are not trained in management. While some people might have natural (or learned) skills, most struggle and have to learn the hard way (i.e., by trial and error). As a mixed strategy, I have read a bit about managing people over the last few years (and also tried and erred a lot). Rather than a proper post, I am using this entry to bullet-point things I found useful (and try to practice with more or less success). This is written mainly for me, as a place to come back to and refresh some of the ideas, because most of them are easier to say than to do and need revisiting and practice, but maybe it is also useful for someone else.

  • A good group leader is one who cares. It doesn’t matter if you mess up often, as long as you care and try hard. You can be better or worse at it at the beginning, but most problematic PIs simply don’t care about their group. If you care, you will be fine, and it’s just a matter of time until you get better.
  • Learn from your heroes. Science fiction and fantasy books are full of leaders. Which ones excel at leading groups? For PhD advising, I love the Gandalf analogy. You set up the plan and bring the motivation, then you disappear and let the student lead their adventure, but if there is trouble, you always arrive on time to support and bring reinforcements if needed. Kelsier from Mistborn I is also a good example of forming a united group built on mutual trust and a common goal. Ned Stark (Game of Thrones) nails it when suggesting you should not impose a task on anyone (the task can be beheading a person, but also working 8 straight hours under the sun) that you would not be willing to do yourself.
  • Disagree and commit. In a team, you need to be able to disagree, but if the group makes a decision you do not support, then you fully commit to it. It looks easy, but many people fail either to disagree (so they are not heard) or to commit (boycotting the project, on purpose or not). If you care about a group, you explain your reservations but respect the group’s decision.
  • Define who is making decisions. There is nothing more frustrating than a meeting where you don’t know who makes the final call. Will this be a consensus decision? Will someone decide after weighing the whole discussion? Or is this simply a brainstorming meeting where no decisions will be made at this point? Clarify that and your meetings will be much more relaxed.
  • Learn when to talk first and when to talk last. Talking first sets the tone and can bias your team’s ideas, but ensures the discussion does not go in a totally unwanted direction. Talking last promotes candor among the participants, lets you practice listening, and allows you to speak from a balanced and informed position. Of course, ask a lot of questions in between.
  • Ask a lot of questions. This is the only way to understand.
  • If you have drawn an idea, write it up, and vice versa. This is a recurrent piece of advice I give. If an idea works in a graph, try writing it down. If your paragraph makes sense, draw it: make a figure, a causal path, or a sketch. The two often complement and illuminate each other.
  • Start with why: always give context. Why we want to do a project is more important than how we can do it, or what we should do to implement it. Explain your feelings.
  • Give credit, take the blame. As a group, any credit goes to the team, and all the blame to the leader. Period.
  • If in doubt, talk in person. Any friction or sensitive topic should be dealt with in person (or by telephone if in person is not possible). Emails/Slack are great for everyday exchanges, but if the topic is complex, deal with it in person.
  • Assume good faith. Do not try to read into behaviors. If something bothers you, assume good faith and talk about it in person as soon as possible.
  • Choose your teammates. Working with generous, fun people is better.
  • Be process-oriented, not goal-oriented. People who play chess for fun keep playing even when they lose, get better in the process, and have fun (and become infinite players). People who play to win stop when they lose, don’t want to play against stronger rivals, and get stressed. In science, we play for fun.
  • Trust. It’s your team, you trust them, and you give them agency. Things will never be exactly as you initially envisioned. But often they will be better.
  • Let people up-manage you. You work for them, not the other way around. When you want a task done, ask people how you can help. They know best what they need from you in different situations (e.g., resources, talking to key people, redefining objectives). Don’t try to guess; let them tell you what is blocking the task and what you can do. If they need you to do nothing, do nothing.
  • Discuss ideas, not sides. Never confront what person A proposed against what person B proposed, even if it’s not your intention and you just use the person’s name for simplicity. Discuss idea X and idea Y, both emerging from the team. Wording matters more than we think.
  • Part of your job is listening to and helping solve problems that are not work-related. This is time well invested.
  • Use retrospectives to reflect on what worked great in a project and should be repeated, what worked just OK and can be improved, and what did not work and should be avoided next time. Don’t miss learning opportunities. Focus on what was learned, not on assigning blame. Acknowledge the contingencies, and be aware that in hindsight you have all the information, but at the time many decisions were made, knowledge was only partial.
  • When conflict emerges, play “convince me”: give the floor to whoever thinks differently, and just listen and ask questions. Genuinely aim to be convinced.
  • Assume you are a leader: take risks and take responsibility.

Overall, invest in team culture. This does not mean your lab needs to be all super good friends; they only need to work well together and trust each other. For example, I liked the radical candor approach, but there are many more. Nothing of what I said is easy to do, or to do well, and it does not always work as expected, but it is a good start.

Feel free to add what works for you in the comments.

Network ecology is dead, long live network ecology!

Carsten F. Dormann wrote a provocative book chapter entitled “The rise, and possible fall, of network ecology”, openly criticizing the concept of network ecology. Read it; I enjoyed it.

In fact, despite using network theory in some of my papers, I never liked the term network ecologist, and I never presented myself as such (but I’ve been presented as a network ecologist). For me, being a network ecologist would be like being a GLMM ecologist, or a differential equation ecologist. Networks are tools, not ends.

I believe that things interact with things. While simple systems do not require complex network theory, complex systems can benefit from it, but, in agreement with Carsten, only if it is well applied. Unfortunately, some tools are over- (and mis-)used, such as topological indices, while others are under-used, such as exploring network dynamics. It is paradoxical that the creator of the bipartite package, which popularized the blind use of multiple topological indices not necessarily related to processes, is now warning about it.

One of my major concerns about “network ecology” is that many “network” papers are question-free. If you don’t have a question, do not use networks. I don’t think answering how index X changes along Y is a valid question, as change is the only constant in ecology. I also agree that many network tools have analogous community ecology metrics, but this is fine with me, as it reflects different ways of thinking. So let’s see a few examples of instances where I think network tools can bring exciting complementary views:

  • To measure indirect and higher-order interactions. The neighbour of my neighbour can still influence me even if we never meet. How important are indirect interactions in ecology? How do we account for their effects on fitness? There are many open questions where a network perspective can help.
  • To assess community stability. There is a lot of work using population dynamic models linked through species interaction networks that can help elucidate which interaction structures are compatible with stable communities and why.
  • To describe entire multi-species communities with different interaction types. We can’t even describe what entire communities look like. Networks are good tools for describing patterns, which is the first step toward asking questions about processes.
  • To model the flow of “information”. Diffusion networks are under-used as a tool to model how things move through a network. Like pollen grains*

So I agree with Carsten that “network ecology” should fall, but I think network analysis applied to ecological problems should prevail!

* we barely scratch the surface on this topic here: Allen-Perkins et al. (2024), Multilayer diffusion networks as a tool to assess the structure and functioning of fine grain sub-specific plant–pollinator networks. Oikos e10168. https://nsojournals.onlinelibrary.wiley.com/doi/abs/10.1111/oik.10168

Code of conduct

I’ve been promoting Codes of Conduct in different institutions, because I learned they are important to show the institution’s values and make minorities feel more secure. In fact, just having a code of conduct can also help prevent misconduct. However, I realized we don’t have one for the lab. I hope we don’t need to use it, but better safe than sorry.

Code of conduct

The Bartomeus Lab is composed of people who are aware of the value of diversity. We understand that fostering diversity within our collective is the only way to generate an enriching, inclusive atmosphere that promotes novel ideas, ways of thinking, and projects toward a better understanding of community ecology and toward addressing the current biodiversity crisis.


We want to be an active force in democratizing knowledge and fostering diversity in the academic world, because a diverse and tolerant academic space, mirroring current social realities, is a key pillar of the knowledge society. Ecologists need to be able to carry out the tasks associated with their job (lab or fieldwork, conference talks/networking, attendance at courses, etc.) without any type of discrimination or abuse, regardless of their gender identity, sexual orientation, functional diversity, origin, religious beliefs, etc.


In the academic world, certain abusive and discriminatory behaviors have been tolerated by erroneously assuming that these are personality traits that do not affect the professional merit or performance of researchers. However, abusive behaviors negatively affect the careers of those who suffer from them. We are conscious that, unfortunately, the academic world has a hierarchical structure, and this tends to hide and perpetuate power abuses that are, furthermore, disproportionately suffered by vulnerable groups and minorities.


We know that most people taking part in scientific activities are students and professionals in unstable positions, who are especially vulnerable to different types of abuse. Therefore, we have decided to establish several measures to protect this diverse and large collective. We will ensure that diversity is respected and upheld. We want to prioritize the well-being of everyone, especially the most vulnerable ones. We will not tolerate any form of abuse, discrimination, or degrading treatment. This includes:

  • Offensive jokes and negative comments related to a person’s gender, sexual orientation, origin, functional diversity, religion, age, lifestyle, dietary choices, health, or physical appearance, including refusing to use a person’s name and gender.
  • Negative or discrediting comments related to a person’s professional career or work.
  • Deliberate intimidatory behaviour including online harassment on any social media platform.
  • Reiterated requests for intimate relationships after being rejected.
  • Not properly giving credit to a person’s scientific contributions and hindering deserved career opportunities.

We want to guarantee, as much as possible, that all members can benefit from their stay at the lab with complete freedom, regardless of their gender, sexual orientation, functional diversity, origin, religion, academic status, and career stability. Therefore, according to the severity of the situation, action will be taken against people who do not abide by this code of conduct.

If you feel uncomfortable in any situation, or you see potentially uncomfortable situations for other people, speak up directly (we cultivate a culture of open, honest feedback) or report it if you don’t feel secure speaking up in public. To report abusive behaviors, or for any other doubt related to this issue, please write to Virginia Dominguez or Ignasi Bartomeus. Only these persons will have access to the information, and it will never be shared in any form without the explicit consent of the person reporting. We understand that some situations might not be easy, especially when one is in an unstable position or starting to work in a certain field. Therefore, our commitment is to help those who report and to maintain their anonymity.


This code of conduct is inspired by others, such as the community of R developers and associations of Ecology.


Ashoka, the Anarchist banker and Picasso submit their paper to Nature

Ashoka was a great ruler around 250 BC in what is now India. He is well known for promoting a non-violent movement. Of course, this was after killing all of his enemies. The idea is that a brutal war made him realize violence was not the way. But in any case, he first won by brute force, exploiting an unfair system, and once established, he changed the rules.

Fernando Pessoa wrote The Anarchist Banker in 1922. The satirical idea is that the only way to be free from a capitalist system and be a true anarchist is to stop caring about money, and the only way not to care about money is to have so much of it that you can act as you want. A perverse but suggestive idea.

Picasso painted some really weird paintings that revolutionized art. But before that, he had to demonstrate extraordinary artistic talent in his early years following the standard canon. Only then was he able to disrupt the scene.

We just got rejected from both Nature and Science. I am a bit disappointed because it was a really solid and cool paper. But I am especially disappointed because publishing there would give me some slack to change things. A good excuse to redeem my past behavior and embrace non-violent publishing systems, or enough prestige to stop caring about where to publish and embrace anarchy, or the credentials to ensure someone still pays attention to what I do when I try new ways of doing science.

This is just rambling in a semi-poetic way, don’t over-interpret it. It’s just writing for fun.

How do we fix the publishing system? Three (doable?) solutions.

I’ve been playing for a while with some ideas that are both potential solutions and, to some extent, doable. But I am aware some are highly unlikely to happen due to social dynamics. They revolve around reducing the number of papers we publish and changing the evaluation and discovery systems in place.

  1. The Synthesis Journal: This would be a not-for-profit ideal journal that only publishes anonymous papers. There are two types of papers: a) Wikipedia-style consensus method papers, with the aim of creating standard methods. The beauty is that the metadata of newly collected data would clearly indicate which method was used, e.g., ISO-345, which has an associated data format, so combining datasets becomes easy programmatically. Bots can even crawl the web looking for studies using standard methods if the metadata is in EML format. Methods have no public authors and are reached by consensus. b) The second type is synthesis papers. These are dynamic papers that collate data collected with standard methods to answer general ecological questions using modern programmatic workflows. As new data is created following a), the model outputs are updated, as well as the main results. Versioning can do its magic here. To avoid having field workers who create data and synthesizers who get the credit, anonymous teams donate their time to this synthesis endeavor. Hence the anonymity. This will also limit the number of synthesis papers published.
  2. The Cooperative of Ecologists: This is something I really like. Cooperatives have a long tradition of allowing the development of common interests in a non-capitalistic way. Entering the cooperative would be voluntary (some references or formal approval may be necessary). Duties could include adhering to a decalogue of good practices, publishing in a non-selective repository, giving feedback on twice the number of manuscripts you sign as first author, and evaluating one random peer per year with a short statement (no numerical values). The benefits are getting feedback on your papers (which you can use to update your results as you see fit) and having yearly public evaluations you can use for funding/promotion. With one evaluation per year, you can quickly see how your peers judge your contributions to the field. One of the core problems of the publishing system is the need to be evaluated. This moves the focus of evaluation away from where you publish your papers, and these evaluations can better highlight aspects such as creativity of ideas, service, etc.
  3. Crowd-sourced paper evaluation plug-in: As stated in the previous posts, one of the main problems is that where papers are published serves not only to discover what we should read, but also to evaluate our performance. I know that a single index will never do the evaluation job; this is why we need to diversify the options for evaluators (grant agencies, hiring committees, …). Right now, in addition to the number of papers and journal prestige / IF, metrics like citations received, F1000-type evaluations, or altmetrics are already available. DORA-style narrative CVs are also great, but hard to evaluate when the candidate lists grow dramatically. So, what if a browser plug-in existed where you could log in with your ORCID? Each time you visit the webpage of a scientific paper (including archives), a simple three-axis evaluation pops up. With three simple clicks you can rate its 1) robustness (sample size, methods, reproducibility), 2) novelty (confirmatory, new hypothesis, controversial), and 3) overall quality. I am sure these axes could be better thought out, and reproducibility could be an automatic tag (yes/no) depending on data/code statements. You can also view the evaluations received so far. With enough users, this could be a democratic and powerful tool that creates one more option for being evaluated. Plus, recommendation services could be built upon it. I would love to read a robust, controversial paper liked by many of my peers. I believe this is not technologically complex, and if done in a user-friendly way, it could help the transition toward publishing in non-selective free journals or archives. This also selects for quality, not quantity. I know cheating is possible, but with verified ORCID accounts, some internal checks to identify serial haters/unconditional fans, and the power of big numbers, this may work.

This is it. If it was not clear, the aim of this post is to think outside the box and lay out a few ideas, not a detailed bulletproof plan.

Where the hell do I publish now?

The scientific publishing system is hindering scientific progress. This is well known, and I won’t repeat myself or other, more detailed analyses dissecting the problem of publishers making massive profits on our behalf with (almost) no added value (e.g., Edwards and Roy 2017, Racimo et al. 2022).

In recent years, cost-effective alternatives for publishing our results have emerged, and I don’t think technical aspects are an issue anymore. I think the problem is that when I publish something, I want to be read. I know that if I publish in certain journals, the day the paper is published almost all researchers interested in the topic will see it. I also want to be evaluated. Most funding agencies still use where you publish as a quality indicator of your contribution (consciously or unconsciously), not to mention that the same paper published in a given journal will receive many more citations than if published elsewhere, if citations are what funding agencies look at, bypassing the infamous IF.

My approach so far has been to try to publish in Society Based Journals. Although most of these journals still partner with big publishers, I have heard that most have a decent deal with them (but I have also heard some got terrible deals). The advantages are obvious. They are well read and well regarded, have no APCs, and the money they make reverts to the societies. The drawbacks are that not all my papers are top papers that can find a home there, and that the papers are not open access (you pay to read). This is secondary for me in a world with SciHub, but still important. In addition, this model is slowly becoming outdated, and some of those journals are already changing to a pay-to-publish model. Paying high APCs (anything > 200 EUR by EU standards) is, in my opinion, a bad replacement for the current system.

I made a quick tally and in the last 5 years (2017-2021) I published:

  • 32 papers in Society Based Journals with no APC. Wow! These include BES, ESA, Nordic SE, AAAS, Am Nat, and other conservation and Behavioural societies.
  • 6 in Selective Journals that require an APC (though about half of the time my co-authors paid for it), such as PNAS, Nature Communications, or Science Advances, but also other less fancy ones. I try to minimize those because, despite their visibility, I prefer to invest money in salaries rather than in publishers; but if I (or my lab group) can publish in, e.g., PNAS, this is money well invested regarding career advancement. Let’s be honest.
  • 5 in Non-Selective Journals with an APC, such as PLoS, Open Science B, Sci Reports, PeerJ… Not always my decision, and while I support non-selective journals, especially if not-for-profit or with sustainable policies, their APCs are increasing in an unsustainable way.
  • 3 in For Profit Selective Journals without APC. Despite trying to convince my co-authors to avoid those, I do not always succeed. Yes, I had one paper published in Elsevier last year (sorry). The other two are high-impact journals whose visibility might compensate on balance (TREE and Nature E&E). Everybody has a price.
  • 2 in Free to publish – Free to read Journals. This is the way to go! One is in a journal I did not know until recently. The other is the newly created Peer Community Journal, which I support. Other journals on this list are Web Ecology, Journal of Pollination Ecology, Ecologia Austral, and, to be honest, not many more that I know of (and Ecosistemas, although it publishes mostly in Spanish). I am also looking forward to the new EU-Horizon journal, but it’s closed to EU-Horizon projects, and I think each article still costs the EU quite a lot, so indirectly, we are still paying for it.

While I am happy with where I published over the last 5 years, I think it is not enough. I want to publish more in free-to-publish, free-to-read journals, especially when I am the first author (I already have tenure). But I also want those papers to be read. Our Peer Community Journal paper has almost no citations despite being quite good (IMHO). I am sure that if it had been published in Ecology Letters, it would have several more citations by now.

So how do we fix this? I have some ideas, but nothing clear. The next posts will explore those ideas.


Conserving biodiversity will need true sacrifices.

[this is a half-baked reflection after a lab meeting discussion on the EU biodiversity policies. Thanks to the lab and especially to Elena Velado for the discussion]

We are embedded in a culture where the only valid success is complete success. Making compromises, or renouncing something to focus on other priorities, is often seen as a weakness, if not a failure. There is no better example than what we see every day in the movies. Even when the main character has to make a sacrifice, in the end, he or she ends up gaining everything, including what was sacrificed. For example, you need to let go of your true love, but then you discover that was the way to secure it. You renounce your dream job for your family, and this allows you to find an even better job and be successful in that dimension too. You need to betray your friends to save the galaxy, and your friends still love you for it. You get the point. We live in a culture where we are expected to make small sacrifices and gain it all anyway, so in the end, you really don’t sacrifice anything.

The EU Green Deal is also embedded in this narrative, where we are promised that with a “small” economic sacrifice we can conserve biodiversity, and in doing so, we will enhance our well-being and our economy. No real sacrifices, and a vision that changing gears to support biodiversity will allow us to keep growing economically. I love this story, and it would be great to switch to a more sustainable future with no real costs. In fact, I myself have used the fallacy that conserving pollinators is cost-effective because they will increase your crop yield. But this is not completely true. When digging deeper, we also showed that the pollinators worth conserving are not usually the ones that deliver crop pollination (Kleijn et al. 2016) and that the costs of sustaining pollinators do not always pay off economically via an increase in yield (Scheper et al. in prep). This does not mean we shouldn’t safeguard pollinators, but that doing it only for economic reasons is not going to work. At some point, we need to sacrifice something, and it’s our choice what.

The danger of a narrative that ignores real situations of compromise, where you need to sacrifice things that make your life easier, and that does not acknowledge the trade-offs forcing us to choose, is that we create false expectations. Expectations that we will always win on all axes. But narratives are important because they prepare society to make decisions, and we need honest narratives now to prepare us to make those decisions tomorrow. Only by being clear that we will need to decide, and renounce one thing in order to gain another, will we set ourselves in the right framework.

We should start the conversation on what we are willing to sacrifice or compromise to conserve biodiversity, and what we are not. Indeed, we may discover some sacrifices are not as hard as they look, especially for the average citizen. Maybe we prefer to have one more species of butterfly in the EU than to have a few rich people traveling to space for pleasure. Maybe we prefer to eat strawberries only in spring, but have birds migrating through our wetlands. It’s all about educating ourselves and being aware that we decide our future, not about selling fairy tales.

Our behavior is heavily influenced by the context we live in. The first step to changing behaviors is to create the adequate context, and for me, this context needs an honest narrative that acknowledges trade-offs and prepares us to make informed decisions.

Your PhD data is a treasure

You will never have as much time as during your PhD to collect high-quality data. I didn’t realize it at the time, but detailed data you really know and understand is a lifetime companion. I used mine for its main purpose during my PhD, but also to test new methods when I needed data to test them on. In addition, it contributed to several synthesis papers, including an ongoing one led by someone in my lab right now. That makes at least five papers in which I have used this data so far. As the data has been openly available for a long time, it has also been used in several other synthesis papers. All this preamble is to encourage you to love your data!

When I collected the data I did most of the bee identifications myself (I had lots of help from experts such as Jordi Bosch, but in the end, it was me who went through all the samples). This means I identified several individuals only to morphospecies level, and I couldn’t put a name on all my pollinators. I would say this is typical of many ecological studies, and we always cross our fingers that it is good enough at the ecological community level. More than 10 years later, I decided to have the full collection (which I managed to keep all this time while moving through three countries!) properly identified by taxonomists.
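Once the taxonomists return proper names, updating the records is essentially a lookup-table join. A minimal sketch in base R, where the column names (`morphosp`, `species`) and the example names are hypothetical, not from the real dataset:

```r
# Toy visitation records, with some pollinators only at morphospecies level
visits <- data.frame(
  plant    = c("Cistus", "Cistus", "Thymus"),
  morphosp = c("Lasioglossum sp1", "Lasioglossum sp1", "Andrena sp2")
)

# Lookup table from the taxonomists: morphospecies -> species name
lookup <- data.frame(
  morphosp = c("Lasioglossum sp1", "Andrena sp2"),
  species  = c("Lasioglossum malachurum", "Andrena flavipes")
)

# Left join keeps every record; unmatched morphospecies become NA,
# which makes remaining identification gaps easy to spot
visits <- merge(visits, lookup, by = "morphosp", all.x = TRUE)
table(visits$species)
```

Note that a single morphospecies may map to more than one true species, in which case individual-level re-examination (not a simple join) is needed for those records.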

Almost 30% of individuals changed from morphospecies to being properly identified at the species level. Most morphospecies corresponded to a single species, but not all. Overall, I had underestimated the number of species present: the total rose from 81 to 114, because my ignorance had lumped several similar species together. However, to my relief, this did not drastically change the relative differences between the 12 sites, as seen in the figure. Connectance and the number of links per species decreased for all sites at similar rates. Some metrics are less consistent, such as nestedness, but nestedness is known to be quite volatile anyway.

Quick comparison between the new (corrected) and old (morphospecies) dataset for some common metrics such as connectance, species richness, links per species, nestedness or H2.
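The kind of before/after comparison in the figure boils down to a couple of one-line helpers over the plant–pollinator matrices. A sketch with toy matrices standing in for one site (the real matrices are in the released data); splitting a lumped morphospecies adds columns faster than links, which is why connectance tends to drop:

```r
# Toy plant x pollinator matrices for one site:
# "old" at morphospecies level, "new" after one morphospecies is split in two
old <- matrix(c(1, 1, 0,
                0, 1, 1), nrow = 2, byrow = TRUE)    # 2 plants x 3 pollinators
new <- matrix(c(1, 1, 0, 0,
                0, 1, 1, 1), nrow = 2, byrow = TRUE) # 2 plants x 4 pollinators

# Connectance: realized links over all possible links
connectance <- function(m) sum(m > 0) / (nrow(m) * ncol(m))
# Links per species: realized links over total species (rows + columns)
links_per_species <- function(m) sum(m > 0) / (nrow(m) + ncol(m))

connectance(old)  # 4/6
connectance(new)  # 5/8, lower: the split added more cells than links
```

Packages such as bipartite compute these (and nestedness, H2', etc.) directly, but the base-R version makes the arithmetic behind the figure explicit.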

As I said, the data was released in different places. The Web Interaction Database has a copy of the data in matrix format, which makes it hard to split the data, e.g., by dates. Web of Life applies an even more drastic pooling, which I am not sure how they did, as they never contacted me, but I noticed all sites are pooled together, including invaded and non-invaded sites. FigShare had the best data so far, and it is associated with a paper and its analysis, so it’s better to keep that version at the morphospecies level for historical reasons. Hence, the new release of the data, with all the new identifications, is at Mangal. The webpage is very nice, and you can access all the networks programmatically from R.

library("rmangal")
library(tidygraph)

# Search the Mangal database for all datasets matching "bartomeus"
mgs <- search_datasets("bartomeus")
# Download the full collection of networks for those datasets
mgn <- get_collection(mgs)
mgn; names(mgn)
# Inspect the species and interactions of the first network
mgn[[1]]$nodes
mgn[[1]]$interactions
# Convert to a tidygraph object for plotting and further analysis
tg <- as_tbl_graph(mgn[[1]])
plot(tg)

I am a bit embarrassed by the lower quality of the original data, but better to fix it now than never. Long life to data!