Teams vs. Lead authors

If you read my last post you know I am experimenting with working in teams. I tend to think in graphs, so here is the conclusion I have reached so far:

[Figure: satisfaction vs. competence, for teams and for lead authors]

By “satisfaction” I mean the satisfaction of the leader or of the team, which involves the quality of the paper, how much fun we had writing it, how innovative it is, etc.* By “competence” I mean the competence of the lead author or of the team regarding the topic, stats, ideas… While for the lead author a lot depends on being good at integrating coauthors’ feedback, for teams it also involves getting along really well. This graph is only a feeling, but I think that while teams can accomplish better results when they work hard, a single lead author is more likely to have moderate success even when the overall competence is low.

I think good teams need time to really get along. I also think working in the same space helps, but with email, Twitter, etc., maybe that is not so important anymore. I also think that 3-4 people is the right number for working in a team, as in my opinion it is not realistic to have more than 5 people fully involved without arguments breaking out.

Anyone with experiences to share?

*Yes, etc… includes impact factor.

Are the tools we use shaping the way we collaborate?

This is the first of some thoughts about collaboration. I am quite convinced that working in teams enhances creativity, is more fun, and is more productive, but it is not always straightforward how to get the most out of our collaborations. Recently, I started wondering if the tools we use shape the way we collaborate. Let me give an example: the typical use of track changes (in Word, OpenOffice, or whatever you want) predisposes you to have a leading author “accepting” or “rejecting” other people’s changes. On the other hand, if you use a Git-style workflow, the system only shows you where the changes are in the document, and (at least for me) kind of assumes you will accept them (if you don’t spot anything wrong). Don’t stop reading; this is not a typical “I hate Word” post, and the next examples all use a word processor.

What I am trying to say is that if you want a lead author supervising everything, you should use track changes, but when you aim for a more equal contribution, where all team members are expected to come to an agreement, track changes works against the flow. I don’t think it is only a matter of how you prefer to see the changes, but that it actually (unconsciously) affects people’s feelings and behaviours. For example, as a non-lead author, you don’t feel the project is as much yours as you should, and you are probably tempted to point out where the project needs to be improved rather than improving it yourself. The feeling that “the lead author will check whether what I am editing reads OK” invites sloppier edits. But in a team where no one formally accepts your changes, you are more likely to work until your changes read perfectly.
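To make the contrast concrete, here is a minimal sketch (in Python, using only the standard library; the file names are hypothetical) of what a diff-based review of a plain-text manuscript looks like: the tool only marks where the text changed, and nothing asks anyone to accept or reject anything.

# Minimal sketch of a diff-based review of a plain-text manuscript.
# The file names are hypothetical; any two versions of a .txt draft would do.
import difflib

with open("discussion_v1.txt") as old, open("discussion_v2.txt") as new:
    old_lines = old.readlines()
    new_lines = new.readlines()

# unified_diff simply marks removed ("-") and added ("+") lines;
# there is no accept/reject step, you just read the changes.
for line in difflib.unified_diff(
    old_lines, new_lines,
    fromfile="discussion_v1.txt", tofile="discussion_v2.txt",
):
    print(line, end="")

This is roughly what a Git-style workflow shows you by default, whether you run it by hand like this or let the version-control system produce the diff.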

I’ve been experimenting with this in a couple of places. Rachael and I agreed not to use track changes for our F1000 evaluations. Those are short pieces and usually only need a couple of revision rounds between us (now in .txt format, way lighter than a Word file*). It works perfectly. I am also co-leading a paper where we tried not to use track changes for writing the discussion. It only half worked there. At the beginning we didn’t agree on the structure, so we rewrote a lot of each other’s versions, and that felt time-consuming. I recognize it is hard to let go of control over something you wrote. However, in the end I am positive, as the discussion is now a real (good) mix of our ideas and styles. I have to say that we were working in the same office, and that helps a lot to resolve questions in real time.

Where am I going with this? I don’t know, and this post is lengthier than usual, so I’ll write about the pros and cons of teams vs. lead authors in a couple of days.

———-
* OK, here is my “I hate Word” rant.

Dear Journal

As a reviewer you should be allowed to answer review requests using the same journal slang. Here is my version, adapted to Science:

“Thank you for inviting me to review the manuscript [Insert title here]. Competition for my allocated reviewing time is becoming increasingly fierce; I currently receive close to 30 requests a year, but I am only able to review roughly 12. The topic is certainly in my area of expertise and the analysis is interesting; however, it was given a lower priority ranking than others. Unfortunately, this means that I will not be able to review it in depth. My decisions are made on the basis of current workload as well as interest and significance. Due to my time limitations, I must reject excellent, exciting, and potentially important papers every month. Thus, my inability to review this work should not be taken as a direct comment on its quality or importance.” *

This is clearly ironic, and highlights the pressure to find reviewers, but honestly, I feel sorry every single time I have to say no to a review request, and I always want to write back explaining why I can’t this time.

*The acceptance-to-review version could also be quite interesting.

One more paper showing pollinators matter

We have a new preprint up at PeerJ (note that it is not peer-reviewed yet, but already citable) showing that pollinators increase not only the yield but also the quality of four European crops. While the evidence that pollinators are important for crop production is quite strong now, especially after the Klein et al. 2007 review and the Garibaldi et al. 2013 synthesis, I think our paper still contributes to the field by quantifying the contribution to yield (and quality!) in an experimental way along a landscape gradient. Moreover, I think the introduction and discussion are well crafted and point out some aspects that are difficult to cover in short high-impact papers (e.g. our “Garibaldi” Science paper). Which points? You will need to read the paper.

You can see the data were collected in 2005, so they have a long, long story I prefer not to dig into. In any case, they ended up on my desk and I experienced the pains (and joys) of working with someone else’s data. That’s why, after the data had waited 8 years in a messy Excel file, I felt they deserved to see the light as fast as possible and I pushed to publish them as a preprint. This is an awesome way to make them public probably ~6 months earlier than the final reviewed version. I am also happy to try a new journal that is doing very nice and innovative things. Taking this preprint and my F1000Research experience together, I really think it makes no sense to hide a paper that is ready to be read until its final version. This can only slow down science. Read more about preprints here.

PS: Also read the Klatt et al. 2014 paper on strawberries, which scooped our findings a bit, but is really good.