Teams vs. Lead authors

If you read my last post, you know I am experimenting with working in teams. I tend to think in graphs, so here is the conclusion I have reached so far:

[Figure: satisfaction vs. competence for teams and single lead authors]

By “Satisfaction” I mean the satisfaction of the lead author or of the team, and it covers the quality of the paper, how much fun we had writing it, how innovative it is, etc.* By “Competence” I likewise mean the competence of the lead author or of the team regarding the topic, stats, ideas, and so on. For a lead author, a lot of it is about being good at integrating coauthors’ feedback; for teams, it also involves getting along really well. This graph is only a feeling, but I think that while teams can accomplish better results when they work hard, a single lead author is more likely to achieve moderate success even when the overall performance is low.

I think good teams need time to really get along. I also think working in the same space helps, but with email, Twitter, etc., maybe it is not that important anymore. I also think that 3-4 people is the sweet spot for working as a team, as in my opinion it is not realistic to have more than 5 people fully involved without arguments breaking out.

Anyone with experiences to share?

*Yes, “etc.” includes impact factor.

Are the tools we use shaping the way we collaborate?

This is the first of several thoughts about collaboration. I am quite convinced that working in teams enhances creativity, is more fun, and is more productive, but it is not always straightforward how to get the most out of our collaborations. Recently, I started wondering whether the tools we use shape the way we collaborate. Let me give an example: the typical use of track changes (in Word, OpenOffice, or whatever you want) predisposes you to have a lead author “accepting” or “rejecting” other people’s changes. On the other side, if you use a Git-style workflow, the system only shows you where the changes are in the document and (at least for me) kind of assumes you will accept them (if you don’t spot anything wrong). Don’t stop reading; this is not a typical “I hate Word” post, and the examples below all use a text processor.

What I am trying to say is that if you want a lead author supervising everything, you should use track changes, but when you aim for a more equal contribution, where all team members are expected to come to an agreement, track changes goes against the flow of work. I don’t think it is only a matter of how you prefer to see the changes; it actually affects (unconsciously) people’s feelings and behaviours. For example, as a non-lead author, you don’t feel the project is as much yours as you should, and you are probably tempted to point out where the project needs to be improved rather than improving it yourself. The feeling of “the lead author will check whether what I am editing reads ok” invites sloppier edits. But in a team where no one formally accepts your changes, you are more likely to work until your changes read perfectly.

I’ve been experimenting with this in a couple of places. Rachael and I agreed not to use track changes for our F1000 evaluations. Those are short pieces and usually only need a couple of revision rounds between us (now in .txt format, way lighter than a Word file*). It works perfectly. I am also co-leading a paper where we tried not to use track changes for writing the discussion. It only half worked there. At the beginning we didn’t agree on the structure, so we rewrote a lot of each other’s versions, and that felt time-consuming. I recognize it is hard to let go of control over something you wrote. However, in the end I am positive about it, as the discussion is now a real (good) mix of our ideas and styles. I have to say that we were working in the same office, which helps a lot to solve questions in real time.

Where am I going with this? I don’t know, and this post is already lengthier than usual, so I’ll write about the pros and cons of teams vs. lead authors in a couple of days.

———-
* Ok, here is my “I hate Word” rant.

Dear Journal

As a reviewer, you should be allowed to answer review requests using the same journal slang. Here is my version, adapted to Science:

“Thank you for offering me the opportunity to review the manuscript [Insert title here]. Competition for my time allocated to review is becoming increasingly fierce; I currently receive close to 30 requests a year, but I am only able to review roughly 12. The topic is certainly in my area of expertise and the analysis is interesting; however, it was given a lower priority ranking than others. Unfortunately, this means that I will not be able to review it in depth. My decisions are made on the basis of current workload as well as interest and significance. Due to my time limitations, I must reject excellent, exciting, and potentially important papers every month. Thus, my inability to review this work should not be taken as a direct comment on its quality or importance.” *

This is clearly ironic, and highlights the pressure to find reviewers, but honestly, I feel sorry every single time I have to say no to a review request, and I always want to write back explaining why I can’t this time.

*The acceptance version can also be quite interesting.

Book chapters vs Journal papers

I was invited to write a book chapter (a real one, not for a predatory publisher) and I asked my lab mate what she thought about it, given that time spent writing book chapters is time not spent writing the papers in my queue. She kindly replied, but I already knew the answer because, all in all, we share an office, we are both postdocs on the same research topic, and in general we have a similar background. Then I asked my other virtual lab mates on Twitter and, as always, I got a very stimulating diversity of opinions, so here I post my take-home message from the discussion.

Basically there are two opinions. One is “Book chapters don’t get cited” (link via @berettfavaro, but others shared similar stories, with recommendations not to waste time on them). However, quite a few other people jumped in to defend the view that books are still well read. Finally, some people gave their advice on what to write about.

So, I agree that books don’t get cited, but I also agree that (some) books get read. In fact, I myself read quite a lot of science books (Julie Lockwood’s Avian Invasions is a great in-depth book on a particular topic, and Cognitive Ecology of Pollinators, edited by Chittka and Thomson, is a terrific compendium of knowledge merging two amazing topics). However: I don’t cite books.

So if you want to be cited, do not write a book chapter. If what you have to say fits into a review or a research article, don’t write a book chapter. But if you have something to say for which papers are not the perfect fit (e.g. providing a historical overview of a topic, speculating about merging topics), then write a book chapter! It will also look nice in your CV.

Finally, some people made a fair point about availability, something to take into account:

@ibartomeus I’ve done 3 this year and I’m concerned about future accessibility. In my field, books are getting expensive too, who buys them?

— Dr Cameron Webb (@Mozziebites) November 8, 2013

In summary:

  • Book chapters are not papers.
  • They won’t get cited, but will get read. However…
  • Make sure your publisher is well-known (and also sells PDF versions / allows preprints on your web page).
  • For early-career researchers, one or two book chapters can give you credit, but remember that you will be evaluated mainly on papers, so keep the ratio of books to papers low.

PS: Yes, I will write it!

Science at the speed of light

Maybe it is not going that fast, but at the speed of R at least. And R is pretty quick. This has pros and cons. I think that understanding the drawbacks is key to maximizing the good things about speed, so here are a few examples.

I have a really awful Excel file with a dozen sheets calculating simple diversity indices and network parameters from my dissertation. I also pasted in there the output of Atmar and Patterson’s Nestedness Calculator (an .exe program!) and of the MATLAB code that Jordi Bascompte kindly sent me to run his nestedness null model. I also used Pajek to plot the networks and calculate some extra indices. It took me at least a year to perform all those calculations and, believe me, it would take me another year to reproduce those steps again. That was only 6 years ago. Now, I can make nicer plots and calculate way more indices than I want in less than 5 minutes using the bipartite package, and yes, it is fully reproducible (see the short sketch below). On the other hand, back then I really understood what I was doing, while running bipartite is completely black-boxy for most people.
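
For illustration, here is a minimal sketch of the kind of five-minute workflow I mean, using the Safariland example dataset that ships with bipartite rather than my own data:

library(bipartite)

#example plant-pollinator visitation matrix shipped with the package
data(Safariland)

#plot the bipartite network and the interaction matrix
plotweb(Safariland)
visweb(Safariland)

#calculate several network-level indices in one call
networklevel(Safariland, index = c("connectance", "nestedness", "links per species"))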

Last year I also needed a plant phylogeny to test phylogenetic diversity among different communities. I was quite impressed to find the very useful Phylomatic webpage. I only had to prepare the data in the right format and get the tree. Importing the tree into R proved challenging for a newcomer, and I had to tweak the tree in Mesquite beforehand. So yes, time-consuming and not reproducible, but I thought it was an extremely fast and cool way to get phylogenies. Just one year after that, I can do all of that from my R console thanks to the rOpenSci people (package taxize). Again, faster and easier, but I also need less knowledge of how that phylogeny is built. I am attaching the simple workflow I used below, as it may be useful. Thanks to Scott Chamberlain for advice on how to catch the errors in the retrieved family names.

library(taxize)
#vector with my species
spec <- c("Poa annua", "Abies procera", "Helianthus annuus")

#prepare the data in the right format (including retrieving family name)
names <- lapply(spec, itis_phymat_format, format='isubmit')

#spot the species whose family name could not be retrieved (entries starting with "na/"); I still have to fix those manually
names[grep("^na/", names, value = FALSE, perl = TRUE)]
#names[x] <- "family/genus/genus_species/" #enter those manually

#get and plot the tree
tree <- phylomatic_tree(taxa = names, get = "POST", taxnames=FALSE, parallel=FALSE)
tree$tip.label <- capwords(tree$tip.label)
plot(tree, cex = 0.5)

Finally, someone told me he had found an old professor’s lab notebook with schedules of daily tasks (sorry, I am terrible with details). The time slot booked to perform an ANOVA by hand was a full day! In that case, you really had to think very carefully about which analysis you wanted to run beforehand. Nowadays speed is not an issue for most analyses (but our students will still laugh at our slow R code in 5 years!). Speed can help advance science, but with great power comes great responsibility. Hence, it is now more necessary than ever to understand what we do and why we do it. I highly recommend reading the recent discussions on the use of sensible default values or the problem of increasing researcher degrees of freedom if you are interested in that topic.
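
To make the contrast concrete, here is a toy one-way ANOVA on simulated data (the numbers are made up purely for illustration); what once filled a whole day of hand calculation now runs in a fraction of a second:

set.seed(1)

#simulate a small dataset with three treatment groups (made-up values)
dat <- data.frame(
  treatment = rep(c("control", "low", "high"), each = 20),
  yield = c(rnorm(20, mean = 10), rnorm(20, mean = 12), rnorm(20, mean = 15))
)

#the whole analysis runs in milliseconds
summary(aov(yield ~ treatment, data = dat))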

Using twitter to streamline the review process

[Update: Peerage of Science (and I) have started to use the hashtag #ProRev]

You know I am worried about the current state of the review process, mainly because it is one of the pillars of science. There are tons of things we can do in the long run to enhance it, but I came up with a little thing we can do right now to complement the current process. The basic idea is to give reviewers the opportunity to be proactive and state their interest in reviewing a paper on a given topic when they have the time. How? Via Twitter. Editors (like I will be doing from @iBartomeus) can ask for it using a hashtag (#ProactiveReviewers). For example:

“Anyone interested in reviewing a paper on this cool subject for whatever awesome journal? #ProactiveReviewers”

If you are interested and have the time, just reply to the tweet, or send me an email/DM if you are concerned about privacy.

The rules: it is not binding. 1) I can choose not to send it to you, for example if there is a conflict of interest. 2) You can choose not to accept it once you read the full abstract.

Why the hell should I, as a reviewer, want to volunteer? I already got 100 invitations that I had to decline!

Well, fair enough, here is the main reason:

Because you believe being proactive helps speed up the process and you are interested in making publishing as fast as possible. Matching reviewers’ interests and availability this way is faster than sending invitations one by one to people the editor thinks may be interested (and for availability, the editor cannot even make a guess).

Some extra reasons:

– Timing: Because you received 10 invitations to review last month, when you had that grant deadline and couldn’t accept any, and now that you have “time” and want to review, the invitations don’t come.

– Interests: Because you only receive invitations to review things related to your past work, but you actually want to review things related to your current interests.

– Get in the loop: Because you are finishing your PhD and want to gain experience reviewing, but you don’t get the invitations yet.

– Because you want the “token” that some journals give in appreciation (e.g. Ecology Letters gives you a free subscription for reviewing for them).

– Because you want to submit your work to a given journal and want to see first hand how its review process works.

So, is this going to work? I don’t know, but if a few editors start using it, the hashtag #ProactiveReviewers can become one more tool. Small changes can be powerful.

Signing reviews pays off (and on sharing good and bad news)

Quick post to share an awesome experience I had today. I received an email from an author whose paper I had just reviewed. The paper was rejected. To my surprise, it was a “thank you” email. I feel I have to quote it; I hope that is ok…

I write to thank you for all the comments and suggestions. They have been extremely helpful in improving the quality of the manuscript and in calling our attention to previously unnoticed weaknesses.


I have been signing my reviews for a couple of years now. So far I have had one “thank you” letter and zero angry letters. If I didn’t convince you before, are you convinced to sign your reviews now?

On a side note, I realize I tend to share the good news but not always the bad news. However, we should do that too. Twitter and blogs work a bit like an empathy box, and it is good to share cool new papers and experiences, but it is also good to share rejections (yes, for example, last week Proc B rejected my paper without review) or failed experiments (the aphids that were supposed to be my herbivory treatment were eaten by coccinellids), especially to show PhD students that everyone has ups and downs and struggles to do science. Now I have to go and try to fix the aphid issue…


Peer review, by the numbers

We know it: the system is saturated, but what are we doing about it? Here are some numbers from 4 journals I have recently reviewed for and published in (or tried to publish in).

Journal      Time given to me to complete the review     Time to first decision on my ms
PNAS         10 days                                     > 3 months
PLoS ONE     10 days                                     2 months
Ecol Lett    20 days                                     2 months
GCB          30 days                                     2 months

I think most reviewers do return their reviews on time (or almost on time), and editors handle manuscripts as fast as possible, so where are we losing the time? In finding the reviewers! In my limited experience at J Ecol, I have to invite 6-10 reviewers to get two to accept, and that implies a delay of at least 15 days at best. And note that all the journals above are leading journals, so I don’t want to know how long it takes at a low-tier journal.

However, the positive side is that there are people willing to review all these papers. Seriously, there are a lot of potential reviewers who like to read an interesting paper on their topic, especially if they get some reward other than being the first to know about that paper. So I see two problems: which rewards can we offer, and how do we efficiently find the people interested in reviewing a given paper?

1) Rewards: Yes, I love reviewing, I learn, and I feel engaged with the community, but it also takes a lot of time. However, a spoonful of sugar helps the medicine go down. I don’t want money; I want to feel appreciated. For example, Ecol Lett offers you a free subscription for 3 months, and GCB a free color figure in your next accepted ms (assuming you manage to get one accepted). I am sure other options are out there, including some fun rewards, like “the person with the most reviews in a year wins a dinner with the chief editor at the next ESA/Intecol meeting”, to give a silly example. Recognition is another powerful reward, but more on that in the next item.

2) Interest matching: Rather than a blind guess from the editor about who will be interested in reviewing a paper, we should be able to match interests. Can we adapt an internet dating system, built for finding a suitable partner, to find a suitable reviewer? As an editor, I would love to see which reviewers with “my interests” are “single” (i.e. available) at this moment. Why sign up as a reviewer? Maybe because you want the free subscription to Ecol Lett, or you are dying for that dinner with Dr. X. Also, by making your profile and activity public, it is easy to track your reputation as a reviewer (and of course you can put your reputation score in your CV). Identifying cheaters in the system (people who submit papers but don’t review) will also be easy, and new PhD students can enter the game faster. Any entrepreneur who wants to develop it? A toy sketch of the kind of matching I have in mind is below.
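
Just to make the idea concrete, here is a toy sketch with made-up reviewer names and keywords (this is not a real system or API), ranking the available reviewers by how many keywords they share with a manuscript:

#toy interest-matching sketch: reviewers and keywords are invented for illustration only
reviewers <- list(
  ana   = list(available = TRUE,  keywords = c("pollination", "networks", "bees")),
  ben   = list(available = FALSE, keywords = c("phylogenetics", "birds")),
  carla = list(available = TRUE,  keywords = c("networks", "nestedness", "null models"))
)

ms_keywords <- c("pollination", "networks", "nestedness")

#score each "single" (available) reviewer by the number of shared keywords
scores <- sapply(reviewers, function(r) {
  if (!r$available) return(NA)
  length(intersect(r$keywords, ms_keywords))
})

#rank the available candidates, best match first
sort(scores[!is.na(scores)], decreasing = TRUE)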

While there is still a lot of bad advice out there that contributes to saturating the system, other models to de-saturate it are possible (PubCreds are another awesome idea). I am looking forward to seeing how it all evolves.

Ramón y Cajal’s advice

This post has two purposes: first, to celebrate that I was awarded a RyC fellowship to go back to Spain, which is very exciting; second, to recommend to everyone Ramón y Cajal’s advice for a young researcher [PDF here].

It was written in the 1920s and is surprisingly modern. He makes a strong argument for letting the data speak for your science, and he makes some very relevant points against the inclusion of honorary authors. I also love his steps for writing a paper:

(1) Have something to say, (2) say it, (3) stop once it is said, and (4) give the article a suitable title and order of presentation.

He goes a little too far in arguing that hard work can substitute for talent, but I agree that working hard (i.e. not expecting discoveries to come easily) is good advice. Putting that together with his advice on how to criticise others’ work without hurting anyone’s feelings (i.e. always acknowledging the good points first), I can summarise it with a quote borrowed from my father: “work hard and be nice to people”. From my own experience, I recommend that everyone maximize the feeling that science is a big community of helpful people with a common purpose rather than a competition among researchers.

The advice for Spaniards (how to do science from a country at the back of the queue in scientific production and with very limited funding in the 1920s) is not as relevant nowadays, but I am afraid we will have to apply some of his advice on that front again soon, if things keep going this way.

I don’t agree with everything. For example, I think working in groups and establishing collaborations is essential to getting the most out of our imagination and talent, rather than working alone for long hours. I also find the advice he gives on finding an appropriate wife funny, and it may even look a bit offensive nowadays, although the bottom line is quite true: find someone who understands you!

The last thing I want to highlight is that I love how he conveys the ideal of a scientist as a noble pursuer of the truth, unbiased, humble, honorable, almost like a knight out of a tale. But I’ll let you read the rest. Enjoy.