I have a guest post on the Practical Data Management blog

Quick note to say I am very glad to have a guest post on an awesome blog about data management, a neglected topic that affects all scientists. The blog is also quite funny, bringing some glamour to the art of data processing. Thanks, Christie, for inviting me to contribute!

The post is about style; check it out here: http://practicaldatamanagement.wordpress.com/2014/02/24/guest-post-about-style/

Teams vs. Lead authors

If you read my last post, you know I am experimenting with working in teams. I tend to think in graphs, so here is the conclusion I have reached so far:

[Figure: satisfaction vs. competence for teams and for lead authors]

By “Satisfaction” I mean the satisfaction of the leader or of the team, and it involves the quality of the paper, how much fun we had writing it, how innovative it is, etc.*… By “Competence” I likewise mean the competence of the lead author or of the team regarding the topic, stats, ideas… While for a lead author a lot depends on being good at integrating coauthors’ feedback, for teams it also involves getting along really well. This graph is only a feeling, but I think that while teams can accomplish better results when they work hard, a single author is more likely to have moderate success even when the overall performance is low.

I think good teams need time to really get along. I also think working in the same space helps, but with email, Twitter, etc., maybe it is not that important anymore. I also think 3-4 people is the key number for working as a team, as in my opinion it is not realistic to have more than 5 people fully involved without arguments breaking out.

Anyone with experiences to share?

*Yes, “etc.” includes impact factor.

Are the tools we use shaping the way we collaborate?

This is the first of some thoughts about collaboration. I am quite convinced that working in teams enhances creativity, is more fun, and is more productive, but it is not always straightforward how to get the most out of our collaborations. Recently, I started wondering whether the tools we use shape the way we collaborate. Let me give an example: the typical use of track changes (in Word, OpenOffice, or whatever you want) predisposes you to have a lead author “accepting” or “rejecting” other people’s changes. On the other hand, if you use a Git-style workflow, the system only shows you where the changes are in the document, and (at least for me) kind of assumes you will accept them (if you don’t spot anything wrong). Don’t stop reading: this is not a typical “I hate Word” post, and the next examples all use a text processor.

What I am trying to say is that if you want a lead author supervising everything, you should use track changes, but when you aim for a more equal contribution, where all team members are expected to come to an agreement, track changes works against the flow. I don’t think it is only a matter of how you prefer to see the changes; it actually (unconsciously) affects people’s feelings and behaviours. For example, as a non-lead author, you don’t feel the project is as much yours as you should, and you are probably tempted to point out where the project needs to be improved rather than improving it yourself. This feeling of “the lead author will check whether what I am editing reads ok” invites sloppier edits. But in a team where no one formally accepts your changes, you are more likely to work until your changes read perfectly.

I’ve been experimenting with this in a couple of places. Rachael and I agreed not to use track changes for our F1000 evaluations. Those are short pieces and usually only need a couple of revision rounds between us (now in .txt format, way lighter than a Word file*). It works perfectly. I am also co-leading a paper where we tried not to use track changes for writing the discussion. It only half worked there. At the beginning we didn’t agree on the structure, so we rewrote each other’s versions a lot, and that felt time-consuming. I recognize it is hard to let go of control over something you wrote. However, in the end I am positive about it, as the discussion is now a real (good) mix of our ideas and styles. I have to say that we were working in the same office, which helps a lot to solve questions in real time.

Where am I going with this? I don’t know, and this post is already lengthier than usual, so I’ll write about the pros and cons of teams vs. lead authors in a couple of days.

———-
* Ok, here is my “I hate Word” rant.

Dear Journal

As a reviewer, you should be allowed to answer review requests using the same journal slang. Here is my version, adapted to Science:

“Thank you for inviting me to review the manuscript [Insert title here]. Competition for my time allocated to review is becoming increasingly fierce; I currently receive close to 30 requests a year, but I am only able to review roughly 12. The topic is certainly in my area of expertise and the analysis is interesting; however, it was given a lower priority ranking than others. Unfortunately, this means that I will not be able to review it in depth. My decisions are made on the basis of current workload as well as interest and significance. Due to my time limitations, I must reject excellent, exciting, and potentially important papers every month. Thus, my inability to review this work should not be taken as a direct comment on its quality or importance.” *

This is clearly ironic, and highlights the pressure to find reviewers, but honestly, I feel sorry every single time I have to say no to a review request, and I always want to write back explaining why I can’t this time.

*The acceptance to review version can also be quite interesting.

Book chapters vs Journal papers

I was offered the chance to write a book chapter (a real one, not for a predatory publisher) and I asked my lab mate what she thought about it, given that time spent writing book chapters is time I am not writing the papers in my queue. She kindly replied, but I already knew the answer because, all in all, we share an office, we are both postdocs on the same research topic, and in general we have a similar background. Then I asked my other virtual lab mates on Twitter and, as always, I got a very stimulating diversity of opinions, so here I post my take-home message from the discussion.

Basically there are two opinions. One is “book chapters don’t get cited” (link via @berettfavaro, and others shared similar stories with recommendations not to waste time there). However, quite a few other people jumped in to defend that books are still well read. Finally, some people gave advice on what to write about.

So, I agree that books don’t get cited, but I also agree that (some) books get read. In fact, I myself read quite a lot of science books (Julie Lockwood’s Avian Invasions is a great in-depth book on a particular topic, and Cognitive Ecology of Pollinators, edited by Chittka and Thomson, is a terrific compendium of knowledge merging two amazing topics). However: I don’t cite books.

So if you want to be cited, do not write a book chapter. If what you have to say fits into a review or a research article, don’t write a book chapter. But if you have something to say for which papers are not the perfect fit (e.g. providing a historical overview of a topic, or speculating about merging topics), then write a book chapter! It will also look nice on your CV.

Finally, some people made a fair point about availability, a thing to take into account:

@ibartomeus I’ve done 3 this year and I’m concerned about future accessibility. In my field, books are getting expensive too, who buys them?

— Dr Cameron Webb (@Mozziebites) November 8, 2013

In summary:

  • Book chapters are not papers.
  • They won’t get cited, but will get read. However…
  • Make sure your publisher is well known (and also sells PDF versions / allows preprints on your web page).
  • For early-career researchers, one or two book chapters can give you credit, but remember that you will be evaluated mainly on papers, so keep the ratio of books to papers low.

PS: Yes, I will write it!

Science at the speed of light

Maybe it is not going that fast, but at least at the speed of R. And R is pretty quick. This has pros and cons. I think that understanding the drawbacks is key to maximizing the benefits of speed, so here are a few examples.

I have a really awful Excel file with a dozen sheets calculating simple diversity indexes and network parameters from my dissertation. I also pasted in there the output of Atmar and Patterson’s Nestedness Temperature Calculator (an .exe program!) and of the MATLAB code that Jordi Bascompte kindly sent me to run his nestedness null model. I also used Pajek to plot the networks and calculate some extra indices. It took me at least a year to perform all those calculations and, believe me, it would take me another year to reproduce those steps. That was only 6 years ago. Now I can produce nicer plots and calculate more indexes than I want in less than 5 minutes using the bipartite package, and yes, it is fully reproducible. On the other hand, back then I really understood what I was doing, while running bipartite is completely black-boxy for most people.
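As an illustration, here is a minimal sketch of that 5-minute workflow (a hedged example, assuming the bipartite package is installed; Safariland is an example plant-pollinator network shipped with the package, not my dissertation data):

```r
library(bipartite)

data(Safariland)            # example plant-pollinator network bundled with bipartite
plotweb(Safariland)         # the kind of network plot I used to build in Pajek
networklevel(Safariland,    # network-level indices, including nestedness, in one call
             index = c("nestedness", "connectance"))
```

One call to networklevel() replaces the Excel sheets, the .exe calculator, and the Pajek indices, and the script itself is the reproducible record of what was done.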

Last year I also needed a plant phylogeny to test phylogenetic diversity among different communities. I was quite impressed to find the very useful Phylomatic webpage. I only had to prepare the data in the right format and get the tree. Importing the tree into R proved challenging for a newcomer, and I had to tweak the tree in Mesquite beforehand. So yes, time-consuming and not reproducible, but I thought it was an extremely fast and cool way to get phylogenies. Just one year after that, I can do all of it from my R console thanks to the rOpenSci people (package taxize). Again, faster and easier, but I also need less knowledge of how that phylogeny is built. I am attaching the simple workflow I used below, as it may be useful. Thanks to Scott Chamberlain for advice on how to catch the errors in the retrieved family names.

library(taxize)

# vector with my species
spec <- c("Poa annua", "Abies procera", "Helianthus annuus")

# prepare the data in the Phylomatic format (family/genus/genus_species),
# retrieving the family name; `spp` avoids masking the base function `names`
spp <- lapply(spec, itis_phymat_format, format = "isubmit")

# entries whose family lookup failed start with "na/"; list them,
# then fill those in manually
spp[grep("^na/", spp, value = FALSE, perl = TRUE)]
# spp[x] <- "family/genus/genus_species/"  # enter those manually

# get and plot the tree
tree <- phylomatic_tree(taxa = spp, get = "POST", taxnames = FALSE, parallel = FALSE)
tree$tip.label <- capwords(tree$tip.label)
plot(tree, cex = 0.5)

Finally, someone told me he had found an old professor’s lab notebook with schedules of daily tasks (sorry, I am terrible with details). The time slot booked to perform an ANOVA by hand was a full day! In that case, you really have to think very carefully beforehand about which analysis you want to run. Nowadays speed is not an issue for most analyses (but our students will still laugh at our slow R code in 5 years!). Speed can help advance science, but with great power comes great responsibility. Hence, it is now more necessary than ever to understand what we do and why we do it. I highly recommend reading the recent discussions on the use of sensible default values or on the problem of increasing researcher degrees of freedom if you are interested in this topic.
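To make the contrast concrete, that day-long hand calculation is now a couple of lines of base R (sketched here on R’s built-in npk factorial dataset, not the professor’s original data):

```r
# Factorial ANOVA on the built-in npk agricultural dataset (base R, datasets package);
# the whole fit runs in milliseconds rather than a full day by hand.
fit <- aov(yield ~ block + N * P * K, data = npk)
summary(fit)
```

Which is exactly the point: the computation is free now, so the careful thinking that used to be forced on us by the cost of a full day per ANOVA has to come from us instead.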

Using twitter to streamline the review process

[Update: Peerage of Science (and I) have started to use the hashtag #ProRev]

You know I am worried about the current status of the review process, mainly because it is one of the pillars of science. There are tons of things we can do in the long run to enhance it, but I came up with a little thing we can do right now to complement the actual process. The basic idea is to give reviewers the opportunity to be proactive and state their interest in reviewing a paper on a given topic when they have the time. How? Via Twitter. Editors (as I will be doing from @iBartomeus) can ask for it using a hashtag (#ProactiveReviewers). For example:

“Anyone interested in reviewing a paper on this cool subject for whatever awesome journal? #ProactiveReviewers”

If you are interested and have the time, just reply to the tweet, or send me an email/DM if you are concerned about privacy.

The rules: it is not binding. 1) I can choose not to send the paper to you, for example if there is a conflict of interest. 2) You can choose not to accept it once you read the full abstract.

Why the hell should I, as a reviewer, want to volunteer? I already got a hundred invitations that I had to decline!

Well, fair enough; here is the main reason:

Because you believe being proactive helps speed up the process and you are interested in making publishing as fast as possible. Matching reviewers’ interests and availability will be done faster this way than by sending invitations one by one to people the editor thinks may be interested (and for availability there is not even a guess).

Some extra reasons:

– Timing: Because you received 10 invitations to review last month, when you had that grant deadline and couldn’t accept any, and now that you have “time” you want to review, but the invitations don’t come.

– Interests: Because you only receive invitations to review work related to your past research, but you actually want to review things related to your current interests.

– Get in the loop: Because you are finishing your PhD and want to gain experience reviewing, but you don’t get the invitations yet.

– Because you want the “token” that some journals give in appreciation (e.g. Ecology Letters gives you a free subscription for reviewing for them).

– Because you want to submit your work to a given journal and want to see how its review process works first-hand.

So, is this going to work? I don’t know, but if a few editors start using it, the hashtag #ProactiveReviewers can become one more tool. Small changes can be powerful.