Dear Journal

As a reviewer, you should be allowed to answer review requests using the same boilerplate language journals use. Here is my version, adapted to Science:

“Thank you for inviting me to review the manuscript [Insert title here]. Competition for my allocated reviewing time is becoming increasingly fierce; I currently receive close to 30 requests a year, but I am only able to review roughly 12. The topic is certainly in my area of expertise and the analysis is interesting; however, it was given a lower priority ranking than others. Unfortunately, this means that I will not be able to review it in depth. My decisions are made on the basis of current workload as well as interest and significance. Due to my time limitations, I must reject excellent, exciting, and potentially important papers every month. Thus, my inability to review this work should not be taken as a direct comment on its quality or importance.” *

This is clearly ironic, and highlights the pressure to find reviewers, but honestly, I feel sorry every single time I have to say no to a review request, and I always want to write back explaining why I can’t this time.

*The acceptance-to-review version can also be quite interesting.

Book chapters vs Journal papers

I was invited to write a book chapter (a real one, not for a predatory publisher), and I asked my lab mate what she thought about it, given that time spent writing book chapters is time I am not spending on the papers in my queue. She kindly replied, but I already knew the answer because, all in all, we share an office, we are both postdocs on the same research topic, and in general we have a similar background. Then I asked my other virtual lab mates on Twitter and, as always, I got a very stimulating diversity of opinions, so here I post my take-home message from the discussion.

Basically, there are two opinions. One is “Book chapters don’t get cited” (link via @berettfavaro, but others shared similar stories, with recommendations not to waste time there). However, other people jumped in to defend that books are still widely read. Finally, some people gave advice on what to write about.

So, I agree that books don’t get cited, but I also agree that (some) books get read. In fact, I myself read quite a lot of science books (Julie Lockwood’s Avian Invasions is a great in-depth book on a particular topic, and Cognitive Ecology of Pollinators, edited by Chittka and Thomson, is a terrific compendium of knowledge merging two amazing topics). However: I don’t cite books.

So if you want to be cited, do not write a book chapter. If what you have to say fits into a review or a research article, don’t write a book chapter. But if you have something to say for which papers are not the perfect fit (e.g. providing a historical overview of a topic, or speculating about merging topics), then write a book chapter! It will also look nice in your CV.

Finally, some people made a fair point about availability, something to take into account:

@ibartomeus I’ve done 3 this year and I’m concerned about future accessibility. In my field, books are getting expensive too, who buys them?

— Dr Cameron Webb (@Mozziebites) November 8, 2013

In summary:

  • Book chapters are not papers.
  • They won’t get cited, but will get read. However…
  • Make sure your publisher is well known (and also sells PDF versions / allows preprints on your website).
  • For early-career researchers, one or two book chapters can give you credit, but remember that you will be evaluated mainly on papers, so keep the book-to-paper ratio low.

PS: Yes, I will write it!

Science at the speed of light

Maybe it is not going that fast, but at least at the speed of R. And R is pretty quick. This has pros and cons. I think that understanding the drawbacks is key to making the most of that speed, so here are a few examples.

I have a really awful Excel file with a dozen sheets calculating simple diversity indices and network parameters from my dissertation. I also pasted in there the output of Atmar and Patterson’s nestedness calculator (an .exe program!) and of the MATLAB code that Jordi Bascompte kindly sent me to run his nestedness null model. I also used Pajek to plot the networks and calculate some extra indices. It took me at least a year to perform all those calculations and, believe me, it would take me another year to reproduce those steps again. That was only 6 years ago. Now I can make nicer plots and calculate more indices than I want in less than 5 minutes using the bipartite package, and yes, it is fully reproducible. On the other hand, back then I really understood what I was doing, while running bipartite is a complete black box for most people.
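
For illustration, here is a minimal sketch of that five-minute bipartite workflow, using the Safariland example dataset that ships with the package (your own data would be a plant x pollinator interaction matrix):

library(bipartite)

#example plant-pollinator interaction matrix shipped with the package
data(Safariland)

#plot the bipartite network
plotweb(Safariland)

#network-level indices (connectance, nestedness, NODF, etc.) in one call
networklevel(Safariland)

#species-level indices for each plant and pollinator
specieslevel(Safariland)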

Last year I also needed a plant phylogeny to test phylogenetic diversity among different communities. I was quite impressed to find the very useful Phylomatic webpage. I only had to prepare the data in the right format and get the tree. Importing the tree into R proved challenging for a newcomer, and I had to tweak the tree in Mesquite beforehand. So yes, time-consuming and not reproducible, but I thought it was an extremely fast and cool way to get phylogenies. Just one year after that, I can do all of it from my R console thanks to the rOpenSci people (the taxize package). Again, faster and easier, but I also need less knowledge of how that phylogeny is built. I am attaching the simple workflow I used below, as it may be useful. Thanks to Scott Chamberlain for advice on how to catch the errors in the retrieved family names.

library(taxize)

#vector with my species
spec <- c("Poa annua", "Abies procera", "Helianthus annuus")

#prepare the data in Phylomatic's format, "family/genus/genus_species"
#(itis_phymat_format retrieves the family name from ITIS)
names <- lapply(spec, itis_phymat_format, format = 'isubmit')

#species whose family could not be retrieved start with "na/";
#I still have to fix those manually
names[grep("^na/", unlist(names), perl = TRUE)]
#names[x] <- "family/genus/genus_species/" #enter those manually

#get and plot the tree
tree <- phylomatic_tree(taxa = names, get = "POST", taxnames = FALSE, parallel = FALSE)
tree$tip.label <- capwords(tree$tip.label)
plot(tree, cex = 0.5)

Finally, someone told me he had found an old professor’s lab notebook with schedules of daily tasks (sorry, I am terrible with details). The time slot booked to perform an ANOVA by hand was a full day! In that case, you really have to think very carefully beforehand about which analysis you want to run. Nowadays speed is not an issue for most analyses (but our students will still laugh at our slow R code in 5 years!). Speed can help advance science, but with great power comes great responsibility. Hence, it is now more necessary than ever to understand what we do and why we do it. If you are interested in this topic, I highly recommend reading the recent discussions on the use of sensible default values or on the problem of increasing researcher degrees of freedom.
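
To put that ANOVA anecdote in perspective, here is a minimal sketch, using the npk factorial dataset that ships with base R, of roughly the kind of analysis that once filled a day of hand calculation; it now runs in a fraction of a second:

#classic NPK factorial field experiment, included with base R
data(npk)

#blocked factorial ANOVA; summary() prints the familiar ANOVA table
fit <- aov(yield ~ block + N * P * K, data = npk)
summary(fit)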

Using Twitter to streamline the review process

[Update: Peerage of Science (and I) have started to use the hashtag #ProRev]

You know I am worried about the current state of the review process, mainly because it is one of the pillars of science. There are tons of things we can do in the long run to enhance it, but I came up with a little thing we can do right now to complement the current process. The basic idea is to give reviewers the opportunity to be proactive and state their interest in reviewing a paper on a given topic when they have the time. How? Via Twitter. Editors (as I will be doing from @iBartomeus) can ask using a hashtag (#ProactiveReviewers). For example:

“Anyone interested in reviewing a paper on this cool subject for whatever awesome journal? #ProactiveReviewers”

If you are interested and have the time, just reply to the tweet, or send me an email/DM if you are concerned about privacy.

The rules (it is not binding): 1) I can choose not to send the paper to you, for example if there is a conflict of interest. 2) You can choose not to accept it once you read the full abstract.

Why the hell should I, as a reviewer, want to volunteer? I already get 100 invitations that I have to decline!

Well, fair enough; here is the main reason:

Because you believe being proactive helps speed up the process and you are interested in making publishing as fast as possible. Matching reviewers’ interests and availability this way will be faster than sending invitations one by one to people the editor thinks may be interested (and availability cannot even be guessed).

Some extra reasons:

– Timing: Because you received 10 invitations to review last month, when you had a grant deadline and couldn’t accept any, and now that you have “time” and want to review, the invitations don’t come.

– Interests: Because you only receive invitations to review things related to your past work, but you want to review things related to your current interests.

– Get in the loop: Because you are finishing your PhD and want to gain reviewing experience, but you don’t get invitations yet.

– Because you want the “token” that some journals give in appreciation (e.g. Ecology Letters gives you a free subscription for reviewing for them).

– Because you want to submit your work to a given journal and want to see first-hand how its review process works.

So, is this going to work? I don’t know, but if a few editors start using it, the hashtag #ProactiveReviewers can become one more tool. Small changes can be powerful.

Signing reviews pays back (and about sharing good and bad news)

Quick post to share an awesome experience I had today. I received an email from an author whose paper I had just reviewed. The paper was rejected. To my surprise, it was a “thank you” email. I feel I have to quote it; I hope that is OK…

I write to thank you for all the comments and suggestions. They have been extremely helpful in improving the quality of the manuscript and in calling our attention to previously unnoticed weaknesses.

 

I have been signing my reviews for a couple of years now. So far I have had one “thank you” letter and zero angry letters. If I didn’t convince you before, are you convinced to sign your reviews now?

On a side note, I realize I tend to share the good news, but not always the bad. However, we should share those too. Twitter and blogs work a bit like an empathy box, and it is good to share cool new papers and experiences, but it is also good to share rejections (yes, for example, last week Proc B rejected my paper without review) or failed experiments (the aphids that were supposed to be my herbivory treatment were eaten by coccinellids), especially to show PhD students that everyone has ups and downs and struggles to do science. Now I have to go try to fix the aphid issue…

Peer review, by the numbers

We know it: the system is saturated. But what are we doing about it? Here are some numbers from four journals I have recently reviewed for and published in (or tried to publish in).

Journal     Time given to me to complete the review     Time to first decision on my ms
PNAS        10 days                                     > 3 months
PLoS ONE    10 days                                     2 months
Ecol Lett   20 days                                     2 months
GCB         30 days                                     2 months

I think most reviewers do return the ms on time (or almost on time), and that editors handle mss as fast as possible, so where are we losing the time? In finding the reviewers! In my limited experience at J Ecol, I have to invite 6-10 reviewers to get two to accept, and that implies at least a 15-day delay at best. And note that all the above are leading journals, so I don’t want to know how long it takes at a low-tier journal.

However, the positive side is that there are people willing to review all these papers. Seriously, there are a lot of potential reviewers who like to read an interesting paper on their topic, especially if they get some reward other than being the first to know about that paper. So I see two problems: which rewards can we offer, and how do we efficiently find the people who are interested in reviewing a given paper?

1) Rewards: Yes, I love reviewing; I learn and I feel engaged with the community, but it also takes a lot of time. However, a spoonful of sugar helps the medicine go down. I don’t want money, I want to feel appreciated. For example, Ecol Lett offers you a free subscription for 3 months, and GCB a free color figure in your next accepted ms (provided you manage to get one accepted). I am sure other options are out there, including some fun rewards; to give a silly example, “the person with the most reviews in a year wins a dinner with the chief editor at the next ESA/INTECOL meeting”. Recognition is another powerful reward, but more on that in the next item.

2) Interest matching: Rather than the editor blindly guessing who will be interested in reviewing a paper, we should be able to maximize the match of interests. Can we adapt an internet dating system, designed to find a suitable partner, to find a suitable reviewer instead? As an editor, I would love to see which reviewers with “my interests” are “single” (i.e. available) at the moment. Why sign up as a reviewer? Maybe because you want the free subscription to Ecol Lett, or you are dying for that dinner with Dr. X. Also, by making your profile and activity public, it is easy to track your reputation as a reviewer (and of course you can put your reputation score in your CV). Identifying cheaters in the system (those who submit papers but don’t review) will also be easy, and new PhD students can enter the game faster. Does any entrepreneur want to develop it?

While there is still a lot of bad advice out there that contributes to saturating the system, other models to de-saturate it are possible (PubCreds are another awesome idea). I am looking forward to seeing how it all evolves.

Ramón y Cajal's advice

This post has two purposes: first, to celebrate that I was awarded a RyC fellowship to go back to Spain, which is very exciting; second, to recommend to everyone Ramón y Cajal’s advice for a young researcher [PDF here].

It was written in the 1920s and is surprisingly modern. He makes a strong argument for letting the data speak for your science, and he makes some very relevant points against the inclusion of honorary authors. I also love his steps for writing a paper:

(1) Have something to say, (2) say it, (3) stop once it is said, and (4) give the article a suitable title and order of presentation.

He is a little too harsh about substituting hard work for talent, but I agree that working hard (i.e. not expecting discoveries to come easily) is good advice. Putting that together with his advice on how to criticise others’ work without hurting any feelings (i.e. always acknowledging the good points first), I can summarise it with a quote borrowed from my father: “work hard and be nice to people”. From my own experience, I recommend that everyone maximize the feeling that science is a big community of helpful people with a common purpose rather than a competition among researchers.

The advice for Spaniards (how to do science in a country at the tail end of scientific production and with very limited funding, in the 1920s) is not as relevant nowadays, but I am afraid we will have to apply some of his advice on that front soon, if things keep going this way.

I don’t agree with everything. For example, I think working in groups and establishing collaborations is essential to get the most out of our imagination and talent, rather than working alone for long hours. I also find the advice he gives on how to find an appropriate wife funny, and it may even look a bit offensive nowadays, although the bottom line is quite true: find someone who understands you!

The last thing I want to highlight is that I love how he conveys the ideal of the scientist as a noble pursuer of truth: unbiased, humble, honorable, almost a kind of knight out of a tale. But I’ll let you read the rest. Enjoy.

How I deal with work overload (without working too much)

I have several posts I would like to write, but this month has been very hectic. That encouraged me to revisit how I deal with my tendency to overcommit to new projects while still meeting deadlines and not working on weekends. The result is that I am posting this instead of those other posts. See why below.

1) Externalize fun work to outside the office. When I arrive home I play with my daughter, so there is no way I can do actual computer work there. However, most of my work involves thinking, and I can think in many places. My favourites are on my bike on the way to work or while running. I don’t only think about work then; I also wonder about other stuff or picture myself in a Tour de France time trial. But I find that some fun problems are better solved in that context. Why? Because if you don’t come up with an idea in 5 minutes while sitting in front of your computer you feel desolate, but it is OK not to have ideas if you are already doing something (e.g. running). Also, because on your computer you tend to try (hands on) the first thing your intuition tells you will work. That way it is easy to get lost in the details or do overcomplicated things that won’t work in the end. While running, you are forced to develop all the necessary steps and think abstractly about whether they will work, discarding bad ideas much faster. Plus, blood is pumped to your brain continuously, boosting your potential (or so I hope). But take notes as soon as you get out of the shower!

2) Minimum effort rule. I usually start with the task that requires the least time to complete. That way I can take it off my list and maximize the chances of moving forward on any project. If I can solve something (e.g. a review or a simulation for a coauthor) in 1 to 4 hours, I just do it and have it done fast. Answering a question by email? I’ll do it ASAP and archive it. These short tasks are usually related to collaborations, which also makes those people happy and keeps the project moving.

3) Block time. The minimum effort rule fails when you start spending most of your time completing short tasks, so there is no time left to work on long, daunting (but exciting) projects. So I decided to block at least 2-3 full mornings or days a week to work on those kinds of long-term projects. No answering emails, no improvised meetings, no multitasking in the blocked time slots.

I thought about this post on my bike ride this morning. I knew it would be written fast, so I did it as soon as I had some spare time, but not this morning; this morning was blocked for some other analysis.

I’m still figuring out the process, so comment on what works for you!

How to decide where to submit your paper (my two cents)

Following Jeremy Fox’s interesting blog post, and follow-ups from at least three other people (here, here and here), here are my thoughts on where to submit your paper. In a nutshell, I think times are changing. If you are in a strong position, you can bet on the model you think is best. But if you are not settled yet, I think it is wise to strike a compromise: publish some old-school papers based on journal prestige, but also place your bet by submitting other manuscripts to faster, open access journals. That way you can defend your position in a variety of situations.

Following Jeremy’s points:

  • Aim as high as you reasonably can. Agreed, but “high” is a vague term. Impact factor is not a reliable measure and “prestige” is difficult to assess. Like Jan, I think the real difference is between the three top interdisciplinary journals, the top journals of your field, and then everything else. Within these categories, I don’t worry anymore about the journal in terms of “high impact” (OA is discussed below).
  • Don’t just go by journal prestige; consider “fit”. I do think fit is important, not in terms of people finding your paper (even though lots of researchers keep using the TOCs of a few well-known journals), but because having a type of journal (or reader) in mind helps you frame your article. For example, I’d expect different things from the same title in Am Nat than in Ecology.
  • How much will it cost? Important only if you don’t have the money.
  • How likely is the journal to send your paper out for external review? I liked Ethan’s advice on the importance of the speed of the process. By maximizing your chances of being sent out for review, not only can you accumulate citations faster, but you also reduce the amount of frustration.
  • Is the journal open access? Ideally, yes; this is very important for me. In reality, well, my projects rarely have the money to pay for it, so I end up not making my papers open.
  • Does the journal evaluate papers only on technical soundness? I think this model will replace all low-tier journals. I mainly write three types of papers: papers that I hope can make a great advance in ecology and that I would like to see in a top journal; papers that fill a specific niche, where I want to target the people working on that niche; and good papers that I think make a moderate contribution and that I want out there fast for people to read. The last type is ideal for journals that are open access and evaluate only technical soundness.
  • Is the journal part of a review cascade? Again, I completely agree with Ethan. In fact, I would love a model where papers are evaluated on technical soundness and then there is an “editor’s choice” or something like that.
  • Is it a society journal? I value supporting societies. But more important: is the publisher making a profit? Is copyright retained by the author? Society journals and other organisational journals (e.g. PLOS) have, from my point of view, the great advantage of returning the benefits to the community, and they usually require a licence to publish rather than a copyright transfer. It’s important for me to avoid, as much as possible, turning science into a business.
  • Have you had good experiences with the journal in the past? I don’t think that’s relevant.
  • Is there anyone on the editorial board who’d be a good person to handle your paper? I’ve never thought about that.

Extra stuff:

  • Publish in a diversity of journals: If you want to increase your readership, broaden the spectrum of journals you publish in: general ecology journals, more specialised journals, PLOS ONE-style journals. That will also help you gain experience with the system.
  • Listen to your feelings: Is there any journal you especially like (rationally or irrationally)? Forget the pros and cons. Publishing is hard, and it’s also important to indulge your whims.