A few months ago I blogged enthusiastically about a paper in Science describing an approach to deorphaning enzymes in parallel. Two anonymous commenters were quite derisive, claiming that the chemistry for generating labeled metabolites in the paper was impossible. Now Science's editor Bruce Alberts has published an expression of concern, which cites worries over the chemistry as well as the authors' failure to post promised supporting data to their website and their changing stories as to how the work was done.
The missing supporting data hits a raw nerve. I've been frustrated on more than one occasion, whilst reviewing a paper, by being unable to access the supplementary data, and I've certainly encountered the same problem as a reader. As a reviewer I've sometimes meekly protested; in the future I resolve to consider this automatic grounds for "needs major revision". Even if the omission is an honest mistake, it means data the authors considered important is unavailable for consideration. In modern publications, with data that is either too large to print or simply incompatible with paper, "supplementary" data is frequently either central to the paper or certainly just off center.
This controversy also underscores a challenge posed by many papers, one I have faced as a reviewer. To be quite honest, I'm utterly unqualified to judge the chemistry in this paper -- but I feel quite qualified to judge many of the biological aspects. I have received for review papers posing this same dilemma: parts I can critique and parts I can't. The real danger is that the editor inadvertently picks reviewers who all share the same blind spot. Of course, in an ideal world a paper would always go to reviewers capable of vetting all parts of it, but with many multidisciplinary papers that is unlikely to happen. It also suggests rethinking the standard practice of assigning three reviewers per paper -- perhaps each topic area should be covered by three qualified reviewers (of course, the reviewers would need to honestly declare their blind spots -- and not at review deadline time, when it is too late to find supplementary reviewers!).
But it is a mistake to think that peer review can ever be a perfect filter on the literature. It just isn't practical to go over every bit of data with a fine-toothed comb. A current example illustrates this: a researcher has been accused of faking multiple protein structures. While some suspicion was raised when other structures of the same molecule didn't agree, the smoking gun is that the structures have systematic errors in how the atoms are packed. Is any reviewer of a structure paper really going to check all the atomic packing details? At some point, the best defense against scientific error and misconduct is to let the entire world scrutinize the work.
One of my professors in grad school had us first-year students go through a memorable exercise. The papers assigned one week were in utter conflict with each other. We spent the entire discussion time trying to finesse how they could both be right -- what was different about the experimental procedures and how issues of experiment timing might explain the discrepancies. At the end, we asked what the resolution was and were told, "It's simple -- one paper is a fraud." Once we knew this, we went back and couldn't believe we had believed any of it -- nothing in the paper really supported its key conclusion. How had we been so blind before? A final coda: the fraudulent paper is the notorious uniparental mouse paper -- and of course cloning mice turns out to actually be possible. Not, of course, by the methods originally published, and indeed at that time (the mid-1970s) it would have been well nigh impossible to actually prove that a mouse was cloned.
With that in mind, I will continue to blog here about papers I don't fully understand. That is one bit of personal benefit for me -- by exposing my thoughts to the world I invite criticism and will sometimes be shown the errors in my thinking. It never hurts to be reminded that skepticism is always useful, but I'll still take the risk of occasionally being suckered by P.T. Barnum, Ph.D. This is, after all, a blog and not a scientific journal. It's meant to be a bit noisy and occasionally wrong -- I'll just try to keep the mean on the side of being correct.