I picked up a copy of The Economist last week, as is my habit when flying, and it happened to have a quarterly review of technology. There is a quite accurate story on microarrays that does a good job of explaining the technology for non-scientists.
One of the bits of the microarray story I had forgotten is retold there: how both the Affymetrix and Stanford groups pioneering microarrays had grant proposals which received truly dismal priority scores.
For those readers not steeped in academic science: when you ask various funding sources for money, your proposal is grouped with a bunch of related proposals. A panel of volunteers, called a study section, reviews the proposals and rates them. The best are given numeric scores, and these scores are used to decide which proposals will receive funding. Study sections also have some power to suggest changes to grants -- i.e. cuts -- and to make written critiques. Ideally these are constructive in nature, but such niceties are not always observed.
A commonly heard complaint is that daring grant proposals are not funded. Judah Folkman apparently has an entire office wallpapered with grant rejections for his proposals on soluble pro- and anti-angiogenic factors. Robert Langer apparently has a similar collection trashing his ideas for novel drug delivery methods, such as drug-releasing wafers to be embedded in brain tumors. Of course, both of these concepts have now been clinically validated, so they can gleefully recount these tales (I heard them at a Millennium outside speaker series I will dearly miss).
I've participated once (this summer) in a grant review study section and would love to comment on it -- but by rule what happens in Gaithersburg stays in Gaithersburg. There are good reasons for such secrecy, but it is definitely a double-edged sword. It has the potential to encourage both candor and back-stabbing. It certainly prevents any sort of systematic review of how study sections function and dysfunction.
What I think is a serious issue is that such grant review processes have little or no mechanism for selecting good judges and avoiding poor ones. Reviewers who torpedo daring, good proposals face no sanction, and those who champion heterodoxy get no bonus. It isn't obvious how you could build such a mechanism, so I won't propose a solution, but I wish one existed.
One wonders whether the persons who passed over microarrays regret their decisions or stand by them (and what got funded instead?). Do they even remember their role in retarding these technologies? If you could ask them now, would they say "Boy did I blow it!" or "Microarrays? Why ask about that passing fad?"