I won't claim to be a connoisseur of the cinema, but I enjoy a good movie. I don't get to the cinema much, but my Netflix queue should keep me busy for a long time.
Biotechnology doesn't show up much in the movies. The reason is simple: biotech isn't very cinematic. The drama is slow and not photogenic. Most movies with a biotech angle are science fiction, with the biotech not exactly wearing a white hat: think Gattaca or Jurassic Park.
Once in a while biotech shows up in a movie that isn't otherwise sci-fi. For example, in Family Business grandfather Sean Connery and father Dustin Hoffman try to convince their biologist son, Matthew Broderick (to whom I apparently bear a very slight resemblance -- twice strangers have told me I almost look like him), to abscond with a plasmid from his company. The lab settings, as I remember, looked pretty reasonable.
At the other end of the spectrum is Mission Impossible II, which had me out of breath from laughing, though I doubt this was the intent of the director. The evil biotech company devising some devious human virus has two facilities which Tom Cruise's character must raid. The first one is in an amazing downtown high-rise -- yeah, the VCs would spring for that. The second is a cave-like seaside complex, with dripping water & bats living in the eaves. Yes, the perfect place for propagating mammalian viruses via cell culture!
If you want to see some actual biotech space, though not fitted out for such, then go watch The Spanish Prisoner. The MacGuffin driving the plot is a secret formula -- whose very field of relevance is never mentioned -- which disappears & is chased through the rest of the movie. The company digs at the beginning of the movie are at One Kendall Square, just across the courtyard from my current workplace. If I'm not mistaken, the shoot was in the space which currently houses next-generation sequencing shop Helicos BioSciences. This was once Millennium space, a common history for much biotech space in Cambridge, and one group setting up there was familiar with its history & let me in on the secret. Alas, their plans for a Spanish Prisoner screening there were short-circuited by one of the first rounds of 'reshaping'.
Coincidentally, I first saw The Spanish Prisoner while flying to Europe on a Millennium business trip, though it would have been even more appropriate to see it at the cinema that is part of the One Kendall Square complex. We had the space then, though I was unaware of its history. I like the movie -- the dialogue has Mamet's distinctive rhythm but without (if I remember correctly) the torrents of foul language that characterize some of his other movies (though I do like Glengarry Glen Ross, which should be mandatory viewing before contemplating any real estate transaction!).
I'm sure this isn't a comprehensive survey of cinematic biotech. Anyone got any other favorites?
A computational biologist's personal views on new technologies & publications on genomics & proteomics and their impact on drug discovery
Wednesday, January 31, 2007
Monday, January 29, 2007
Shh! Fins in the making!
Last week's Nature contains an interesting article on fin development in sharks, rays and other cartilaginous fish. The article illustrates how science doesn't always get things right the first time, but rather sometimes approaches truth through successive attempts. It is also a case that illuminates the difference between the science of evolution and the pseudoscience that purports to compete with it.
The hedgehog (hh) gene was originally identified in Drosophila; the phenotype of hh mutant larvae suggested a hedgehog, and under the whimsical naming tradition of fly genetics, that is the name it was given. When later work searched for hh homologs in vertebrates, a whole family was found, and its members were named after various hedgehogs (Desert hedgehog, Indian hedgehog, etc.). With surely a mischievous smile, one was named after the video game character Sonic. As fate would have it, Sonic hedgehog (Shh) is the most studied of the bunch.
An important role for Shh is in the patterning of the vertebrate limb. While this pattern was being mapped across the limbed vertebrates, a curious thing happened: a previous Nature paper reported that the cartilaginous fishes, the earliest-branching lineage among living vertebrates with paired appendages, seemed to lack a dependence on Shh.
The new paper reverses the previous finding. The earlier work had looked only for an Shh message, but the new paper applies a full-court press to the problem.
First, they sequenced elements resembling Shh appendage-specific regulatory elements (ShAREs) from multiple cartilaginous species -- these are the DNA elements which drive Shh expression in the limbs. Both the sequence and position relative to Shh were found to be conserved. Second, appropriate expression of Shh was shown in fin buds from multiple species. Third, the developmental program in these buds was shown to respond to Shh or retinoic acid (a powerful developmental trigger) in a manner similar to the response observed in bony fish and other vertebrates.
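Conservation arguments like this are easy to sketch computationally: given putative enhancer sequences from two species, even a crude percent-identity check over an alignment flags whether an element is plausibly conserved. Here is a minimal sketch -- the "ShARE" sequences below are invented placeholders for illustration, not real data:

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity between two equal-length aligned sequences.

    Gap characters ('-') never count as matches; the denominator is
    the full number of alignment columns.
    """
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y and x != '-')
    return 100.0 * matches / len(a)

# Toy aligned 'enhancer' fragments (invented for illustration only)
mouse_share = "ACGTTGACCTGAAT-GCCA"
shark_share = "ACGTCGACCTGAATAGCCA"

print(f"identity: {percent_identity(mouse_share, shark_share):.1f}%")  # → identity: 89.5%
```

Real comparisons of non-coding elements use proper alignment tools and background models, of course; the point is only that "conserved in sequence and position" is a concrete, checkable claim.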
Nature recently raised a firestorm by publishing a pro-creationism letter from a prominent European creationist, which provoked a flurry of letters in response. Some of these letters supported the publication, on the grounds that it denied creationists the claim that they are censored, while others bemoaned Nature besmirching itself with pseudoscience.
The shark/ray fin paper illustrates neatly what makes evolution a science and creationism (or its sheep's clothing sibling, intelligent design) a pseudoscience. There is a lot of evidence from the fossil record and embryology that shark fins are developmental homologs of bony fish fins, bird wings, and our arms & legs. If this work had shown that the molecular program was entirely different, then the world would be turned upside down. A major inconsistency would exist between the classical evolutionary view and the molecular developmental view. While such inconsistencies have generally been resolved with the molecular side shifting more (as recently pointed out by Carl Zimmer), there always exists the possibility that the two will be irreconcilable. In that case, a serious crisis would exist for the modern evolutionary synthesis.
We can contrast this with intelligent design and creationism (hereafter ID/C). ID/C posits an unknown (ID) and/or unknowable (C) designer, who had total freedom in designing species. Therefore, not finding ShAREs or Shh-dependent development of shark fins would be no big deal; it would just be one more design choice. Indeed, there would be no particular reason to expect ShAREs of similar sequence and similar location. In contrast, the evolutionary view predicted the existence of conserved ShAREs and a conserved mechanism -- and without them it would be in serious trouble. Because ID/C is equally compatible with all possible evidential outcomes, it has zero predictive power -- and therefore zero true explanatory power. That which explains anything equally well actually explains nothing.
Who to sequence?
I am optimistic that the price of sequencing a ~3 Gb (e.g., human) genome will fall rapidly over the next decade, so that the idea of commissioning a genome sequence with personal funds will not be out of the question. Alternatively, having a genome run may become a popular giveaway, though it may be that only the rich can afford free (note to anyone trying to give me the same gift: I'll wear a barrel before giving up that prize).
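For what it's worth, the arithmetic behind "falling rapidly" is easy to play with. Assuming a steady halving time -- and the dollar figures below are made up for illustration, not actual sequencing prices -- one can ask how long until a whole genome hits a given price point:

```python
import math

def years_until(target_cost: float, current_cost: float, halving_years: float) -> float:
    """Years until cost reaches target, assuming it halves every halving_years."""
    return math.log2(current_cost / target_cost) * halving_years

# Hypothetical: $10M per genome today, cost halving yearly, aiming for $1,000
print(round(years_until(1_000, 10_000_000, 1.0), 1))  # → 13.3
```

The halving time is, of course, the whole game; sequencing costs have not historically followed a tidy Moore's-law curve, so treat this as a back-of-the-envelope toy.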
The question then becomes: would it be worth it? If I could get my genome sequenced, would I? Undoubtedly there will be a horde of companies offering to help me interpret the results, probably with the slavish devotion to scientific rigor which characterizes the contemporary nutritional supplement industry. Even if I got good advice, would I follow it? I know I should eat better & exercise more; would a 'bad' genotype really give me that gluteus kick I need? I'm not optimistic about that.
If some interesting biology could result, then that might be enough to convince me. But that doesn't seem likely. The family lore would suggest that I am a gemisch of various Northern European clans, which is certainly a type which has already been sequenced and will certainly get heavily sequenced going forward. Nope, nothing terribly exciting there.
My one form of regular exercise is walking; most work days I walk at least 2km in my commute. When I am not walking to work, I am rarely alone. My most common companion on my walks, now there is a genome worth sequencing.
My best friend is from Asian stock and has a significantly restricted bloodline. Her clan is reputed to have originated in Tibet, but it was in the court of the Chinese emperor that they gained fame, for they were royal greeters in the Forbidden City. With that lofty duty came restricted social opportunities, and so the clan stuck to themselves from a reproductive standpoint.
However, with the 20th century came a rejection of the old ways, and being associated with the emperor was no asset. So they tried to hide -- but how could they? Those many generations of non-intermingling had left a genetic footprint -- distinctive facial features, very fine hair, short stature. Despite the difficulty, some of the clan did escape and emigrate to safer parts of the world to start anew.
Now that is a genome worth sequencing! To have a shot at understanding how genetic variation translates into morphologic variation: that I could see springing some dough on. I can see it now, sitting down together with my laptop and poring over the results, trying to ponder which resulted in that distinctive familial face, which leads to the fine hair that knots without provocation, which prevented cartilage formation in her ears, and which puts that marked curl in her tail which wags so furiously when I get home.
Sunday, January 28, 2007
The Limits of Foresight
I've been asked, via a friend I made through this blog, to make predictions about 50 years from now. It's a daunting task, for as some great speaker noted, "Prediction is hard, especially about the future".
It is especially daunting given how badly past attempts have gone. World's fairs, pronouncements from futurists, science fiction writers: in general, their predictions have a high ratio of chaff to wheat. A few have been good at the game, but they are the exceptions.
It doesn't help to think of cases where predictions truly mean life or death, where failing to foresee something leads to horrible consequences. Saturday marked the 40th anniversary of one such tragedy, one which dominated the headlines for days & halted the U.S.'s second giant national technology project for nearly a year and a half.
The U.S.'s first great national technology project -- defined here as one requiring enormous resources, with efforts spread across the country -- leaped into the public eye with the obliteration of an entire city. The third was far more benign in intent, far more international in character, and one I attempted to play a part in (perhaps with little result). Its finish line was also far less clear, as finishing the genome became a question of successive approximations.
But that second project -- when it got started again, most of the world watched. Sadly, I may well have slept through it, but I slept a lot in those days. In July 1969, two men walked on the moon, stunning the world. And because of a lack of foresight by others, Gus Grissom was not one of them.
Grissom was in the first batch of American astronauts & had (it is believed) been promised the first moon attempt. But on 27 January 1967, Grissom and his two colleagues, Ed White and Roger Chaffee, were roasted alive. They could not be saved, because under fire conditions the laws of physics held their hatch shut: it opened inwards, with the full pressure of the heated cabin pushing the other way. The 100% oxygen atmosphere, combined with sloppy assembly, was a disaster waiting to happen.
After the accident, a full review of the capsule design resulted in many changes -- changes to try to avoid what had not previously been foreseen, and perhaps other accidents as well. But not everything could be foreseen -- and the Apollo program would have two close calls, with Apollo 13 and Apollo-Soyuz -- again, designs failing to anticipate all that could happen.
Nineteen years and a day after Apollo 1, a new lack of foresight (and perhaps some rotten engineering graphics; here is one that might have saved the Challenger) would lead to seven more astronaut deaths. I was in high school that day, lamenting that the cold weather hadn't led to canceled school so I could watch the launch. Instead, in chemistry class we watched replay after replay of the accident (along with the misinformed, premature commentary the news media feels obligated to provide during such events).
Just barely over 17 years later, I was driving into Boston to go to the Children's Museum with my family. A short radio broadcast on I-93 (I remember the spot clearly) had me whispering 'Not again!' -- losing contact with a spacecraft is not normal. Again, a failure of engineering foresight -- arguably stretching back to the original design -- had doomed seven humans.
In every case, many very smart minds tackled a problem -- and failed to see all the consequences that could result from their decisions. Other failures of foresight have enabled success by the attackers at Pearl Harbor and on 9/11. Only after the event can we see so clearly what we didn't before -- as noted by a recently deceased intelligence analyst who examined such things. Oppenheimer saw the raw power of the atomic bomb ("Now I am become death, the destroyer of worlds"), but not the long-term poisoning of the survivors.
As far as I know, no one has died as a direct result of the genome project. Other than being electrocuted by a sequencing machine, it is hard to imagine how that could happen. But the long-term effects of the genome project seem as difficult to predict as any other ripples from a technological stone. Surely much good will come of it, but undoubtedly there will be mischief as well. Let us hope that such mischief comes closer to genetic paparazzi than to Huxley's nightmare visions of Brave New World.
Thursday, January 25, 2007
Tag! You're It!
The protein kinases are a sizable chunk of the human proteome (>500 members) and the subject of intense research. Some estimates peg about a quarter of all current small-molecule drug discovery efforts as targeting kinases, and a lot of basic research remains to be done on the class.
A key question for a given kinase is which proteins it phosphorylates -- its substrates. It turns out that this is a decidedly difficult problem. With mass spectrometry one can now read out lots of phosphorylation sites on proteins, but figuring out which kinase is phosphorylating which site remains difficult. A lot of methods have been proposed and successfully used, but while the mass spec folks are getting adept at churning out thousands of phosphorylation sites (here is another new one from PNAS), papers linking kinases to substrates typically report very small numbers of substrates (such as uno!) -- though there are some notable exceptions [not claiming this is an exhaustive list]. Which is too bad, as quite a few kinases have no known substrates (or none besides themselves; most kinases will trans-phosphorylate amongst themselves). Indeed, one hint that a breakthrough is needed is that the number of methods keeps proliferating; no single good method (or even a small set of them) has emerged to settle on.
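One purely computational stopgap is motif scanning: a handful of well-studied kinases have loose consensus motifs (e.g., R-R-x-S/T for PKA, or the acidophilic S/T-x-x-D/E pattern associated with CK2), so one can at least ask which kinases' motifs overlap an observed phosphosite. A toy sketch -- the protein sequence is invented, and real kinase specificity is far messier than any regex:

```python
import re

# Simplified consensus motifs for two well-studied kinases;
# real specificity involves much more context than a short regex.
MOTIFS = {
    "PKA": re.compile(r"RR.[ST]"),     # basophilic: R-R-x-S/T
    "CK2": re.compile(r"[ST]..[DE]"),  # acidophilic: S/T-x-x-D/E
}

def candidate_kinases(protein_seq: str, phospho_pos: int) -> list:
    """Return kinases whose consensus motif spans the phosphosite.

    phospho_pos is a 0-based index of the phosphorylated S/T residue.
    """
    hits = []
    for kinase, motif in MOTIFS.items():
        for m in motif.finditer(protein_seq):
            if m.start() <= phospho_pos < m.end():
                hits.append(kinase)
                break  # one overlapping match per kinase is enough
    return hits

seq = "MKRRASVEEDLS"  # toy sequence, invented for illustration
print(candidate_kinases(seq, 5))  # → ['PKA', 'CK2']
```

Note that even this toy example returns two candidate kinases for a single site -- exactly the ambiguity that makes the kinase-substrate problem so hard.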
A new report in the Journal of the American Chemical Society is pretty intriguing in this light. I'll confess I haven't gotten past the abstract, because my overdue ACS renewal is still riding in my backpack, but the abstract is tantalizing enough. The claim is that a biotin-ATP conjugate will be accepted by a kinase, resulting in the kinase biotinylating its substrate. Since pulling down biotin-tagged proteins or peptides is truly old hat (heck, I did it as an undergraduate in the '80s -- and it was my all-thumbs experience there that helped make me a computational biologist!), this is pretty exciting. If it proves to work with many (dare we hope for most?) kinases, this could really revolutionize things.
Now, some previous tagging schemes had been developed, but they all required additional steps, perhaps even genetically modifying the kinase first. None of these methods seems to have caught on, at least judging by the lack of a growing pile of papers.
Clearly, the proof will be in the replication. If the next couple of years sees a flurry of papers reading out kinase substrates by biotinylation, that will be validation. If not, then let the next round of scheming begin!
Tuesday, January 23, 2007
Stopping the Hopping
When I was a kid we had a game called (I think) Frantic Frogs. You wound up a bunch of little mechanical frogs and placed each one's rear under a disk in the middle of the board. Once everyone's frogs were on the starting line, you lifted the disk and the frogs started hopping toward the outer rim of the board. You steered your frogs with gentle taps of a provided plastic stick -- the first player to corral all their frogs won.
Many biomolecular interactions are like those frogs, constantly bouncing up-and-down. This represents a challenge for trying to capture these interactions -- any condition that reasonably washes away what didn't bind is also likely to wash away these transient interactors.
An article a week back in Science provides a clever microfluidic fix to this problem -- by supplying that central disk. In the scheme, transcription factors bind to an array of DNA spots. A pressure-activated membrane can then drop onto each spot, squeezing out the solution but pinning down the molecules bound to the solid phase.
The article throws in some other tricks: the microfluidic device is actually layered atop a spotted DNA array, and the transcription factors were synthesized using in vitro transcription-translation. In short, this is a neat integration of several techniques into a single device.
One reason I find microfluidics so fascinating is that most of our everyday understanding of how things work gets thrown by the wayside; because the scales are so small, forces such as surface tension take over from forces such as gravity. This one stretches the mind again -- one isn't used to thinking about using mechanical force to keep things from going into solution.
Monday, January 22, 2007
How We Die
Columnist Art Buchwald died last week, after nearly a year of life his doctors did not expect him to have. Buchwald had chosen to avoid the extension of life promised by kidney dialysis, instead choosing to live his last days on his own terms.
Death is not something we like to think about, but in medicine it is a regular reality -- particularly in oncology. For many patients, the choices offered by the current state of oncologic medicine are not easy to accept: disfigurement, chemical warfare agents, extreme nausea, hair loss, memory loss, persistent loss of feeling or constant pain, complete ablation of the immune system, etc. Yet, when the other choice is death, how could anyone choose otherwise?
The answer, of course, is complicated, and each person makes their own choice as to their path. A good exploration of this topic is Sherwin Nuland's book, How We Die. Nuland is a wonderful writer, and he uses many personal stories to trace the topic. The first chapter describes the gradual decline of his beloved grandmother. Particularly poignant for me was the story of a friend of his who was diagnosed with cancer. Nuland recommended an extremely aggressive course of treatment. At the end of his life, the friend basically said "thanks for your concern, but please don't do that to anyone else" -- the small amount of time wasn't worth the agonizing side effects. It is a sobering message.
Thursday, January 18, 2007
Marginalia
One of the most famous quotes in mathematics is also its most notorious tease: "Cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet." ("I have a truly marvelous proof of this proposition which this margin is too narrow to contain.") [quote & text from Wikipedia] -- Fermat's last theorem. Only hundreds of years later was this to be proven, using methods far beyond what was available at the time.
Marginal notes are not a skill I have ever acquired. I might underline a few sentences, but beyond that my notes are likely to be single words. This is of little use to anyone else, and gives only a limited window into my thinking should I review the same papers later. The vagaries of my filing system sometimes lead to multiple printings of the same article, which certainly defeats any markup on the first copy.
If you ever discover a colleague who does have the habit: treasure them! I once had a trip for Millennium to visit a clinical collaborator. The trip involved two planes, as the city we were traveling to lacked direct flights from Boston. On the first leg I began my usual habit of reading The Economist, but en route I spied one of my colleagues reviewing papers relevant to our visit. Feeling a tad chagrined that I hadn't brought any such practical reading material with me, I asked if I could borrow some of the papers. What a revelation!
There in the margins of the photocopies in small but neat script were sentences! Multiple sentences! Questions raised by the text! Cross-references to other work! Criticisms & commentary! Her notes were succinct yet revealing of her thinking of the strengths & weaknesses of the paper and how they fit into the context of other work.
Alas, I am unlikely to have such moments often, as our professional paths have now diverged. And I certainly didn't borrow her marked-up photocopies nearly often enough after that trip. But it is a standard to keep in mind, even if only to match it infrequently.
Wednesday, January 17, 2007
Smell-o-phone
One of the classic Bugs Bunny cartoons puts an elderly Bugs and Elmer far into the future. In the background, one hears the news bulletin that 'Smell-o-vision replaces television' -- this was one of my childhood friends' favorite gags.
One of my favorite bits of irrational exuberance during the Internet bubble was a company (whose name escaped me -- but not Google -- see this article) called Digiscents which was promising to build a box which would attach to your computer and generate smells. The pitch was that perfume websites could give samples, restaurant sites would tempt you with their aromas, etc. No mention of what would happen if you went to a site focused on skunks or house training your dog -- though the article Google dug up did mention that game designers were inquiring about rotting flesh smells.
What I relished about this company was that they were simultaneously dipping into two pools of irrational exuberance -- not only was it a crazy dotcom, but it was a crazy biotech as well! They were hiring basic biologists & bioinformaticists to try to do basic research on human olfaction! I suppose the pitch was that by understanding the sense of smell they could build a box which could use a small number of odorants to generate a large number of aromatic sensations, but to me this underlined how far ahead of the science they were.
Further Googling picked up all sorts of stories. It would seem that the choice of product names was about as clearly thought out as the business plan: iSmell.
What triggered this walk down memory lane? While it isn't as versatile as that box, from Japan (which seems to be the testing ground of every cell phone fad) comes a cell phone that can emit smells. As if I didn't already have enough trouble with my phone just ringing at the wrong time!
Tuesday, January 16, 2007
Meow. Achoo! Meow. Achoo! -- a bit longer
For reasons other than predicted, namely an unexpected round of furniture moving, a couple of ideas for this space will continue to stay ideas. But there is an interesting article in The Scientist (you may need a free subscription) on the company which claims to have produced a hypoallergenic cat (to much media fanfare), but has yet to actually produce such a cat for inspection. For those hoping to obtain such a cat, the article would suggest that it will not be exploring your catnip patch anytime soon. As for me, I need to take a catnap...
Monday, January 15, 2007
Nose to the Grindstone, Shoulder to the Wheel
One impetus for starting this blog was the end of my career at a biotech company -- I thought I might have a lot of time on my hands, which represented both an opportunity to do something new and a threat to my professional sanity. That hiatus ended up being significantly compressed due to an extension of my term of service, but the blog was out there and required feeding.
Because my former employer, Millennium Pharmaceuticals, had grown so huge and then shrunk so much, and because of my lengthy service there, I know a lot of alumni, and a large number of them are still in the Boston area. Thanks to many of them, I had rapid success with obtaining screening interviews & as a result tomorrow I get to embark on the next stage of my career as I begin a new position. I will be joining the startup synthetic biology company Codon Devices, one of whose founders is my graduate advisor George Church. It's an unbelievable opportunity to explore amazing technologies & apply them to interesting biological problems.
So what to do about the blog? I hope to continue it, but we'll see what the reality is. Time, particularly mental cycle time, may become a rare commodity -- one of the first blogs I got hooked on dried up when the author joined a startup. Some distillations of topics of interest may need to be reserved for my new organization. But, there's still plenty out there. I also don't intend for this to become some sort of propaganda site for Codon; I may comment on synthetic biology sometimes, but perhaps this will largely be an outlet for my musings on topics at best only distantly connected to their business plans.
In any case, I do thank all of you who have read these pages & commented or emailed me or otherwise noted this daily attempt to go beyond PowerPoint bullets (it was gratifying, if novel, to have my blog cited in the intro at one job talk!). I hope you will continue to find reading these blurbs worth your while.
Mouse Mind Mega Map
In Richard Feynman's hilarious memoir Surely You're Joking, Mr. Feynman!, he describes one incident where he confused a librarian by asking for 'a map of a cat'. The librarian ultimately realized that Feynman was looking for an anatomical chart.
Nowadays, it is common to have all sorts of maps of bodies -- genome maps, neural maps, cell fate maps, etc. On the flip side, one might fear that widespread adoption of talking GPS units will dull literacy for actual geographic maps. Progress!
Last week's Nature describes an audacious map of gene expression in the mouse brain -- 20K messages mapped by in situ hybridization. The work was largely funded by a foundation endowed by Microsoft co-founder Paul Allen, which might make one feel a little less guilty for tithing to Redmond -- Microsoft does much to earn enmity, but between this work & what the Gates Foundation is doing for diseases prevalent in Third World countries, it seems necessary to temper that ire.
The informatics required for this are extremely impressive. An inbred mouse strain was used for all the samples to reduce mouse-to-mouse variability, and the variation that remained was resolved by performing a three-dimensional alignment of all the sample brains (ClustalW is cool, but can it do that? :-)
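To give a flavor of the registration problem in a vastly simplified form -- this is my own toy sketch, not the consortium's method, which used far more sophisticated techniques -- here is brute-force recovery of an integer translation between two small 3-D volumes:

```python
import numpy as np

def register_translation(fixed: np.ndarray, moving: np.ndarray, max_shift: int = 2):
    """Find the integer (dz, dy, dx) shift that best aligns `moving` onto
    `fixed` by exhaustively minimizing the sum of squared differences."""
    best, best_err = (0, 0, 0), np.inf
    r = range(-max_shift, max_shift + 1)
    for dz in r:
        for dy in r:
            for dx in r:
                shifted = np.roll(moving, (dz, dy, dx), axis=(0, 1, 2))
                err = np.sum((fixed - shifted) ** 2)
                if err < best_err:
                    best, best_err = (dz, dy, dx), err
    return best

rng = np.random.default_rng(0)
fixed = rng.random((8, 8, 8))                        # a toy "reference brain"
moving = np.roll(fixed, (-1, 0, 2), axis=(0, 1, 2))  # a displaced copy
print(register_translation(fixed, moving))  # recovers (1, 0, -2)
```

Real brain registration must handle rotation, scaling, and tissue deformation on top of translation, which is what makes the project's informatics so impressive.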
The supplemental methods make clear the industrial scale of the project (boldface mine)
The production laboratory was built with specifications that allowed the ABA project a full capacity production of approximately 1,000 slides/4000 brain sections daily. The facility has strict environmental controls on air humidity and temperature as well as an RNAse-free water system capable of delivering the 300 liters of water necessary to run five robotic in situ hybridization platforms daily.
The News & Views item puts the final tally at over 1 million sections from 6000 brains.
Of course, like most genomics projects this isn't the be-all, end-all but rather an enormous database of hypotheses. For scientists interested in human brain diseases, clearly a first cut will be to verify whether genes showing interesting expression patterns in mouse show the same pattern in human. Undoubtedly there will also be many splice variants, alternative 3' & 5' ends, etc to characterize as well. But what a grand sandbox to explore!
Interestingly, the article itself seems to be freely accessible along with the Supplementary Material, but you'll need a subscription (or purchase access) to read the accompanying News & Views item on the paper. There is also a permanent database at http://www.brain-map.org
Friday, January 12, 2007
Two Men of Adventure
Today's Boston Globe carried stories on two men who led adventurous lives, one marking the 100th anniversary of his birth and the other reporting his passing.
Sergei Korolyov was the genius behind the early Soviet space program. He nearly died in a Stalin death camp, and his death is often blamed on incompetent surgery, but during his life he served his country much better than it treated him. His stunning successes with Sputnik, Vostok and others led to consternation in the United States. One positive outcome of that shock was an overhaul of American science & math education, an overhaul that served me well (though some backsliding from it didn't!). The shock also led to the American moon program, which I still find inspiring. Today is the 100th anniversary of his birth.
Bradford Washburn led an amazing life. I never had the privilege to meet him, but he was the driving force behind the Museum of Science, which I have enjoyed many times. His pictures of mountains are stunning. To give some idea of his spirit, for his honeymoon he & his new bride became the first climbers to scale Mount Bertha in Alaska (picture in Globe article -- get it free while you can!).
Thursday, January 11, 2007
Binding Resolutions
The new issue of Nature Methods contains an article outlining a European consortium's resolution to generate a vast array of affinity reagents targeting the human proteome. It is a daunting task, and most of the article is a tightly packed enumeration of what is daunting about it.
Even deciding what to generate reagents to is no easy task: take the 25K human ORFeome and mix in various post-translational modifications (proteolysis, phosphorylation, glycosylation, ubiquitination (and its cousins SUMO, NEDD8, ISG15 and several more), acetylation, lipidation) and variant folding/activity states, and there are almost certainly more than the 100K targets that they propose going after -- which is one of the first statements in the paper.
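Just to make the arithmetic concrete, here is a back-of-envelope sketch in Python -- the isoform and modification counts are my own illustrative assumptions, not the paper's:

```python
# All numbers below except the ORF count are assumptions for illustration.
orfs = 25_000        # human ORFeome size, as cited in the post
splice_forms = 2     # assumed average distinct isoforms per ORF
mod_states = 1       # assumed extra modification states per isoform

targets = orfs * splice_forms * (1 + mod_states)
print(targets)  # 100000 -- and this is before folding/activity states
```

Nudge any of those assumed multipliers upward and the target count balloons well past 100K, which is exactly the consortium's point.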
Their review of available technologies is brief, but gives a quick listing of about all the choices -- none of which has been proven to be scalable to the task. There's even the question of whether heterodox scaffolds, such as non-antibody protein scaffolds or nucleic acid aptamers, are appropriate or whether antibodies are the only way to go. Deciding on what assays to support for validation is no easier than deciding what to target, and doing validation on that scale may be a bigger challenge than generating all the reagents in the first place.
They also propose wrangling the available information on binders, in particular keeping track of where things bind. The current state of affairs is terrible, with many antibodies labeled only with common names by the ever expanding multitude of antibody suppliers. Worse yet, the same antibodies are sold by multiple vendors, but with no consistent way to tell -- except by looking for identical background noise in the Western blot images from each vendor. Sites such as ExactAntigen try to get a handle on things & cut down the tedium of antibody identification a bit, but the antibody information world is more chaos than order.
All that said, boy is this project needed! They will need a lot of luck (and copious quantities of money), but the lack of off-the-shelf affinity reagents for any protein of sudden interest is a serious handicap for validating array or computational experiments, as I found all too often in my last job. DNA & RNA have the elegant beauty of Watson-Crick pairing; if only such a rule system could be devised for protein!
BTW, if you don't get Nature Methods, the print subscriptions are given out pretty freely to biotech/biopharma professionals.
Wednesday, January 10, 2007
Gregor's Genes
Hot on the heels of my exhibit report on Gregor Mendel comes a report in Science of the identification of one of the loci he worked with. It turns out that the same locus (staygreen) that turned his peas' cotyledons yellow is responsible for the seasonal shut-down of chlorophyll production in many plants, including my lawn.
Based on some quick searching, this would seem to be the molecular scorecard (phenotype descriptions lifted from the Field Museum site) -- corrections & improvements most welcome!
- Seed color (yellow or green): staygreen, no predicted molecular function
- Seed shape (smooth or wrinkled): starch branching enzyme
- Pod color (yellow or green): uncloned?
- Pod shape (inflated or constricted): uncloned?
- Flower color (purple or white): uncloned?
- Flower position (axial or terminal): uncloned?
- Stem height (tall or short): gibberellin 3 beta-hydroxylase
I'm particularly suspicious that the color genes are known, but Google & PubMed couldn't find the paper in five minutes of searching.
Tuesday, January 09, 2007
Nanodiamonds are forever
My first science was geology: as a kid I collected rocks. A key test in geology is the scratch test for hardness, with a set of ten index minerals known as the Mohs scale -- and I had all but #10, as Mom would not lend me her engagement ring.
Diamonds have all sorts of amazing properties, and a report in PNAS describes one more. Nanodiamonds around 35 nm in size can be used as fluorescent tags. This entry's title derives from one interesting property: they don't photobleach. Organic dyes exposed to light for an extended period bleach out, but not these diamonds.
You'll need a subscription to view the article (alas, I'm without easy access to one at the moment), but the supporting information is freely available -- including a movie of a single nanodiamond inside a HeLa cell.
Monday, January 08, 2007
Counting Proteins
I earlier wrote two pieces on a microfluidics chip for counting nucleic acids by limiting dilution PCR -- in one application it was used to count mRNAs and the other for counting bacteria. Last week's Science has a nice complement to this: microfluidic chips that count proteins.
A really cool aspect of the chip is its assembly-line nature: it doesn't just count proteins, it performs the upstream steps as well. Starting with a sample of cells, it plucks out a single cell. The cell is rinsed and then lysed. Fluorescent antibodies are introduced (if necessary) and an electrophoretic separation is performed. Finally, fluorescent molecules are counted as they pass through a chip region illuminated with a sliver of light at the correct excitation wavelength.
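That final counting step reduces to spotting photon bursts in a detector trace as single molecules transit the illuminated sliver. Here's a toy Python sketch of the burst-counting idea (the synthetic trace, burst amplitudes, and threshold are entirely my invention, not the paper's parameters):

```python
import numpy as np

def count_bursts(trace: np.ndarray, threshold: float) -> int:
    """Count rising edges where the signal crosses above `threshold`;
    each well-separated crossing is taken as one molecule."""
    above = trace > threshold
    rising = above[1:] & ~above[:-1]
    return int(rising.sum()) + int(above[0])

# Synthetic trace: baseline detector noise plus three separated bursts.
rng = np.random.default_rng(1)
trace = rng.normal(10, 1, 300)       # background counts (made up)
for start in (50, 150, 250):         # hypothetical burst positions
    trace[start:start + 5] += 40     # burst amplitude (assumed)
print(count_bursts(trace, threshold=25))  # 3
```

Real single-molecule counting has to worry about coincident transits and burst-size distributions, but the thresholding idea is the core of it.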
They actually demonstrated two variants of the basic scheme: one chip performed an immunoassay on eukaryotic cells as described above, while the other looked at naturally fluorescent proteins in cyanobacteria. The immunoassay chip analyzed a single cell, whereas the cyanobacterial chip ran three in parallel.
This is a really neat development and presuming the costs can be made reasonable, would be very interesting for many applications. But, there are very few naturally fluorescent proteins and so for most applications high quality antibodies (or equivalent specific binders) will be needed -- a nut that has yet to be cracked.
Saturday, January 06, 2007
NAR Database Issue Out!
Nucleic Acids Research's annual database issue is out. This is a great place to start looking for info on the plethora of biology databases out there, ranging from such broadly useful databases as Genbank to the many highly specialized niche databases. The NCBI maintained meta-database of biological databases apparently has close to 1000 entries -- and I'm sure it will keep climbing.
In an ideal world, perhaps many of these databases would be integrated (or perhaps they all would be), but the reality is that small, passionately focused databases on topics near-and-dear to the curator's interests have the highest quality information.
Friday, January 05, 2007
The Incredible Shrinking Bacterium
How's this for an ecosystem niche: 30-50C (86-122F), pH -0.5 to 1.5, micromolar arsenic & copper, and nearly molar iron. That's the witches' brew found in an abandoned mine in California. Last week's Science (alas, a subscription will be required to read it) contains a paper describing one of the archaea that live in a biofilm in the midst of that awful solution. The bug was identified initially as a novel 16S rRNA sequence in a metagenomics sequencing project. Further sequencing pieced together 4Kb from this bug and another 13K from a related species.
The 16S sequences contain some significant mismatches from commonly used 'universal' rRNA primers, which shows a big advantage of metagenomics for discovering novel organisms: it is unbiased.
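To make that bias concrete, here is a toy sketch of the problem (the primer and template sequences below are invented for illustration, not the real 'universal' 16S primers or the sequences from this paper): a PCR-based survey only sees organisms whose rRNA matches the primer, so a bug with a few mismatches at the primer site simply never shows up.

```python
# Toy illustration: count mismatches between a PCR primer and candidate
# template sites. All sequences here are invented for illustration only.
def mismatches(primer, site):
    """Count positions where the primer and a same-length template site differ."""
    assert len(primer) == len(site)
    return sum(1 for p, s in zip(primer, site) if p != s)

primer = "AGAGTTTGATCCTGGCTCAG"     # invented stand-in for a 'universal' primer
typical = "AGAGTTTGATCCTGGCTCAG"    # a typical bug: perfect match, amplifies
divergent = "AGAGTTCGATACTGACTCAG"  # a divergent bug: several mismatches

print(mismatches(primer, typical))    # 0 -> amplifies, seen in PCR surveys
print(mismatches(primer, divergent))  # 3 -> likely missed by primer-based PCR
```

Metagenomic shotgun sequencing sidesteps the primer entirely, which is why it can turn up lineages like this one.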
Things get really interesting when in situ hybridization was used to localize the bugs -- they are the tiniest well-documented organisms yet, roughly 244 nm x 175 nm -- a volume of <0.006 µm^3 -- vs. about 0.02 µm^3 for the previous record holder. As the authors comment, if half the cell is occupied by ribosomes it works out to about 350 ribosomes -- and not leaving much room for anything else.
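A back-of-the-envelope check of that ribosome figure (my own arithmetic, not the paper's: I assume a cylindrical cell of the quoted dimensions and a ribosome roughly 25 nm across, both round-number assumptions) lands in the same ballpark:

```python
import math

# Back-of-the-envelope: how many ribosomes fit in half of a 244 x 175 nm cell?
# Assumptions (mine, not the paper's): the cell is a cylinder 244 nm long and
# 175 nm in diameter, and a ribosome is a sphere roughly 25 nm in diameter.
cell_length_nm = 244.0
cell_diameter_nm = 175.0
ribosome_diameter_nm = 25.0

cell_volume = math.pi * (cell_diameter_nm / 2) ** 2 * cell_length_nm
ribosome_volume = (4.0 / 3.0) * math.pi * (ribosome_diameter_nm / 2) ** 3

# Half the cell packed with ribosomes:
count = (cell_volume / 2) / ribosome_volume
print(round(cell_volume / 1e9, 4))  # cell volume in um^3: ~0.006
print(round(count))                 # ~350-ish ribosomes
```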
It is interesting that the paper studiously avoids mentioning nanobacteria or nanobes. Nanobacteria are microscopic structures which have been claimed to be self-replicating and putatively linked to various biomineralization processes and diseases, but their existence is controversial. Nanobes are even smaller structures claimed to be biological in character.
I had been thinking about nanobacteria recently in the context of some internet lists of controversial ideas that have become accepted. Nanobacteria struck me as one of the shakier contenders, and a quick Entrez search (try this) appeared to confirm the concern. In particular, there is a paucity, particularly in recent years, of papers in well-known journals. This doesn't mean the hypothesis is wrong, just that calling it accepted is a stretch.
Nanobacteria had a huge spotlight thrown on them when it was claimed that structures in a Mars-derived meteorite resembled nanobacterial fossils. Given the shaky nature of nanobacteria, I wouldn't have wanted to hang my revolutionary theory on it, but NASA went ahead.
What is particularly striking about the nanobacterial story is the lack of confirmed DNA data from such a beast. My Entrez search didn't seem to find any, and the Wikipedia entry states that the only claimed nanobacterial sequence is too close to a common contaminant to be believed, especially since no reagent-only PCR control was run.
If nanobacteria are anything like conventional lifeforms, they should have nucleic acids in them. A metagenomics run through a nanobacterial preparation should find something; failing a novel sequence (and confirmation of that sequence's location in the nanobacteria by in situ hybridization), one would be forced to invoke non-nucleic-acid life-like forms a la prions -- or honorably admit defeat. In other words, do exactly what this new paper in Science did. Perhaps nanobacterial hunting should be proposed the next time someone is giving away next-generation sequencing runs, though I think I know an even better candidate, which I'll write up here at some unspecified time in the future.
Pigs & Flies in the News
Two news items which are not earth shattering, but fun.
In time for the upcoming Year of the Pig, a Chinese group has bred GFP-expressing swine. Presumably these are for some clever preclinical imaging studies, but they would also make for some interesting dishes to eat by candlelight or during power failures. With the multitude of GFP variants around, one could have quite a lively dish!
The Scientist reports (free subscription may be required) that 30K fruit flies were released during a production of the Sartre play 'The Flies' at Brown University. I did a graduate rotation in a fly lab, and 30K flies isn't a tiny quantity (if I remember correctly, a few hundred per bottle is about the right density, so this is a lot of bottles). I do hope the same group doesn't try to mount a production of Ionesco's Rhinoceros!
Wednesday, January 03, 2007
GAO Weighs in on Drug Discovery
The Government Accountability Office, or GAO, recently publicly released its report to Congress entitled "New Drug Development: Science, Business, Regulatory and Intellectual Property Issues Cited as Hampering Drug Development Efforts". At 52 pages (including all appendices), there is a bunch to read, and I won't claim to have fully digested it. I certainly might comment further on it at a future date.
Note: if you are reading the electronic version, add 4 to all my page numbers to find the right one with Acrobat; it numbers in its count the various header pages that aren't given roman numberings in the report. I had initially used the Acrobat numbers, so if any turn out wrong try subtracting 4 from the page number.
One graph I am grappling with is Figure 1 on page 8, which shows the attrition of compounds through the development pipeline. The starting line is marked with 10,000 compounds yielding one final drug at the end. The plot certainly deserves showing up on Junk Charts, as it is not what it could be (and what such an important topic requires). For example, several stages are marked with numbers of compounds, but these label trapezoids with no indication of whether the number represents the start or the end of the stage. I'm guessing that the 10K number estimates initial screening hits (counting failed programs as well). The preclinical trapezoid is labeled "250 compounds" -- so that would be 40:1 hits to something (leads?). I'd better quit -- the more I stare at the graphic the more infuriating I find it.
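For what it's worth, the stage-to-stage ratios implied by the few numbers the figure does label are easy to tabulate (keeping in mind that which pipeline stage each count refers to is my own reading of an ambiguous chart):

```python
# Attrition ratios implied by the numbers labeled in the GAO report's Figure 1.
# Which stage each count belongs to is my guess; the chart doesn't say.
stages = [
    ("screening hits", 10000),
    ("preclinical candidates", 250),
    ("approved drug", 1),
]

for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
    print(f"{name_a} -> {name_b}: {n_a / n_b:.0f}:1")
# screening hits -> preclinical candidates: 40:1
# preclinical candidates -> approved drug: 250:1
```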
Figure 6 (p.15) shows the depressing statistics: increasing R&D spending but a flat rate of New Drug Applications (NDAs) and especially NDAs for novel molecules (New Molecular Entities, or NMEs). Personally I'd prefer these as two vertically arrayed graphs & both in the same format (why bars for one but lines for the other?), but it does make the point.
Figure 5 (p.17) is the sort to enrage drug industry critics: 68% of all NDAs are not for NMEs. One thing not made clear in the methodology is whether generic drug applications (ANDAs) or supplementary applications (sNDAs, such as for additional indications) are in these numbers.
It would make no sense to include them, but given the high number of non-NME NDAs in their numbers I am suspicious. I'll confess that I'm not fully conversant with the classification scheme used here (any enlightenment attempts welcome!). For example, where would the next statin fall? Nexium? One wonders whether the classifications are really particularly useful.
In the Internet age, it is a travesty that the report isn't accompanied by computer-readable tables of all the data used. This really wouldn't be very difficult, as the data is all public information, and it would certainly allow other authors to vet the results or bring in their own analysis methods.
One more complaint: the PDF is apparently set so that copying can't be performed out of it! Aaargh!
One section I was planning to blockquote extensively was the section on translational medicine. The report cites one trouble area (p.27) as
... a shortage of physician-scientists, also known as translational researchers--who possess both medical and research degrees and thus the expertise needed to translate discovery-stage research into safe and effective drugs--was seen by panelists and other experts as a fundamental barrier to increasing the productivity of drug development. ... Experts attribute this shortage to a variety of factors, including lengthy training and relatively lower compensation for physicians who are also scientists, compared to those in clinical practice. In addition, researchers, including those in academia, have noted that academic institutions have not taken the initiative to provide financial incentives, such as scholarships, for medical students to pursue these research interests.
Even prior to reading a related discussion on In The Pipeline, I had been thinking out a different strategy. There should be better incentives for Ph.D.-M.D.s (which I'm pretty sure is what the report was tracking), but any program to create more will take a while, and some in it will choose other careers or interests. If you really want to expand the translational medicine pool, then start thinking about options beyond a very narrow credential list. Perhaps the most obvious would be to develop training programs to add skills to existing M.D.s, without forcing them to go the full Ph.D. route.
Slightly more radical would be the notion of developing translational medicine nurse-practitioners -- after all, nursing training is very focused on patient care & patient observation, and would therefore be very suitable for careers in clinical medicine. The news is often filled with stories of nurses leaving the profession for various burnout reasons -- perhaps this option would keep some of these highly skilled persons in the field.
Going farther out, a lot of translational medicine revolves around developing and analyzing biomarkers. Again, nurses have many of the appropriate skills, particularly in observing side effects that may be biomarkers (such as the skin rashes observed both for EGFR inhibitors and bortezomib). Other biomarker development projects involving new assays fall clearly in the domain of med techs -- in my college internship I was in a lab staffed mostly by med techs, and that crew would have made an excellent biomarker pursuit team.
Perhaps the most interesting part of the report is the section of suggestions, beginning on p.35. Tightly summarized, they are:
- Industry-government-academia collaborations to systematically analyze drug failures, develop validated biomarker inventories, and prioritize diseases
- Bigger push in academia for translational medicine specialists (as commented on above)
- FDA incentives & disincentives based on the importance of a new drug: innovative medicines get the push, the me-toos get discouraged. One proposed method would be basing patent life on the innovativeness & clinical value of a drug
Well, I'm out of steam. Comments?
Tuesday, January 02, 2007
Feed Frenzy
In response to a previous posting about journal tables of contents, I received the helpful suggestion of trying out an RSS feed reader. I soon installed the Sage reader for Firefox and have found it a very useful way to keep up. It is interesting to note the wide diversity of what various journals provide as RSS feeds.
The first frustration is those things I'd like as RSS feeds but which aren't available (ah! the zeal of the newly converted), with Science Magazine's advance publication spot, Science Express, at the top of the list. The ASBMB follows a similar line: Molecular & Cellular Proteomics and Journal of Biological Chemistry also fail to provide feeds for their advance articles.
The key point of variance is in what is provided. Oncogene opts for the most terse: just the title of each article. Many journals, including most Nature family journals (Oncogene among them), give titles and complete abstracts. Journal of Proteome Research (an ACS journal) has titles, author lists, and an iconic graphic from the paper. PNAS picks a strange mix: the title, the author list, and then the timeline of submission, review, acceptance, etc. That's administrivia that doesn't rank high on my list of things to keep on top of. PNAS also includes a short bit of the first sentence, but never enough.
I'm sure there are other permutations out there -- I'm opting to accrete RSS feeds slowly. Overall, I think I prefer the title+abstract format or the title+graphic format. This is what whets my appetite for a paper. Titles alone are nice and terse, but perhaps too terse.
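As an aside, the variation is easy to see if you pull a feed apart: in RSS, the terse feeds populate only each item's title element, while the richer ones also fill the description element with the abstract. A minimal sketch using Python's standard library (the feed fragment below is invented for illustration, not from any actual journal):

```python
import xml.etree.ElementTree as ET

# A made-up RSS fragment illustrating the two styles journals use:
# title-only items (Oncogene-style) vs. title + abstract (Nature-style).
rss = """<rss version="2.0"><channel>
  <item><title>A terse, title-only entry</title></item>
  <item>
    <title>A richer entry</title>
    <description>The full abstract would appear here.</description>
  </item>
</channel></rss>"""

for item in ET.fromstring(rss).iter("item"):
    title = item.findtext("title")
    abstract = item.findtext("description")  # None if the feed omits it
    print(title, "->", abstract or "(no abstract in feed)")
```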
Monday, January 01, 2007
Our Founder
I enjoy Boston as a city, but traveling to other cities is a good reminder of how petite a city it is. Case in point: Chicago. Particularly if you transit by air on a clear night, the enormous nature of the city becomes apparent, reinforced by the Jeffersonian grid of streets.
The size difference extends to some public institutions as well. Boston's Museum of Science is a very good museum, but Chicago splits the same subject matter into three institutions, two of which (Museum of Science & Industry and the Field Museum) are almost certainly larger than the MoS (I've never made it to the Adler Planetarium, so I can't make the comparison there).
Some overlap is to be expected, and so both the MoS&I and the Field have exhibits on genetics. We only dashed through the one at the MoS&I as a shortcut, though I did catch a glimpse of a former Millennium colleague in one of the videos. But the Field's exhibit on Brother Gregor, well that could not be skipped.
The exhibit covers the life and impact of Gregor Mendel, the monk who trained extensively in science but never received a degree. His pioneering work might never have happened, except that he was a failure as a ministering cleric. Among its revelations for me were his extensive efforts in other sciences, such as astronomy. I also had not heard that Mendel had, near the end of his life, been confident that his work had not been in vain. We can also see an all-too-common story in his life: ultimately his scientific efforts were cut off by administrative duties, as he finished his career as abbot of the monastery.
The exhibit had a nice mix of modern elements, reconstructions, and actual artifacts. For the latter, one example was his microscope & slide set! A box with 5K peas (if I remember correctly) showed just how many peas he scored -- in only the first year of experiments! Various computer games attempt to capture the attention of the modern set, such as the genetics project I have had running for seven years. There are also the juicy personal tidbits, such as his habit of throwing dried peas at sleeping students! One other point brought out by the exhibit: how Mendel was a pioneer of combining math with biology.
Like any good exhibit, I found myself leaving with many unanswered questions -- not because the exhibit wasn't well designed, but because it had stoked my curiosity. For example, it mentioned that Francis Galton had performed similar lines of inquiry, which perhaps made him very receptive to Mendel's work once it had been rediscovered. There was also a brief mention of the three scientists who rediscovered Mendel's work. What was truly similar and what was distinct in these five efforts?
The exhibit also touched, in very minimal form, on the controversy over whether Mendel's numbers were too good. Did he trim his data, or is there some biology going on there? It didn't seem to point out Mendel's luck: he picked seven traits which are essentially unlinked (I think two are very loosely linked -- detectable, but not easily). Would he have stumbled if two traits had proved partially linked?
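The 'too good' charge (usually traced to R.A. Fisher) is about chi-square values that are improbably small across many experiments. As a sketch, here is the calculation for one oft-quoted Mendel result, 5474 round vs. 1850 wrinkled F2 seeds against the expected 3:1 ratio (these are the commonly cited counts; treat them as illustrative):

```python
# Chi-square goodness-of-fit of Mendel's reported F2 seed-shape counts
# (5474 round : 1850 wrinkled, as commonly cited) against a 3:1 ratio.
observed = [5474, 1850]
total = sum(observed)                    # 7324 seeds
expected = [total * 0.75, total * 0.25]  # 3:1 Mendelian expectation

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 3))  # ~0.26 -- a strikingly close fit (1 degree of freedom)
```

One close fit proves nothing by itself, of course; Fisher's argument was that the fits were this close across essentially the whole body of Mendel's experiments.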
Another point to ponder: Mendel published his work in a lowly local journal, but he attempted through correspondence to spread the word further. The exhibit mentions a prominent botanist who politely knocked down Mendel's suggestions, but alas the letters on display weren't translated. A fascinating historical question is who did read Mendel's books: perhaps a check of the lending records of the libraries that received them would be informative (Boston's Public Library is reputedly one; one day I'll try to do this).
It also touched on two of the unpleasant 20th century genetics episodes: Eugenics & Lysenko. The eugenics section notes Galton's fascination with the subject and presents a chilling Nazi poster decrying the resources spent on someone with a genetic 'sickness'. The Lysenko panel notes his impact on Soviet science, the grim penalties for supporting Mendel during Lysenko's reign & the fact that only after Lysenko's death was a commemorative plaque placed at Mendel's monastery.
The exhibit will be at the Field until April, and will then tour a number of other museums (schedule). The tour seems somewhat geographically restricted: nowhere west of the Mississippi. If you can get to it, do so -- you won't be sorry!