A computational biologist's personal views on new technologies & publications on genomics & proteomics and their impact on drug discovery

Wednesday, February 26, 2014

One of the most electrifying talks at AGBT this year was given by Joe DeRisi of UCSF, who gave a brief introduction to the difficulty of diagnosing the root cause of encephalitis (it can be autoimmune, viral, protozoal, bacterial, and probably a few other things) and then ran through a gripping case history that seemed straight out of House.

Monday, February 24, 2014
A Sunset for Draft Genomes?
The sun set on AGBT 2014 for the final time over a week ago. The posters have long been down, and perhaps the liver enzyme levels of the attendees are back down to normal as well. This year's conference underscored a possibility that was suggested last year: that the era of the poorly connected, low-quality draft genome is headed for the sunset as well.
Thursday, February 13, 2014
How will you deal with GRCh38?
I was foolishly attempting to catch up with Twitter last night during Valerie Schneider's AGBT talk on the new human reference, GRCh38. After all, my personal answer to the question in my title is "nothing", because this isn't a field I work in. But Dr. Schneider is a very good speaker, and I could not help but have my attention pulled in. While clearly not the final word on a human reference, this new edition fixes many gaps, expands the coverage of highly polymorphic regions, and even models the difficult-to-assemble centromeres. A better assembly, combined with emerging tools to handle those complex regions via graph representations, means better mapping and better variant calls.
So, a significant advance, but a bit of an unpleasant one if you are in the space. You now have several ugly options before you with regard to your prior data mapped to an earlier reference.
The do-nothing option must appeal to some. Forgo the advantages of the new reference and just stick with the old. Perhaps start new projects on the new one, leading to a cacophony of internal tools dealing with different versions, with an ongoing risk of mismatched results (a guard against that risk is sketched below). Also, cross your fingers that none of your existing calls would be revised if analyzed against the new reference. Perhaps this route will be rationalized as healthy procrastination until a well-vetted set of graph-aware mappers exists, but once you start putting things off, it is hard to stop.
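To make that mismatch risk concrete, here is a minimal sketch of such a guard, assuming the pysam package and hypothetical BAM file names; it fingerprints the reference build from the chromosome 1 length recorded in the BAM header:

```python
# Sketch: refuse to mix alignments from different reference builds.
# Assumes pysam is installed; the BAM file names are hypothetical.
import pysam

# Chromosome 1 lengths differ between builds, so the header's sequence
# dictionary is a cheap fingerprint of the reference a BAM was aligned to.
CHR1_LENGTHS = {249250621: "GRCh37", 248956422: "GRCh38"}

def detect_build(bam_path):
    """Return the reference build a BAM appears to be aligned to."""
    with pysam.AlignmentFile(bam_path) as bam:
        for name, length in zip(bam.references, bam.lengths):
            if name in ("1", "chr1"):
                return CHR1_LENGTHS.get(length, "unknown")
    return "unknown"

if detect_build("old_project.bam") != detect_build("new_project.bam"):
    raise ValueError("BAMs were aligned to different references")
```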
The other pole would be to embrace the new reference wholeheartedly and realign all the old data against it. After burning a lot of compute cycles and storage space just to run in place, you would then spend a lot of time reconciling old and new results, and finally decide whether to ditch all your old alignments or suffer an even larger storage burden.
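The realignment step itself is mechanically simple; it is the sheer volume of samples that burns the cycles. A minimal sketch of the per-sample step, assuming bwa and samtools are on the PATH and using hypothetical sample and file names:

```python
# Sketch: realign one sample's paired-end reads against GRCh38.
# Assumes bwa and samtools are installed; all paths are hypothetical.
import subprocess

REF = "GRCh38.fa"  # must already be indexed, e.g. via: bwa index GRCh38.fa

def realign(sample, fq1, fq2):
    """Map paired-end reads to the new reference and emit a sorted BAM."""
    out_bam = sample + ".grch38.bam"
    bwa = subprocess.Popen(
        ["bwa", "mem", "-t", "8", REF, fq1, fq2],
        stdout=subprocess.PIPE)
    # Pipe bwa's SAM output straight into samtools for coordinate sorting.
    subprocess.check_call(
        ["samtools", "sort", "-o", out_bam, "-"],
        stdin=bwa.stdout)
    bwa.stdout.close()
    if bwa.wait() != 0:
        raise RuntimeError("bwa mem failed for " + sample)
    return out_bam

realign("NA12878", "NA12878_1.fastq.gz", "NA12878_2.fastq.gz")
```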
A tempting shortcut would be to simply remap alignments and variants using the known relationships between the two references. After all, the vast majority of results will simply shift coordinates a bit, with no other effects. In theory, one could enumerate all the mapped regions that are now suspect and realign only the reads which map to those regions, plus attempt to place reads that previously failed to map. Reconciliation of results would still be needed, but on a much reduced scale.
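As a flavor of what that shortcut looks like for point coordinates, here is a minimal sketch using the pyliftover package (one of several chain-file-based liftover tools; the coordinate and the failure handling are purely illustrative):

```python
# Sketch: lift point coordinates from GRCh37 (hg19) to GRCh38 (hg38).
# Assumes the pyliftover package; positions are 0-based.
from pyliftover import LiftOver

lo = LiftOver("hg19", "hg38")  # fetches the UCSC chain file on first use

def lift(chrom, pos):
    """Return the new coordinate, or None for sites needing realignment."""
    hits = lo.convert_coordinate(chrom, pos)
    if not hits:
        return None  # deleted or rearranged region: flag for remapping
    new_chrom, new_pos, strand, _score = hits[0]
    return new_chrom, new_pos, strand

print(lift("chr1", 1000000))  # most sites just shift coordinates
```

Sites that fail to lift cleanly are exactly the suspect regions that would be queued for the targeted realignment described above.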
None of these options seems particularly appealing. Perhaps that last route will spawn a growth industry of new tools acting on BAM, CRAM, or VCF, which will themselves provide a morass of competing claims of accuracy, efficiency, and speed. None of it makes me in any hurry to leave the cozy world of haploid genomes that are often finished by a simple pipeline!