I'm sorely guilty of neglecting this space, but a recent (and now storified) Twitter conversation started by Jonathan Eisen (@phylogenomics) has improbably fired me up enough to scribble something.
Sunday, December 07, 2014
Tuesday, September 09, 2014
I will confess that when our first MinION burn-in data for lambda came in & I threw a few aligners at it (after first getting my data extractor in Julia shaken out), I was disappointed at the results. Very few 2D reads, very few aligned reads and the alignments all short. At this point, I sat back to wait to see what others had experienced and to think of additional bioinformatics approaches. It never occurred to me to dash off a glorified blog post and submit it to a journal.
Friday, September 05, 2014
A huge event in the genomics community this summer has been the Oxford Nanopore MinION Access Program (MAP), which has enabled a sizable but select group of researchers to try out ONT's novel nanopore-based sequencing technology. While results and rumors have periodically drifted out over the summer, this week saw three disclosures, one of which resulted in fireworks and action.
Monday, June 30, 2014
As I mentioned recently, I've been exploring how I might use the emerging Julia language to solve problems. While that requires a large amount of mental work, I see some potential gains, both in having more readable code than in Perl and in potentially leveraging the high-level concepts for parallel execution that are built into the language. But beyond the challenge of elderly canine pedagogy that I present, there is the issue that the BioJulia library is quite embryonic, with serious consideration being given to treating much of the existing code base as a first draft (or at least that is the impression I get from skimming the Google group). So I'm going to try to pitch in, despite my multiple handicaps.
Tuesday, June 24, 2014
I don't read a lot of books aimed at the MBA crowd, but one set I have liked, and sometimes cite here, are Clayton Christensen's on innovation and disruption. As you may have heard, a recent article in the New Yorker by Jill Lepore took a gimlet-eyed view of the whole concept and raised serious questions about Christensen's methods. This was then summarized by another author in Slate, and since then Christensen has responded in part via a Business Week interview. He's also scheduled to be interviewed on PBS this weekend, so likely there will be further developments. Indeed, after sketching this out on the commute home I discovered a Financial Times article whose tone is very similar to what I have written below.
Tuesday, June 03, 2014
As I've remarked before, I've done significant coding in a large number of languages over the last 35 or so years. I don't consider myself a computer language savant; I've known folks who can pick up new languages quickly and switch between them facilely, but for me it is more difficult. I hadn't tried learning a new language in perhaps 5 years, but this week I backed into one.
Wednesday, February 26, 2014
One of the most electrifying talks at AGBT this year was given by Joe DeRisi of UCSF, who gave a brief intro on the difficulty of diagnosing the root cause of encephalitis (as it can be autoimmune, viral, protozoal, bacterial and probably a few other causes) and then ran down a gripping case history which seemed straight out of House.
Monday, February 24, 2014
The sun set during AGBT 2014 for a final time over a week ago. The posters have long been down, and perhaps the liver enzyme levels of the attendees are now down to normal as well. This year’s conference underscored a possibility that was suggested last year: that the era of the poorly connected, low quality draft genome is headed for the sunset as well
Thursday, February 13, 2014
I was foolishly attempting to catch up with Twitter last night during Valerie Schneider's AGBT talk on the new human reference, GRCh38. After all, my personal answer to my title is nothing, because this isn't a field I work in. But Dr. Schneider is a very good speaker and I could not help but have my attention pulled in. While clearly not the final word on a human reference, this new edition fixes many gaps, expands the coverage of highly polymorphic regions, and even models the difficult-to-assemble centromeres. A better assembly, combined with emerging tools to better handle those complex regions via graph representations, means better mapping and better variant calls.
So, a significant advance, but a bit of an unpleasant one if you are in the space. You now have several ugly options before you with regard to your prior data mapped to an earlier reference.
The do-nothing option must appeal to some. Forgo the advantages of the new reference and just stick with the old. Perhaps start new projects on the new one, leading to a cacophony of internal tools dealing with different versions, with an ongoing risk of mismatched results. Also, cross your fingers that none of your prior calls would be revised if analyzed against the new reference. Perhaps this route will be rationalized as healthy procrastination until a well-vetted set of graph-aware mappers exists, but once you start putting things off it is hard to stop doing so.
The other pole would be to embrace the new reference whole-heartedly and realign all the old data against the new reference. After burning a lot of compute cycles and storage space running in place, spend a lot of time reconciling old and new results. Then decide whether to ditch all your old alignments, or suffer an even larger storage burden.
A tempting shortcut would be to just remap alignments and variants using the known relationships between the two references. After all, the vast majority of the results will simply shift coordinates a bit, with no other effects. In theory, one could identify all the mapped regions that are now suspect and simply realign the reads which map to those regions, plus attempt to place reads that previously failed to map. Again, reconciliation of results would be needed, but on a much-reduced scale.
None of these are particularly appealing options. Perhaps that latter route will be a growth industry of new tools acting on BAM, CRAM or VCF, which themselves will provide a morass of competing claims of accuracy, efficiency and speed. None of this makes me in any hurry to leave a cozy world of haploid genomes that are often finished by a simple pipeline!
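To make that shortcut route concrete, here is a minimal toy sketch of the coordinate-remapping idea. It assumes the relationship between references can be expressed as a list of ungapped alignment blocks; real tools such as UCSC liftOver or CrossMap work from chain files with far richer structure, and the block values below are purely hypothetical.

```python
# Toy coordinate liftover between two reference versions.
# Each block is (old_start, new_start, length), 0-based, ungapped.
# Positions falling outside every block are the "suspect" regions whose
# reads would need to be realigned from scratch.

def remap_position(pos, blocks):
    """Return the position on the new reference, or None if it cannot
    be carried over and the overlapping reads should be realigned."""
    for old_start, new_start, length in blocks:
        if old_start <= pos < old_start + length:
            return new_start + (pos - old_start)
    return None

# Hypothetical blocks: the second models an insertion in the new
# reference that shifts downstream coordinates by 500 bp.
blocks = [(0, 0, 10_000), (10_000, 10_500, 90_000)]

print(remap_position(5_000, blocks))    # 5000 (unchanged)
print(remap_position(50_000, blocks))   # 50500 (shifted)
print(remap_position(150_000, blocks))  # None -> flag for realignment
```

The appeal of this route is visible even in the toy: most positions translate with simple arithmetic, and only the `None` cases trigger the expensive realignment and reconciliation work.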
Thursday, January 16, 2014
Illumina made a brace of big hardware announcements at this week's J.P. Morgan conference, and Mick Watson has done a nice job of covering them. I'll try to cover some different points that have occurred to me after letting the news ferment -- plus Illumina made yet another announcement tonight that scotched a portion of an earlier draft of this piece.
Monday, January 13, 2014
An evening ritual is to inquire what homework requires assistance, and at the beginning of the year it was a science worksheet as part of an introduction to chemistry. That, and a later project, have exposed how much rust my knowledge of chemistry has accumulated, but they have also led me down the path of repairing forgotten bits and certainly learning some new stuff.
Wednesday, January 01, 2014
Rather than make any New Year's resolutions of my own, which I would then feel guilty about not keeping, I've decided to make one for someone else: they will write the perfect open source scaffolder. There's a lot of scaffolders out there, both stand-alone and integrated into various assemblers, but none are quite right.
If you are sequencing an isolated bacterium or archaeon and are looking for a scaffolder, except in a few rare cases, you're doing something wrong: given enough long reads from PacBio it should be possible to solve nearly every bacterial genome. But if you're sequencing eukaryotic genomes or any metagenome (or you're unlucky or short on data for a simple microbial genome), you're probably in the market for one. I'm going to supply a list of attributes I cooked up during a long drive up the Eastern Seaboard today, without much regard for feasibility or even whether some conflict with each other.