Friday, October 13, 2017

iGenomX Riptide Kits Promise a Sea of Data

A theme for me in my six years on Starbase has been addressing the challenge of cost-effectively sequencing many small genomes.  While bulk sequence generation prices have plummeted, all-in library construction costs have stubbornly resisted dramatic change.  Large genome projects don't face quite such a pinch, but if you want to sequence thousands of bacteria, viruses or molecular biology constructs, paying many-fold more to get a sequence into the box than to move it through the box becomes a roadblock. Illumina's Nextera approach dropped prices a bit, but it was not a sea change.  Various published protocols drop costs further via reagent dilution, but these can suffer from variable library yield and an increased dependence on precise input DNA quantitation and balancing.  Even then, the supplied barcoding reagents for Nextera handle at most 384 samples, and that is only a relatively recent expansion from 96.

I previously profiled seqWell's plexWell kits, which like Nextera use a transposase scheme, but with modifications to enhance tolerance to variation in input sample concentration.  plexWell also enables very high numbers of libraries, which better matches projects with many small genomes to sequencers with enormous data generation capabilities.  Now comes another entrant in the mass Illumina library generation space: iGenomX, which has reformatted its chemistry from a microdroplet mode intended for linked read generation to a 96-well plate format requiring no unusual hardware.
iGenomX calls their new product Riptide, and they launched it this week, as the American Society for Microbiology's Next Generation Sequencing conference was winding down and the American Society for Human Genetics meeting is about to start.  Introductory pricing is somewhat coyly described as an order of magnitude below Nextera, and in conversation CEO Keith Brown noted that iGenomX is exploring additional formats which would adjust the price per library according to the number of samples the kit targets.  So a smaller-capacity kit for human samples might have a higher cost per library, but future kits for even denser plate formats could drop the cost per library even lower.

To review from last year's post on the linked read format, iGenomX uses combinations of polymerases and nucleotides to generate library fragments without ever actually fragmenting the input DNA.  A diagram from their website is included below.  A first extension reaction from random primers incorporates biotinylated terminator nucleotides, creating initial fragments which are then captured.  At this point the fragments are barcoded, and the reactions for a single plate can be pooled. A second round of priming on the captured fragments uses a displacing polymerase, so that only the longest second-round products remain affixed to the beads.  Final processing uses limited-cycle PCR to amplify the library and attach a plate-specific barcode.  Riptide kits enable generating 960 distinguishable libraries in this manner, 10 plates of 96 samples, with only five pipetting steps.
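For the arithmetic-minded, that 960-library figure is simple dual-barcode combinatorics: each read ends up carrying both a well barcode from the first extension and a plate barcode from the final PCR.  Here is a minimal Python sketch of the indexing scheme, using placeholder names rather than iGenomX's actual barcode sequences:

from itertools import product

# Placeholder identifiers; the real kit uses specific oligo sequences.
WELL_BARCODES = [f"well_{i:02d}" for i in range(1, 97)]    # 96 wells per plate
PLATE_BARCODES = [f"plate_{j:02d}" for j in range(1, 11)]  # 10 plates per kit

# Demultiplexing key: every (plate, well) pair maps to one library.
demux_key = {
    (plate, well): f"library_{n:03d}"
    for n, (plate, well) in enumerate(product(PLATE_BARCODES, WELL_BARCODES), 1)
}

print(len(demux_key))  # 960 distinguishable libraries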


Riptide kits come with all required reagents to go from purified DNA to libraries, which is definitely a plus.  Nothing is more painful than reviewing a recipe and then discovering your pantry is missing some key ingredients. This includes reagents for final size selection of the libraries, enabling a number of paired-end formats to be supported.  Input quantities are specified as 50 nanograms, but in conversation Brown stated that they have good results with 1 nanogram of input and have successfully explored 10 femtograms, which isn't shocking given that this started out as a chemistry for microdroplet applications.  Indeed, by cycling the first reaction it effectively works as a whole genome amplification stage.  The kit does not include built-in normalization; Brown believes that for many applications in this space there is tolerance for variation in read counts.

The iGenomX chemistry can be affected by the nucleotide composition of the target DNA.  To compensate, the kits include two different mixes for the first reaction: one tuned for samples with <50% GC and one for samples with >50% GC.  If your samples are in the middle, or are a mixture (such as microbiome samples), then the two reagents are combined.  This is an effort to provide some degree of tuning while still keeping the kit economical to produce.  In test samples of bacteria of differing GC richness, the chemistry delivered coverage plots (the open circles) showing that most of each genome had only a two-to-three-fold variation in coverage.
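To make that decision rule concrete, here is a hypothetical helper in the same spirit.  The 50% GC threshold comes from the kit description; the blending call for middle-GC or mixed samples follows the text above, but the exact cutoffs are my assumption, not iGenomX's protocol:

def first_round_mix(gc_fraction, mixed_sample=False):
    """Pick a first-round mix from a sample's estimated GC fraction."""
    if mixed_sample or 0.45 <= gc_fraction <= 0.55:  # middle GC or metagenome
        return "combine low-GC and high-GC mixes"
    return "low-GC mix" if gc_fraction < 0.50 else "high-GC mix"

print(first_round_mix(0.38))                     # AT-rich organism
print(first_round_mix(0.66))                     # GC-rich organism
print(first_round_mix(0.50, mixed_sample=True))  # microbiome sample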



Brown also suggested to me that further variations on the theme are in the works to create library kits for specific applications.  For example, using a reverse transcriptase for the first stage would generate a set of barcoded RNA-Seq fragments which could then be pooled prior to ribosomal RNA removal.  Further exploration of this chemistry should prove interesting to watch.

It will be interesting to see how products like Riptide and plexWell shift the balance over time between what work primary labs perform on their own and what is done in core labs.  When I was chatting at the CDC last month, it was clear that many labs have their own MiSeqs but that HiSeqs on up are also available in core labs.  Brown also mentioned some tremendous numbers he has heard for backlogs at pathogen-focused labs, such as 50K samples at one prominent facility and 140K at another.  Clearing those out is clearly a task for HiSeqs and NovaSeqs.  The NovaSeq with the current S2 chip is rated at 1,000 Gbp of data, which works out to about 4,000 E. coli-class genomes sequenced to 50X depth. So will labs turn over their thousands of samples to cores, or will they make libraries themselves, run a pilot on the smaller boxes and then turn these over to cores for the big iron?
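That 4,000-genome figure is easy to sanity-check with back-of-the-envelope arithmetic, assuming roughly 4.6 Mbp for an E. coli-class genome:

run_yield_bp = 1_000e9   # 1,000 Gbp per NovaSeq S2 run
genome_bp = 4.6e6        # approximate E. coli genome size
depth = 50               # target coverage

genomes_per_run = run_yield_bp / (genome_bp * depth)
print(round(genomes_per_run))  # ~4348, i.e. roughly 4,000 genomes at 50X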

Library generation continues to be an area of spectacular creativity in the Illumina world.  Many new *Seq methods convert non-sequence information such as biological rates and locations into sequence, but plenty of methods are still being developed simply for acquiring and perfecting new genomic sequences.  I am a strong believer that these improving methods drive a virtuous cycle: new projects enabled by sequencing stir new ambitions that push the limits of those methods (and for Illumina, this is a very profitable cycle!).  What seemed absurdly beyond reasonable scale yesterday becomes today's stretch goal and tomorrow's routine process.  Most importantly, our view of the biological world grows richer, and our ability to exploit that to improve the human condition broadens.

2 comments:

Dale Yuzuki said...

Nice write-up Keith!

Your comment about "spectacular creativity" in the library creation world reminds me of the poster that Jacques Retief put together several years ago. (Looking it over now, it was from 2014 and was entitled 'For all you seq...' and had sections with cartoons on the methods, such as RNA Transcription, RNA Low-Level Detection, Methylation, DNA-Protein Interactions, etc.)

Looks like that great poster has now been reduced to an online tool. Easier to scale, yes, but it was something to see the dozens (or was it over a hundred?) methods in one huge graphic. https://www.illumina.com/science/sequencing-method-explorer.html

nucacidhunter said...


To be a useful product for sequencing small genome libraries on the NovaSeq, they will have to come up with improvements that enable identifying reads resulting from index hopping.
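[Editor's note: for readers unfamiliar with the issue, index hopping can misassign a read's sample index on patterned flow cells, so a read lands in the wrong library.  A minimal sketch of the standard mitigation, unique dual indexes, where any index pair not on the sample sheet is flagged and discarded; the sequences and names here are illustrative only:]

# Expected (i7, i5) index pairs from a hypothetical sample sheet.
EXPECTED = {
    ("ATCACG", "TTAGGC"): "sampleA",
    ("CGATGT", "GCCAAT"): "sampleB",
}

def assign_read(i7, i5):
    """Return the sample for an index pair, or None if likely hopped."""
    return EXPECTED.get((i7, i5))

print(assign_read("ATCACG", "TTAGGC"))  # sampleA
print(assign_read("ATCACG", "GCCAAT"))  # None: unexpected pair, likely hopping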