iGenomX calls their new product Riptide, and they launched it this week as the American Society for Microbiology's Next Generation Sequencing conference was winding down and the American Society for Human Genetics meeting is about to start. Introductory pricing is somewhat coyly described as an order of magnitude below Nextera, and in conversation CEO Keith Brown noted that iGenomX is exploring additional formats which would adjust the price per library according to the number of samples the kit is targeting. So a smaller-capacity kit for human samples might have a higher cost per library, but future kits for even denser plate formats could drop the cost per library even lower.
To review from last year's post on the linked read format, iGenomX uses combinations of polymerases and nucleotides to generate library fragments without ever actually fragmenting the input DNA. A diagram from their website is included below. A first extension reaction from random primers incorporates biotinylated terminator nucleotides, creating initial fragments which are then captured. At this point the fragments are barcoded, and the reactions for a single plate can be pooled. A second round of priming on the captured fragments uses a displacing polymerase so that only the longest second-round products remain affixed to the beads. Final processing uses limited-cycle PCR to amplify the library and attach a plate-specific barcode. Riptide kits enable generating 960 distinguishable libraries in this manner, 10 plates of 96 samples, with only five pipetting steps.
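The 960-library figure falls out of the combinatorial barcoding scheme: a per-well barcode attached early in the protocol crossed with a per-plate barcode attached during the final PCR. A minimal sketch of that arithmetic, with invented placeholder identifiers rather than iGenomX's actual barcode sequences:

```python
# Combinatorial barcode space: per-well barcodes (attached in the first
# reaction) crossed with per-plate barcodes (attached in the final
# limited-cycle PCR). Identifiers are placeholders, not real sequences.

well_barcodes = [f"W{i:02d}" for i in range(1, 97)]    # 96 wells per plate
plate_barcodes = [f"P{j:02d}" for j in range(1, 11)]   # 10 plates per kit

# Each library is uniquely identified by its (plate, well) barcode pair.
libraries = [(p, w) for p in plate_barcodes for w in well_barcodes]

print(len(libraries))  # 960 distinguishable libraries per kit
```

This is the same dual-index logic familiar from other Illumina workflows; the economy comes from pooling a whole plate after the first barcode goes on, so later steps are performed once per plate rather than once per sample.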
Riptide kits come with all required reagents to go from purified DNA to libraries, which is definitely a plus. Nothing is more painful than reviewing a recipe and then discovering your pantry is missing some key ingredients. This includes reagents for final size selection of the libraries, enabling a number of paired-end formats to be supported. Input quantities are specified as 50 nanograms, but in conversation Brown stated that they have good results with 1 nanogram and have successfully explored 10 femtogram inputs, which isn't shocking given that this started out as a chemistry for microdroplet applications. Indeed, by cycling the first reaction it effectively works as a whole genome amplification stage. The kit does not include a built-in normalization; Brown believes that for many applications in this space there is a tolerance for variation in read counts.
The iGenomX chemistry can be affected by the nucleotide composition of the target DNA. To compensate, the kits include two different mixes for the first reaction: one tuned for samples with less than 50% GC and one for samples with greater than 50% GC. If your samples are in the middle, or are a mixture (such as microbiome samples), then the two reagents are combined. This is an effort to provide some degree of tuning while still keeping the kit economical to produce. In test samples of bacteria spanning a range of GC content, the chemistry delivered coverage plots in which most of the genome showed only a two-to-three fold variation in coverage.
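The mix-selection rule described above can be sketched as a small decision function. This is my paraphrase of the guidance, not an iGenomX specification: the function name is hypothetical, and I am assuming a sample sitting exactly at 50% GC would also get the blended mix.

```python
# Hypothetical sketch of the first-reaction mix selection rule.
# The name and the treatment of the exact-50% case are my assumptions.

def choose_first_reaction_mix(gc_fraction, mixed_sample=False):
    """Pick a first-reaction mix from a sample's GC fraction (0.0-1.0)."""
    if mixed_sample:                   # e.g. microbiome samples
        return "combined (low-GC + high-GC)"
    if gc_fraction < 0.50:
        return "low-GC mix"
    if gc_fraction > 0.50:
        return "high-GC mix"
    return "combined (low-GC + high-GC)"   # right at 50%: blend both

print(choose_first_reaction_mix(0.38))                     # low-GC mix
print(choose_first_reaction_mix(0.67))                     # high-GC mix
print(choose_first_reaction_mix(0.45, mixed_sample=True))  # combined
```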
Brown also suggested to me that further variations on the theme are in the works to create library kits for specific applications. For example, using a reverse transcriptase for the first stage would generate a set of barcoded RNA-Seq fragments which could then be pooled prior to ribosomal RNA removal. Further exploration of this chemistry should prove interesting to watch.
It will be interesting to see how products like Riptide and plexWell will shift the balance over time between what work primary labs perform on their own and what work is done in core labs. When I was chatting at the CDC last month, it was clear that many labs have their own MiSeqs but that HiSeqs on up are available in core labs. Brown also mentioned some tremendous numbers he has heard for backlogs at pathogen-focused labs, such as 50K samples at one prominent facility and 140K at another. Clearing that out is clearly a task for HiSeqs and NovaSeqs. The NovaSeq with the current S2 chip is rated at 1,000 Gbp of data, which works out to over 4,000 E. coli-class genomes sequenced to 50X depth. So will labs turn over their thousands of samples to cores, or will they make libraries themselves, run a pilot on the smaller boxes and then turn these over to cores for the big iron?
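The back-of-envelope behind that NovaSeq figure is simple enough to write out. I am assuming roughly 4.6 Mbp for an "E. coli-class" genome; the run yield is the rated S2 output from the text.

```python
# Back-of-envelope: how many E. coli-class genomes at 50X fit in one
# NovaSeq S2 run? The 4.6 Mbp genome size is my assumption.

run_yield_bp = 1_000e9     # rated S2 output: 1,000 Gbp
genome_size_bp = 4.6e6     # approximate E. coli genome
target_depth = 50          # desired fold-coverage per genome

bp_per_genome = genome_size_bp * target_depth      # 230 Mbp per genome
genomes_per_run = run_yield_bp / bp_per_genome

print(round(genomes_per_run))  # 4348 -- roughly 4,000 genomes per run
```

In practice, index hopping, uneven pooling, and coverage variation would shave that number down, which is why the text's rounder "4,000" is the sensible planning figure.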
Library generation continues to be an area of spectacular creativity in the Illumina world. Many new *Seq methods convert non-sequence information such as biological rates and locations into sequence, but plenty of methods are still being developed for just acquiring and perfecting new genomic sequences. I am a strong believer that these improving methods drive a virtuous cycle of new projects enabled by sequencing stirring new ambitions that push the limits of those methods (and for Illumina, this is a very profitable cycle!). What seemed absurdly beyond reasonable scale yesterday becomes today's stretch goal and tomorrow's routine process. Most importantly, our view of the biological world grows richer and our ability to exploit that to improve the human condition broadens.