Wednesday, November 06, 2024

Revio Refresh

ASHG is ongoing and tonight PacBio has a big party planned, with an unnamed musical guest.  Rumors swirl as to what will be announced at that event.  But in advance of the meeting, last week PacBio described multiple updates to the Revio platform, an instrument which made its debut two years ago at ASHG.  PacBio CEO Christian Henry was kind enough to chat with me last week about the upgrades.

SPRQ

The flashiest announcement is the new SPRQ flowcell chemistry for Revio.  SPRQ increases yields without reducing quality, enabling two 20X human genomes to be generated per flowcell for $500 per genome in library prep plus running costs (though that figure excludes sample prep and downstream analysis).  Of course, once you are multiplexing samples there's the opportunity to use the instrument capacity for non-integer numbers of genomes per flowcell, so this could also mean running ten 16X genomes per run, or the non-triskaidekaphobic could pack thirteen 12X human genomes per run.  And so forth.
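
For the arithmetic-minded, those per-run figures make sense once you recall that Revio runs up to four flowcells simultaneously.  A quick sketch (treating the full four-flowcell run as the unit is my reconstruction, not PacBio's framing):

```python
# Packing arithmetic: Revio runs up to four flowcells simultaneously,
# so a full run at two 20X genomes per flowcell offers 4 * 40X = 160X
# of human genome coverage to divide up.
COVERAGE_PER_FLOWCELL = 40      # two 20X genomes per flowcell (SPRQ)
FLOWCELLS_PER_RUN = 4           # Revio's stated flowcell capacity

run_capacity = COVERAGE_PER_FLOWCELL * FLOWCELLS_PER_RUN  # 160X
for n_genomes in (10, 13):
    print(f"{n_genomes} genomes/run -> {run_capacity / n_genomes:.0f}X each")
# 10 genomes/run -> 16X each
# 13 genomes/run -> 12X each
```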

Not only does SPRQ deliver more genomes per flowcell, it also supports both 5mC (in CpG context) and 6mA methylation calling.  Henry noted that the 5mC fraction can be a useful indicator of the degree of bacterial contamination of a sample - such as one from saliva (more on this below).  PacBio is also excited about FiberSeq methods for chromatin accessibility (which I covered back in the spring), enabled by the 6mA detection.

More data is great, but perhaps equally valuable is lowering the amount of input required.  Instead of 2 micrograms of total library, the requirement is now only 500 nanograms.  And that is total library - if multiplexing two samples, then each need only yield 250 nanograms.  Nothing has changed in the library prep itself - which must elicit a sigh of relief from companies such as Volta that are working on PacBio-compatible library automation.

On the technical side, SPRQ pairs a new polymerase with new buffers, yielding subreads that are about 2 phred points higher in accuracy.  That doesn't sound like much, but anyone who has worked with noisy reads will grasp how even a small improvement in accuracy greatly eases the alignment and other steps in the CCS and DeepConsensus algorithms.  It also means that subread sets which previously wouldn't yield high quality data now will.  PacBio has additionally changed how the sample is applied to the flowcell, enabling more of the ZMWs to be productively loaded (this is covered in more detail by Nava Whiteford).  So: better raw data from more ZMWs, for an overall increase in HiFi yield.
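
To make that 2-point gain concrete, here's a minimal sketch of the standard Phred arithmetic; the Q13-to-Q15 jump is my illustrative assumption, not a published PacBio spec:

```python
# Back-of-envelope: what a 2-point Phred gain means for raw error rate.
# Phred Q = -10 * log10(P_error), so P_error = 10 ** (-Q / 10).

def phred_to_error(q: float) -> float:
    """Convert a Phred quality score to a per-base error probability."""
    return 10 ** (-q / 10)

# Hypothetical subread qualities for illustration:
for q in (13, 15):
    print(f"Q{q}: {phred_to_error(q):.1%} expected errors per base")
# Q13: 5.0% expected errors per base
# Q15: 3.2% expected errors per base

# A 2-point gain cuts raw errors by ~37% (a factor of 10**0.2), and that
# improvement compounds across the many subread passes feeding consensus.
```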

Along with all this improvement, Revio is dropping in list price from $779K to $599K - or similar to what the list used to be on a Sequel IIe. 

Ready for Their Spit Take

Back in 2021 PacBio acquired sample prep maker Circulomics.  The Circulomics Nanobind PanDNA kit has been a workhorse for many sample types - but not saliva.  Now saliva joins the list, requiring a 500 microliter sample and opening up much easier access to human samples.  I haven't done a spit test myself - but I have swabbed each of my dogs.  Coupled with SPRQ's lower input requirement - and the 5mC calling for quality control - Revio should become even more popular for human genetics research.

Head in the Cloud

For years it has been an option to process your PacBio data in the cloud, if you were willing to learn the intricate geekery of setting up and maintaining cloud compute resources.  PacBio will soon (Q1) be rolling out SMRT Link Cloud, usable with any S3-compatible compute provider (e.g. AWS, GCP or Azure).  The software is free; users are responsible for paying their own cloud costs.  Henry suggested that in the future there might be subscription-based advanced analysis tools, but that's not a given.  SMRT Link Cloud is also intended to interface cleanly with analysis partners such as GeneYX or DNAnexus - or perhaps ultimately with tools from some lab that isn't yet using HiFi; Henry believes "our best ideas will come from our customers".

Closing

As noted, there is a big event tonight with many rumors surrounding it (alas, the grim reaper has prevented a proper Fleetwood Mac from being the musical headliners).  If you want to start any rumors about whether Christian & I discussed anything beyond what is above, you have about half a day to propagate them.

Wednesday, October 09, 2024

MiSeq Makeover

MiSeq is the oldest instrument in Illumina's lineup, first unveiled back in 2011.  MiSeq's launch stole much of the thunder from the Ion Torrent PGM at the time.  Illumina brought out other instruments to push the lower boundary of their line: MiniSeq came in 2016 and iSeq 100 in 2018 - but MiSeq remained the most popular instrument of that batch.  It has a warm place in my heart; at Starbase we contracted out many MiSeq runs since its batch size was often very appropriate for us.  In the meantime, various other instruments came and went: HiSeq launched around the same time as MiSeq and was later followed by HiSeq X, and in that period we've seen Ion PGM replaced by Ion Proton, PacBio cycle through multiple models, and 454 abandon the market - as well as fizzles such as Genapsys.  But today Illumina announced a new instrument family - the MiSeq i100, a name merging the MiSeq moniker with a nod to the iSeq 100 - which harmonizes the low end of their line with the higher end.

Friday, September 27, 2024

QuantumScale: Two Million Cells is the Opening Offer

I'm always excited by sequencing technology going bigger.  Every time the technology can generate significantly more data, experiments that previously could only be run as proofs-of-concept can become routine, and what was previously completely impractical enters the realm of proof-of-concept.  These shifts have steadily enabled scientists to look farther and broader into biology - though the complexity of the living world always dwarfs our approaches.  So it was easy to say yes several weeks ago to an overture from Scale Bio to again chat with CEO Giovanna Prout about their newest leap forward: QuantumScale, which will start out enabling single cell 3' RNA sequencing experiments with two million cells of output - but that's just the beginning. And to help with it, they're collaborating with three other organizations sharing the vision of sequencing at unprecedented scale: Ultima Genomics on the data generation side, NVIDIA for data analysis, and the Chan Zuckerberg Initiative (CZI), which will subsidize the program and make the research publicly available on Chan Zuckerberg's CELLxGENE Discover.


Scale Bio is launching QuantumScale as an Early Access offering, originally aiming for 100 million cells across all participants - though since I spoke with Prout they've received proposals totaling over 140 million cells.  The first 50 million cells will be converted to libraries at Scale Bio and sequenced by Ultima (with CZI covering the cost), with the second 50 million cells prepped in the participants' labs with Scale Bio covering the library costs (and CZI subsidizing sequencing costs).  Data returned will include CRAMs and gene count matrices.  Labs running their own sequencing have a choice of Ultima or NovaSeq X - the libraries are agnostic, but it isn't practical to run them on anything smaller.  Prout mentioned that a typical target is 20K reads per cell, though Scale Bio and NVIDIA are exploring ways to reduce this; at 2M cells that's 40B reads required - or about two 25B flowcells on a NovaSeq X.
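
The flowcell math in miniature, using only the figures quoted above:

```python
import math

# The sequencing-budget arithmetic quoted above, in code form.
cells = 2_000_000            # one QuantumScale experiment
reads_per_cell = 20_000      # Prout's typical target

total_reads = cells * reads_per_cell
print(f"Total reads: {total_reads / 1e9:.0f}B")                      # 40B

flowcell_reads = 25e9        # one NovaSeq X 25B flowcell
print(f"25B flowcells needed: {math.ceil(total_reads / flowcell_reads)}")  # 2
```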


How do they do it?  The typical Scale Bio workflow has gained a new last step, for which two million cells is expected to be only the beginning.  The ScalePlex reagent can first be used to tag samples prior to the initial fixation, with up to 1000 samples per pool (as I covered in June).  Samples are fixed and then distributed to a 96-well plate in which reverse transcription and a round of barcoding take place.  These are then pooled and split into a new 96-well plate which performs the "Quantum Barcoding", with around 800K barcodes within each well.  Prout says full technical details of that process aren't being released now but will be soon, though she hinted that it might involve microwells within each well.  Indexing primers during the PCR add another level of coding, generating over 600 million possible barcode combinations.  This gives Scale Bio, according to Prout, a roadmap to experiments with 10 million, 30 million or perhaps even more cells per experiment - and multiplet rates "like nothing".
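
Here's a rough sketch of the combinatorics and the multiplet intuition.  Since full details aren't yet released, the split of levels - especially the count of eight indexing primers - is my assumption, chosen to land near the stated 600 million:

```python
# One plausible reading of the barcode combinatorics (the 8 PCR indexes
# are my assumption; Scale Bio hasn't published the details):
rt_barcodes      = 96        # first-round plate (RT barcoding)
quantum_barcodes = 800_000   # "Quantum Barcoding" level, treated here
                             # as the whole second-round space
pcr_indexes      = 8         # assumed indexing-primer level

combos = rt_barcodes * quantum_barcodes * pcr_indexes
print(f"{combos / 1e6:.0f}M barcode combinations")   # ~614M

# Birthday-problem estimate of barcode collisions (i.e., multiplets)
# for 2M cells drawn uniformly from that space:
cells = 2_000_000
collision_pairs = cells * (cells - 1) / (2 * combos)
print(f"~{collision_pairs:.0f} colliding pairs "
      f"({2 * collision_pairs / cells:.2%} of cells)")  # ~0.33%
```

On that back-of-envelope, "multiplet rates like nothing" is plausible: even two million cells barely dent a space of over half a billion combinations.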


As noted above, the scale of data generation is enormous, and it might stress or break some existing pipelines.  Prout suggested that Seurat probably won't work, but scanpy "might".  So having NVIDIA on board makes great sense - they're already performing alignment on the Ultima UG100, and part of the program will be NVIDIA working with participants to build out secondary and tertiary analyses using the Parabricks framework.


What might someone do with all that?  I don't run single cell 3' RNA experiments myself, but reaching back to my pharma days I can start imagining.  In particular, there is a set of experimental schemes known as Perturb-Seq or CROP-Seq which use single cell RNA readouts from pools of CRISPR constructs - the single cell data both provides a fingerprint of cellular state and reveals which guide RNA (or guide RNAs; some designs have multiple per construct) is present.


Suppose there is a Perturb-Seq experiment and the statisticians say we require 10K cells per sample to properly sample the complexity of the CRISPR pool we are using.  Two million cells just became 200 samples.  Two hundred seems like a big number, but suppose we want to run each perturbation in quadruplicate to deal with noise.  For example, I'd like to spread those four replicates around the geometry of a plate, knowing that there are often corner and edge effects and even more complex location effects from where the plate sits in the incubator.  So now only 50 perturbations - perhaps my 49 favorite drugs plus a vehicle control.  Suddenly 2M cells isn't so enormous any more - and I didn't even get into timepoints, different cell lines, different compound concentrations, or any of the numerous other experimental variables I might wish to explore (see the sketch below).  But Perturb-Seq on 49 drugs in quadruplicate, at a single concentration in a single cell line, is still many orders of magnitude more perturbation data than we could have dreamed of packing into three 96-well plates two decades ago at Millennium.
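
The cell-budget arithmetic above, as a reusable sketch:

```python
# How many distinct perturbations fit in a fixed cell budget.
def perturbations_supported(total_cells: int, cells_per_sample: int,
                            replicates: int) -> int:
    samples = total_cells // cells_per_sample   # 2M / 10K = 200 samples
    return samples // replicates

# 2M cells, 10K cells per sample, quadruplicate:
print(perturbations_supported(2_000_000, 10_000, 4))       # 50

# Add one more design axis (e.g., 3 timepoints) and the budget shrinks fast:
print(perturbations_supported(2_000_000, 10_000, 4 * 3))   # 16
```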


And that, as I started with, is the continuing story: 'omics gets bigger, and our dreams of what we might explore ratchet up to the new level of what's just within reach.


The announcement of QuantumScale also has interesting timing for the industry, arriving a bit over a month after Illumina announced it was entering the single cell RNA-Seq library prep market with the purchase of Fluent Biosciences.  While nobody (except perhaps BGI/MGI/Complete Genomics) ties their single cell solution exclusively to one sequencing platform, the connection of Scale Bio and Ultima makes clear business sense - Illumina is now a frenemy to be treated more cautiously, and boosting an alternative is good business.  Ultima would of course love it if QuantumScale nudges more labs into their orbit, and these 3' counting assays perform very well on Ultima, with few concerns about homopolymers confusing the results (and Prout assures me that all the Scale Bio multiplex tags are read very effectively).  And as is so often the case, NVIDIA finds itself at the center of a new data-hungry computing trend.


Will many labs jump into QuantumScale?  Greater reach is wonderful, but one must have the budget to run the experiments and grind through the data.  PacBio in particular, and to a degree Illumina, have seen their big new machines face limited demand - or, in the case of Revio, the real possibility that everyone is spending the same money to get more data (great for science, not great for PacBio's bottom line).  But perhaps academic labs won't be the main drivers here; instead it may be pharma, and perhaps even more so the emerging space of tech companies hungry for biological data to train foundation models - sometimes not even having their own labs but instead relying on companies such as my employer to run the experiments.


A favorite quote of mine is from late 1800s architect Daniel Burnham; among his masterpieces is Washington DC's Union Station. "Make no little plans. They have no magic to stir men's blood and probably will not themselves be realized."  I can't wait to see what magic is stirred in women's and men's blood by QuantumScale, which is certainly not the stuff of little plans.


[2024-10-02 tweaked wording around how the program is funded]

Thursday, August 29, 2024

Illumina Would Like to Change the Conversation

A maxim from the great but fictional advertising executive Don Draper: "if you don't like what people are saying, change the conversation".  In an online strategy update presented two weeks ago (Slides / Replay), Illumina announced they'd like a new conversation around sequencing costs.  No longer will they tout reagent cost per basepair; instead they will focus on the total cost of sequencing workflows.  The obvious cynical response is that Illumina is conceding defeat on raw cost, having been severely beaten by Ultima Genomics (and Complete Genomics aka MGI, though that group continues to face stiff headwinds) and even matched - if you have the volume - by Element Biosciences.  Total cost of ownership is what really matters, right?  The catch is how it is being calculated, and who is doing the calculating.

It has always been known that cost per gigabase or per million reads was a convenient fiction.  Convenient, because only simple arithmetic was required to convert performance specs and list prices into the metrics.  But a fiction, since all the other costs didn't magically go away.  But which costs are we now counting? And how do we count them?  For example, if the library prep requires 4 hours of hands-on time, whose hands?  A Ph.D. paid at Boston rates or a fresh B.S. graduate paid at U.S. heartland rates? (Not knocking either - but cost-of-living in Boston is particularly painful for those starting out, and that is reflected in higher wages.) Illumina would particularly like to highlight the value of their DRAGEN computational acceleration platform - but when comparing it to conventional compute, what number do you pencil in?  It all runs afoul of a dictum thrown out in a class on product financial modeling back at Millennium: keep it simple - "why spend the effort to invent a lot of numbers when you can just invent a few?"
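
To illustrate why "whose hands?" matters, a toy cost model - every number in it is invented purely for illustration:

```python
# Toy total-cost model showing how one soft assumption (labor rate)
# swings a per-sample cost comparison.  All numbers are invented.
def cost_per_sample(reagents: float, hands_on_hours: float,
                    hourly_rate: float, compute: float) -> float:
    return reagents + hands_on_hours * hourly_rate + compute

for label, rate in [("Boston Ph.D.", 75.0), ("heartland B.S.", 25.0)]:
    c = cost_per_sample(reagents=120.0, hands_on_hours=4.0,
                        hourly_rate=rate, compute=15.0)
    print(f"{label}: ${c:.2f}/sample")
# Boston Ph.D.: $435.00/sample
# heartland B.S.: $235.00/sample

# Same workflow, nearly 2x difference - before anyone even argues about
# what to pencil in for DRAGEN vs. conventional compute.
```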

Illumina would like to calculate from a purified DNA sample on one end to results on the other, which fits with their strategy of offering - but not insisting on - vertical integration.  So library prep, running the sequencer, primary bioinformatics and secondary bioinformatics.  The same webinar teased two new library prep products that will further fit this model, though they are a year to a year-and-a-half away (if Illumina keeps schedule).

Other companies have already been taking potshots at Illumina on cost angles that might not make it into Illumina's official numbers.  For example, the Ultima Genomics UG100 has a "daily care and feeding" arrangement which differs greatly from Illumina's "load a new run after the previous one has finished" - and since Illumina run times are often, annoyingly, not even multiples of 24 hours, full Illumina instrument utilization will ultimately require evening and graveyard shifts.  Oxford Nanopore would similarly tout the ability of PromethION to launch new runs at will.  Element and Oxford would both point to lower capital costs.  And so on.
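
A quick sketch of that scheduling problem, assuming a hypothetical 44-hour run loaded back-to-back:

```python
# Why run lengths that aren't multiples of 24h push loads into the night:
# a hypothetical 44-hour run, loaded back-to-back starting at 9 AM.
run_hours = 44        # invented run length for illustration
start = 9             # first load at 09:00
for run in range(4):
    load_time = (start + run * run_hours) % 24
    print(f"Run {run + 1} loads at {load_time:02d}:00")
# Run 1 loads at 09:00
# Run 2 loads at 05:00
# Run 3 loads at 01:00
# Run 4 loads at 21:00
```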

Which also brings up the question: under what scenario are we calculating costs?  One with enough samples arriving all at once to get maximum cost efficiency on a NovaSeq X 25B flowcell?  Or a scenario favoring Element, where you must run now with a much smaller batch of samples - which seems the more practical model for the majority of core labs?  So many ways for each company to frame the problem to favor themselves and prevent any sort of apples-to-apples comparison!

Two New Library Preps -- in the Future

Illumina touted two new library prep approaches they are developing - one which claims to perform library prep on the flowcell, and another offering "5 base" sequencing which would call 5-methylcytosine (5mC).  No details were provided as to how either would be accomplished.

Element has been leading in moving library processes onto the flowcell, though in their case it isn't the initial library prep but hybrid capture enrichment.  The Illumina prep won't be cost feasible without some sort of pre-instrument operation: the input DNAs must be tagged, because there are just about no applications which call for running an entire 25B flowcell on a single sample.  Perhaps this would just be tagging with barcoded Nextera (Tn5), after which the samples could be pooled and placed on the flowcell to complete the process.  Another speculation I've seen is that the PIPseq templating technology acquired from Fluent would somehow apply.

Illumina is not only promising a simplified workflow, but also that the quality of the final data will be better than any other solution out there - and they were clearly aiming at (without naming) PacBio HiFi data.  That is certainly in the category of "show me the data!", as it is a very hard challenge - particularly since good long-range contiguity requires high molecular weight preps going into the process.  This claim might suggest they are using the PIPseq technology to generate linked reads a la the old 10X Genomics kit - but I remain skeptical that such data can deliver in the face of certain types of repetitive content, such as Variable Number of Tandem Repeat (VNTR) alleles where the repeat array is longer than the actual read length.  And there is a range of applications - perhaps not yet as big as whole human genomes, but someday - which require high accuracy single molecules: each single molecule read is the datapoint.

The other big promise is a 5-base reading chemistry.  The first thing to note is that it isn't the same product as the "on instrument library prep".  Illumina also didn't talk about reading 5-hydroxymethylcytosine (5hmC), the rarer but potentially buzzier additional mammalian epigenetic mark.  The claim is their method will be a simple workflow with a single library - so not a case of running one bisulfite- or enzymatically-converted library to read 5mC and another native one to read the genome itself.  A speculation I'll throw out is again around PIPseq - perhaps some partitions would have the enzymes to recode 5mC to something else (or all the non-5mC to U, as most modification methods do).

The most advanced approach in this space is Biomodal, which is overdue for a focused look here (the company was founded by the creator of the Solexa technology, Shankar Balasubramanian, originally under the name Cambridge Epigenetix).  Biomodal creates libraries which are effectively duplexes, with one read covering one strand and the other read the opposite strand.  Through a clever series of enzymatic steps, the end result is that comparing the two strands can reveal both 5mC and 5hmC while still reading the underlying sequence - 6 base sequencing.  Of course, there ain't no such thing as a free lunch - any advantages of having paired-end reads for mapping are no longer available, and there's always the danger of creating noise when the enzymes don't hit their marks.

Illumina didn't announce a purchase of Biomodal, so they must have found a different way of converting.  They also promised a simple workflow - a knock I've heard on Biomodal is that its workflow is not simple.

One smaller tease from Illumina is a goal of putting XLEAP chemistry on the MiSeq - which would certainly tidy up their product line.  But would this be on existing MiSeqs, or is a next generation MiSeq under development?  That was left ambiguous - as was what would happen to MiniSeq and iSeq in the process.

All-in-all, it is a welcome change to see Illumina acting as if competition exists - the webinar was full of claims that the company is listening to their customers and seeking input.  So they are going to talk the talk of not being stuck in monopolist mode - but will they walk the walk?  Let's see how the next few years play out.

Monday, July 29, 2024

Musings on Possible Fixes To PacBio & ONT's Achilles Heels

I recently tried to stake a claim that I had first conceived Oxford Nanopore's "6b4" strategy for solving homopolymers, but that appropriately brought a number of citations for the concept predating my blog piece.  Not one to give up easily (and as hinted in that piece), I'm going to spend part of this piece trying to stake claims on some new concepts for fixing Oxford Nanopore's homopolymer issues - and PacBio's trouble with polypurine stretches.  To be honest, much of this piece will consist of me posing questions I haven't chased down to see whether they've already been answered in the literature.  Not only might someone do that, but it may well be that data already exists in the public sphere to explore proof-of-concept!  I haven't checked that either - though doing so was on my list of "what to do if management gave me the summer off" - but they didn't.

Tuesday, July 09, 2024

Tagify: seqWell's Line of Tagmentation Reagents Awaits Your Creative Thoughts!

One of the most important enzymes in the sequencing world, one which enables spectacular creativity on the part of novel assay designers, is Tn5 transposase.  Personally, I spend many times each month thinking about how to use Tn5 and its ability to tagment - both tag and fragment - input DNA. There are even reports that Tn5 can tagment RNA-DNA hybrids, such as those from reverse transcription, or even long single-stranded DNA.  I've covered seqWell in the past, with their fully kitted reagents; now the company (which just turned ten) is launching a Tagify product line focused on enabling NGS dreamers to easily explore new Tn5-based library preparation methods.


Friday, June 28, 2024

mRNA Therapeutic / Vaccine Quality Control: A Major ONT Opportunity?

Oxford Nanopore is in the process of morphing into a product-focused company, and so must identify specific markets in which they believe nanopore sequencing can compete or even dominate.  One such market that was spotlighted this year at London Calling is the quality control of mRNA therapeutics, where nanopore sequencing may be able to replace a kitchen sink of technologies and often provide superior data.

Pharmaceutical and diagnostic quality control is both similar to and very different from research.  While many sequencing research experiments are to some degree fishing expeditions, in a quality control assay very specific hypotheses are tested against specific, pre-determined thresholds.  Consistency of results is most critical: an assay run today must be comparable with one run last month or last year.  These markets may also be less cost-sensitive than research; if a QC test is part of qualifying a vaccine batch which will sell for millions of dollars, spending a thousand on that assay isn't unreasonable at all.

It's worth reviewing how mRNA vaccine drug substances are made. The initial vaccine design is synthesized into a plasmid; this design includes a poly-A tail followed by a restriction site (which cannot occur within the vaccine design, though it could occur elsewhere in the plasmid backbone).  Enormous batches of plasmid are grown in E.coli, extracted, and then linearized with the restriction enzyme that cuts after the poly-A tail.  In vitro transcription is used to transcribe the linear template, with the nucleotide mix containing a uridine analog such as N1-methylpseudouridine in place of uridine.  If the BioNTech process is used, the nucleotide pool also contains a guanine analog which carries a 5' cap structure (CleanCap); if Moderna's process, the in vitro transcription product is treated with a capping enzyme (typically Vaccinia Capping Enzyme aka VCE; please see the conflict-of-interest disclosure at the bottom of this piece). After purification and concentration of the active drug substance (removing nucleotides, process enzymes, uncapped product, etc.), the drug product is ready for the fill-and-finish steps of encapsulation in lipid nanoparticles and filling vials for distribution.
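
For a flavor of the in-silico side of this, here's a minimal sketch of one design check implied above: confirming the linearization enzyme's recognition site is absent from the vaccine design.  The enzyme (BsaI) and the sequence are placeholders I chose for illustration, not any manufacturer's actual choices:

```python
# Sketch: verify a restriction site occurs nowhere in the vaccine design,
# checking both strands.  Enzyme and sequence are hypothetical.
def find_sites(sequence: str, site: str) -> list[int]:
    """Return 0-based positions of a recognition site on either strand."""
    comp = str.maketrans("ACGT", "TGCA")
    rev_comp = site.translate(comp)[::-1]
    hits = []
    for motif in {site, rev_comp}:     # set() handles palindromic sites
        start = sequence.find(motif)
        while start != -1:
            hits.append(start)
            start = sequence.find(motif, start + 1)
    return sorted(hits)

vaccine_design = "ATGGCTAGC..."   # placeholder for the real ORF + UTRs
site = "GGTCTC"                   # BsaI recognition site, for illustration
assert not find_sites(vaccine_design, site), "site must not occur in design"
```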

QC is all about detecting what might go wrong and ensuring consistency of product.  mRNA therapeutics and vaccines are complex products, with many possible parameters to measure.

First, there's the question of "is this the right product?".  mRNA vaccines continue to evolve and expand in scope, with new designs targeting specific SARS-CoV-2 variants, as well as influenza and RSV.  If a vaccine product should be one specific variant, it is mislabeled and unusable if it is really a different variant.  Many vaccines are now polyvalent, targeting multiple viruses or multiple variants of a single virus.  This adds a whole new dimension: not only has the correct set of vaccines been blended together, but is the fraction of the whole for each one within defined bounds?  As RNA products, there is also the question of whether the RNA is what was intended and no mutations have arisen during propagation of the plasmid.

Similarly, was the correct uridine analog used in production?  In vitro transcription may generate undesirable products, such as double-stranded forms of the intended product - how much of each is present?  What fraction of the transcripts are capped? Are the RNAs full length, or are partial or degraded versions present? How much plasmid is left, and is it in linear or closed-circular form?  How much E.coli genomic DNA contamination is present?

Many "old school" technologies exist for many of these questions.  A standard gel can be used to assess the length distribution.  Sanger or short read sequencing can be used for sequence verification - though Sanger will be a poor choice for multivalent designs.  HPLC may be used for a number of the questions.  But typically each assay asks a single question, and often with significant constraints. For example, if a problem is discovered in a multivalent vaccine in which there are out-of-spec shorter RNAs present, can Sanger or short reads tell which component is degraded?  

Pfizer has published an approach using specific RNA cleavage (harking back to how Woese sequenced RNA to create the Archaea hypothesis - and much before that) feeding into mass spectrometry.  In some ways it looks like really short short-read sequencing - some fragments are indistinguishable from each other.  The perceived advantages are that this method can distinguish fragments containing the correct uridine analog from those with plain uridine, and it can distinguish capped 5' end fragments from uncapped ones.  I've been meaning to do a deep dive on this since Kevin McKernan pointed me to it over a year ago; time to re-prioritize that!

ONT is proposing that Direct RNA sequencing can be used to build a single assay testing nearly all - if not all - attributes of the final drug product, with standard DNA sequencing for assessing batches of circular or linearized plasmids.  As noted in my piece on ElysION and TraxION, this sort of "applied market" would be very appealing to ONT in terms of providing a steady source of revenue.  Direct RNA is the only currently marketed sequencing approach that can look at the modified bases, potentially giving ONT a large edge.  Many of the questions of interest are better answered with long reads - the distribution of RNA species lengths, and which RNA species are which length - giving any long read platform an edge.  Should there be a problem, long read sequencing can quickly identify correlations between different anomalies.

Of course, this does require high levels of precision and accuracy.  Data was presented suggesting that minor variants can be detected at around 1% frequency, and improved algorithms for poly-A length determination appear to be very precise.

ONT dreams of covering more angles.  For example, nanopore sequencing on its own probably can't determine whether the 5' cap structure is present.  But with some sort of pre-processing - perhaps resembling Cappable-Seq/Recappable-Seq - it may be possible to tag either correctly capped or uncapped messages.  Similarly, it may be possible to differentially tag single-stranded and double-stranded RNA.

In terms of scale, Direct RNA sequencing in the current ONT protocol cannot be barcoded.  For huge infectious disease batches that may not be an issue; for small personalized cancer vaccine batches, cost may matter more.  Flowcell washing may be one solution, or ONT may be driven to enable barcoding (there are apparently external protocols for this).

How big a market will RNA vaccines be for ONT?  That is of course the big question.  mRNA vaccines seem to be here to stay, but how many more vaccines will be launched?  Delivering other therapeutics by mRNA is still an unproven market.  If mRNA delivery turns out to be a growth market, ONT can ride that wave.  If it remains a niche market, there's still gain for ONT but not what will drive them to profitability.  Lacking a reliable crystal ball, everyone must simply wait to see how this unfolds.



Conflict of Interest Disclosure / humble brag / me pretending to do Business Development.  I am (still!) employed by and hold stock in Ginkgo Bioworks. During the pandemic Ginkgo Bioworks developed a new fermentation process for producing Vaccinia Capping Enzyme (VCE).  This process is ten-fold more productive than the baseline process.  Ginkgo licensed this process to Aldevron, which is now owned by Danaher.  So production of mRNA therapeutics with capping using VCE may, through an opaque process, benefit me financially.  Little to no evidence of that so far, but it could happen!  And if you have a fermentation process that could be tuned up, feel free to reach out to me!