I'm always excited by sequencing technology going bigger. Every time the technology can generate significantly more data, experiments that previously could only be run as proof-of-concept can move to routine, and what was previously completely impractical enters the realm of proof-of-concept. These shifts have steadily enabled scientists to look farther and broader into biology - though the complexity of the living world always dwarfs our approaches. So it was easy to say yes several weeks ago to an overture from Scale Bio to again chat with CEO Giovanna Prout about their newest leap forward: QuantumScale, which will start out enabling single cell 3' RNA sequencing experiments with two million cells of output - but that's just the beginning. And to help with it, they're collaborating with three other organizations sharing the vision of sequencing at unprecedented scale: Ultima Genomics on the data generation side, NVIDIA for data analysis, and the Chan Zuckerberg Initiative (CZI), which will subsidize the program and make the research publicly available on CZ CELLxGENE Discover.
Scale Bio is launching QuantumScale as an Early Access offering, originally aiming for 100 million cells across all participants - though since I spoke with Prout they've received proposals requesting over 140 million cells. The first 50 million cells will be converted to libraries at Scale Bio and sequenced by Ultima (with CZI covering the cost), with the second 50 million cells prepped in participants' labs with Scale Bio covering the library costs (and CZI subsidizing sequencing costs). Data returned will include CRAMs and gene count matrices. Labs running their own sequencing have a choice of Ultima or NovaSeq X - the libraries are platform-agnostic, but it isn't practical to run them on anything smaller. Prout mentioned that a typical target is 20K reads per cell, though Scale Bio and NVIDIA are exploring ways to reduce this, so with 2M cells that's 40B reads required - or about two 25B flowcells on a NovaSeq X.
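The read budget above is simple to check. A minimal back-of-the-envelope sketch, taking the 20K reads/cell target from Prout and treating a NovaSeq X 25B flowcell as yielding a nominal 25 billion reads:

```python
import math

# Back-of-the-envelope read budget for a 2M-cell QuantumScale run.
cells = 2_000_000
reads_per_cell = 20_000          # typical target per Prout
total_reads = cells * reads_per_cell

flowcell_yield = 25_000_000_000  # nominal yield of a 25B flowcell
flowcells_needed = math.ceil(total_reads / flowcell_yield)

print(f"{total_reads:,} reads -> {flowcells_needed} x 25B flowcells")
# 40,000,000,000 reads -> 2 x 25B flowcells
```

This is also why any push to reduce reads per cell matters: halving the per-cell target would fit the whole experiment on a single flowcell.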
How do they do it? The typical Scale Bio workflow has gained a new last step, for which two million cells is expected to be only the beginning. The ScalePlex reagent can first be used to tag samples prior to the initial fixation, with up to 1,000 samples per pool (as I covered in June). Samples are fixed and then distributed to a 96-well plate in which reverse transcription and a round of barcoding take place. These are then pooled and split into a new 96-well plate which performs the "Quantum Barcoding", with around 800K barcodes within each well. Prout says full technical details of that process aren't being released now but will be soon, though she hinted that it might involve microwells within each well. Indexing primers during the PCR add another level of coding, generating over 600 million possible barcode combinations. This gives Scale Bio, according to Prout, a roadmap to experiments with 10 million, 30 million or perhaps even more cells per experiment - and multiplet rates "like nothing".
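One way the quoted numbers could multiply out - and I stress this is my own illustrative arithmetic, since the actual Quantum Barcoding scheme hasn't been disclosed; in particular the number of PCR indexing primers below is purely an assumption chosen to land near the stated figure:

```python
# Hypothetical barcode arithmetic consistent with the figures quoted;
# the real scheme is undisclosed, so treat this as illustration only.
rt_wells = 96               # first-round RT/barcoding plate
quantum_barcodes = 800_000  # "around 800K barcodes within each well"
pcr_indexes = 8             # ASSUMED number of indexing primers

combinations = rt_wells * quantum_barcodes * pcr_indexes
print(f"{combinations / 1e6:.0f}M combinations")  # 614M combinations
```

The point is less the exact factors than the multiplicative logic: each added barcoding round multiplies the combination space, which is what keeps multiplet rates low as cell counts climb.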
As noted above, the scale of data generation is enormous, and that might stress or break some existing pipelines. Prout suggested that Seurat probably won't work, but scanpy "might". So having NVIDIA on board makes great sense - they're already on the Ultima UG100 performing alignment, and part of the program will be NVIDIA working with participants to build out secondary and tertiary analyses using the Parabricks framework.
What might someone do with all that? I don't run single cell 3' RNA experiments myself, but reaching back to my pharma days I can start imagining. In particular, there are a set of experiment schemes known as Perturb-Seq or CROP-Seq which use single cell RNA readouts from pools of CRISPR constructs - the single cell data both provides a fingerprint of cellular state and reveals which guide RNA (or guide RNAs; some of these have multiple per construct) are present.
Suppose there is a Perturb-Seq experiment and the statisticians say we require 10K cells per sample to properly sample the complexity of the CRISPR pool we are using. Two million cells just became 200 samples. Two hundred seems like a big number, but suppose we want to run each perturbation in quadruplicate to deal with noise. For example, I'd like to spread those four replicates around the geometry of a plate, knowing that there are often corner and edge effects and even more complex location effects from where the plate sits in the incubator. So now we're down to only 50 perturbations - perhaps my 49 favorite drugs plus a vehicle control. Suddenly 2M cells isn't so enormous any more - and I didn't even get into timepoints or using different cell lines or different compound concentrations or any of numerous other experimental variables I might wish to explore. But Perturb-Seq on 49 drugs in quadruplicate at a single concentration in a single cell line is still many orders of magnitude more perturbation data than we could have dreamed of packing into three 96-well plates two decades ago at Millennium.
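The budgeting above can be sketched in a few lines - the numbers are the hypothetical ones from the text, not a recommendation for any real design:

```python
# Budgeting the hypothetical Perturb-Seq design from the text.
total_cells = 2_000_000
cells_per_sample = 10_000  # statistical floor to sample the CRISPR pool
replicates = 4             # quadruplicate, spread across plate positions

samples = total_cells // cells_per_sample
perturbations = samples // replicates
print(f"{samples} samples -> {perturbations} perturbations")
# 200 samples -> 50 perturbations
```

Each additional experimental axis (timepoints, cell lines, concentrations) divides the perturbation count again, which is how 2M cells shrinks so quickly.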
And that, as I started with, is the continuing story: 'omics gets bigger and our dreams of what we might explore simply ratchet up to the new level of just-in-reach.
The announcement of QuantumScale also has interesting timing in the industry, arriving a bit over a month after Illumina announced it was entering the single cell RNA-Seq library prep market with the purchase of Fluent BioSciences. While nobody (except perhaps BGI/MGI/Complete Genomics) ties their single cell solution exclusively to one sequencing platform, the connection of Scale Bio and Ultima makes clear business sense - Illumina is now a frenemy to be treated more cautiously, and boosting an alternative is good business. Ultima would of course love it if QuantumScale nudges more labs into their orbit, and these 3' counting assays perform very well on Ultima with few concerns about homopolymers confusing the results (and Prout assures me that all the Scale Bio multiplex tags are read very effectively). And as is so often the case, NVIDIA finds itself in the center of a new data-hungry computing trend.
Will many labs jump into QuantumScale? Greater reach is wonderful, but one must have the budget to run the experiments and grind through the data. PacBio in particular, and to a degree Illumina, have seen their big new machines face limited demand - or in the case of Revio the real possibility that everyone is spending the same money to get more data (great for science, not great for PacBio's bottom line). But perhaps academic labs won't be the main drivers here; instead it may be pharma and, perhaps even more so, the emerging space of tech companies hungry for biological data to train foundation models - some not even having their own labs but instead relying on companies such as my employer to run the experiments.
A favorite quote of mine is from late 1800s architect Daniel Burnham; among his masterpieces is Washington DC's Union Station. "Make no little plans. They have no magic to stir men's blood and probably will not themselves be realized." I can't wait to see what magic is stirred in women's and men's blood by QuantumScale, which is certainly not the stuff of little plans.
[2024-10-02 tweaked wording around how program is funded]