The Proof of Concept paper outlines the key steps of the SBB process, though according to Simon nearly every detail of the detection method will probably be replaced. The core chemistry involves annealing a primer and then interrogating the next base by sequentially flowing in each of the four nucleotides in the presence of a polymerase that cannot incorporate them. In the PoC paper this was cleverly detected label-free on single molecules using plasmon resonance, but that would not allow great densities, so apparently the company will be switching to fluorescently labeled nucleotides. After each interrogation flow and detection, the complex is washed off; the greatest signal is obtained for the correct base. Upon completion of interrogation, a mix of four nucleotides (these must be terminators, though the preprint says dNTPs) is flowed in with an extension-competent polymerase to extend the primer. Deblock the terminator and proceed with another round of interrogation and extension.
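To make that cycle concrete, here is a toy sketch in Python. Everything in it -- the function names, the signal model, the numbers -- is my own invention for illustration, not Omniome's actual detection scheme.

```python
# Toy sketch of one SBB cycle as described above: four binding-only
# interrogation flows, a base call from the strongest signal, then a single
# terminated extension plus deblock. Signal model and names are invented.
import random

BASES = "ACGT"
COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def interrogation_signal(template_base, flowed_base):
    """Toy model: the correct (complementary) nucleotide binds strongly."""
    correct = COMPLEMENT[template_base] == flowed_base
    return random.gauss(1.0 if correct else 0.1, 0.05)

def sbb_cycle(template, position):
    """Interrogate the next template base, then extend by one terminated base."""
    signals = {}
    for base in BASES:                    # four sequential interrogation flows
        signals[base] = interrogation_signal(template[position], base)
        # ...wash off the non-incorporating polymerase/nucleotide complex...
    call = max(signals, key=signals.get)  # greatest signal = called base
    # extension flow: terminator mix + extension-competent polymerase, then deblock
    return call

def sequence_template(template):
    return "".join(sbb_cycle(template, i) for i in range(len(template)))

print(sequence_template("ACGTTGCA"))
```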
Since advancing one nucleotide involves five nucleotide-plus-polymerase flows, the system looks set to be slower than Illumina. But the claim is that the Phred scores are at least an order of magnitude higher than Illumina's -- unmodified nucleotides tend to give lower error in the interrogation step. In a Twitter dialogue, Simon pointed out that separating the interrogation and extension steps may allow optimizing each rather than settling for an uneasy compromise of conditions. Having a simpler, unlabeled terminator may also have advantages in terms of incorporation efficiency.
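Back-of-the-envelope on the flow count, under the simple assumption that every base needs all four interrogation flows plus one extension flow (these are my numbers, not vendor specs):

```python
# Rough flow arithmetic (assumed numbers): five flows per base on SBB versus
# one reversible-terminator chemistry cycle per base on Illumina.
read_length = 150
flows_per_base = 4 + 1                    # 4 interrogations + 1 extension
print(read_length * flows_per_base)       # 750 flows for a 150 bp read
print(read_length)                        # vs 150 chemistry cycles on Illumina
```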
The production system would move from single molecules to a clonal system, enabling very high densities. The drawback, of course, is the introduction of phasing error. With a single extension-plus-deblocking step per cycle, one might expect very high efficiencies and therefore little dephasing and large numbers of cycles, but there are no details on this.
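For a rough sense of why that efficiency matters, here is the standard geometric-decay arithmetic -- a crude model that ignores pre-phasing and assumes a single per-cycle efficiency, with made-up efficiency values:

```python
# Crude dephasing model: the fraction of strands still in phase after n cycles
# decays as efficiency**n. Efficiency values below are illustrative only.
def in_phase_fraction(efficiency, cycles):
    return efficiency ** cycles

for eff in (0.995, 0.999, 0.9995):
    print(f"{eff}: {in_phase_fraction(eff, 150):.1%} in phase after 150 cycles")
# 99.5% per cycle leaves ~47% in phase after 150 cycles; 99.95% leaves ~93%
```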
An intriguing possibility floated by Simon is that the system may be able to read 5-methylcytosine as distinct from cytosine. Presumably this would involve watching the kinetics of the G interrogation flow, akin to how PacBio detects methylation through a difference in kinetics. That means fancier, faster optics than would be required just to detect a stable signal, but a system capable of reading methylation at scale from native, untreated DNA is tantalizing.
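Purely as a thought experiment on what kinetic discrimination might look like, here is a sketch that calls a position methylated from repeated interrogation measurements; the dwell-time shift, noise, and threshold are all invented numbers, and nothing here reflects the actual platform.

```python
# Speculative sketch: if 5mC in the template shifted the binding kinetics of
# the G interrogation flow, repeated measurements could be thresholded.
# The means, noise, and threshold below are invented for illustration.
import random, statistics

def dwell_times(methylated, n_obs=20):
    mean = 1.3 if methylated else 1.0     # assumed kinetic shift for 5mC
    return [random.gauss(mean, 0.2) for _ in range(n_obs)]

def call_methylation(observations, threshold=1.15):
    return statistics.mean(observations) > threshold

print(call_methylation(dwell_times(methylated=True)))    # usually True
print(call_methylation(dwell_times(methylated=False)))   # usually False
```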
The appeal for PacBio is apparent: with a short-read technology in their stable, they can chase lucrative applications that don't fit well with their long-read approach -- cell-free DNA for cancer screening and monitoring, non-invasive prenatal testing, DNA extracted from FFPE-preserved clinical samples, and other applications where either counting is critical or the DNA is inherently shredded into tiny pieces. As Simon points out, a high-accuracy short-read approach would be very attractive for these cases, since a high error rate can generate false positives against the background of normal/maternal DNA. There are many approaches for tackling this on Illumina, but they involve generating tagged copies and then oversampling, which reduces effective sequencer yield (if you must oversample 10X, then your unique molecular yield is correspondingly one tenth) and can be a bit tricky to get right -- oversample too much and you waste capacity, too little and you don't get the accuracy boost on enough fragments.
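The oversampling trade-off is simple arithmetic but worth writing down (the yield figure is hypothetical):

```python
# UMI consensus needs N reads per unique molecule, so unique-molecule yield
# scales as 1/N. The raw read count below is hypothetical.
raw_reads = 1_000_000_000                 # hypothetical flow cell output
for oversample in (1, 5, 10, 20):
    print(f"{oversample}x oversampling -> {raw_reads // oversample:,} unique molecules")
```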
My concern is that Omniome might require a lot of work to get from wherever it is now to a marketable product. Most post-Sanger sequencers have an acquisition by a big player in their history (Genapsys would seem to be the exception), but there's a depressingly long list of companies acquired and then never seen in the marketplace. Roche slurped up Genia seven years ago and Stratos Genomics last year; Agilent did the same with LaserGen and Thermo Fisher with several startups. Now it could be that these technologies were doomed in any case, but it is also easy to imagine that launching a radical new technology doesn't work well inside a big company, that there are inevitably culture clashes and defections after acquisitions, and that management focus can be an issue.
That last point is my big worry, and yes, this is coming from someone with attention issues. PacBio previously had one mission: develop HiFi technology and push it into many markets. They now have a second mission. Of course everyone imagines lots of synergies and efficiencies -- PacBio and Omniome are both optical technologies, and ultimately a single sales force might sell both more aggressively. There are probably other useful overlaps -- machine learning for various uses, for instance.
But many of the markets will be different. There is also the issue of integrating the bioinformatics software -- or of not doing so. There's always a push to have one unified engine, but how many sites will really be operating both sequencers? Will it be enough, and on enough combined projects, that the effort to bring them together is really worth it?
But my main concern is that PacBio already had plenty of interesting problems on their plate -- continuing to push up HiFi yields and accuracy, solving library construction at high throughput, and all sorts of software improvements to SMRT Link such as moving everything onto HiFi. Perhaps even a few more ideas of mine from personal experience, some of proprietary import. Now there are all the additional chemistry and informatics needs of Omniome, and clever people are inevitably a zero-sum resource -- work on one platform will deny resources to the other. And I'm a bit pessimistic that the synergies and efficiencies will overcome that. Hiring additional clever people is one approach, but finding them is hard, and growth in staff always brings its own inefficiencies and frictions.
Not that I want this to fail -- having more options and more competition in the sequencing market is a universal good in my mind. Perhaps Omniome really was close to getting out to beta and to getting both sequence data and sequencer specs out into the public eye.
I don't see how methyl-C detection can be done on a clonal system (won't it get wiped out by the amplification?), but I'd be interested to know if I'm wrong there.
As always with claims of accuracy, I will wait and see what a customer gets. Hasn't every new NGS company oversold on this? Illumina, however, has proved its accuracy over many, many years.
Keith, great post as usual. I do have a couple of comments though:
1. If they need to cluster first (i.e. assuming this is not going to be single molecule), how can they detect methylation (the methylation signal should have been lost during clustering)?
2. If the read-out is optical, how do they measure kinetics across the whole flow cell (i.e. in real time)? The only option would be to use a CMOS flow cell, in which case they may be limited to small flow cells, given the cost of a large CMOS chip.
Nice write-up.
The terminator has to be fantastically efficient and pure to stop phasing.
If they're using labelled nucs then they're using a custom pol and they'll have photodamage. Does it read on ILMN IP, or is it equivalent?
Cycle times look slow; the box will be mid-sized to large.
Clever synergies may take some of the complexities out of making a new box/product line.
To the two folks who pointed out that clonal amplification (clustering) would erase any methylation, I can only say:
DOH!
This quote from the GenomeWeb write-up gave me a good chuckle.
'While the sequencing chemistry is "very far along and very robust," PacBio will work to dial in the clustering to optimize the number of reads per flow cell and will apply its knowledge about enzymology, surface chemistry, dye and optics, and bioinformatics.'
Just a few unimportant things to wrap up. It's very robust.
I was about to say we only really find out about these things when they ship, but even that's not true; what happens in the years afterwards really makes or breaks these genomic platform technologies. It's a long, long road from announcement to reality.
Looking at PACB's numbers, they're not on track yet for the 97% revenue growth promised last year; it's looking more like 30-40%. This merger ups their running costs in a way that far outstrips growth. With this instrument not due until 2023, and probably not making much initially, PACB looks like it's in the debt stripey-hole. Unless, of course, more announcements are forthcoming.
I agree with the other comments -- methylation is almost certainly out because of the clustering issue. But even if it could be retained after cluster generation, you would be averaging a kinetic measurement over hundreds or thousands of molecules in each well. Even with expensive optics (which seems unlikely if they want to be competitive), I'm not sure this would be possible. Unless, of course, it can be converted to a fifth base, so that no kinetic measurements are needed. In short, I think the kinetic abilities will be lost post cluster gen.
A comment about error rate -- my understanding for applications like cfDNA is that the errors are largely in the library prep, since any PCR amplification will introduce errors even before the library is loaded on the sequencer. So you can make the sequencer as accurate as you want, but you still need UMIs and oversampling to remove errors from the library prep. The only way I can see to work around this is if you could load tiny amounts of un-amplified DNA onto the sequencer. Then you'd need much higher conversion on the flow cell, which is challenging from a cost perspective. Another challenge, of course, would be amplification-free target capture.
If cfDNA PCR introduces errors, then presumably so does cluster generation? Or do the fewer cycles avoid this, or is cluster generation non-exponential so it doesn't arise?
Cluster gen will introduce errors, but I think at a much lower rate, for the following reasons:
- It uses very different polymerase and cycling conditions than library prep, which are likely higher fidelity (I assume Illumina has optimized for this).
- In cluster gen, you only really care if an error happens in the first couple of cycles, since you are taking an average fluorescent measurement over all copied fragments (essentially the equivalent of a UMI consensus). So if an error is made in cycle 10 (they aren't really traditional cycles since it's isothermal), it's such a small fraction of the total signal that you won't ever see it.
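Very roughly, treating cluster generation as ideal doublings (which bridge amplification isn't), an error first copied at "cycle" k ends up in about 2^-k of the cluster's strands:

```python
# Idealized doubling model, for intuition only: an error introduced at copy
# generation k propagates to ~2**-k of the final cluster's strands.
for k in (1, 2, 5, 10):
    print(f"error at cycle {k}: ~{2.0 ** -k:.3%} of cluster signal")
# cycle 1 -> 50%, cycle 10 -> ~0.1%, far below what base calling would notice
```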
Even when you strip out the Covid sales, nanopore had higher revenues from life science research than PacBio in 2020.
~$79 million for PacBio vs ~$90 million (£65.5 million) for nanopore.
Seems like PacBio needs SRS tech, as they are losing the LRS tech battle.
Plasmonic nanohole arrays -- that could sound a bit like the zero-mode waveguides PacBio is using. What's the chance they will "just" load the Omniome sequencing chemistry onto their existing ZMW arrays and do sequencing? It seems they are adding the fluorescent detection anyway...