Thursday, February 01, 2024

Ultima Launches

As part of the run-up to Gold sponsorship at AGBT, Ultima Genomics held a multi-day event in early December, with tours of the headquarters facility and factory floor in the Bay Area and a day at a beautiful Wine Country resort. The resort session included talks from the company, early access collaborators and a pair of big-name early backers, with a few hundred attendees spanning current customers and many contemplating the leap.  So confident was the company in their product, they even invited a blogger to moderate one of the panel discussions!  The UG100 is now officially launched as a fully commercial product, with ambitions to replace panels, exomes and microarrays with whole genome sequences at $100 apiece.  All in an instrument package designed for continuous industrial-scale operation.  Please note that Ultima did review this piece to ensure I didn’t disclose information they did not wish public, but for the most part just gave me some very good proofreading support.  Photos are my own, except as noted.


First, a summary of the technical aspects, working backwards starting with the chemistry.  Ultima is using unterminated sequencing-by-synthesis, with “mostly natural” nucleotide flows in which only a fraction of the nucleotides carry a fluorescent label.  Polymerase remains bound to the end of each growing DNA strand. 

Rather than a traditional flowcell, Ultima has a 200mm wafer - a simple disk of silicon skimmed from the abundant supply chain of the global semiconductor industry.  Clonal DNA beads are bound to the surface.  Sequencing flows and wash buffers are applied to the center of a rotating disk and distributed by centrifugal force via spin-coating technology borrowed from semiconductor manufacturing.  Each reagent is delivered by a separate nozzle, eliminating nucleotide contamination between flows.  The spin-coated reagents spread as a uniformly thin layer, minimizing reagent consumption and avoiding the problems of chaotic flows at the inlets and around corners which can occur in flowcells.

A pristine wafer reflects my group's tour guide
Imaging of the sequencing pattern takes place continuously while the wafer spins, eliminating the need to rapidly accelerate and stop the camera to image multiple regions of a conventional flowcell.  Each UG100 instrument can run two wafers simultaneously, with the wafers alternating between a chemistry station and an imaging station.  With the current chemistry, the instrument can run 4 wafers per day.

Clonal DNA is prepared by automated emulsion PCR.  An OpenTrons robot and three small scientific appliances are used to prepare the beads and emulsions, including an enrichment step.  Emulsion PCR and the oft-hated emulsion breaking are performed by the robot, yielding a tube of beads ready to load onto the instrument.  The company has given a lot of thought to automating and simplifying a once challenging process.  Having a separate instrument for cluster generation is a significant design departure from the integrated approach that has dominated the industry: the original Illumina GA and HiSeq required a separate cluster generation instrument, but most systems since have integrated it.  An integrated Ultima unit would be ginormous, but Ultima also sees integration as undesirable – cluster generation can easily run on a different cadence than sequencing, so separating the two allows more flexibility and higher sequencer utilization.

For library preps, Ultima adapters are used in combination with a number of kits from major suppliers, and library conversion kits are provided to convert alternative libraries to Ultima-compatible ones. 

All of this leads to a rated operation of a 12-hour runtime yielding 6-8 billion reads per wafer in a dual-wafer run.  Since it is a flow chemistry, the read length is a distribution, with 300 bases the expected median; for applications not requiring such lengths, runs can be shortened to save sequencing reagents and increase instrument occupancy and throughput.


Ultima has spent a great deal of effort building an industrial instrument.  Two critical dimensions for industrial application are staff scheduling and instrument uptime.  

Modern flowcells used in sequencers are marvelous, high-precision technological devices.  But all that precision requires a significant amount of specialized manufacturing and assembly, bulk to ship and more bulk to dispose of.  Ultima’s wafers are of amazing precision – atomic-scale flatness – and come with a simple enclosing plastic carrier for handling.  Six wafers can be loaded at a time, and to really queue up runs over a weekend or holiday, an operator can wait for two wafers to go through the bead loading process and then load two more wafers into the magazine; the sequencer will complete the active runs and kick off new ones unattended, effectively expanding the capacity to eight wafers.

Jukebox-style wafer loading

Just as they don’t have a conventional flowcell, Ultima doesn’t sell sequencing “kits” as is standard in the industry.  Running reagents are sold by the tank, with each tank good for 60 hours of dual-wafer operation.  One tank contains separate subtanks for each reagent, and the large buffer tank now sports an integrated handle.  The instrument bears two reagent drawers, so one can be replenished while the other is supporting active sequencing.  A small number of reagent tubes specific to a run – including the loaded beads from the emPCR robot – are borne on a barcoded carrier.

Ultima’s reagent suite
Top left: reagent tanks
Top right: wash buffer tank
Middle: holder for libraries to be loaded
Bottom left: qPCR kit for library quantification from NEB
Bottom right: a fresh wafer in a plastic holder

So the result is that the timing of operator actions is significantly decoupled from the timing of sequencer actions.  There’s no “can’t load one side until the other side is done” or “having to load the sequencer an hour later each consecutive day due to 25-hour run times”.  Operators can simply feed the instrument on a regular schedule and even fully load the machine with wafers and reagents so it can run over a weekend without further intervention, a bit like filling up a tank of gas without needing to stop driving.

Reagent sippers

Of course, such operation is only possible if the instrument runs reliably.  Both Ultima and some of the Early Access partners showed plots indicating that instrument uptime wasn’t very good in the early stages of Early Access, but it has improved significantly.  One of the updates being rolled out to the Early Access sites is a “hardening update” to improve reliability.  Ultima also has a host of instruments at HQ dedicated to validation testing, and even more R&D instruments that can be brought in for additional validation capacity if needed.

The end result is a platform rated to generate 20K human genomes at 30X coverage per year, delivered by twinned deli-fridge sized instruments plus some smaller hardware. Ultima has succeeded in running 22 wafers on a single instrument per week for 300bp read lengths and 28 wafers per week for ~225bp read lengths.
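As a sanity check, the rated 20K genomes per year squares with those wafer numbers.  Here is my own back-of-envelope arithmetic using the figures quoted above (the constants are my assumptions, not Ultima's accounting):

```python
# Back-of-envelope check of the UG100's rated throughput.
# Assumed figures (from the post): ~7e9 reads/wafer (midpoint of the
# 6-8 billion range), 300 bp median read length, 22 wafers/week sustained.

GENOME_BASES = 3.1e9      # approximate human genome size
COVERAGE = 30             # target depth
READ_LEN = 300            # median read length in bases
READS_PER_WAFER = 7e9     # midpoint of the 6-8 billion range
WAFERS_PER_WEEK = 22

reads_per_genome = GENOME_BASES * COVERAGE / READ_LEN
genomes_per_wafer = READS_PER_WAFER / reads_per_genome
genomes_per_year = genomes_per_wafer * WAFERS_PER_WEEK * 52

print(f"{reads_per_genome:.2e} reads per 30X genome")
print(f"{genomes_per_wafer:.1f} genomes per wafer")
print(f"{genomes_per_year:.0f} genomes per instrument-year")
```

At roughly 23 genomes per wafer, 22 wafers per week works out to about 26K genomes per instrument-year, so the 20K rating leaves sensible headroom for QC failures and downtime.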


Image processing and basecalling are performed by a computer server which can either be mounted atop the UG100, or alternatively placed in another room and linked by a dedicated optical cable – the latter option moves noise and additional thermal load away from the sequencer.


Because it uses flow-based chemistry, Ultima should have – and does have – a very low miscall rate relative to terminated chemistries.  The challenge is to accurately call the number of bases at positions where the same nucleotide is repeated consecutively in the sequence (aka a homopolymer).  Ultima has a key advantage over the prior flow chemistries used by 454 and Ion Torrent: those systems detected transient signals (a flash of light from luciferase for 454; a pulse of hydrogen ions for Ion Torrent) whereas Ultima is detecting a stable optical signal.  The community never saw much from Genapsys, which used electrical methods to detect a stable signal but never generated much data or achieved even minimal market success.  Ultima says their signal is linear up to homopolymer lengths of 12 bases, and their chemistry and informatics are optimized for up to 12mers as well.  Which raises the question: how often does anyone really care about longer homopolymers?  I’m sure the answer isn’t “never”, but nor is it “all the time”.

On the indel calling front, Ultima has improved from 96% F1 score when they first exited stealth mode to 99.4% – by tuning amplification conditions and improving bioinformatics tools. Ultima shared that they are now at 99% F1 at 9mers and above 95% at 12mers, a significant improvement from the early access period.  

I would be remiss if I didn’t drop into curmudgeon mode to make my patent-pending complaint that Ultima displays these numbers as linearly scaled accuracy and not log-scaled error rates (aka phred or quality scores).  With accuracy in excess of 99%, framing it on a linear scale will hide any further significant advances in error rate.

Those that were around in the days when there were multiple sequencing technologies competing to be THE next generation sequencing platform (454, Ion Torrent, SOLiD, Helicos and Solexa/Illumina – we all know who won that race!) will recall that different sequencing chemistries necessitated different ways of talking about accuracy.  454 and Ion Torrent had “flow space,” SOLiD had “color space,” and Illumina had “sequence space.”  These were all chemistry-specific attempts to quantify the uncertainty in each detection event, which would then be translated into base calls.  For sequencing by synthesis and “sequence space,” this conversion turns out to be intuitive – each detection event represents a single base, and the detection error rate can be represented directly as a base error rate – the very familiar “phred” quality scores.

Ultima’s technology uses a flow-based chemistry, which means that accuracy is most directly measured in “flow space,” which won’t map directly to “sequence space” or BQ.  While it’s frustrating not to have the easy measure of base quality we’ve all become accustomed to, it comes with a huge upside – an inherently low substitution error rate.  Flow-based chemistry’s Achilles heel is homopolymers (where it becomes harder to distinguish whether 12 or 13 bases were added to a long homopolymer), whereas Illumina chemistry’s Achilles heel is substitutions.

Putting theory into practice (and as if they heard my complaint), Ultima has come up with a measure of substitution call accuracy, “SNVQ.”  Think of it as a new “BQ” that translates from “flow space” to sequence space in order to establish a comparable metric to specifically measure the accuracy of called substitutions.  SNVQ thus represents the probability that a base called as non-reference was an error and actually the reference base.  For example, let’s say a normal sample matches the reference, reading “TAC” at a specific genomic position, but the sequencer reports “TGC” in one read – if the likelihood that the TGC call is an error and the read should actually be TAC is X, then the SNVQ is -10*log10(X).  According to data Ultima showed at the launch event, more than 85% of called substitutions have an SNVQ of 40 or greater – that means the probability those substitution calls are errors (“X” as we termed it above) is less than 0.0001, or 1 in 10,000.
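The SNVQ arithmetic is just the standard phred-style quality conversion; a minimal sketch (function names are mine) showing the conversion in both directions:

```python
import math

def phred(p_error: float) -> float:
    """Convert an error probability to a phred-style quality score."""
    return -10 * math.log10(p_error)

def error_prob(q: float) -> float:
    """Convert a quality score back to an error probability."""
    return 10 ** (-q / 10)

# SNVQ 40 corresponds to a 1-in-10,000 chance the substitution call is wrong
assert abs(error_prob(40) - 1e-4) < 1e-12

# This is also why linear accuracy scales hide progress: 99% and 99.9%
# accuracy look close on a linear scale but differ by 10 phred points.
print(round(phred(0.01)))   # 99% accuracy -> Q20
print(round(phred(0.001)))  # 99.9% accuracy -> Q30
```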


If an error rate of 1 in 10,000 isn’t quite good enough for an application, Ultima has a special mode, involving a slightly different bead prep and downstream informatics, for suppressing errors triggered by sample damage or emPCR errors.  This enables them to achieve 1-in-a-million raw read accuracy for base calls (i.e. parts per million) – in terms of SNVQ, that’s Q60!  In ppmSeq (paired plus minus Sequencing), both strands of the library molecule are attached to a bead and amplified, whereas in many protocols only one strand is bound.  This results in amplification products arising from each strand.  A small tag introduced during library prep confirms that both strands are indeed represented, which is true for about half of the reads (the rest are just standard data).  Since Ultima uses flow chemistry, any position which is a mixture of bases – due to an early emPCR jackpot or, more commonly, damage to the base on one input strand – will result in a radical dephasing of the signal.  In contrast, with 4-base sequencing-by-synthesis there would just be a small signal change at the one base, challenging to detect.  Downstream software then clips the read at the damaged position.

Given that sample errors occur at a rate of 10⁻⁴ to 10⁻⁶, ppmSeq discards a small amount of data to retain only pristine reads.  There isn’t an attempt to correct actual sequencing errors – the approach only works so well because Ultima’s flow-based chemistry effectively doesn’t make substitution errors, so once sample errors are removed, accuracies of 1 part per million or better are possible.  Input amounts as low as 5 nanograms for PCR-free library preparation were reported for ppmSeq.

The incumbent solution to this challenge is duplex sequencing, as exemplified by TwinStrand, in which UMIs and strand-specific tagging are used to detect and weed out false diversity generated by damage or amplification errors.  In addition to requiring massive amounts of oversequencing, another downside of duplex sequencing is that it requires careful tuning: if the library is too diverse, too few UMIs will be seen multiple times to enable the correction; conversely, if the library is not diverse enough, only a limited number of UMIs are grossly oversampled.

Dan-Avi Landau from New York Genome Center demonstrated using ppmSeq for detection of cancer driver mutations in cell-free DNA.  Hopefully some preprints will soon surface so that these results can be discussed in greater detail.


Same as before, but lower cost.  Or as Dan-Avi Landau commented in his talk: “cost limits imagination”.  Anyone familiar with this space would note that as sequencing has dropped in price, applications which were previously thought to be completely impractical became routine.  One personal incarnation of this was during the first few years of Warp Drive Bio, where I implemented a sequencing-based screening process for rare natural product biosynthesis clusters which would have been logistically impossible even with a large Sanger-based sequencing center; with Illumina (outsourced) it was doable and spectacularly successful.  

John Overton of Regeneron believes firmly that “you cannot pursue modern drug discovery and development without incorporating human genetics”.  Regeneron has invested heavily in using exome sequencing coupled to imputation.  But imputation breaks down if the available population genotype data don’t fully capture a given human population - and the existing databases don’t work well with non-European populations.  So Regeneron is using UG100 to generate hundreds of non-European genomes to reduce the disparity.

Yuval Dor from the Hebrew University pointed out that cell-free DNA is really cell-free chromatin, carrying much more information than just sequence or even sequence with methylation status – though that’s no reason not to go deep on methylation analysis.  Cell-free chromatin offers a window into cell death over a very short timeframe, an opportunity to see the effects of recent insults.  Using a panel to focus on a small slice of the genome throws out large amounts of information.


At this meeting Exact Sciences announced that they have committed to using UG100 for an unspecified future product.  Ultima later announced a diagnostics partnership with Quest Diagnostics.  So we’re already seeing large diagnostics players commit to the Ultima platform.  That’s an important early commercial adoption milestone already passed.

New Frontiers

Pengfei Liu of Baylor College of Medicine presented results on ultradeep RNA-Seq for rare disease hypothesis exploration.  A serious challenge for anyone working in diagnostics or translational medicine is that the tissues one wishes to explore with molecular approaches are often inaccessible. The talk resurrected a two-decade old memory of wanting biopsies for a Millennium program that were simply too dangerous for the patients to be ethical.  Liu asks a simple question: what if cells, when made accessible with minimally-invasive methods, can be windows into “tissue specific” transcription relevant to the disease?  In other words, how tissue specific can transcription be if we dredge deeply?  Can we use fibroblasts (from inner cheek) or lymphocytes (from blood draws)?  Or induced pluripotent stem cells created from such cells?

The biggest takeaway from Erez Lieberman Aiden’s talk on HiC is that nobody does HiC as deeply as the methodology can use.  He aims to change that.  At a reception, I made the naive mistake of asking how many samples were loaded on each wafer; the real question should have been whether a dozen wafers were enough for one sample.  Lieberman Aiden calculates that 300 billion reads might still not saturate detection for large loops in the 650-700 kilobase range.  The curse of combinatorics: if you have 3 billion nucleotides in a genome, there are roughly 3 billion squared over two possible pairwise interactions.  But with such profligate use of truly Sagan-esque read quantities – billions and billions – chromatin loops at a wide range of scales can be fully resolved.  If, as Lieberman Aiden posits, such looping drives the vast majority of eukaryotic gene regulation, then such resolution is valuable indeed.  Precisely resolving loop contacts requires precisely calling the junction position in the HiC fragment; with Ultima’s single longish read the junction is nearly always resolved, in contrast to paired-end reads, which risk failing to reach the junction from either end.
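To put rough numbers on that combinatorial curse (my own quick arithmetic, not Lieberman Aiden's exact figures):

```python
# The combinatorial explosion behind deep HiC: with n positions in the
# genome, there are n*(n-1)/2 possible pairwise contacts.
n = 3_000_000_000  # ~3 billion bases in the human genome
pairs = n * (n - 1) // 2
print(f"{pairs:.2e} possible pairwise interactions")  # ~4.5e18

# Even 300 billion reads sample only a sliver of that space,
# since each read can report at most one contact:
reads = 300_000_000_000
print(f"{reads / pairs:.1e} fraction of pair space sampled")
```

Of course, real loops are sampled at the resolution of restriction fragments or bins rather than single bases, but the sketch conveys why even hundreds of billions of reads don't saturate the method.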

Ancient DNA is yet another interesting application.  Archaeological samples are inevitably heavily contaminated with modern bacterial DNA.  A number of clever enrichment schemes have been developed – but why enrich when you can just power through the mess? 

There were more – some of my notes had me cursing my failure to capture details and my illegible handwriting, and several more were marked “do not post” – I was told that the goal is to have these in preprint form prior to AGBT.  So stay tuned – I’m champing at the bit to write up several of these results.

There were also questions posed which are interesting to ponder.  Ultima CSO Doron Lipson asked “can we put single-cell RNA-Seq in the clinic?”  Landau posed the idea of using cell-free DNA released from tumors post-dosing as a possible rapid readout of cytotoxic chemotherapy efficacy – no spike in cfDNA, then try another drug.  As he pointed out, adjuvant chemotherapy (dosing after surgery intended to be curative) benefits only 3% of patients – if a patient won’t benefit, sparing them such treatment will avoid the serious side effects.  Gad Getz of the Broad presented data on deep phylogenetic analysis of tumor sample time courses to identify genetic drivers of therapy resistance.  Joseph Replogle dangled data on ginormous Perturb-Seq studies exploring the effects of knockouts on cells using RNA-Seq readouts.

Commercial Organization

Nobody at the shindig believes that Ultima will take over all applications or that their big, centralized sequencing center model is a universal solution.  Small labs with small, focused studies might not find Ultima’s throughput a good match for their needs.  But as Richard Gibbs pointed out on my panel, centralized centers give large studies the most consistent treatment, the smallest batch effects and therefore the most minable data.  Ultima is designing and scaling their organization on this model.  They will grow out of the “one Field Service Engineer per site” arrangement that was absolutely necessary during Early Access, but there will be a large team ready to address any downtime issues.  Both the company and multiple early access sites commented on how instrument reliability has been steadily improving.  Ultima has also already built out a Regulatory Affairs unit, laying the groundwork for clinical use.

Rows upon rows of instruments on the Ultima factory floor.

As exciting as a $100 genome or 6-8 billion reads per run might be, Ultima has even bigger plans, with several avenues through which to scale over future generations.  The 200mm wafers currently used are not the largest in the semiconductor industry; moving to 300mm wafers would more than double the usable area.  While that wouldn’t translate directly into halved sequencing costs – more area means more reagent to cover it – it’s not hard to salivate over a $50 genome if that’s all such a larger surface could deliver.  While no details were provided, Ultima says they have a roadmap to $10 genomes or even lower costs – as Vinod Khosla put it, when it costs more to FedEx the sample than to generate the data, one might think about not pushing farther.  Or probably not – a recurrent message that Ultima says they keep close to heart is to never be complacent.

Another instrument interior shot; courtesy of Ultima Genomics

[2024-02-01 17:03 EST - cleaned up the markup that came through -- the section on ppmSeq has text authored originally by Ultima but which I then edited. Also fixed the Chemistry section header]


Anonymous said...

That is a complicated looking beast. does it require a gas supply ?

Keith Robison said...

No gas -- but does have a water line (instrument does its own deionization) and a water waste line.

rod albano said...

Like a washing machine then....

Anonymous said...

Impressive. Is there detail on whether the wafer is patterned, or is bead attachment random? Some form of patterning, I presume, to speed identification of where a signal would be expected, upon imaging? Are these little pits, like the wells in Ion Torrent's CMOS wafers? If so, is the bead size and pit dimensions already as small as they can reasonably be?

Keith Robison said...

The wafers are not patterned - smaller beads are a likely direction to go for even higher output.

Anonymous said...

Ultima CSO Doron Lipson asked “can we put single-cell RNA-Seq in the clinic?” ...said somebody that has no idea what it takes to get anything into the clinic.
Unfortunately, and quite sadly, not even "simple" tumor hotspot panels with fairly clear cut QC and interpretation have really made it into the clinic.

Keith Robison said...

Lipson was part of the team that built Foundation Medicine's FoundationOne - he has experience developing novel sequencing-based assays into the clinic

Anonymous said...

Fair enough and sure it is an interesting question to ponder. On the flip side the more interesting question to me is why simpler more direct assays have not seen the adoption us Genomics folks would hope for. Leave alone assays developed by Invitae or Grail. But I guess that doesn't quite align with the business case for an Ultima sequencer.

Anonymous said...

"An OpenTrons robot and three small scientific appliances are used to prepare the beads and emulsions, including an enrichment step"

Are there any details on what kind of enrichment, how it works etc?