The new paper also uses a different optical setup than the Helicos system. Helicos used confocal microscopy, whereas the new paper uses Total Internal Reflection Fluorescence (TIRF) microscopy. That sent me running to Wikipedia and Khan Academy. There's a lot of weird physics I've picked up over the years, but unfortunately the gaps in my training are huge, having only completed the required year of physics in high school. For example, I got very excited at one Marco Island meeting when a poster discussed an approach to sequencing using Auger Electron Spectroscopy, which I got to play with during a high school internship. Using Peltier devices for PCR thermocyclers occurred to me the first time I heard of PCR, as my father worked on thermocouples for power generation on spacecraft.
TIRF microscopy is pretty cool, and somewhat reminiscent of Pacific Biosciences' Zero-Mode Waveguide (ZMW) technology in its slightly non-intuitive trickery. Total internal reflection is pretty easy to understand: if light traveling in a denser medium strikes the boundary with a less dense medium beyond a critical angle, the light bounces off the surface rather than refracting as it passes through. However, there is a bit of an asterisk on that: while the light does reflect, an evanescent field extends a short distance beyond the surface it is reflecting from (just as with ZMWs, where the light can effectively act a bit beyond a hole smaller than the light's wavelength). So if the excitation light for a fluorophore is fired into a total internal reflection setup, it can excite that fluorophore only within a shallow distance of the flowcell surface. (Please correct my errors, anyone out there whose knowledge of this physics was not learned in the last 48 hours from the Internet!) Unfortunately, the paper doesn't discuss at all why the switch in optics was made -- cost, size, etc.
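For the similarly physics-rusty, the two key numbers are textbook optics: the critical angle from Snell's law and the 1/e decay depth of the evanescent field. A quick back-of-the-envelope in Python -- the refractive indices and wavelength below are my own illustrative choices, not numbers from the paper:

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle for total internal reflection when going from
    refractive index n1 (denser, e.g. glass) into n2 (e.g. water)."""
    return math.degrees(math.asin(n2 / n1))

def penetration_depth_nm(wavelength_nm, n1, n2, theta_deg):
    """1/e decay depth of the evanescent field beyond the interface,
    for an incidence angle theta_deg past the critical angle."""
    theta = math.radians(theta_deg)
    return wavelength_nm / (4 * math.pi * math.sqrt(
        (n1 * math.sin(theta)) ** 2 - n2 ** 2))

# Glass/water interface, 532 nm excitation, 5 degrees past critical:
n_glass, n_water = 1.515, 1.33
theta_c = critical_angle_deg(n_glass, n_water)
d = penetration_depth_nm(532, n_glass, n_water, theta_c + 5)
```

For a glass/water interface with green excitation this works out to a penetration depth on the order of 100 nm, which is why TIRF excites only fluorophores sitting essentially on the flowcell surface.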
The real focus of the paper is on a capture technique, which also replaces library construction. Helicos always had an amazingly simple library preparation, consisting solely of using terminal transferase to add an oligo-dT tail. This is even simpler: the flowcell has capture probes to target the material of interest, and these capture probes double as sequencing primers! So it would appear there is no library prep whatsoever, though in this paper they used the system only with synthesized oligos containing either wild-type or mutant sequences.
In this proof-of-concept paper, a fluorophore was attached to the simulated patient DNA. By observing the photobleaching of this fluorophore -- whether it winks out in one step or two -- spots could be classified as having one (good) or two (unusable) DNA molecules. Sequencing would then proceed by flowing a single reversible terminator nucleotide, imaging spots which incorporated that nucleotide, and then cleaving the terminator. In this study, nucleotides were flowed in a simple rotation (G,C,T,A), as opposed to some of the more complex schemes that Ion Torrent has used. The imaging buffer has glucose and glucose oxidase in it, which I didn't understand at first -- though perhaps the buffer recipe is simply missing a key ingredient, catalase: some fluorescence systems use the glucose oxidase/catalase pair to scavenge oxygen. Imaging consisted of 4 exposures of 0.1 second each for each field-of-view (FOV), with 300 FOV imaged and 2200-2500 active spots per field. Total cycle time appears to be on the order of 15-20 minutes.
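The one-step versus two-step photobleaching call can be sketched as a toy intensity-trace analysis. The traces and the drop threshold below are invented for illustration -- the paper's actual classification method is surely more sophisticated:

```python
def count_bleach_steps(trace, drop_fraction=0.3):
    """Count abrupt downward steps in a fluorescence intensity trace.
    A step is any frame-to-frame drop exceeding drop_fraction of the
    initial intensity. One step suggests one fluorophore (usable spot);
    two steps suggest two molecules landed in the spot (rejected)."""
    initial = trace[0]
    steps = 0
    for prev, cur in zip(trace, trace[1:]):
        if prev - cur > drop_fraction * initial:
            steps += 1
    return steps

# Simulated traces (arbitrary units): a one-molecule spot bleaches to
# background in a single step; a two-molecule spot steps down twice.
one_mol = [100, 101, 99, 100, 5, 4, 5, 4]
two_mol = [200, 198, 201, 105, 99, 101, 6, 5]
```

Real traces are far noisier, of course, but the principle -- count the discrete intensity drops -- is the same.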
By doing all this with a small number of capture probe designs, the group achieved nearly 2000-fold coverage of their targets. Now, in a real situation some of these probes might capture and sequence off-target molecules, though this would presumably be rare. Variable yield of capture was observed, though a good mixture of capture probes could presumably be empirically derived over time. In the current system, the authors estimate their detection sensitivity for a minor allele at 3%.
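To get a feel for why roughly 2000-fold coverage supports a 3% minor allele call, a simple binomial sketch helps. The per-read error rate and calling threshold here are assumed numbers of my own, not the paper's:

```python
from math import comb

def prob_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), computed via the complement
    so that all summed terms stay numerically small."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i)
                     for i in range(k))

depth = 2000        # coverage reported in the paper
err = 0.005         # assumed per-read error rate at the site (illustrative)
maf = 0.03          # a 3% minor allele fraction, i.e. ~60 variant reads
threshold = 30      # call a variant if >= 30 of 2000 reads support it

false_positive = prob_at_least(depth, threshold, err)  # error alone
detection = prob_at_least(depth, threshold, maf)       # true 3% variant
```

Under these assumed numbers, sequencing error alone almost never piles up 30 supporting reads, while a true 3% allele almost always does -- which is the intuition behind deep coverage for rare-variant detection.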
Using the capture probes to prime synthesis is highly advantageous for this system, given that read lengths are very short (only 19-30 flows were used). Since the primer can be designed to start reading very close to the allelic site of interest, this can work well -- and very little sequencing of extraneous information occurs. Since activating oncogenic mutations tend to be very focal, such an application is a good fit. For example, KRAS/NRAS/HRAS each have essentially 7 nucleotides of high interest, 6 of which are in a row (codons 12 and 13). EGFR can have mutations in more sites, but still relatively few. On the other hand, scanning a tumor suppressor -- such as the BRCA genes -- or trying to find a genomic translocation generating a fusion seems like a very poor fit for this technology.
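A toy simulation makes the short-read arithmetic concrete: with a fixed G,C,T,A rotation and at most one reversible-terminator base incorporated per matching flow, a random sequence averages about four flows per base, so 19-30 flows buys only a handful of bases. The template here is a made-up example, not actual KRAS context:

```python
def bases_read(template, flow_order="GCTA", n_flows=30):
    """Simulate sequencing-by-synthesis with one reversible-terminator
    nucleotide flowed per cycle: a flow incorporates a single base only
    when it matches the next template position (the terminator blocks
    any further extension until cleaved). Returns bases incorporated."""
    pos = 0
    for f in range(n_flows):
        nt = flow_order[f % len(flow_order)]
        if pos < len(template) and template[pos] == nt:
            pos += 1
    return pos

n_bases = bases_read("GATTACA")  # toy 7-base template, 30 flows
```

In this toy case it takes 16 of the 30 flows to read just seven bases -- which is exactly why priming right next to the hotspot, as the capture-probe-as-primer design does, matters so much.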
The paper points out that almost every step in their process is potentially a spot for improvement, ranging from better base calling to using four differently-labeled reversible terminators in each cycle.
The proffered advantage of this system over existing ones is a complete lack of PCR or other enzymatic steps to prepare template. Clearly such steps add complexity and a potential for distortion, but do these really present a problem in real life? Given competing ideas -- such as Illumina's possible polymerase kinetics chemistry (or even putting NextSeq-style optics in a MiSeq-sized box), the Ion platform, or Oxford's ultra-low-cost MinION platform -- is another platform with expensive optics going to gain traction in the clinical market? I find all single molecule techniques very cool -- they still seem a bit like science fiction made real -- but commercial realities may be quite harsh. Best wishes to the new team, and may they avoid the grim violinist for a long time.