Monday, January 03, 2011

Semianalogy to the Semiconductor Industry?

One area where Jonathan Rothberg has gotten a lot of mileage in the tech press is his claim that Ion Torrent can successfully leverage the entire semiconductor industry to drive the platform into the stratosphere. Since the semiconductor industry keeps building denser and denser chips, Ion Torrent will be able to get denser and denser sensors, leading to cheaper and cheaper sequencing. It's an appealing concept, but does it have warts?

The most obvious difference is that your run-of-the-mill semiconductor operates in a very different environment, a quite dry one. Ion Torrent's chips must operate in an aqueous environment, which presumably means more than a few changes from the standard design. Can any chip foundry in the world actually make the chips? That's the claim Ion Torrent likes to make, but given the additional processing steps that must be required, some skepticism isn't out of the question.

But perhaps more importantly, it's in the area of miniaturization where the greatest deviation might be expected. Most of Moore's law in chips has come from continually packing greater numbers of smaller transistors onto a chip. Simply printing the designs was one challenge to overcome; finer designs require photolithography at wavelengths shorter than visible light. This is clearly in the category of problems which Ion Torrent can count as solved by the semiconductor industry.

A second problem is that smaller features are less and less tolerant of defects in the crystalline wafers from which chips are fabricated. Indeed, memory chip designs carry more memory than the final product will expose, so that some can be sacrificed to defects; if excess memory units are left over after manufacturing, they are shorted out in a final step. Chips with excessive defects go in the discard bin, or sometimes allegedly are sold simply as lower-grade memory units. One wonders if Ion Torrent will give all their partial duds to their methods development group or perhaps give them away in a way calculated to yield maximal PR impact (to high schools?).
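To put a rough number on the defect-tolerance issue, here is a minimal sketch using the textbook Poisson yield model, in which the fraction of defect-free dies falls off exponentially with die area times defect density; the defect densities and die area below are made-up illustrative values, not anything published by Ion Torrent or a foundry.

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    """Fraction of dies with zero fatal defects under a simple Poisson defect model."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative (made-up) numbers: a 1 cm^2 sensor die at several defect densities.
for d in (0.2, 1.0, 3.0):  # fatal defects per cm^2
    print(f"defect density {d:.1f}/cm^2 -> naive yield {poisson_yield(d, 1.0):.1%}")

# Redundancy (spare memory rows, or simply tolerating dead sensor wells)
# effectively shrinks the area in which a defect is fatal, pushing yield back up.
```

Arguably a sensor array is more forgiving here than a logic chip, since a dead well merely loses one potential read rather than breaking the whole device.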

But there are other problems which are quite different. For example, with chips a challenge at small feature sizes is that the insulating regions between wires on the chip become so narrow that they are less reliable. Heat is another issue with small feature sizes and high clock speeds. These would seem to be problems the semiconductor industry won't pass on to Ion Torrent.

On the other hand, Ion Torrent is trying to do something very different from most chips. They are measuring a chemical event, the release of protons. As the size of the sensor features decreases, presumably there will be greater noise; at an extreme there would be "shot noise" from simply trying to count very small numbers of protons.
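A minimal sketch of how that noise scales, using made-up proton counts rather than anything measured on the platform: for a Poisson-distributed count, the relative fluctuation goes as one over the square root of the count, so shrinking a well (and thus the number of protons it captures) steadily erodes the signal-to-noise ratio.

```python
import math

def relative_shot_noise(mean_protons):
    """Relative (fractional) shot noise of a Poisson count: std/mean = 1/sqrt(N)."""
    return 1.0 / math.sqrt(mean_protons)

# Hypothetical proton counts per incorporation event as wells shrink.
for n in (100000, 10000, 1000, 100):
    print(f"{n:7d} protons -> relative noise {relative_shot_noise(n):.2%}")
```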

Eventually, even the semiconductor industry will hit a limit on packing in features. After all, no feature in a circuit can be smaller than an atom in size (indeed, a question I love to ask but which usually catches folks off-guard is how many atoms are, on average, in a feature on their chip). One possible route out for semiconductors is to go vertical; stacking components upon components in a way that avoids the huge speed and energy hits when information must be transferred from one chip to another. It is very difficult to see how Ion Torrent will be able to "go vertical".
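For a rough answer to that parlor question, a back-of-the-envelope estimate (illustrative feature sizes only, not tied to any particular process): silicon's diamond-cubic lattice has a unit cell of about 0.543 nm containing 8 atoms, so even a 22 nm cube holds on the order of half a million atoms.

```python
# Back-of-the-envelope atoms-per-feature estimate (illustrative only; real
# transistor geometries are not simple cubes of pure silicon).
SI_LATTICE_NM = 0.543        # silicon lattice constant in nanometres
ATOMS_PER_UNIT_CELL = 8      # diamond-cubic silicon

def atoms_in_cube(feature_nm):
    """Approximate number of silicon atoms in a cube feature_nm on a side."""
    cells_per_edge = feature_nm / SI_LATTICE_NM
    return ATOMS_PER_UNIT_CELL * cells_per_edge ** 3

for size in (90, 45, 22):    # feature sizes (nm) roughly spanning recent process nodes
    print(f"{size:3d} nm cube -> ~{atoms_in_cube(size):,.0f} atoms")
```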

None of this erases the fact that Ion Torrent will be able to leverage a lot of technology from chip manufacturing. But it will not solve all their challenges. The real proof, of course, will be in Ion Torrent regularly releasing new chips with greater densities. An important first milestone is the on-time release of the second generation chip this spring, which is touted as generating four times as many reads (at double the cost). Rothberg is claiming to have a chip capable of a single human exome by 2012; assuming 40X coverage of a 50Mb exome (about 2Gb of sequence, versus the roughly 10Mb yield of the current chip), that would require a 200-fold improvement in performance, or nearly four quadruplings. Some of that might come from process or software improvements to increase the yield per chip of a given size (more on the contest in the future); indeed, meeting that schedule in two years would require either that or a very rapid stream of quadrupling gains.
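Spelled out, the arithmetic behind those figures (exome size, coverage, and the ~10Mb per-chip baseline implied above are assumptions of this post, not official specifications) looks like this:

```python
import math

# Assumptions from the post, not official Ion Torrent specifications.
exome_bp       = 50e6   # ~50 Mb exome target
coverage       = 40     # desired fold-coverage
per_run_bp_now = 10e6   # ~10 Mb per run implied for the first-generation chip

needed_bp    = exome_bp * coverage            # 2 Gb of sequence per run
fold_gain    = needed_bp / per_run_bp_now     # ~200-fold improvement
quadruplings = math.log(fold_gain, 4)         # ~3.8 successive 4x chip upgrades

print(f"needed {needed_bp/1e9:.1f} Gb/run, {fold_gain:.0f}-fold gain, "
      f"{quadruplings:.1f} quadruplings")
```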

As I have commented before, they might even pick a strategy where some of the chips trade read length or accuracy for higher density. For example, applications which require counting tags (expression profiling, rRNA profiling) that can be distinguished relatively easily (or where the cost of some confusion is small) might prefer very high numbers of short reads.

1 comment:

  1. There is no reason at the moment to suppose that the process will be limited by proton counting (the numbers currently being counted are huge).

    Currently, the limitations seem to be mainly on getting dense loading of high-quality beads: that is, the prep is more of a limiting factor than the sensors.

    I understand that the read length is limited mainly by the stripping off of templates during the wash cycle---something that might be fixed by better attachment of the templates to the beads or gentler washes.
