My recent run of posts on Ion Torrent certainly garnered a lot of comments, and it would be less than honest to deny that many of those comments were far less favorable to Ion Torrent than what I have written. Indeed, many were not terribly favorable toward me given what I had written about Ion Torrent -- one even asked if I "felt used" as part of a publicity stunt. (BTW, I don't -- if I can't ask the hard questions, I have nobody to blame but myself.)
One of Ion Torrent's other very public moves has been to announce a series of three challenges to improve the performance of the instrument system (a fourth has been announced centered around SOLiD, and three others have yet to be unveiled). The winner of each challenge can receive $1M in prize money.
Now, contests along these lines have been successfully used by companies and organizations to drive technologies forward. Netflix successfully crowdsourced better prediction of a user's movie tastes. The most spectacular success for such a contest was the winning entry for the Ansari X-Prize, SpaceShipOne. Google is currently sponsoring a contest to land a rover on the moon and transmit HDTV images, which I look forward to eagerly.
Unfortunately, so far Life Technologies & Ion Torrent's contest seems to be all hat and no cattle. While the three goals have been announced (double the output per run, halve the sample prep time, and double the accuracy), nothing else is in place. Each competition is apparently separate; there's no prize for halfway success on two of the axes. If they are serious about attracting competitors, they need to get down to brass tacks.
Now, I can't say I'm surprised. Not only has Ion shown a penchant for loudly trumpeting their progress prior to demonstrating it, but their previous contest showed a certain degree of haste and a few punchlist items. In that first contest, submissions for how to use the instrument were judged to yield two U.S. winners (followed recently by two European winners). Each submission consisted of two parts; the original rules never clearly stated the distinction between them (perhaps it should have been obvious, but I don't routinely write grants) other than giving a word limit for one. Only once you tried to submit did the word limit on the second section become apparent. Ion also ended up extending the deadline for submissions, which can be seen as either generous or irritating -- the latter if you'd burned the midnight oil & spent part of a vacation chopping down an overlong second section to get your entry in on time. Importantly, that contest had a tiny fraction of the complexity of any one of these challenges.
Starting with: what are the rules? One key question will be around cost. For example, can a winning entry for sample prep use an instrument that costs much more than the PGM? That's not an absurd concept. Can the double-the-output prize be won by a sample prep process that takes a long time? For example, could I assay to find only the DNA-bearing beads & then use a micromanipulator to position them? That is obviously a deliberately absurd proposal. But unless the rules are carefully crafted, someone will attempt a silly entry, and Ion will have a real mess if they are forced to put the laurels on a silly one.
A key & challenging area is around intellectual property (IP). The first obvious issue in this department is how much IP you can retain when submitting an entry. Obviously Ion isn't interested in paying out $1M for something they can't use -- so is the $1M in effect a license fee (with no royalties)? On the other side of the IP coin, how much IP can a winning submission use to which the submitter does not have rights? For example, some wag might submit a sample prep protocol that is bridge PCR using, in part, Illumina reagents. But more complicated would be methods that only an IP lawyer can decide either infringe or build on some prior patent. If it's Life's patent, presumably they wouldn't care -- but an Illumina or Affy patent would be an entirely different kettle of fish.
Materials are going to be another critical issue for the yield and sample prep challenges. Any reasonable scheme for attacking these is going to get very expensive if complete kits must be purchased each time. For example, you may want to hammer on the beads without ever actually putting them on a chip. Will Ion give at-cost access to the specialized reagents (such as beads)? Furthermore, how much information are they willing to give out on the precise specs? For example, suppose a concept requires attaching something different to the beads than standard -- will specifications be provided to create appropriate beads?
Another key question: which samples? Will Life Tech supply the samples to use for improving yield, or does a group get to define them? A devious approach to winning the prize would be to develop a sample which preps very badly with the standard prep. An attempt could be made to legislate this possibility away, but there would be significant advantage to standardized samples. But should these be real-world samples, idealized samples (such as a pure population of a single fragment), or deliberately hard real-world situations (e.g. an amplicon sample with a high fraction of primer dimers)? In a similar vein, what dataset will be made available for the accuracy challenge?
Now, Life is promising more information this spring, which is still a few weeks away (or do they go by Punxsutawney Phil?). I really hope that in the future they hold back their announcements until they're really ready to go. It doesn't help that the Twitter rehash scrolling on their screen is full of links that might provide more information, but none of them work. They really need to rethink this strategy of piecemeal delivery, which can do nothing but frustrate potential entrants in the contest.
Part of my frustration is that I can't help but ponder throwing my hat in the ring. It's not hard to think of ideas for the sample prep problem, and while I couldn't do the experiments, I do have friends who could (time to get the core Codon team back in action!). Of course, working out the IP headache would be an issue (unless the work was done at work, which is sadly too far afield of what we do to be a responsible course). I can also imagine a number of academic groups and even a few companies which might seriously consider entering some of these challenges. I'd love to talk up the accuracy challenge with computer geeks I know. The problems are of a very attractive sort for me -- you can very quickly generate very large and rich datasets, enabling quantitative approaches (such as Design of Experiments) to optimization. A lot of data can be generated without actually running chips, instead using even lower-cost methods (such as microscopy or flow cytometry) to measure key aspects. But with nothing concrete to point at, it seems rather pointless to start scheming.
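To give a flavor of the Design of Experiments idea, here is a minimal sketch of the simplest DoE layout, a two-level full factorial design. The factor names and level values are purely illustrative assumptions for an emulsion PCR prep, not Ion Torrent's actual protocol parameters:

```python
from itertools import product

# Hypothetical prep factors, each at a "low" and "high" level.
# These names and numbers are made up for illustration only.
factors = {
    "template_ratio": (0.5, 2.0),   # template:bead molar ratio
    "cycles":         (40, 60),     # PCR cycle count
    "oil_fraction":   (0.6, 0.8),   # emulsion oil volume fraction
}

def full_factorial(factors):
    """Enumerate every combination of the low/high level of each factor."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[n] for n in names))]

runs = full_factorial(factors)
print(len(runs))  # 2 levels ** 3 factors = 8 experimental runs
```

With a cheap readout like flow cytometry scoring each run, even a small design like this lets you estimate which factors (and interactions) matter before ever burning a chip; fractional factorial designs shrink the run count further as factors multiply.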
But while I can't actually move forward on any of these, I can do a bit of homework on emulsion PCR. I'll try to write up that homework later this week, as it's been informative to me and I believe it puts me in a better position to handicap Ion Torrent's claims on sample prep -- and to weigh various comments on emPCR from the previous posts.