I’ve covered a lot of genomics in this space, but there is an inherent challenge to studying biology via DNA - DNA is the underlying blueprint, but that blueprint must pass through multiple steps before actual biology of interest emerges. RNA-Seq gets closer, but much of the real action is at the level of proteins (though much is not - let’s not forget all the metabolites!). When I set out in this space 18 years ago, I thought I’d cover more proteomics but that didn’t materialize - time to plunk one piece on the proteomics side of the ledger!
Proteomics has multiple challenges, but two inherent ones are the diversity of proteoforms and the dynamic range within the proteome.
The diversity of proteins within a human is astounding, even if we discard the inherently hypervariable antibodies and T cell receptors, which have specific means of diversification within an individual - random generation of sequence during VDJ recombination and, for antibodies, somatic hypermutation. The rest of the bunch are subject to transcript-level diversification by features such as alternative promoters, alternative splicing and RNA editing, and then to a wealth of post-translational modifications: proteolysis, phosphorylation, glycosylation and a heap more covalent changes. If we really wanted to make things complex, we’d worry about protein localization, who a protein is partnered with and even alternative protein conformations - but let’s just stick to primary proteoforms and a diversity that is estimated in excess of 1 million different forms.
The key part here is that there is no analytical method capable of resolving all of these. Any proteomics method is to some degree ignoring much of the proteome entirely, and for many other proteins compressing many forms into a single signal. Indeed, most proteomic tools look at very short windows of sequence or perhaps patches of three dimensional structure, and will rarely if ever be able to directly connect two such short windows or patches - they will be stuck correlating them. The key takeaway here is that all proteomics methods work on a reduced representation of the proteome.
The dynamic range in the proteome is astounding, with some potentially challenging effects. For example, blood serum is utterly dominated by a handful of proteins such as serum albumin, beta 2 microglobulin and immunoglobulins - for methods that look at the total proteome there is a serious danger of flooding out your signal with these abundant but relatively dull proteins and not being able to see interesting ones such as hormones that are many logs lower in concentration.
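To put rough numbers on that span, here is a quick back-of-the-envelope sketch in Python. The concentrations are ballpark literature values I've plugged in for illustration, not specs from any assay:

```python
import math

# Rough, illustrative concentrations (order-of-magnitude literature values)
albumin_g_per_l = 40      # serum albumin, typically ~35-50 g/L
cytokine_pg_per_ml = 1    # a low-abundance hormone or cytokine, ~pg/mL

albumin_pg_per_ml = albumin_g_per_l * 1e12 / 1e3   # convert g/L -> pg/mL
span_logs = math.log10(albumin_pg_per_ml / cytokine_pg_per_ml)

print(f"Albumin: {albumin_pg_per_ml:.1e} pg/mL")
print(f"Span to a pg/mL analyte: ~{span_logs:.0f} orders of magnitude")
```

That's roughly eleven logs between the dullest, most abundant protein and some of the most interesting ones.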
Proteomics has been dominated by mass spectrometry, which has had over three decades to develop into a mature science. Mass spec is inherently a counting process and on its own can’t focus on the interesting proteins or filter out the dull stuff. Even more so, you don’t fly intact proteins in a mass spec, but peptides - and there are only a few useful proteases out there. Peptides don’t ionize consistently, which adds a layer of challenge to quantitation. But as noted, this has been an intensely developed field for multiple decades, and so there are very good mass spectrometry proteomics techniques using liquid chromatography (LC-MS) and other methods to remove abundant, dull proteins and fractionate complex peptide pools into manageable ones.
But, protein LC-MS is very much its own discipline, and most proteomics labs aren’t strong in genomics or vice versa - though there are certainly collaborations or dual-threat labs. LC-MS setups require serious capital budgets for the instruments and their accompanying sample handling automation and highly skilled personnel.
A number of companies are attempting to apply the strategies of high throughput DNA sequencing to peptide sequencing or identification. Quantum-Si is the only one to make it to market, but other startups such as Erisyon are plugging away. These methods look a bit like mass spectrometry in their sample requirements, as they will also be counting peptides - and the current Quantum-Si instrument doesn’t count nearly enough of them to be practical for complex samples such as serum or plasma.
The other “next gen proteomics” approach - one lesson not learned from the DNA sequencing world is the problem of calling anything “next-gen”; this year will be the 20th anniversary of the commercial launch of 454 sequencing - is to use affinity reagents such as antibodies or aptamers, tag them with DNA barcodes, and then sequence those barcodes on high throughput DNA sequencers. By using affinity reagents, the problem of boring but abundant proteins goes away - just don’t give them any affinity reagents. Dynamic range can be addressed as well - the exact details aren’t necessarily disclosed by manufacturers, but one could imagine labeling only a fraction of a given antibody to tune how many counts are generated by a given concentration of the targeted analyte.
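That fractional-labeling notion is speculation on my part about how a vendor could tune counts, so the sketch below is purely illustrative - the linear-response assumption and every number in it are invented to show the principle, not any actual chemistry:

```python
# Hypothetical sketch: compressing dynamic range in read counts by barcoding
# only a fraction of each antibody lot. All values are illustrative only.

def expected_reads(analyte_conc, labeled_fraction, reads_per_unit_conc=1000):
    """Expected barcode reads: proportional to analyte bound, scaled by the
    fraction of that antibody lot carrying a DNA barcode (toy linear model)."""
    return analyte_conc * labeled_fraction * reads_per_unit_conc

# An abundant protein gets a sparsely labeled antibody; a rare one is fully labeled.
abundant = expected_reads(analyte_conc=1e4, labeled_fraction=0.001)
rare = expected_reads(analyte_conc=1.0, labeled_fraction=1.0)

# A 10,000-fold concentration gap is compressed to ~10-fold in read counts,
# so neither analyte hogs the flowcell nor disappears into the noise.
print(f"abundant analyte: {abundant:.0f} reads, rare analyte: {rare:.0f} reads")
```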
Olink Proteomics, now a component of Thermo Fisher, is one company offering a product in this space. Olink’s Proximity Extension Assay (PEA) relies on two antibodies to each protein of interest, requiring hybridization between the oligonucleotide probes on the two antibodies to enable extension by a polymerase and generate a signal. This increases the specificity of the assay and tamps down any signal from non-specific binding - or from antibodies simply floating free in solution.
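A toy way to see why demanding two matched antibodies - plus probe hybridization and extension - suppresses background: if the two off-target binding events are roughly independent, background requires two rare events rather than one. The probabilities below are invented for illustration, not measured PEA performance:

```python
# Toy model of dual-recognition specificity. Probabilities are made up.

p_nonspecific = 0.01   # chance a single antibody binds the wrong protein
p_specific = 0.9       # chance a correct antibody binds its target

# Single-antibody readout: background scales with p_nonspecific itself.
single_ab_background = p_nonspecific

# PEA-style readout: both antibodies must bind the same molecule and their
# oligos must hybridize before a polymerase can extend, so off-target signal
# needs two independent wrong bindings in proximity.
pea_background = p_nonspecific ** 2
true_signal = p_specific ** 2

print(f"single antibody background: {single_ab_background:.4f}")
print(f"paired (PEA-like) background: {pea_background:.6f}")
print(f"true-signal efficiency: {true_signal:.2f}")
```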
Olink has released a series of panels targeting increasing numbers of proteins in the human proteome. This is generally a good thing - except that counting more proteins means generating more DNA tags, which means a bigger sequencing budget per sample. The other knock on Olink’s approach (and that of their competitor SomaLogic, now within Standard BioTools and also marketed by Illumina) is a complex laboratory workflow that mandates liquid handling automation. So the big Olink Explore discovery panels are inevitably run at huge genome centers that have both the big iron sequencers and the requisite liquid handling robots. This strategy has started paying scientific dividends - some of which were covered by the Olink Proteomics World online symposium last fall, which featured speakers such as Kari Stefansson. Olink and Ultima’s recent announcement that they will process all of UK Biobank is an example of such grand plans; that work will be run at Regeneron’s genome center.
Academic center core labs and smaller biotechs often power important biomedical advances, but if Olink Explore is only practical with NovaSeq/UG100-class machines and fancy liquid handlers, then few of these important scientific constituencies will be able to access the technology. That would be unfortunate, since small labs often cultivate very interesting sample sets that very large population-based projects like UK Biobank might not have. Large population-based projects and carefully curated small ones are complementary - so should only one of them be able to access Olink’s technology?
And that’s where Olink’s newest product, Olink Reveal, comes in, enabling smaller labs to process runs of 86 samples. First, a select set of about 1,000 proteins is targeted, so that the required sequencing for a panel of samples plus controls fits on a NextSeq-class flowcell - only about 1 billion reads. Second, the laboratory workflow has been made simple and practical to execute with only multichannel pipettes. The product ships as a 96-well plate containing dried-down PEA reagents; simply adding samples and controls to the wells starts the assay for an overnight incubation. The next day, PCR reagents are added to graft sample index barcodes onto the extension products, and everything is pooled to form a sequencing library. The library prep costs $98 per sample (list price) - $8,428 per kit. Throw in sequencing costs of $2K-$5K per run (depending on the instrument) and this isn’t out of line with other genomics applications.
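Putting those list prices together - and note the $3,500 sequencing figure below is just a midpoint of the quoted $2K-$5K range, and the read math ignores the control wells:

```python
# Back-of-the-envelope per-sample cost for a Reveal-style run.
# Kit price and read count are from the post; sequencing cost is an assumption.

samples_per_kit = 86
kit_list_price = 8428        # USD, i.e. ~$98/sample library prep
sequencing_run = 3500        # USD, assumed midpoint of the $2K-$5K range
reads_per_flowcell = 1e9     # NextSeq-class output cited above

per_sample_prep = kit_list_price / samples_per_kit
per_sample_seq = sequencing_run / samples_per_kit
reads_per_sample = reads_per_flowcell / samples_per_kit   # ignores controls

print(f"prep: ${per_sample_prep:.0f}, sequencing: ${per_sample_seq:.0f}, "
      f"total: ~${per_sample_prep + per_sample_seq:.0f} per sample")
print(f"~{reads_per_sample / 1e6:.0f}M reads per sample")
```

So all-in, something on the order of $140 per sample and a bit over 10 million reads each, under these assumptions.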
Of course, this is a reduced representation relative to the larger “Explore” sets, but Olink has selected the proteins to be a useful reduced representation. They’ve used sources such as Reactome to prioritize proteins, and have also prioritized proteins shown to have genetically driven expression variability in the human population - protein QTLs, aka pQTLs. If the new panel is cross-referenced against studies that used the larger panels, most of those studies would still have found at least one protein with a statistically significant change in concentration. This can be seen in the plot below, where each row is a study colored by disease area: on the left is the distribution of P-values for the actual Olink Explore data, and on the right the same data filtered for proteins in the Olink Reveal panel.
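The logic of that cross-referencing is simple enough to sketch. Here is one way it could be done - the file names, column names and the 0.05 threshold are placeholders of my own, not anything Olink publishes:

```python
import pandas as pd

# Hypothetical sketch: take per-protein results from studies run on the larger
# Explore panels, keep only proteins in the Reveal panel, and ask whether each
# study still has at least one significant hit.

explore_results = pd.read_csv("explore_study_results.csv")  # study, protein, p_value
reveal_panel = set(pd.read_csv("reveal_panel.csv")["protein"])

filtered = explore_results[explore_results["protein"].isin(reveal_panel)]

# For each study, does the best remaining protein clear the threshold?
retains_hit = filtered.groupby("study")["p_value"].min().lt(0.05)
print(f"{retains_hit.mean():.0%} of studies retain a significant protein")
```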
It’s also robust - Olink has sent validation samples to multiple operators and compared the results, and the values from each lab are tightly correlated.
So Olink, with their affinity proteomics approach, is basically following the same playbook as genomics did with exomes. When hybrid capture approaches for exome sequencing first came out, it was thought these would be used for only a few years and then be completely displaced by whole genome sequencing (WGS). But exomes have proven too cost effective - even with drops in WGS costs, it is still possible to sequence more samples with exomes for the same budget. Yes, the risk of missing causal variants outside the exome target set was always a concern - the recent excitement around lesions in non-coding RNAs such as RNU4-2 has demonstrated that - but many investigators saw exomes as enabling studies that otherwise wouldn’t happen. Plus, sometimes the bigger worry is biological noise obscuring a signal you could see, and that is dealt with by more samples.
The new Olink Reveal product fills a gap between Olink’s large “Explore” discovery sets and very small custom panels. In the Proteomics World talks, many speakers described work run with PEA panels of only two dozen or so targets, often using PCR as a readout rather than sequencing. This shows one bit of synergy in the Olink acquisition by Thermo Fisher, as Thermo has an extensive PCR product catalog, including array-type formats. Thus PEA follows the well-worn pattern in genomics: huge discovery panels for some studies, high-value panels that balance cost and coverage for many studies, and focused custom panels for validating findings on very large cohorts. The Proteomics World talks even suggested some of these focused panels might soon be seriously evaluated as in vitro diagnostics. With developments like these, targeted proteomics via sequencing will be a very interesting space to watch.