
How Automating Lab Data Analysis Drives R&D Innovation

September 20, 2022
Stephan Steigele

In biopharmaceutical R&D, everyone is talking about innovation. One hope is that more information-rich, novel assay technologies or approaches—including phenotypic screens and detailed biophysical and mechanistic studies—can be applied early enough during hit discovery and characterization to prevent costly failures at later clinical stages. Such assays and approaches might unveil new therapeutic candidates acting through novel mechanisms or provide data to feed into AI-based, predictive algorithms, allowing us to arrive faster at promising lead molecules.

Sounds great—but if you’re the lead scientist tasked with delivering on these grand plans, how do you execute? The approaches mentioned above generate complex multiparameter readouts, and performing them at scale creates a practical challenge: how can you process so much multimodal data both efficiently and consistently? In a previous blog post, we discussed how automating data capture and transfer helps maximize the ROI of lab automation hardware. In this post, we will address how to automate data analysis, especially for complex assays, and illustrate this with examples from our own work at Genedata.

The Challenge with Complex Data Analysis: Achieving Scale and Consistency

As you process and analyze complex data, you encounter a series of different choices concerning quality control, model fitting, and result validation—just to name a few. Manual analysis might be manageable for exploratory or low-throughput experiments. However, crunching through these analyses becomes time-consuming if you are trying to scale up an assay for a high-throughput hit discovery screen, or if you want to apply it in a standard way for routine profiling.

A second, equally important problem is that these analysis choices often rest on human judgment alone, which, however expert, is also subjective and can lead to inconsistent outcomes. We have seen this to be particularly true for ambiguous data or corner cases, where the correct analysis decision might be unclear: Scientist A might look at a particular curve and decide it should be fit with one model, while Scientist B might decide it should be fit with another. This variability makes downstream decision-making difficult: like trying to make a purchasing decision based on conflicting product ratings, it becomes hard to make reasoned choices. Evidence also shows that this kind of inconsistency degrades the output of AI-based algorithms: for example, our own work shows that mislabeling of training data reduces classification accuracy.
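
To make that last point concrete, here is a minimal, self-contained sketch (synthetic data and a simple classifier, not our internal study) showing how flipping a fraction of training labels degrades test accuracy:

```python
# Minimal sketch: train a classifier on synthetic data while flipping an
# increasing fraction of the training labels, and watch test accuracy drop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

for noise in (0.0, 0.1, 0.2, 0.3):
    y_noisy = y_train.copy()
    n_flip = int(noise * len(y_noisy))
    flip_idx = np.random.RandomState(0).choice(len(y_noisy), n_flip, replace=False)
    y_noisy[flip_idx] = 1 - y_noisy[flip_idx]   # mislabel a fraction of the training set
    acc = LogisticRegression(max_iter=1000).fit(X_train, y_noisy).score(X_test, y_test)
    print(f"label noise {noise:.0%}: test accuracy {acc:.3f}")
```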

Three Examples of Complex Lab Data: Mechanistic Kinetics, SPR, and Imaging

Let us illustrate this challenge with three concrete examples involving kinetic assays, surface plasmon resonance (SPR), and high-content, image-based screening (HCS):

  1. Kinetic assays probe the mechanism by which a candidate drug interacts with its target: whether it binds in a 1-step or 2-step process, or its modality of inhibition (competitive, noncompetitive, uncompetitive). Characterizing these different mechanisms provides more information than potency or efficacy alone. Having this information early on allows you to enrich for candidates with specific, desirable features, such as slow-binding profiles that are potentially more effective and show fewer off-target effects. For this reason, discovery programs now try to incorporate detailed mechanistic studies during early hit finding. To enable this, companies like AstraZeneca have developed new assays that can be run at sufficiently high throughput and lower reagent cost, without compromising the time resolution required for kinetic analysis. In their case, they’ve used the FLIPR Tetra system, which unlike standard fluorescence readers can capture data from a full plate in one read. But when it comes to analyzing kinetic data, scientists must make complex decisions about raw data quality, which model best describes the data, and the quality of the model fit. These decisions take experience and expertise, not to mention the enormous challenge of making them at scale.
  2. SPR is a biophysical method used to assess molecular interactions in a direct, time-dependent, and label-free manner. SPR is highly sensitive and quantitative and can be used to measure the strength of target binding, examine binding kinetics, determine binding stoichiometry, or precisely measure protein concentration. Traditionally, SPR assays occurred later, during hit-to-lead or characterization stages, because they were too low throughput. Akin to the biochemical kinetic studies described above, however, technological innovations now allow SPR to be performed earlier and at higher throughput—even during primary screening, where biophysical approaches provide a fresh angle for tackling challenging targets. Quality control of SPR experiments often involves visual review of raw SPR sensorgrams and, for kinetic studies, selecting the most appropriate fit model for each candidate and annotating it accordingly. Again, as with biochemical kinetic studies, this can be a subjective and time-consuming process.
  3. HCS is a target-agnostic approach that produces multiple phenotypic endpoints. This opens the possibility of characterizing a candidate by comparing its “profile” with those of tool compounds with known mode-of-action (MOA); a minimal sketch of this profile-matching idea follows this list. Cellular models promise greater physiological relevance, and images—especially in the case of multiplexed profiling approaches like Cell Painting—are a rich source of data. Due to the volume and multiparametric nature of imaging data, in which a phenotype can comprise hundreds of potentially relevant cellular features, artificial intelligence (AI) is a very popular tactic for data analysis. AI algorithms must be trained on a sufficiently large set of pre-classified images, and the creation and correct labeling of those training datasets—traditionally something that has been done manually—creates a major obstacle.
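
As promised above, here is a minimal sketch of the profile-matching idea (hypothetical feature profiles, not a Genedata product workflow): a hit is assigned a likely MOA by correlating its multiparametric phenotypic profile against reference profiles of tool compounds.

```python
# Hypothetical sketch: assign a likely mode-of-action (MOA) to a screening hit
# by correlating its phenotypic feature profile with reference tool-compound profiles.
import numpy as np

def nearest_moa(candidate_profile, reference_profiles):
    """Return the reference MOA whose profile correlates best with the candidate.

    candidate_profile  : 1-D array of per-feature values (e.g. Cell Painting features)
    reference_profiles : dict mapping MOA name -> 1-D array of the same length
    """
    scores = {moa: np.corrcoef(candidate_profile, ref)[0, 1]
              for moa, ref in reference_profiles.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy usage with made-up five-feature profiles
rng = np.random.default_rng(0)
references = {"tubulin inhibitor": rng.normal(size=5),
              "HDAC inhibitor": rng.normal(size=5)}
hit = references["tubulin inhibitor"] + rng.normal(scale=0.2, size=5)
print(nearest_moa(hit, references))   # expected: ("tubulin inhibitor", high correlation)
```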

Software Workflows and AI to Automate Lab Data Analysis: 30 Hours to 30 Minutes

So, how do we overcome these dual challenges of inefficiency and subjectivity encountered in the examples above? We believe it’s possible to automate even the most complex data analysis, and that doing so helps (a) save time and (b) make analysis more objective. By working closely with scientists in the biopharma R&D community, Genedata has managed to automate analysis for all three of the assays listed above.

For biochemical kinetic assays, my colleagues and I collaborated with the team at AstraZeneca to create an automated, multistage analysis workflow in Genedata Screener®. Every step of the workflow is based on user-defined standards or empirically determined criteria: determining the right time window for analysis, based on controls; checking that raw progress curves lie within the reliable signal detection range and excluding any suspicious outliers; selecting the best mechanistic model (from built-in models that come with the platform) based on statistics; and finally, annotating each compound with its respective model and flagging any unreliable results. At AstraZeneca, this automation reduced the analysis time for a full-deck screen from 30 hours to 30 minutes, and moreover, made the analysis more objective, consistent, and robust.
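
To give a flavor of what such rule-based decisions can look like in code, here is a hypothetical sketch (not the actual Screener implementation): a progress curve is QC-checked against the reader's reliable signal range, fit with two candidate kinetic models, the better model is selected by AIC, and anything unreliable is flagged.

```python
# Hypothetical sketch of automated progress-curve analysis (not Genedata Screener
# itself): QC-check the curve, fit two candidate models, pick the better by AIC,
# and flag unreliable results for review.
import numpy as np
from scipy.optimize import curve_fit

def linear_model(t, b, v):                       # simple model: constant rate
    return b + v * t

def slow_binding_model(t, b, v0, vs, kobs):      # classic slow-binding progress curve
    return b + vs * t + (v0 - vs) * (1.0 - np.exp(-kobs * t)) / kobs

def aic(rss, n_points, n_params):                # Akaike information criterion
    return n_points * np.log(rss / n_points) + 2 * n_params

def analyze_curve(t, signal, signal_range=(0.05, 0.95)):
    # QC: reject curves outside the reader's reliable detection range
    if signal.min() < signal_range[0] or signal.max() > signal_range[1]:
        return {"model": None, "flag": "signal outside reliable detection range"}
    candidates = {"1-step": (linear_model, [signal[0], 1e-3]),
                  "slow-binding": (slow_binding_model, [signal[0], 1e-3, 1e-4, 0.05])}
    scores = {}
    for name, (model, p0) in candidates.items():
        try:
            popt, _ = curve_fit(model, t, signal, p0=p0, maxfev=10000)
            rss = float(np.sum((signal - model(t, *popt)) ** 2))
            scores[name] = (aic(rss, len(t), len(popt)), popt)
        except RuntimeError:
            continue                             # model did not converge; skip it
    if not scores:
        return {"model": None, "flag": "no model converged"}
    best = min(scores, key=lambda m: scores[m][0])
    return {"model": best, "params": scores[best][1], "flag": None}
```

Because every branch is an explicit rule, the same curve always receives the same model label and flag, which is what makes annotating thousands of compounds both fast and reproducible.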

In collaboration with scientists at Amgen, we applied AI to automate SPR data analysis. First, Screener triages raw sensorgrams, analyzing only those with sufficient binding. Then, using AI, the platform instantaneously and automatically classifies the data for each tested drug candidate as best fit by a kinetic or steady-state binding model. This AI-driven, automated workflow is largely successful: Screener chooses the correct model more than 90% of the time and clearly flags those cases in which a model cannot be confidently assigned. This ensures that only accurately labeled data are used downstream.
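
For intuition only, a conceptual sketch of this triage-then-classify pattern might look as follows (made-up features and thresholds, not the Amgen/Genedata model): sensorgrams below a minimum binding response are excluded, a trained classifier labels the rest as kinetic or steady-state, and low-confidence calls are flagged for manual review.

```python
# Conceptual sketch only (not the production classifier): triage sensorgrams by a
# minimum binding response, classify the rest as "kinetic" vs "steady-state", and
# flag low-confidence predictions for manual review.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_fit_models(train_X, train_y, test_X, min_response=10.0, flag_below=0.8):
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(train_X, train_y)
    keep = test_X[:, 0] >= min_response           # assume column 0 = max response (RU)
    proba = clf.predict_proba(test_X[keep])
    labels = clf.classes_[np.argmax(proba, axis=1)]
    flags = proba.max(axis=1) < flag_below        # ambiguous calls go to manual review
    return keep, labels, flags

# Toy usage with made-up sensorgram-derived features (max RU, curvature, slope, ...)
rng = np.random.default_rng(0)
train_X = rng.normal(loc=20, scale=5, size=(200, 4))
train_y = np.where(train_X[:, 1] > 20, "kinetic", "steady-state")
test_X = rng.normal(loc=20, scale=5, size=(10, 4))
print(classify_fit_models(train_X, train_y, test_X))
```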


In the case of image-based screening, our Imagence solution automates training data curation: unlike traditional AI methods that require scripting skills and long hours of parameter fine-tuning, Imagence has a highly intuitive interface and automates the definition of the phenotype sets used to train deep neural networks, as well as image classification for entire production-level screens. It makes AI-based analysis both versatile and accessible to a cellular biologist with no special programming expertise. This has allowed organizations like AstraZeneca to efficiently deploy the solution across multiple therapeutic areas and stages of drug discovery. Moreover, this led to more robust results (e.g., fewer failed curves in a dose-response assay) and even gave them a handle on targets lacking a simple, single biomarker—delivering on the promise of phenotypic screening to access challenging targets.
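
For illustration only (Imagence's actual interactive workflow is more sophisticated), the sketch below conveys the underlying idea of automating training-data curation: unlabeled cell images, represented here by pre-computed feature embeddings, are grouped into candidate phenotype classes that a biologist can confirm, instead of labeling cells one by one.

```python
# Illustrative sketch, not Imagence itself: cluster cell-image embeddings into
# candidate phenotype classes for a scientist to review and confirm as training labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# stand-in for embeddings of cell images (e.g. from a pretrained network)
embeddings = np.vstack([rng.normal(loc=c, scale=0.5, size=(200, 64))
                        for c in (0.0, 2.0, 4.0)])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)
for cluster_id in range(3):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    # in a real workflow, the scientist would review example images from each cluster
    print(f"candidate phenotype {cluster_id}: {len(members)} cells")
```

Once the candidate classes are confirmed, the resulting labeled set can be used to train a deep neural network that classifies the images of an entire production-level screen.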

These three cases demonstrate that even with the most complex assays or lab data requiring many layers of quality control, validation, and decision-making, it’s possible to automate analysis and thus produce scientifically sound outcomes at scale. We can create solutions that not only incorporate intelligent methods to achieve this, but wrap them into practical, easy-to-use workflows that speed up discovery while also enforcing best practices, ensuring consistency, and enabling innovation.

Look out for the final article in this series, in which we will discuss the future outlook for total research process automation.


Stephan Steigele, Ph.D., is Head of Science, Genedata Screener