High Content Imaging: The Automation Answer
September 16, 2024
Alexandre Peter
High-content imaging yields great insights in pharmaceutical R&D, but limitations in data processing often narrow the breadth and scale of its application, for example in drug discovery. In this blog post, we discuss how deep learning automation can overcome these limitations.
How Conventional Image Analysis Limits High Content Screening
In high-content imaging, also known as phenotypic imaging, compounds are assayed based on how they alter a cellular phenotype. This data-rich method allows scientists to focus on developing sophisticated cellular models, yielding more detailed and relevant insights.
Over the last 15+ years, high-content imaging has led to numerous successes in the efficient discovery of new therapeutics. With modern organoids and organ-on-a-chip systems, more physiologically relevant cellular models can now be developed and included in screening assays. Here, phenotypic imaging enables rapid discovery of tissue-relevant compound effects. But improvements in wet-lab techniques require data analysis to keep pace.
Traditionally, scientists must segment cells from extremely rich and often noisy images, pinpoint which of the possibly hundreds of extracted parameters meaningfully describe the phenotypic effect, and quantify that effect while controlling for experimental variability. This requires substantial expertise and time, yet it is currently the most common method of analysis.
Because phenotypic imaging data is so rich and high-dimensional, automating traditional analysis and assay interpretation has enormous potential to streamline workflows and shorten assay development. AI can take over this time-consuming, multi-step analysis, letting scientists invest their time where it matters most and minimizing R&D costs. However, the choice of model matters if you want to harness the full power of automation.
Deep Learning Supports Automation of Image Analysis
Advances in AI techniques can complement, or in some cases replace, almost all steps in conventional analysis workflows. Most of this is achieved with computational methods such as Machine Learning (ML) and Deep Learning (DL).
In the classical approach to phenotypic imaging, cells are stained and imaged, cellular regions within those images are segmented as needed, features within those segments are extracted (e.g., cell area, staining intensity), and the relevant features are then selected and combined to describe the observed effect. This yields a rich dataset but is highly labor intensive.
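To make the classical workflow concrete, here is a minimal sketch of the segmentation and feature-extraction steps using scikit-image, assuming a single-channel fluorescence image. The thresholding method, size filter, and feature list are illustrative choices, not a description of any particular vendor's implementation.

```python
# Minimal sketch: classical segmentation and per-cell feature extraction
# from a single-channel fluorescence image (illustrative choices only).
import numpy as np
from skimage import filters, measure, morphology

def extract_cell_features(image: np.ndarray) -> dict:
    """Segment bright objects and extract simple per-object features."""
    # Global threshold to separate stained cells from background
    threshold = filters.threshold_otsu(image)
    mask = image > threshold
    # Remove small debris that would otherwise be counted as cells
    mask = morphology.remove_small_objects(mask, min_size=50)
    # Label connected components (one label per putative cell)
    labels = measure.label(mask)
    # Extract classical features: area, intensity, shape descriptors
    return measure.regionprops_table(
        labels,
        intensity_image=image,
        properties=("area", "mean_intensity", "eccentricity", "perimeter"),
    )

if __name__ == "__main__":
    # Synthetic example image with one bright "cell"
    rng = np.random.default_rng(0)
    img = rng.normal(100, 10, size=(256, 256))
    img[60:90, 60:90] += 200
    print(extract_cell_features(img)["area"])
```

In practice, each of these choices (thresholding method, debris filter, feature set) typically needs to be tuned per assay, which is exactly the manual effort the approaches below aim to reduce.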
In ML approaches, the first steps are the same up to the point where the features have been calculated; an ML model then reduces the numerous cellular features to interpretable biological insights. This saves some analysis time, but reaching the full potential requires a solution that automates even more of the analysis workflow.
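As a rough illustration of this ML step, the sketch below trains a classical model on a per-cell feature table; the feature matrix and labels are synthetic stand-ins for real extracted features and control-well annotations.

```python
# Minimal sketch: turning per-cell feature tables into a phenotype readout
# with a classical ML model (feature matrix and labels are synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per cell, columns such as
# area, mean intensity, eccentricity, texture measures, ...
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
# Labels derived from control conditions, e.g. 0 = neutral, 1 = reference compound
y = rng.integers(0, 2, size=500)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())

# Feature importances indicate which morphological parameters drive the phenotype
clf.fit(X, y)
print("most informative feature index:", np.argmax(clf.feature_importances_))
```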
In DL approaches, labeled images are supplied to train a deep neural network to form an intrinsic model of the phenotypes of interest. Then, the algorithm identifies and classifies the cells without the need for prior segmentation. This produces results comparable to classical approaches, yet in much less time, freeing up resources. In addition, it allows for effortless identification of multiple phenotypic classes, without needing any object segmentation or the creation and optimization of complex workflows for every new image-based assay. For these reasons, DL automation has transformed image analysis.
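The following is a minimal, generic sketch of this idea in PyTorch: a small convolutional network that classifies whole image crops into phenotype classes with no prior segmentation. The architecture, class count, and crop size are purely illustrative and do not describe any specific product's network.

```python
# Minimal sketch: a small CNN that classifies image crops into phenotype
# classes directly, with no prior cell segmentation (illustrative only).
import torch
import torch.nn as nn

class PhenotypeCNN(nn.Module):
    def __init__(self, n_channels: int = 3, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to a fixed-size descriptor
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One training step on a dummy batch of labeled crops
model = PhenotypeCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 128, 128)   # labeled training crops
labels = torch.randint(0, 4, (8,))     # phenotype class per crop
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```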
Complete automation of image analysis using DL has great potential to minimize R&D costs. For example, scientists can push the limits of DL-based analysis by letting training models be created automatically: by simply annotating control wells on plates, the AI teaches itself to classify different phenotypes without the need to annotate a training set of images first. This maintains the complexity and data-rich nature of the assay, but simplifies and automates analysis, shortens assay development, reduces bias, and is easily scalable.
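A hypothetical sketch of this idea follows: labels are derived from plate-level control annotations, so every crop taken from an annotated control well inherits that well's phenotype class, and no image-by-image annotation is required. The well IDs and class codes are made up for illustration and do not describe a specific product's mechanism.

```python
# Minimal sketch: deriving a training set from well-level annotations alone.
# Crops from annotated control wells inherit that well's phenotype label,
# so no per-image annotation is needed (well IDs and labels are hypothetical).
from typing import Any, Dict, List, Tuple

# Plate-level annotation: which control wells carry which phenotype class
CONTROL_WELLS: Dict[str, int] = {
    "A01": 0, "A02": 0,   # neutral controls (e.g., vehicle) -> class 0
    "P23": 1, "P24": 1,   # reference-compound wells         -> class 1
}

def build_training_set(crops_by_well: Dict[str, List[Any]]) -> List[Tuple[Any, int]]:
    """Pair every crop from an annotated control well with that well's label."""
    pairs = []
    for well, label in CONTROL_WELLS.items():
        for crop in crops_by_well.get(well, []):
            pairs.append((crop, label))
    return pairs

# Example with placeholder crops (real crops would be image arrays)
crops = {"A01": ["crop_1", "crop_2"], "P23": ["crop_3"]}
print(build_training_set(crops))
```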
Genedata Screener Streamlines and Automates High Content Imaging Workflows
Genedata Screener for HCS covers the whole image analysis process with High Content Extension and Imagence, streamlining the analysis cycles of phenotypic imaging assays. An easy-to-use interface, connected to an AI neural network, automates the entire image analysis workflow from image loading to result computation. Users can either explore the phenotypic landscape of an imaging assay via an intuitive interface called Similarity Map, or build a training set simply from the experimental controls present on the assay plate. This second workflow, which uses only well annotations, offers a fully automated alternative for analyzing HCS data, accelerating and simplifying image analysis while eliminating the human bias introduced when a training set is annotated manually.
With these two options, scientists can tailor the level of automation to best support their needs and reassign resources as their projects require.
Screener supports a large diversity of assays in a single platform out of the box. Thanks to incremental learning, scientists can easily adjust trained models as new data comes in, without starting from scratch, which simplifies both day-to-day use and collaboration. Models and analyses are traceable, a key requirement for production assays. Ultimately, this enables automated analysis in an enterprise framework.
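As a generic illustration of incremental learning (not a description of Screener's internal mechanism), the sketch below updates an already-trained scikit-learn model with a new batch of data instead of retraining it from scratch.

```python
# Minimal sketch of incremental learning: an existing model is updated with
# new data rather than retrained from scratch (generic, illustrative example).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])

# Initial training on the first assay campaign
model = SGDClassifier(random_state=0)
X_initial, y_initial = rng.normal(size=(300, 20)), rng.integers(0, 2, 300)
model.partial_fit(X_initial, y_initial, classes=classes)

# Later, a new plate arrives: update the existing model with the new data only
X_new, y_new = rng.normal(size=(50, 20)), rng.integers(0, 2, 50)
model.partial_fit(X_new, y_new)
print("accuracy on the new batch:", model.score(X_new, y_new))
```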
Conclusion
Screener for HCS automates high content imaging analysis, so scientists can focus on advancing disease models and the discovery process. This streamlines workflows to shorten assay development, minimizing R&D costs.
See how Roche uses Genedata Screener to automate assay development and production workflows of high content assays.