Presented at SLAS, San Diego, CA, USA
The notion of morphologically distinct ‘cellular phenotypes’ lies at the core of high-content screening (HCS). Robustly differentiating these phenotypes is key to obtaining reliable quantitative information from high-content screens. Such phenotypes serve (1) as stable endpoints for primary drug response, (2) as readouts for toxicity and safety-relevant effects, and (3) as a basis for discovering previously unexpected drug effects. To date, however, a cellular phenotype is defined by expert consensus on which visual aspects characterize it, and the automated exploration of phenotype space in HCS remains computationally expensive, requiring multiple cycles of image processing and machine learning to yield an overview of possible phenotypes.
We recently presented an innovative workflow based on convolutional neural networks (‘Deep Learning’), tailored to pharma-relevant HCS and supporting complex research questions such as those posed by phenotypic in-vitro assays. Here, we go one step further and show how Deep Learning constructs similarity maps for phenotype identification, network training, and subsequent effect quantification in phenotypic space. We illustrate the usefulness of these maps on a production screen for adipogenesis and discuss their importance for the analysis process, in particular their robustness against unwanted batch effects and their performance in similarity grouping and visualization.
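The abstract does not specify how the similarity maps are computed. As a minimal, hedged illustration of the general idea, the sketch below builds a cosine-similarity matrix over feature vectors of the kind a trained CNN might emit for cell images, and checks that within-phenotype similarity exceeds cross-phenotype similarity. All names and data here are hypothetical stand-ins, not the authors' actual pipeline; the synthetic clusters merely play the role of two distinct phenotypes.

```python
import numpy as np

def cosine_similarity_matrix(features):
    """Pairwise cosine similarity between rows of a feature matrix."""
    # L2-normalize each row; similarity is then the dot product.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    return unit @ unit.T

rng = np.random.default_rng(0)
# Hypothetical stand-in for CNN embeddings of cell images:
# two synthetic "phenotype" clusters in a 64-dimensional feature space.
cluster_a = rng.normal(0.0, 0.1, (5, 64)) + 1.0
cluster_b = rng.normal(0.0, 0.1, (5, 64)) - 1.0
features = np.vstack([cluster_a, cluster_b])

sim = cosine_similarity_matrix(features)
# Images of the same phenotype should be more similar to each other
# than to images of the other phenotype.
within = sim[:5, :5].mean()
across = sim[:5, 5:].mean()
```

In practice the similarity matrix would feed a grouping or 2-D embedding step to produce the visual map; the choice of cosine similarity here is an assumption for illustration, not taken from the abstract.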