Deep Neural Network Interpretability in High Content Imaging through Deep Feature Isolation Mixing

September 18, 2024

Interpreting Deep Neural Network (DNN) classification results is challenging due to their black-box characteristics. In applications such as high-content imaging for drug screening, where DNNs automate image analysis, the need for better interpretation becomes particularly evident. Here it is essential to verify whether the DNNs' decisions accurately reflect the characteristics of the biological system and process under investigation – to avoid pursuing molecules that the DNN identifies as producing an effect, but not the effect being sought. This interpretation in terms of biological meaning includes determining whether the classification is based on the relevant image channels and sections of the image.

Here, we introduce our novel approach DFMIX (Deep Feature Isolation Mixing) to achieve a better understanding and interpretability of the DNN classification of high-content images in drug screening.

DFMIX determines the significance of individual channels during classification, which is particularly useful in multichannel experiments such as cell painting. It also generates pixel-level attribution maps that highlight the sections of the high-content screening (HCS) image contributing significantly to the classification outcome. Both provide essential information to the scientist, enabling confident use of automated DNN image classification grounded in the assay biology, thus accelerating analysis and decision making in high-content imaging.
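The DFMIX algorithm itself is not detailed here. As a minimal illustrative sketch of the general idea of per-channel significance, one can probe a classifier by ablating each channel in turn and measuring the change in the class score; the function and classifier names below are hypothetical and this is a generic ablation baseline, not the DFMIX method:

```python
import numpy as np

def channel_importance(classify, image, baseline=0.0):
    """Estimate per-channel importance by ablation (illustrative only,
    not the DFMIX algorithm): replace each channel with a baseline value
    and record the drop in the class score.
    `classify` maps an image of shape (C, H, W) to a scalar score."""
    full_score = classify(image)
    drops = []
    for c in range(image.shape[0]):
        ablated = image.copy()
        ablated[c] = baseline  # silence one channel
        drops.append(full_score - classify(ablated))
    return np.array(drops)

# Toy classifier whose score depends only on channel 0's mean intensity.
toy_classify = lambda img: float(img[0].mean())

img = np.ones((3, 4, 4))
scores = channel_importance(toy_classify, img)
# Channel 0 carries all the signal; channels 1 and 2 contribute nothing.
```

A channel-mixing approach such as DFMIX can go beyond this simple ablation by isolating and recombining learned deep features rather than raw pixels.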

The state-of-the-art performance of DFMIX is demonstrated on a set of real-world datasets from both fluorescence-based (RxRx1, neurotensin receptor internalization assay) and bright-field (BBBC054, BBBC010) high-content screening experiments.
