Vision Transformers are powerful AI algorithms that represent the current state of the art in computer vision technology. Genedata Imagence scientists successfully used the technology to solve a complex assay analysis problem. The present case study was conducted in collaboration with our strategic partners at Genentech and demonstrates how AI technologies like Vision Transformers are poised to revolutionize many aspects of high-content drug screening.
Live cell assays with fluorescent markers are invaluable for studying dynamic cell behaviors and are widely used in drug discovery labs. Despite their remarkable versatility, fluorescent markers are not without drawbacks. Prominent among them is phototoxicity, which can impair cell physiology and even lead to cell death. Photobleaching limits how long cell cultures can be observed in longitudinal studies, and genetically encoded fluorescent tags can affect the function of the protein under investigation. Last but not least, fluorescent assays are expensive and time-consuming to perform.
In theory, these drawbacks could be avoided by studying label-free samples with a conventional brightfield microscope, but it is not practical to employ a large team of human experts to perform such analyses at scale. Computer vision in the age of AI changes this calculus fundamentally. It is now possible to teach a computer to recognize relevant changes in features such as cell morphology from brightfield images, much as a human expert would. The computer simply learns from thousands of examples.
In the present case study, the algorithm was trained to distinguish between live and dead tumor cells in a T-cell killing assay, which is used in many drug discovery labs to confirm functional activity of immune cell therapeutics. The results show that the cell viability assessment by the Vision Transformer was in very good agreement with a fluorescently labeled control assay. The latter was assessed by conventional methods, which require not only fluorescent labeling but also semi-automatic cell segmentation that frequently needs manual adjustment to handle different cell phenotypes. By contrast, the Vision Transformer proved to be robust and able to process four phenotypically different cell lines without any manual adjustments. Moreover, the new AI-based workflow proved to be three times faster than the conventional one.
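For readers curious what a Vision Transformer looks like under the hood, the sketch below shows the core architecture in miniature: the image is split into patches, each patch is projected to a token, a transformer encoder attends over the tokens, and a classification head reads out the live/dead decision. This is a hypothetical toy configuration for illustration only, not the production model used in the case study; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal Vision Transformer sketch: patch embedding, encoder, class head.

    Toy illustrative configuration -- not the model used in the case study.
    """
    def __init__(self, image_size=224, patch_size=16, dim=64, depth=2,
                 heads=4, num_classes=2):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Split the (grayscale brightfield) image into patches and
        # project each patch to a token vector of size `dim`.
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)  # e.g. live vs. dead logits

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])  # classify from the class token

model = TinyViT()
logits = model(torch.randn(4, 1, 224, 224))  # batch of 4 grayscale images
```

In practice, such a network would be trained (or fine-tuned from a pretrained backbone) on thousands of labeled example images rather than used untrained as here.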
You do not have to be an AI expert to take advantage of these powerful tools. The AI system simply learns from examples you provide by labeling available control data according to the features you wish to detect, distinguish, and count in your analysis, e.g., cell killing versus no killing. In the present case study, it took only about 15 minutes of training to prepare the neural network, which can then be reused indefinitely.
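The label-then-learn loop described above can be sketched with a deliberately simple classifier standing in for the Vision Transformer: labeled control examples go in, a decision boundary comes out. The synthetic features below are hypothetical stand-ins for image data, and plain logistic regression replaces the transformer purely to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in features: in a real workflow these would be images
# (or image embeddings) from labeled control wells.
# Class 0 = "no killing" controls, class 1 = "cell killing" controls.
X = np.vstack([rng.normal(-2, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

# Train a logistic-regression stand-in by gradient descent on the labels.
w, b = np.zeros(8), 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted killing probability
    w -= 0.1 * X.T @ (p - y) / len(y)    # gradient step on weights
    b -= 0.1 * np.mean(p - y)            # gradient step on bias

# The trained model can then be applied to new, unlabeled wells.
accuracy = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y)
```

The point of the sketch is the workflow, not the model: once the examples are labeled, training is automatic, and the resulting classifier can be applied to new plates without further manual tuning.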
Are you interested in learning more about our hands-free AI-driven image analysis workflow?
Request our poster presented at the SBI2 2022 or feel free to reach out to me directly: cameron.scott [at] genedata.com