Unlocking Label-Free Live Cell Imaging with AI

February 2, 2026
Stephan Steigele

T-cell-based therapies, including TCR-T cell therapies and bispecific antibodies (BsAbs), have steadily continued to break ground in the fight against cancer, with hundreds of clinical trials underway. Cell-based co-culture assays, which test the ability of these therapies to kill tumor cells, are a staple in this field. However, the way these assays are conventionally performed, with live-cell imaging of fluorescent labels, faces experimental limitations and analytical challenges. 

With Genentech, we developed a label-free, AI-driven alternative that can overcome these limitations and transform how researchers evaluate immune effector function.

A New Approach: Brightfield Imaging + Hands-Free AI Analysis Using Vision Transformers

In our study, we present a novel, hands-free analysis workflow built on a Vision Transformer (ViT) architecture. This workflow:

  • Extracts small, overlapping patches from images
  • Classifies each patch as “killing” or “non-killing” using the AI model
  • Quantifies the killing percentage (% of patches in the “killing” class)
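The three steps above can be sketched in a few lines. This is a minimal illustration only: the patch size, stride, and the toy threshold classifier are assumptions, not the values or model used in the study (which uses a trained ViT).

```python
import numpy as np

def extract_patches(image, patch_size=64, stride=32):
    """Slide a window over the image, collecting overlapping patches."""
    patches = []
    h, w = image.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def killing_percentage(patches, classify):
    """classify(patch) -> 1 for 'killing', 0 for 'non-killing'."""
    labels = np.array([classify(p) for p in patches])
    return 100.0 * labels.mean()

# Toy example: a dummy "classifier" that flags bright patches,
# standing in for the ViT's patch-level prediction.
image = np.zeros((256, 256))
image[:128, :] = 1.0  # top half plays the role of "killing" texture
patches = extract_patches(image)
pct = killing_percentage(patches, classify=lambda p: int(p.mean() > 0.5))
# pct -> ~42.9 (21 of 49 patches classified as "killing")
```

In the real pipeline the lambda would be replaced by the ViT's forward pass; the surrounding bookkeeping stays the same.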

Crucially, the model required no manually annotated training data. Instead, it leveraged weak labels derived from treatment controls — thus reducing setup time and enabling scalable assay deployment.
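Weak labeling of this kind can be sketched as follows. The condition names and the patch stream are illustrative assumptions, not identifiers from the study; the idea is simply that patches inherit a label from their well's treatment condition instead of being annotated by hand.

```python
def weak_label(condition):
    """Map a well's treatment condition to a patch-level weak label.

    Patches from a maximal-killing control well inherit 'killing' (1);
    patches from an untreated, target-cells-only well inherit
    'non-killing' (0). Condition names here are illustrative.
    """
    if condition == "max_killing_control":
        return 1
    if condition == "untreated_control":
        return 0
    return None  # intermediate wells are not used for training

# Toy patch stream: (patch_id, well_condition) pairs.
patch_stream = [
    ("p1", "max_killing_control"),
    ("p2", "untreated_control"),
    ("p3", "mid_dose"),  # skipped: no reliable weak label
]
training_set = [
    (patch, weak_label(cond))
    for patch, cond in patch_stream
    if weak_label(cond) is not None
]
# training_set -> [('p1', 1), ('p2', 0)]
```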

What’s especially new here is that we’ve applied this AI-based analysis pipeline to brightfield or phase-contrast images of live-cell co-culture assays, as pioneered by our collaborators at Genentech. Traditional fluorescence-based assays are subject to artifacts such as dye-related cell death (leading to overestimation of the therapy’s effect) or diminished fluorescence over time (due to photobleaching or expulsion of the dye by the cells). Both artifacts are especially problematic in prolonged experiments. Brightfield imaging sidesteps these issues.

Putting AI-Based Analysis to the Test with Immune Cell Therapies

We evaluated our new workflow across two different use cases. First, we investigated the tumor-cell-killing ability of TCR-engineered T-cells, tested against three different brain cancer cell lines and one gastric adenocarcinoma line. Second, we assessed the anti-tumor effects of a T-cell-dependent bispecific antibody against a breast cancer cell line, in combination with a panel of 32 different costimulatory receptor bispecifics.

For each use case, AI-based analysis of brightfield images was benchmarked against conventional fluorescence-based imaging. For the bispecific, it was also benchmarked against conventional analysis of brightfield images.

In the fluorescence assay, the number of cancer cells was measured using either a red fluorescent cytoplasmic dye or a nuclear tag, while a green fluorescent reagent was used to detect cancer cell death (e.g. caspase-3/7 activation). Both readouts were analyzed with a conventional, segmentation-based image analysis pipeline: cell-by-cell counts obtained from the instrument software were used to calculate cell loss over time as well as the percentage of cell death (apoptosis). In parallel, a luminescent CellTiter-Glo assay was used to measure the number of viable cancer cells, based on ATP levels.
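The two conventional readouts reduce to simple ratios over the per-timepoint counts. A minimal sketch, with illustrative function names and toy counts (not data from the study):

```python
def percent_cell_loss(cell_counts):
    """Cancer-cell loss at each timepoint, relative to the first count."""
    baseline = cell_counts[0]
    return [100.0 * (baseline - c) / baseline for c in cell_counts]

def percent_apoptosis(caspase_positive, total_cells):
    """Fraction of cells positive for the green apoptosis reagent."""
    return [100.0 * g / t for g, t in zip(caspase_positive, total_cells)]

# Toy time course: red-labeled tumor cell counts at 0 h, 24 h, 48 h.
loss = percent_cell_loss([1000, 800, 500])             # [0.0, 20.0, 50.0]
apoptosis = percent_apoptosis([50, 240], [1000, 800])  # [5.0, 30.0]
```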

In conventional analysis of brightfield images, the total area of confluent cancer cells was quantified based on their texture detected using a Gabor filter and thresholding.
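A Gabor filter is a sinusoid under a Gaussian envelope: it responds strongly to texture at a matching spatial frequency and weakly to flat background, so thresholding its response magnitude separates confluent cell regions from empty dish. The sketch below uses plain NumPy and illustrative parameter values (kernel size, frequency, threshold), not the settings from the study:

```python
import numpy as np

def gabor_kernel(size=9, frequency=0.5, sigma=3.0):
    """Real Gabor kernel: a horizontal sinusoid under a Gaussian
    envelope, zero-meaned so flat regions give zero response."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    kernel = envelope * np.cos(2.0 * np.pi * frequency * x)
    return kernel - kernel.mean()

def confluent_area_fraction(image, kernel, threshold):
    """Filter the image, threshold the response magnitude, and report
    the textured ('confluent') area as a percentage of valid pixels."""
    kh, kw = kernel.shape
    h, w = image.shape
    response = np.empty((h - kh + 1, w - kw + 1))
    for i in range(response.shape[0]):       # plain sliding-window
        for j in range(response.shape[1]):   # correlation (no SciPy)
            response[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return 100.0 * (np.abs(response) > threshold).mean()

# Toy images: fine vertical stripes (cell-like texture) vs. uniform
# background. The filter fires on the stripes and stays silent on flat.
stripes = np.tile(np.arange(64) % 2, (64, 1)).astype(float)
flat = np.full((64, 64), 0.5)
kernel = gabor_kernel()
textured = confluent_area_fraction(stripes, kernel, threshold=1.0)  # 100.0
empty = confluent_area_fraction(flat, kernel, threshold=1.0)        # 0.0
```

In practice a library implementation (e.g. scikit-image's `filters.gabor`) with a bank of orientations would replace the hand-rolled loop; the thresholding step is the same.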

What Did We Find?

AI-derived killing metrics closely matched fluorescence-based and conventional brightfield analysis results. This impressed us, given the complexity of analyzing a co-culture assay, in which two or even three different cell types in their different states (CD4+ and CD8+ T-cells, dead and live tumor cells) need to be distinguished and quantified. In the case of the bispecific antibody, the AI-based analysis even outperformed the luminescent assay in terms of how it ranked the different costimulatory receptor bispecifics.

Surprisingly, we found that the AI workflow was consistent even when tumor cell morphology varied dramatically across cancer cell lines. This is a significant outcome, because segmentation-based methods require constant parameter tuning based on cell appearance: adherent vs. non-adherent cells, donor-to-donor differences, or even changes in phenotype over the course of the assay. In contrast, the AI model generalized across all these conditions without retuning. This speeds up assay development and saves valuable scientist time.

Together, our label-free, AI-based approach delivered some major advantages:

1. More biologically faithful assays: Brightfield imaging eliminates artifacts due to phototoxicity and bleaching, as detailed above.

2. Lower experiment cost and complexity: Brightfield imaging cuts out the need for extended staining protocols or parallel batches to cover multiple timepoints.

3. Scalable, automated analysis: AI removes manual image segmentation — a major bottleneck in high-throughput drug screening — and, as explained above, is versatile across multiple assay conditions.

We already foresee further improvements to this workflow, such as training the AI network on more diverse data, including “intermediate” killing phenotypes.

Read the full paper here.