

Poster in Workshop: Accessible and Efficient Foundation Models for Biological Discovery

Identifying Biological Priors and Structure in Single-Cell Foundation Models

Flavia Pedrocchi · Stefan Stark · Gunnar Ratsch · Amir Joudaki

Keywords: [ single cell ] [ Interpretability ] [ foundation model ] [ Computational Biology ]


Abstract:

Foundation models pre-trained on large-scale transcriptomic data are gaining popularity for generating latent representations of cells or genes for downstream analysis. While these models show promise for a better understanding of cellular behavior, their complexity and black-box nature pose a challenge to their wider adoption in computational biology. Without a clear understanding of how these models process data and make predictions, it is difficult to discern their strengths and limitations and to identify areas where they can be improved. In this study, we explore approaches for uncovering structural and biological connections within foundation models, using Geneformer and UCE as case studies. The approaches we explore are straightforward to implement, adaptable across various transformer architectures, and suggest possible strategies for interpreting and optimizing existing models and architectures. Our primary findings use attention rollout for biological interpretation of attention maps, linear probes to uncover where learned biological concepts appear, and comparisons of hidden states to show learning progression and the emergence of token patterns.
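The abstract names its interpretability techniques but includes no code. As a point of reference, the sketch below shows attention rollout (Abnar & Zuidema, 2020) for a transformer whose per-layer attention maps are available, for example via `output_attentions=True` in a Hugging Face model. The function name, tensor shapes, and head-averaging choice are illustrative assumptions, not the authors' implementation.

```python
import torch

def attention_rollout(attentions, add_residual=True):
    """Propagate attention across layers by multiplying per-layer,
    head-averaged attention maps, mixing in the identity to account
    for residual connections (attention rollout).

    attentions: list of tensors, one per layer, each of shape
                (num_heads, seq_len, seq_len).
    Returns a (seq_len, seq_len) matrix of cumulative token-to-token
    attention from the input tokens to the final layer.
    """
    rollout = None
    for layer_attn in attentions:
        # Average attention over heads -> (seq_len, seq_len)
        attn = layer_attn.mean(dim=0)
        if add_residual:
            # Add the identity for the residual stream, then renormalize rows
            attn = attn + torch.eye(attn.size(-1))
            attn = attn / attn.sum(dim=-1, keepdim=True)
        # Compose with the rollout accumulated from earlier layers
        rollout = attn if rollout is None else attn @ rollout
    return rollout
```

A linear probe, by contrast, is simply a lightweight classifier (e.g., logistic regression) fit on frozen hidden states at each layer to test whether a concept such as cell type is linearly decodable at that depth.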
