Workshop

ICML 2025 Workshop on Multi-modal Foundation Models and Large Language Models for Life Sciences

Pengtao Xie · James Zou · Le Song · Aidong Zhang · Danielle Grotjahn · Linda Awdishu · Eran Segal · Wei Wang

Recent advances in foundation models and large language models (LLMs) have revolutionized life sciences by enabling AI-driven insights into complex biological systems. However, most existing models focus on single-modal data, limiting their ability to capture the inherently multi-modal nature of biological processes. This workshop will explore the development and application of multi-modal foundation models and LLMs that integrate diverse biological data types, such as protein sequences, structures, genomic and transcriptomic data, and metabolomics. By bringing together researchers from machine learning, computational biology, and biomedical sciences, the workshop will address challenges in modality fusion, cross-modal representation learning, scalable pretraining, and interpretability. Discussions will focus on novel architectures, self-supervised learning methods, and real-world applications in drug discovery, precision medicine, and multi-omics data analysis. Through invited talks, poster sessions, contributed presentations, and panel discussions, this workshop aims to advance multi-modal foundation models and LLMs for biological discovery and foster interdisciplinary collaborations that push the boundaries of machine learning in life sciences.
