Workshop

The Second Workshop on Long-Context Foundation Models

Zexue He · Tianyu Gao · Amanda Bertsch · Yuandong Tian · Danqi Chen · Graham Neubig · Rogerio Feris

Foundation models have become a cornerstone in the advancement of artificial intelligence, enabling applications across a wide range of domains. Many complex tasks today require processing and synthesizing information over thousands to millions of individual pieces of data, from text and images to audio and genomic sequences. Recent progress in long-context models has made it possible to handle such extensive inputs, but significant challenges remain, particularly in computational efficiency, data quality and quantity, and evaluation. This workshop will convene researchers to explore these challenges and foster developments in long-context foundation models. Key topics include new modeling architectures, training approaches, efficiency techniques, and comprehensive evaluation methods. Additionally, in this edition, special attention will be given to long-context reasoning, multimodal learning, and applications in scientific fields such as genomics and climate science. By tackling these critical challenges, we aim to push the boundaries of long-context modeling and shape its future directions.