ICML 2024


Workshop

Long-Context Foundation Models

Tianyu Gao · Weijia Shi · Amanda Bertsch · Tri Dao · Danqi Chen · Graham Neubig · Christopher Re

Hall A2
Fri 26 Jul, midnight PDT

Foundation models have become a cornerstone in the advancement of artificial intelligence, widely used across both academic and practical applications. Across domains, many challenging tasks require synthesizing information over thousands to millions of individual pieces of data, which may take many forms, including images, text, audio, genomes, etc. As a result, much recent work has focused on developing long-context models capable of processing, understanding, and generating responses based on extensive inputs. Enabling foundation models to process long contexts introduces several key challenges: (1) Computational efficiency: transformers, the predominant architecture for foundation models, incur quadratic computational complexity with respect to the input length. (2) Lack of data: developing long-context foundation models requires access to large amounts of long-sequence data, a requirement that is hard to meet given the limited availability of such collections. (3) Evaluation complexity: evaluating the performance of long-context foundation models is inherently difficult, as it is costly for humans to collect, construct, or verify such evaluation data. Our workshop aims to convene researchers to address these challenges, fostering discussion, development, and evaluation of long-context foundation models across various AI disciplines.
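As a minimal illustration of challenge (1), not part of the workshop description itself: naive self-attention materializes an n-by-n score matrix, so time and memory grow quadratically with sequence length n. A hedged NumPy sketch (all names here are illustrative, not from any particular model):

```python
import numpy as np

def attention_weights(q, k):
    """Naive softmax attention weights for queries q and keys k,
    each of shape (n, d). The score matrix is (n, n), so cost is
    quadratic in the sequence length n."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Numerically stable softmax over each row.
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

d = 8
for n in (128, 256):
    rng = np.random.default_rng(0)
    w = attention_weights(rng.standard_normal((n, d)),
                          rng.standard_normal((n, d)))
    # Doubling n quadruples the number of entries in w.
    print(n, w.shape, w.size)
```

Doubling the context length from 128 to 256 quadruples the score-matrix size (16,384 to 65,536 entries), which is why sub-quadratic attention variants and alternative architectures are a central topic for long-context models.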
