ICML 2024


Workshop

Workshop on Theoretical Foundations of Foundation Models (TF2M)

Berivan Isik · Ziteng Sun · Banghua Zhu · Enric Boix-Adserà · Nezihe Merve Gürel · Bo Li · Ahmad Beirami · Sanmi Koyejo

Straus 2
Sat 27 Jul, midnight PDT

Recent advances in generative foundation models (FMs), such as large language models (LLMs) and diffusion models, have propelled the capabilities of deep neural models to seemingly magical heights. Yet the soaring growth in model size and capability has also raised pressing concerns about such modern AI systems. Scaling up these models significantly increases their energy consumption and deployment cost. Overreliance on AI may perpetuate existing inequalities and widen discrimination against certain groups of people. The gap between our understanding of the internal workings of FMs and their empirical success has also reached an unprecedented level, hindering accountability and transparency.

For decades, theoretical tools from statistics, information theory, and optimization have played a pivotal role in extracting information from unstructured data. Today, however, the rapid pace of FM development has outstripped theoretical investigation, creating a potential gap between theoretical researchers and the challenges surrounding FMs. This workshop offers a platform for bringing together researchers and practitioners from the foundation model and theory communities (including statistics, information theory, optimization, and learning theory) to discuss advances and challenges in addressing these concerns, with a focus on responsible AI, efficiency, and principled foundations.
