Expo Workshop
Uncertainty Estimation in LLM-Generated Content
Anoop Kumar · Alfy Samuel · Vivek Datla · Geoff Pleiss · Sanghamitra Dutta · Michael Kirchhof
West Ballroom B
The ability of Large Language Models (LLMs) to accurately estimate uncertainty is not just a theoretical concern; it’s a fundamental bottleneck hindering their safe and effective deployment in high-stakes, industrial-scale applications. The gap between model confidence and actual correctness poses an immediate and escalating risk. To mitigate these risks, this workshop convenes leading industry experts and academic researchers to confront the urgent challenges in LLM uncertainty estimation. We must define the state-of-the-art, establish rigorous evaluation standards, and forge a path toward reliable AI. This workshop will focus on:
- Calibration: How can we ensure LLMs’ confidence levels align with their true accuracy?
- Confidence-Aware Generation: What novel methods can enable LLMs to express their own uncertainty during content creation?
- Out-of-Distribution Detection: How do we equip LLMs to recognize and flag inputs that lie outside their training data?
- Uncertainty Communication: What are the most effective techniques for conveying LLM uncertainty to end-users, fostering trust and informed decision-making?
- Benchmarking: What metrics can measure how well models express and quantify uncertainty? (A minimal calibration-metric sketch follows below.)
The insights and collaborations generated here will directly shape the future of LLM development and deployment.
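To make the benchmarking question above concrete, the sketch below computes expected calibration error (ECE), one standard metric for checking whether a model's stated confidence matches its observed accuracy. This is an illustrative assumption, not material from the workshop itself; the function name, bin count, and toy data are hypothetical.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Minimal expected calibration error (ECE) sketch.

    confidences: model-reported probabilities for its answers (0..1)
    correct:     1 if the answer was actually correct, else 0
    Bins predictions by confidence and averages |accuracy - confidence|,
    weighted by the fraction of predictions in each bin.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # observed accuracy in this bin
            conf = confidences[mask].mean()   # average stated confidence
            ece += mask.mean() * abs(acc - conf)
    return ece

# Toy example: a well-calibrated model would score near 0.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.55], [1, 1, 0, 1]))
```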
Schedule
Mon 4:30 p.m. - 5:10 p.m. | Uncertainty Estimation in LLM-Generated Content: An Overview (Intro and Presentation) | Anoop Kumar · Alfy Samuel · Vivek Datla
Mon 5:15 p.m. - 6:00 p.m. | Panel Discussion on Uncertainty Estimation (Panel) | Michael Kirchhof · Sanghamitra Dutta · Geoff Pleiss · Anoop Kumar