Workshop
Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning (ITR3)
Ahmad Beirami · Flavio Calmon · Berivan Isik · Haewon Jeong · Matthew Nokleby · Cynthia Rush
Sat 24 Jul, 7 a.m. PDT
The empirical success of state-of-the-art machine learning (ML) techniques has outpaced their theoretical understanding. Deep learning models, for example, perform far better than classical statistical learning theory predicts, and this empirical success has driven their widespread adoption in industry and government. At the same time, deploying ML systems that are not fully understood often leads to unexpected and detrimental individual-level impacts. Finally, the large-scale adoption of ML means that ML systems are now critical infrastructure on which millions rely. In the face of these challenges, there is a critical need for theory that provides rigorous performance guarantees for practical ML models; guides the responsible deployment of ML in applications of social consequence; and enables the design of reliable ML systems in large-scale, distributed environments.
For decades, information theory has provided a mathematical foundation for the systems and algorithms that fuel the current data science revolution. Recent advances in privacy, fairness, and generalization bounds demonstrate that information theory will also play a pivotal role in the next decade of ML applications: information-theoretic methods can sharpen generalization bounds for deep learning, provide rigorous guarantees for compression of neural networks, promote fairness and privacy in ML training and deployment, and shed light on the limits of learning from noisy data.
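As one concrete instance of the generalization bounds mentioned above, the mutual-information bound of Xu and Raginsky (the latter an invited speaker at this workshop) states that, for a σ-sub-Gaussian loss and n training examples, the expected generalization gap is at most sqrt(2σ²·I(S;W)/n), where I(S;W) is the mutual information between the training sample S and the learned weights W. A minimal sketch of evaluating this bound (the function name and example values below are illustrative, not from the workshop materials):

```python
import math

def mi_generalization_bound(mutual_info_nats: float, sigma: float, n: int) -> float:
    """Xu-Raginsky bound: E[generalization gap] <= sqrt(2 * sigma^2 * I(S;W) / n).

    mutual_info_nats: mutual information I(S;W) between the training sample
        and the learned weights, measured in nats.
    sigma: sub-Gaussian parameter of the loss function.
    n: number of training examples.
    """
    return math.sqrt(2 * sigma**2 * mutual_info_nats / n)

# Example: a 1-sub-Gaussian loss, 1 nat of information about the sample
# retained in the weights, and 200 training examples.
print(mi_generalization_bound(1.0, 1.0, 200))  # -> 0.1
```

The bound makes the intuition quantitative: the less the learned weights "remember" about the specific training sample (smaller I(S;W)), the tighter the guarantee on the generalization gap.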
We propose a workshop that brings together researchers and practitioners in ML and information theory to encourage knowledge transfer and collaboration between the sister fields. For information theorists, the workshop will highlight novel and socially-critical research directions that promote reliable, responsible, and rigorous development of ML. Moreover, the workshop will expose ICML attendees to emerging information-theoretic tools that may play a critical role in the next decade of ML applications.
Schedule
Sat 7:00 a.m. - 7:15 a.m. | Opening Remarks (Intro & Welcome) | Ahmad Beirami
Sat 7:15 a.m. - 8:00 a.m. | Virtual Poster Session #1
Sat 8:00 a.m. - 8:30 a.m. | Invited Talk | Maxim Raginsky
Sat 8:30 a.m. - 8:45 a.m. | Q&A | Maxim Raginsky
Sat 8:45 a.m. - 9:15 a.m. | Invited Talk | Alexandros Dimakis
Sat 9:15 a.m. - 9:30 a.m. | Q&A | Alexandros Dimakis
Sat 9:30 a.m. - 10:00 a.m. | Small Break
Sat 10:00 a.m. - 10:30 a.m. | Invited Talk | Kamalika Chaudhuri
Sat 10:30 a.m. - 10:45 a.m. | Q&A | Kamalika Chaudhuri
Sat 10:45 a.m. - 11:15 a.m. | Invited Talk | Todd Coleman
Sat 11:15 a.m. - 11:30 a.m. | Q&A | Todd Coleman
Sat 11:30 a.m. - 11:45 a.m. | Contributed Talk #1 | Eric Lei · Hamed Hassani · Shirin Bidokhti
Sat 11:45 a.m. - 12:00 p.m. | Contributed Talk #2 | Borja Rodríguez Gálvez · Mikael Skoglund · Ragnar Thobaben · German Bassi
Sat 12:00 p.m. - 1:00 p.m. | Big Break
Sat 1:00 p.m. - 1:45 p.m. | Panel Discussion
Sat 1:45 p.m. - 2:15 p.m. | Invited Talk | Kush Varshney
Sat 2:15 p.m. - 2:30 p.m. | Q&A | Kush Varshney
Sat 2:30 p.m. - 3:00 p.m. | Invited Talk | Thomas Steinke
Sat 3:00 p.m. - 3:15 p.m. | Q&A | Thomas Steinke
Sat 3:15 p.m. - 4:00 p.m. | Virtual Poster Session #2
Sat 4:00 p.m. - 4:30 p.m. | Invited Talk | Lalitha Sankar
Sat 4:30 p.m. - 4:45 p.m. | Q&A | Lalitha Sankar
Sat 4:45 p.m. - 5:00 p.m. | Contributed Talk #3 | Sharu Jose · Osvaldo Simeone
Sat 5:00 p.m. - 5:15 p.m. | Contributed Talk #4 | Mohammad Samragh · Hossein Hosseini · Kambiz Azarian · Farinaz Koushanfar
Sat 5:15 p.m. - 5:45 p.m. | Invited Talk | David Tse
Sat 5:45 p.m. - 6:00 p.m. | Q&A | David Tse
Sat 6:00 p.m. - 6:15 p.m. | Concluding Remarks
Sat 6:15 p.m. - 7:00 p.m. | Social Hour
Posters
- Single-Shot Compression for Hypothesis Testing | Fabrizio Carpi · Siddharth Garg · Elza Erkip
- When Optimizing f-divergence is Robust with Label Noise | Jiaheng Wei · Yang Liu
- Active privacy-utility trade-off against a hypothesis testing adversary | Ecenaz Erdemir · Pier Luigi Dragotti · Deniz Gunduz
- A unified PAC-Bayesian framework for machine unlearning via information risk minimization | Sharu Jose · Osvaldo Simeone
- Neural Network-based Estimation of the MMSE | Mario Diaz · Peter Kairouz · Lalitha Sankar
- Realizing GANs via a Tunable Loss Function | Gowtham Raghunath Kurri · Tyler Sypherd · Lalitha Sankar
- True Few-Shot Learning with Language Models | Ethan Perez · Douwe Kiela · Kyunghyun Cho
- Learning under Distribution Mismatch and Model Misspecification | Mohammad Saeed Masiha · Mohammad Reza Aref
- Soft BIBD and Product Gradient Codes: Coding Theoretic Constructions to Mitigate Stragglers in Distributed Learning | Animesh Sakorikar · Lele Wang
- Information-Guided Sampling for Low-Rank Matrix Completion | Simon Mak · Shaowu Yuchi · Yao Xie
- Sliced Mutual Information: A Scalable Measure of Statistical Dependence | Ziv Goldfeld · Kristjan Greenewald
- Minimax Bounds for Generalized Pairwise Comparisons | Kuan-Yun Lee · Thomas Courtade
- Characterizing the Generalization Error of Gibbs Algorithm with Symmetrized KL information | Gholamali Aminian · Yuheng Bu · Laura Toni · Miguel Rodrigues · Gregory Wornell
- Prediction-focused Mixture Models | Abhishek Sharma · Sanjana Narayanan · Catherine Zeng · Finale Doshi-Velez
- Tighter Expected Generalization Error Bounds via Wasserstein Distance | Borja Rodríguez Gálvez · German Bassi · Ragnar Thobaben · Mikael Skoglund
- Data-Dependent PAC-Bayesian Bounds in the Random-Subset Setting with Applications to Neural Networks | Fredrik Hellström · Giuseppe Durisi
- Active Sampling for Binary Gaussian Model Testing in High Dimensions | Javad Heydari · Ali Tajer
- Unsupervised Information Obfuscation for Split Inference of Neural Networks | Mohammad Samragh · Hossein Hosseini · Aleksei Triastcyn · Kambiz Azarian · Joseph B Soriaga · Farinaz Koushanfar
- Within-layer Diversity Reduces Generalization Gap | Firas Laakom · Jenni Raitoharju · Alexandros Iosifidis · Moncef Gabbouj
- Entropic Causal Inference: Identifiability for Trees and Complete Graphs | Spencer Compton · Murat Kocaoglu · Kristjan Greenewald · Dmitriy Katz
- Towards a Unified Information-Theoretic Framework for Generalization | Mahdi Haghifam · Gintare Karolina Dziugaite · Shay Moran
- Sub-population Guarantees for Importance Weights and KL-Divergence Estimation | Parikshit Gopalan · Nina Narodytska · Omer Reingold · Vatsal Sharan · Udi Wieder
- Excess Risk Analysis of Learning Problems via Entropy Continuity | Aolin Xu
- Out-of-Distribution Robustness in Deep Learning Compression | Eric Lei · Hamed Hassani
- Coded Privacy-Preserving Computation at Edge Networks | Elahe Vedadi · Yasaman Keshtkarjahromi · Hulya Seferoglu
- a-VAEs: Optimising variational inference by learning data-dependent divergence skew | Jacob Deasy · Tom McIver · Nikola Simidjievski · Pietro Lió