Workshop
Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning (ITR3)
Ahmad Beirami · Flavio Calmon · Berivan Isik · Haewon Jeong · Matthew Nokleby · Cynthia Rush
Sat 24 Jul, 7 a.m. PDT
The empirical success of state-of-the-art machine learning (ML) techniques has outpaced their theoretical understanding. Deep learning models, for example, perform far better than classical statistical learning theory predicts, leading to their widespread use in industry and government. At the same time, the deployment of ML systems that are not fully understood often leads to unexpected and detrimental impacts on individuals. Finally, the large-scale adoption of ML means that ML systems are now critical infrastructure on which millions rely. In the face of these challenges, there is a critical need for theory that provides rigorous performance guarantees for practical ML models; guides the responsible deployment of ML in applications of social consequence; and enables the design of reliable ML systems in large-scale, distributed environments.
For decades, information theory has provided a mathematical foundation for the systems and algorithms that fuel the current data science revolution. Recent advances in privacy, fairness, and generalization bounds demonstrate that information theory will also play a pivotal role in the next decade of ML applications: information-theoretic methods can sharpen generalization bounds for deep learning, provide rigorous guarantees for compression of neural networks, promote fairness and privacy in ML training and deployment, and shed light on the limits of learning from noisy data.
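As one concrete instance of the generalization bounds mentioned above, the mutual-information bound of Xu and Raginsky (2017) states that for a sigma-sub-Gaussian loss, the expected generalization gap of a learning algorithm is at most sqrt(2 * sigma^2 * I(S;W) / n), where I(S;W) is the mutual information between the n training samples S and the learned weights W. A minimal sketch of plugging numbers into this bound (the function name and the mutual-information value below are illustrative assumptions, not workshop material):

```python
import math

def mi_generalization_bound(mi_nats: float, sigma: float, n: int) -> float:
    """Xu-Raginsky bound: for a sigma-sub-Gaussian loss, the expected
    generalization gap is at most sqrt(2 * sigma^2 * I(S;W) / n),
    with the mutual information I(S;W) measured in nats."""
    return math.sqrt(2 * sigma**2 * mi_nats / n)

# A loss bounded in [0, 1] is (1/2)-sub-Gaussian. Suppose the learned
# weights leak one nat of information about n = 1000 training samples
# (a made-up illustrative value, not a measurement):
print(mi_generalization_bound(mi_nats=1.0, sigma=0.5, n=1000))  # ~0.0224
```

The bound makes the information-theoretic intuition explicit: an algorithm whose output weights reveal little about the training set (small I(S;W)) cannot overfit much, and the guarantee tightens as the sample size n grows.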
We propose a workshop that brings together researchers and practitioners in ML and information theory to encourage knowledge transfer and collaboration between the sister fields. For information theorists, the workshop will highlight novel and socially critical research directions that promote reliable, responsible, and rigorous development of ML. Moreover, the workshop will expose ICML attendees to emerging information-theoretic tools that may play a critical role in the next decade of ML applications.
Schedule
7:00 - 7:15 a.m.    Opening Remarks (Ahmad Beirami)
7:15 - 8:00 a.m.    Virtual Poster Session #1
8:00 - 8:30 a.m.    Invited Talk: Maxim Raginsky
8:30 - 8:45 a.m.    Q&A: Maxim Raginsky
8:45 - 9:15 a.m.    Invited Talk: Alexandros Dimakis
9:15 - 9:30 a.m.    Q&A: Alexandros Dimakis
9:30 - 10:00 a.m.   Small Break
10:00 - 10:30 a.m.  Invited Talk: Kamalika Chaudhuri
10:30 - 10:45 a.m.  Q&A: Kamalika Chaudhuri
10:45 - 11:15 a.m.  Invited Talk: Todd Coleman
11:15 - 11:30 a.m.  Q&A: Todd Coleman
11:30 - 11:45 a.m.  Contributed Talk #1 (Eric Lei · Hamed Hassani · Shirin Bidokhti)
11:45 a.m. - 12:00 p.m.  Contributed Talk #2 (Borja Rodríguez Gálvez · Mikael Skoglund · Ragnar Thobaben · German Bassi)
12:00 - 1:00 p.m.   Big Break
1:00 - 1:45 p.m.    Panel Discussion
1:45 - 2:15 p.m.    Invited Talk: Kush Varshney
2:15 - 2:30 p.m.    Q&A: Kush Varshney
2:30 - 3:00 p.m.    Invited Talk: Thomas Steinke
3:00 - 3:15 p.m.    Q&A: Thomas Steinke
3:15 - 4:00 p.m.    Virtual Poster Session #2
4:00 - 4:30 p.m.    Invited Talk: Lalitha Sankar
4:30 - 4:45 p.m.    Q&A: Lalitha Sankar
4:45 - 5:00 p.m.    Contributed Talk #3 (Sharu Jose · Osvaldo Simeone)
5:00 - 5:15 p.m.    Contributed Talk #4 (Mohammad Samragh · Hossein Hosseini · Kambiz Azarian · Farinaz Koushanfar)
5:15 - 5:45 p.m.    Invited Talk: David Tse
5:45 - 6:00 p.m.    Q&A: David Tse
6:00 - 6:15 p.m.    Concluding Remarks
6:15 - 7:00 p.m.    Social Hour
-
|
Single-Shot Compression for Hypothesis Testing
(
Poster
)
>
|
Fabrizio Carpi 路 Siddharth Garg 路 Elza Erkip 馃敆 |
-
|
When Optimizing f-divergence is Robust with Label Noise
(
Poster
)
>
|
Jiaheng Wei 路 Yang Liu 馃敆 |
-
|
Active privacy-utility trade-off against a hypothesis testing adversary
(
Poster
)
>
|
Ecenaz Erdemir 路 Pier Luigi Dragotti 路 Deniz Gunduz 馃敆 |
-
|
A unified PAC-Bayesian framework for machine unlearning via information risk minimization
(
Poster
)
>
|
Sharu Jose 路 Osvaldo Simeone 馃敆 |
-
|
Neural Network-based Estimation of the MMSE
(
Poster
)
>
|
Mario Diaz 路 Peter Kairouz 路 Lalitha Sankar 馃敆 |
-
|
Realizing GANs via a Tunable Loss Function
(
Poster
)
>
|
Gowtham Raghunath Kurri 路 Tyler Sypherd 路 Lalitha Sankar 馃敆 |
-
|
True Few-Shot Learning with Language Models
(
Poster
)
>
|
Ethan Perez 路 Douwe Kiela 路 Kyunghyun Cho 馃敆 |
-
|
Learning under Distribution Mismatch and Model Misspecification
(
Poster
)
>
|
Mohammad Saeed Masiha 路 Mohammad Reza Aref 馃敆 |
-
|
Soft BIBD and Product Gradient Codes: Coding Theoretic Constructions to Mitigate Stragglers in Distributed Learning
(
Poster
)
>
|
Animesh Sakorikar 路 Lele Wang 馃敆 |
-
|
Information-Guided Sampling for Low-Rank Matrix Completion
(
Poster
)
>
|
Simon Mak 路 Shaowu Yuchi 路 Yao Xie 馃敆 |
-
|
Sliced Mutual Information: A Scalable Measure of Statistical Dependence
(
Poster
)
>
|
Ziv Goldfeld 路 Kristjan Greenewald 馃敆 |
-
|
Minimax Bounds for Generalized Pairwise Comparisons
(
Poster
)
>
|
Kuan-Yun Lee 路 Thomas Courtade 馃敆 |
-
|
Characterizing the Generalization Error of Gibbs Algorithm with Symmetrized KL information
(
Poster
)
>
|
Gholamali Aminian 路 Yuheng Bu 路 Laura Toni 路 Miguel Rodrigues 路 Gregory Wornell 馃敆 |
-
|
Prediction-focused Mixture Models
(
Poster
)
>
|
Abhishek Sharma 路 Sanjana Narayanan 路 Catherine Zeng 路 Finale Doshi-Velez 馃敆 |
-
|
Tighter Expected Generalization Error Bounds via Wasserstein Distance
(
Poster
)
>
|
Borja Rodr铆guez G谩lvez 路 German Bassi 路 Ragnar Thobaben 路 Mikael Skoglund 馃敆 |
-
|
Data-Dependent PAC-Bayesian Bounds in the Random-Subset Setting with Applications to Neural Networks
(
Poster
)
>
|
Fredrik Hellstr枚m 路 Giuseppe Durisi 馃敆 |
-
|
Active Sampling for Binary Gaussian Model Testing in High Dimensions
(
Poster
)
>
|
Javad Heydari 路 Ali Tajer 馃敆 |
-
|
Unsupervised Information Obfuscation for Split Inference of Neural Networks
(
Poster
)
>
|
Mohammad Samragh 路 Hossein Hosseini 路 Aleksei Triastcyn 路 Kambiz Azarian 路 Joseph B Soriaga 路 Farinaz Koushanfar 馃敆 |
-
|
Within-layer Diversity Reduces Generalization Gap
(
Poster
)
>
|
Firas Laakom 路 Jenni Raitoharju 路 Alexandros Iosifidis 路 Moncef Gabbouj 馃敆 |
-
|
Entropic Causal Inference: Identifiability for Trees and Complete Graphs
(
Poster
)
>
|
Spencer Compton 路 Murat Kocaoglu 路 Kristjan Greenewald 路 Dmitriy Katz 馃敆 |
-
|
Towards a Unified Information-Theoretic Framework for Generalization
(
Poster
)
>
|
Mahdi Haghifam 路 Gintare Karolina Dziugaite 路 Shay Moran 馃敆 |
-
|
Sub-population Guarantees for Importance Weights and KL-Divergence Estimation
(
Poster
)
>
|
Parikshit Gopalan 路 Nina Narodytska 路 Omer Reingold 路 Vatsal Sharan 路 Udi Wieder 馃敆 |
-
|
Excess Risk Analysis of Learning Problems via Entropy Continuity
(
Poster
)
>
|
Aolin Xu 馃敆 |
-
|
Out-of-Distribution Robustness in Deep Learning Compression
(
Poster
)
>
|
Eric Lei 路 Hamed Hassani 馃敆 |
-
|
Coded Privacy-Preserving Computation at Edge Networks
(
Poster
)
>
|
Elahe Vedadi 路 Yasaman Keshtkarjahromi 路 Hulya Seferoglu 馃敆 |
-
|
a-VAEs : Optimising variational inference by learning data-dependent divergence skew
(
Poster
)
>
|
Jacob Deasy 路 Tom McIver 路 Nikola Simidjievski 路 Pietro Li贸 馃敆 |