Timezone: America/Los_Angeles
SUN 23 JUL
1 p.m. (ends 8:00 PM)
3:30 p.m. Expo Talk Panel with Coffee & a Snack (ends 4:30 PM)
4:30 p.m. Expo Talk Panel with Coffee & a Snack (ends 5:30 PM)
MON 24 JUL
11 a.m. (ends 10:00 PM)
12:30 p.m. Tutorial (ends 3:00 PM)
4:30 p.m. Tutorial (ends 6:30 PM)
4:30 p.m. Tutorial (ends 6:30 PM)
4:30 p.m. Tutorial (ends 6:30 PM)
7 p.m. Tutorial (ends 9:00 PM)
TUE 25 JUL
11 a.m. (ends 9:00 PM)
2 p.m. (ends 3:30 PM)
5 p.m. (ends 6:30 PM)
8:30 p.m.
Orals 8:30-9:50
[8:30] Bayesian Design Principles for Frequentist Sequential Learning
[8:38] Towards Theoretical Understanding of Inverse Reinforcement Learning
[8:46] On the Power of Pre-training for Generalization in RL: Provable Benefits and Hardness
[8:54] Delayed Feedback in Kernel Bandits
[9:02] Provably Learning Object-Centric Representations
[9:10] Task-specific experimental design for treatment effect estimation
[9:18] Are labels informative in semi-supervised learning? Estimating and leveraging the missing-data mechanism.
[9:26] Interventional Causal Representation Learning
[9:34] Returning The Favour: When Regression Benefits From Probabilistic Causal Knowledge
[9:42] Sequential Underspecified Instrument Selection for Cause-Effect Estimation
(ends 10:00 PM)
Orals 8:30-9:50
[8:30] Raising the Cost of Malicious AI-Powered Image Editing
[8:38] Dynamics-inspired Neuromorphic Visual Representation Learning
[8:46] Scaling Vision Transformers to 22 Billion Parameters
[8:54] Facial Expression Recognition with Adaptive Frame Rate based on Multiple Testing Correction
[9:02] Fourmer: An Efficient Global Modeling Paradigm for Image Restoration
[9:10] Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping
[9:18] Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles
[9:26] Rockmate: an Efficient, Fast, Automatic and Generic Tool for Re-materialization in PyTorch
[9:34] SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks at the Edge
[9:42] Fast Inference from Transformers via Speculative Decoding
(ends 10:00 PM)
Orals 8:30-9:58
[8:30] Self-Repellent Random Walks on General Graphs - Achieving Minimal Sampling Variance via Nonlinear Markov Chains
[8:38] Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond
[8:46] Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
[8:54] Tighter Information-Theoretic Generalization Bounds from Supersamples
[9:02] Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels
[9:10] Bayes-optimal Learning of Deep Random Networks of Extensive-width
[9:18] Why does Throwing Away Data Improve Worst-Group Error?
[9:26] Marginalization is not Marginal: No Bad VAE Local Minima when Learning Optimal Sparse Representations
[9:34] Sharper Bounds for $\ell_p$ Sensitivity Sampling
[9:42] AdaBoost is not an Optimal Weak to Strong Learner
[9:50] Generalization on the Unseen, Logic Reasoning and Degree Curriculum
(ends 10:00 PM)
Orals 8:30-9:50
[8:30] AdaptDiffuser: Diffusion Models as Adaptive Self-evolving Planners
[8:38] Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples
[8:46] Graphically Structured Diffusion Models
[8:54] Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?
[9:02] Refining Generative Process with Discriminator Guidance in Score-based Diffusion Models
[9:10] Diffusion Models are Minimax Optimal Distribution Estimators
[9:18] GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration
[9:26] OCD: Learning to Overfit with Conditional Diffusion Models
[9:34] Denoising MCMC for Accelerating Diffusion-Based Generative Models
[9:42] Cones: Concept Neurons in Diffusion Models for Customized Generation
(ends 10:00 PM)
Orals 8:30-9:50
[8:30] Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark
[8:38] Information-Theoretic State Space Model for Multi-View Reinforcement Learning
[8:46] Reparameterized Policy Learning for Multimodal Trajectory Optimization
[8:54] Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL
[9:02] Subequivariant Graph Reinforcement Learning in 3D Environments
[9:10] A Study of Global and Episodic Bonuses for Exploration in Contextual MDPs
[9:18] Warm-Start Actor-Critic: From Approximation Error to Sub-optimality Gap
[9:26] Efficient RL via Disentangled Environment and Agent Representations
[9:34] Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning
[9:42] On the Statistical Benefits of Temporal Difference Learning
(ends 10:00 PM)
Orals 8:30-9:50
[8:30] Learning GFlowNets From Partial Episodes For Improved Convergence And Stability
[8:38] The Dormant Neuron Phenomenon in Deep Reinforcement Learning
[8:46] Reinforcement Learning from Passive Data via Latent Intentions
[8:54] Best of Both Worlds Policy Optimization
[9:02] Exponential Smoothing for Off-Policy Learning
[9:10] Quantile Credit Assignment
[9:18] Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels
[9:26] Hierarchies of Reward Machines
[9:34] Human-Timescale Adaptation in an Open-Ended Task Space
[9:42] Settling the Reward Hypothesis
(ends 10:00 PM)
WED 26 JUL
11 a.m. (ends 9:00 PM)
12:30 p.m. Invited Talk: Jennifer Doudna (ends 1:30 PM)