Timezone: America/Los_Angeles
SUN 21 JUL
1 a.m.
(ends 8:00 AM)
4 a.m.
Expo Talk Panel: (ends 5:00 AM)
Expo Talk Panel: (ends 5:00 AM)
5 a.m.
Break: (ends 5:30 AM)
5:30 a.m.
Expo Talk Panel: (ends 6:30 AM)
Expo Talk Panel: (ends 6:30 AM)
6:30 a.m.
Break: (ends 7:00 AM)
7 a.m.
Expo Talk Panel: (ends 8:00 AM)
Expo Talk Panel: (ends 8:00 AM)
11:30 p.m.
(ends 9:00 AM)

MON 22 JUL
midnight
Workshop: (ends 7:00 AM)
Workshop: (ends 7:00 AM)
Break: (ends 12:30 AM)
2:30 a.m.
Break: (ends 4:00 AM)
6 a.m.
Break: (ends 6:30 AM)
6:30 a.m.
Tutorial: (ends 8:30 AM)
Tutorial: (ends 8:30 AM)
Tutorial: (ends 8:30 AM)
8:30 a.m.
Break: (ends 9:00 AM)
9 a.m.
Invited Talk: (ends 10:00 AM)
10 a.m.
Reception: (ends 11:15 AM)
11 p.m.
(ends 9:00 AM)

TUE 23 JUL
midnight
Invited Talk: (ends 1:00 AM)
1 a.m.
Break: (ends 1:30 AM)
1:30 a.m.
Orals 1:30-2:30
[1:30] Position Paper: Embracing Negative Results in Machine Learning
[1:45] Position Paper: A Safe Harbor for AI Evaluation and Red Teaming
[2:00] Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining
[2:15] Position Paper: Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis
(ends 2:30 AM)
Orals 1:30-2:30
[1:30] Debating with More Persuasive LLMs Leads to More Truthful Answers
[1:45] Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
[2:00] A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity
[2:15] Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study
(ends 2:30 AM)
Orals 1:30-2:30
[1:30] LSEnet: Lorentz Structural Entropy Neural Network for Deep Graph Clustering
[1:45] Image Clustering with External Guidance
[2:00] Making old things new: a unified algorithm for differentially private clustering
[2:15] Pruned Pivot: Correlation Clustering Algorithm for Dynamic, Parallel, and Local Computation Models
(ends 2:30 AM)
Orals 1:30-2:30
[1:30] Genie: Generative Interactive Environments
[1:45] Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization
[2:00] Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition
[2:15] VideoPoet: A Large Language Model for Zero-Shot Video Generation
(ends 2:30 AM)
Orals 1:30-2:30
[1:30] MorphGrower: A Synchronized Layer-by-layer Growing Approach for Plausible Neuronal Morphology Generation
[1:45] EquiPocket: an E(3)-Equivariant Geometric Graph Neural Network for Ligand Binding Site Prediction
[2:00] Expressivity and Generalization: Fragment-Biases for Molecular GNNs
[2:15] Preference Optimization for Molecule Synthesis with Conditional Residual Energy-based Models
(ends 2:30 AM)
Orals 1:30-2:30
[1:30] SparseTSF: Modeling Long-term Time Series Forecasting with *1k* Parameters
[1:45] Arrows of Time for Large Language Models
[2:00] Unified Training of Universal Time Series Forecasting Transformers
[2:15] SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention
(ends 2:30 AM)
2:30 a.m.
Posters 2:30-4:00 (ends 4:00 AM)
3:30 a.m.
Break: (ends 5:00 AM)
4:30 a.m.
Posters 4:30-6:00 (ends 6:00 AM)
6 a.m.
Invited Talk: (ends 7:00 AM)
7 a.m.
Break: (ends 7:30 AM)
7:30 a.m.
Orals 7:30-8:30
[7:30] Position Paper: On The Importance of Technical Research and Talent for AI Governance
[7:45] Position: AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research
[8:00] Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI
[8:15] Position Paper: On the Societal Impact of Open Foundation Models
(ends 8:30 AM)
Orals 7:30-8:30
[7:30] How Private are DP-SGD Implementations?
[7:45] Private Truly-Everlasting Robust-Prediction
[8:00] ViP: A Differentially Private Foundation Model for Computer Vision
[8:15] PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs
(ends 8:30 AM)
Orals 7:30-8:30
[7:30] Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation
[7:45] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
[8:00] DiJiang: Efficient Large Language Models through Compact Kernelization
[8:15] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs
(ends 8:30 AM)
Orals 7:30-8:30
[7:30] Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion
[7:45] DITTO: Diffusion Inference-Time T-Optimization for Music Generation
[8:00] Fast Timing-Conditioned Latent Audio Diffusion
[8:15] Listenable Maps for Audio Classifiers
(ends 8:30 AM)
Orals 7:30-8:30
[7:30] Position Paper: The Platonic Representation Hypothesis
[7:45] Robustness of Nonlinear Representation Learning
[8:00] Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks
[8:15] Rejuvenating image-GPT as Strong Visual Representation Learners
(ends 8:30 AM)
Orals 7:30-8:30
[7:30] Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape
[7:45] The Complexity of Attention, or How Optimal is FlashAttention?
[8:00] Improving Transformers with Dynamically Composable Multi-Head Attention
[8:15] Less is More: on the Over-Globalizing Problem in Graph Transformers
(ends 8:30 AM)
11:30 p.m.
(ends 9:00 AM)

WED 24 JUL
midnight
Invited Talk: (ends 1:00 AM)
1 a.m.
Break: (ends 1:30 AM)
1:30 a.m.
Orals 1:30-2:30
[1:30] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution
[1:45] Mean-field Chaos Diffusion Models
[2:00] NaturalSpeech 3: Zero-Shot Speech Synthesis with Factorized Codec and Diffusion Models
[2:15] Scaling Rectified Flow Transformers for High-Resolution Image Synthesis
(ends 2:30 AM)
Orals 1:30-2:30
[1:30] Chain of Code: Reasoning with a Language Model-Augmented Code Emulator
[1:45] SceneCraft: An LLM Agent for Synthesizing 3D Scenes as Blender Code
[2:00] Interpreting and Improving Large Language Models in Arithmetic Calculation
[2:15] Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks
(ends 2:30 AM)
Orals 1:30-2:30
[1:30] Active Statistical Inference
[1:45] Sparse Inducing Points in Deep Gaussian Processes: Enhancing Modeling with Denoising Diffusion Variational Inference
[2:00] Probabilistic Generating Circuits - Demystified
[2:15]