Sat Jul 24 04:15 AM -- 04:30 AM (PDT)
Opening Remarks
Sat Jul 24 04:30 AM -- 05:00 AM (PDT)
Invited Talk #0
Sat Jul 24 05:00 AM -- 05:30 AM (PDT)
Invited Talk #1
Sat Jul 24 05:30 AM -- 06:00 AM (PDT)
Invited Talk #2
Sat Jul 24 06:10 AM -- 06:40 AM (PDT)
Invited Talk #3
Sat Jul 24 06:40 AM -- 07:10 AM (PDT)
Invited Talk #4
Sat Jul 24 08:20 AM -- 08:50 AM (PDT)
Invited Talk #5
Sat Jul 24 08:50 AM -- 09:20 AM (PDT)
Invited Talk #6
Sat Jul 24 09:30 AM -- 10:00 AM (PDT)
Invited Talk #7
Sat Jul 24 10:00 AM -- 11:00 AM (PDT)
Panel Discussion
Sat Jul 24 11:00 AM -- 11:30 AM (PDT)
Invited Talk #8
Sat Jul 24 11:30 AM -- 11:50 AM (PDT)
Closing Remarks
Accelerating the Convergence of Human-in-the-Loop Reinforcement Learning with Counterfactual Explanations
Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap
Explicable Policy Search via Preference-Based Learning under Human Biases
Explaining Reinforcement Learning Policies through Counterfactual Trajectories
Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment
Interpretable Machine Learning: Moving From Mythos to Diagnostics
To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions
IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance
ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind
Effect of Combination of HBM and Certainty Sampling on Workload of Semi-Automated Grey Literature Screening
Machine Teaching with Generative Models for Human Learning
PreferenceNet: Encoding Human Preferences in Auction Design
Less is more: An Empirical Analysis of Model Compression for Dialogue
On The State of Data In Computer Vision: Human Annotations Remain Indispensable for Developing Deep Learning Models
Active Learning under Pool Set Distribution Shift and Noisy Data
Convergence of a Human-in-the-Loop Policy-Gradient Algorithm With Eligibility Trace Under Reward, Policy, and Advantage Feedback
CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
Improving Human Decision-Making with Machine Learning
Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos
High Frequency EEG Artifact Detection with Uncertainty via Early Exit Paradigm
Differentially Private Active Learning with Latent Space Optimization
A Simple Baseline for Batch Active Learning with Stochastic Acquisition Functions
Interpretable Video Transformers in Imitation Learning of Human Driving
GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks
Mitigating Sampling Bias and Improving Robustness in Active Learning