Recent years have witnessed a rising need for machine learning systems that can interact with humans in the learning loop. Such systems can be applied to computer vision, natural language processing, robotics, and human-computer interaction. Creating and running such systems calls for interdisciplinary research spanning artificial intelligence, machine learning, and software engineering design, which we abstract as Human-in-the-Loop Learning (HILL). The HILL workshop aims to bring together researchers and practitioners working on the broad areas of HILL, ranging from interactive/active learning algorithms for real-world decision-making systems (e.g., autonomous vehicles and robotic systems), to lifelong learning systems that retain knowledge from different tasks and selectively transfer it to learn new tasks over a lifetime, to models with strong explainability, to interactive system designs (e.g., data visualization and annotation systems). The HILL workshop continues previous efforts to provide a platform for researchers from interdisciplinary areas to share their recent work. A special feature of this year's workshop is a debate between HILL and label-efficient learning: are these two learning paradigms at odds with each other, or can they be organically combined to create a more powerful learning system? We believe the theme of the workshop will be of interest to a broad range of ICML attendees, especially those interested in interdisciplinary study.
Opening Remark (Demonstration)
Invited Talk #0 (Demonstration)
Invited Talk #1 (Demonstration)
Invited Talk #2 (Demonstration)
Q&A (Demonstration)
Invited Talk #3 (Demonstration)
Invited Talk #4 (Demonstration)
Q&A (Demonstration)
Poster (Demonstration)
Invited Talk #5 (Demonstration)
Invited Talk #6 (Demonstration)
Q&A (Demonstration)
Invited Talk #7 (Demonstration)
Panel Discussion (Discussion panel)
Invited Talk #8 (Demonstration)
Closing Remarks (Demonstration)
Explaining Reinforcement Learning Policies through Counterfactual Trajectories (Poster)
A Simple Baseline for Batch Active Learning with Stochastic Acquisition Functions (Poster)
Effect of Combination of HBM and Certainty Sampling on Workload of Semi-Automated Grey Literature Screening (Poster)
Personalizing Pretrained Models (Poster)
Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment (Poster)
Interpretable Machine Learning: Moving From Mythos to Diagnostics (Poster)
To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions (Poster)
PreferenceNet: Encoding Human Preferences in Auction Design (Poster)
Machine Teaching with Generative Models for Human Learning (Poster)
IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance (Poster)
ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind (Poster)
Differentiable Learning Under Triage (Poster)
Less is more: An Empirical Analysis of Model Compression for Dialogue (Poster)
On The State of Data In Computer Vision: Human Annotations Remain Indispensable for Developing Deep Learning Models. (Poster)
Active Learning under Pool Set Distribution Shift and Noisy Data (Poster)
Convergence of a Human-in-the-Loop Policy-Gradient Algorithm With Eligibility Trace Under Reward, Policy, and Advantage Feedback (Poster)
CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks (Poster)
Improving Human Decision-Making with Machine Learning (Poster)
Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos (Poster)
High Frequency EEG Artifact Detection with Uncertainty via Early Exit Paradigm (Poster)
Differentially Private Active Learning with Latent Space Optimization (Poster)
Interpretable Video Transformers in Imitation Learning of Human Driving (Poster)
GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks (Poster)
Mitigating Sampling Bias and Improving Robustness in Active Learning (Poster)
Accelerating the Convergence of Human-in-the-Loop Reinforcement Learning with Counterfactual Explanations (Poster)
Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap (Poster)
Explicable Policy Search via Preference-Based Learning under Human Biases (Poster)