Recent years have witnessed a rising need for machine learning systems that can interact with humans in the learning loop. Such systems can be applied to computer vision, natural language processing, robotics, and human-computer interaction. Creating and running such systems calls for interdisciplinary research in artificial intelligence, machine learning, and software engineering design, which we abstract as Human in the Loop Learning (HILL). The HILL workshop aims to bring together researchers and practitioners working on the broad areas of HILL, ranging from interactive/active learning algorithms for real-world decision-making systems (e.g., autonomous driving vehicles, robotic systems), lifelong learning systems that retain knowledge from different tasks and selectively transfer knowledge to learn new tasks over a lifetime, and models with strong explainability, to interactive system designs (e.g., data visualization, annotation systems). The HILL workshop continues the previous effort to provide a platform for researchers from interdisciplinary areas to share their recent research. A special feature of this year's workshop is to encourage debate between HILL and label-efficient learning: are these two learning paradigms contradictory, or can they be organically combined to create a more powerful learning system? We believe the theme of the workshop will be of interest to broad ICML attendees, especially those interested in interdisciplinary study.
Sat 4:15 a.m. - 4:30 a.m. | Opening Remark | Shanghang Zhang · Shiji Zhou
Sat 4:30 a.m. - 5:00 a.m. | Invited Talk #0
Sat 5:00 a.m. - 5:30 a.m. | Invited Talk #1 | Yarin Gal
Sat 5:30 a.m. - 6:00 a.m. | Invited Talk #2 | Hugo Larochelle
Sat 6:00 a.m. - 6:10 a.m. | Q&A
Sat 6:10 a.m. - 6:40 a.m. | Invited Talk #3 | Vittorio Ferrari
Sat 6:40 a.m. - 7:10 a.m. | Invited Talk #4 | Razvan Pascanu
Sat 7:10 a.m. - 7:20 a.m. | Q&A
Sat 7:20 a.m. - 8:20 a.m. | Poster Session | Shiji Zhou · Nastaran Okati · Wichinpong Sinchaisri · Kim de Bie · Ana Lucic · Mina Khan · Ishaan Shah · JINGHUI LU · Andreas Kirsch · Julius Frost · Ze Gong · Gokul Swamy · Ah Young Kim · Ahmed Baruwa · Ranganath Krishnan
Sat 8:20 a.m. - 8:50 a.m. | Invited Talk #5 | Fei Sha
Sat 8:50 a.m. - 9:20 a.m. | Invited Talk #6 | Ranjay Krishna
Sat 9:20 a.m. - 9:30 a.m. | Q&A
Sat 9:30 a.m. - 10:00 a.m. | Invited Talk #7 | Kimin Lee
Sat 10:00 a.m. - 11:00 a.m. | Panel Discussion
Sat 11:00 a.m. - 11:30 a.m. | Invited Talk #8 | Alison Gopnik
Sat 11:30 a.m. - 11:50 a.m. | Closing Remarks
Poster | PreferenceNet: Encoding Human Preferences in Auction Design | Neehar Peri · Michael Curry · Samuel Dooley · John P Dickerson
The design of optimal auctions is a problem of interest in economics, game theory and computer science. Recent methods using deep learning have shown some success in approximating optimal auctions, recovering several known solutions and outperforming strong baselines when optimal auctions are not known. In addition to maximizing revenue, auction mechanisms may also seek to encourage socially desirable constraints such as allocation fairness or diversity. However, these philosophical notions are neither standardized nor do they have widely accepted formal definitions. In this paper, we propose PreferenceNet, an extension of existing neural-network-based auction mechanisms to encode constraints using (potentially human-provided) exemplars of desirable allocations. We demonstrate that our proposed method is competitive with current state-of-the-art neural-network-based auction designs, and we validate our approach through human-subject research, showing that we are able to effectively capture real human preferences.
Poster | IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance | Ruixuan Liu · Changliu Liu
Neural networks (NNs) are widely used for classification tasks for their remarkable performance. However, the robustness and accuracy of NNs heavily depend on the training data. In many applications, massive training data is usually not available. To address the challenge, this paper proposes an iterative adversarial data augmentation (IADA) framework to learn neural network models from an insufficient amount of training data. The method uses formal verification to identify the most "confusing" input samples, and leverages human guidance to safely and iteratively augment the training data with these samples. The proposed framework is applied to an artificial 2D dataset, the MNIST dataset, and a human motion dataset. By applying IADA to fully-connected NN classifiers, we show that our training method can improve the robustness and accuracy of the learned model. Compared to regular supervised training, the average perturbation bound on the MNIST dataset improved by $107.4\%$. The classification accuracy improved by $1.77\%$, $3.76\%$, and $10.85\%$ on the 2D dataset, the MNIST dataset, and the human motion dataset, respectively.
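The iterative loop the IADA abstract describes can be sketched compactly. This is a minimal, illustrative sketch only: formal verification is replaced by a low-confidence heuristic over a candidate pool and human guidance by an oracle labelling function, purely to make the loop concrete; it is not the authors' implementation.

```python
# Minimal, runnable sketch of an IADA-style augmentation loop.
# Assumptions (not from the paper): verification is replaced by a
# low-confidence heuristic, human guidance by an oracle labeller.
import numpy as np
from sklearn.neural_network import MLPClassifier

def iada_sketch(train_x, train_y, pool_x, oracle_label, n_rounds=5, k=32):
    model = MLPClassifier(max_iter=500).fit(train_x, train_y)
    for _ in range(n_rounds):
        # Stand-in for verification: pick the k pool samples the current
        # model is least confident about ("most confusing").
        conf = model.predict_proba(pool_x).max(axis=1)
        idx = np.argsort(conf)[:k]
        new_x = pool_x[idx]
        new_y = oracle_label(new_x)          # stand-in for human guidance
        # Augment the training set and retrain.
        train_x = np.concatenate([train_x, new_x])
        train_y = np.concatenate([train_y, new_y])
        pool_x = np.delete(pool_x, idx, axis=0)
        model = MLPClassifier(max_iter=500).fit(train_x, train_y)
    return model
```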
Poster | Machine Teaching with Generative Models for Human Learning | Michael Doron · Hussein Mozannar · David Sontag · Juan Caicedo
Experimental scientists face an increasingly difficult challenge: while technological advances allow for the collection of larger and higher quality datasets, computational methods to better understand and make new discoveries in the data lag behind. Existing explainable AI and interpretability methods for machine learning focus on better understanding model decisions, rather than understanding the data itself. In this work, we tackle a specific task that can aid experimental scientists in the era of big data: given a large dataset of annotated samples divided into different classes, how can we best teach human researchers the difference between the classes? To accomplish this, we develop a new framework combining machine teaching and generative models that generates a small set of synthetic teaching examples for each class. This set aims to contain all the information necessary to distinguish between the classes. To validate our framework, we perform a human study in which human subjects learn how to classify various datasets using a small teaching set generated by our framework as well as several subset selection algorithms. We show that while generated samples succeed in teaching humans better than chance, subset selection methods (such as k-centers or forgettable events) succeed better in this task, suggesting that real samples might be better suited than realistic generative samples. We suggest several ideas for improving human teaching using machine learning.
Poster | Differentiable Learning Under Triage | Nastaran Okati · Abir De · Manuel Gomez-Rodriguez
Multiple lines of evidence suggest that predictive models may benefit from algorithmic triage. Under algorithmic triage, a predictive model does not predict all instances but instead defers some of them to human experts. However, the interplay between the prediction accuracy of the model and the human experts under algorithmic triage is not well understood. In this work, we start by formally characterizing under which circumstances a predictive model may benefit from algorithmic triage. In doing so, we also demonstrate that models trained for full automation may be suboptimal under triage. Then, given any model and the desired level of triage, we show that the optimal triage policy is a deterministic threshold rule in which triage decisions are derived by thresholding the difference between the model and human errors on a per-instance level. Building upon these results, we introduce a practical gradient-based algorithm that is guaranteed to find a sequence of predictive models and triage policies of increasing performance. Experiments on a wide variety of supervised learning tasks using synthetic and real data from two important applications---content moderation and scientific discovery---illustrate our theoretical results and show that the models and triage policies provided by our algorithm outperform those provided by several competitive baselines.
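The threshold rule described in this abstract can be written down in a few lines. A minimal sketch, assuming per-instance error estimates for the model and the human are available as arrays and that a fixed fraction of instances may be deferred; the names below are illustrative, not the authors' code.

```python
# Minimal sketch of a per-instance threshold triage rule.
# Assumptions: model_err and human_err are per-instance error estimates
# (e.g., expected losses); b is the fraction of instances deferred to humans.
import numpy as np

def triage_policy(model_err, human_err, b):
    gap = model_err - human_err          # how much worse the model is than the human
    n_defer = int(b * len(gap))
    if n_defer == 0:
        return np.zeros(len(gap), dtype=bool)
    # Defer the instances with the largest positive gap, i.e., threshold the
    # difference between model and human errors.
    threshold = np.partition(gap, -n_defer)[-n_defer]
    return gap >= threshold              # True = defer to the human expert
```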
Poster | High Frequency EEG Artifact Detection with Uncertainty via Early Exit Paradigm | Lorena Qendro · Alex Campbell · Pietro Lió · Cecilia Mascolo
Electroencephalography (EEG) is crucial for the monitoring and diagnosis of brain disorders. However, EEG signals suffer from perturbations caused by non-cerebral artifacts limiting their efficacy. Current artifact detection pipelines are resource-hungry and rely heavily on hand-crafted features. Moreover, these pipelines are deterministic in nature, making them unable to capture predictive uncertainty. We propose E4G, a deep learning framework for high frequency EEG artifact detection. Our framework exploits the early exit paradigm, building an implicit ensemble of models capable of capturing uncertainty. We evaluate our approach on the Temple University Hospital EEG Artifact Corpus (v2.0) achieving state-of-the-art classification results. In addition, E4G provides well-calibrated uncertainty metrics comparable to sampling techniques like Monte Carlo dropout in just a single forward pass. E4G opens the door to uncertainty-aware artifact detection supporting clinicians-in-the-loop frameworks.
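The early-exit ensemble idea can be illustrated with a short PyTorch-style sketch: predictions are collected from every exit head in a single forward pass and their averaged distribution yields an uncertainty score. This is an illustrative sketch assuming a generic backbone split into blocks with one auxiliary classifier head per exit; it is not the E4G architecture itself.

```python
# Illustrative early-exit ensemble for uncertainty (not the E4G architecture).
# Assumption: `blocks` is a list of feature extractors and `heads` a list of
# matching classifier heads, one per exit.
import torch
import torch.nn.functional as F

def early_exit_predict(blocks, heads, x):
    probs = []
    h = x
    for block, head in zip(blocks, heads):
        h = block(h)
        probs.append(F.softmax(head(h), dim=-1))   # prediction at this exit
    probs = torch.stack(probs)                      # (n_exits, batch, classes)
    mean_p = probs.mean(dim=0)                      # implicit-ensemble prediction
    # Predictive entropy of the averaged distribution as an uncertainty score,
    # obtained in one forward pass rather than repeated stochastic passes.
    uncertainty = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_p, uncertainty
```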
Poster | Improving Human Decision-Making with Machine Learning | Hamsa Bastani · Osbert Bastani · Wichinpong Sinchaisri
A key aspect of human intelligence is the ability to convey one's knowledge to others in succinct forms. However, current machine learning models are largely black boxes that are hard for humans to learn from. We study the problem of whether we can design machine learning algorithms capable of conveying their insights to humans in the context of a sequential decision making task. In particular, we propose a novel machine learning algorithm for extracting interpretable tips from a policy trained to solve the task using reinforcement learning. Specifically, it searches over a space of interpretable decision rules to identify the one that most improves human performance. Then, we perform an extensive user study to evaluate our approach, based on a virtual kitchen-management game we designed that requires the participant to make a series of decisions to minimize overall service time. Our experiments show that (i) the tips generated by our algorithm are effective at improving performance, (ii) they significantly outperform the two baseline tips, and (iii) they successfully help participants build on their own experience to discover additional strategies and overcome their resistance to exploring counterintuitive strategies.
Poster | Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos | Haoyu Xiong · Yun-Chun Chen · Homanga Bharadhwaj · Samrath Sinha · Animesh Garg
Learning from visual data opens the potential to accrue a large range of manipulation behaviors by leveraging human videos without specifying each of them mathematically, but rather through natural task specification. We consider the task of imitation from human videos for learning robot manipulation skills. In this paper, we present Learning by Watching (LbW), an algorithmic framework for policy learning through imitation from a single video specifying the task. The key insights of our method are two-fold. First, since the human arms may not have the same morphology as robot arms, our framework learns unsupervised human to robot translation to overcome the morphology mismatch issue. Second, to capture the details in salient regions that are crucial for learning state representations, our model performs unsupervised keypoint detection on the translated robot videos. The detected keypoints form a structured representation that contains semantically meaningful information and can be used directly for computing reward and policy learning. We evaluate the effectiveness of our LbW framework on five robot manipulation tasks, including reaching, pushing, sliding, coffee making, and drawer closing. Extensive experimental evaluations demonstrate that our method performs favorably against the state-of-the-art approaches.
Poster | To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions | Kim de Bie · Ana Lucic · Hinda Haned
In hybrid human-AI systems, users need to decide whether or not to trust an algorithmic prediction while the true error in the prediction is unknown. To accommodate such settings, we introduce RETRO-VIZ, a method for (i) estimating and (ii) explaining trustworthiness of regression predictions. It consists of RETRO, a quantitative estimate of the trustworthiness of a prediction, and VIZ, a visual explanation that helps users identify the reasons for the (lack of) trustworthiness of a prediction. We find that RETRO-scores negatively correlate with prediction error. In a user study with 41 participants, we confirm that RETRO-VIZ helps users identify whether and why a prediction is trustworthy or not.
Poster | Interpretable Machine Learning: Moving From Mythos to Diagnostics | Valerie Chen · Jeffrey Li · Joon Kim · Gregory Plumb · Ameet Talwalkar
Despite years of progress in the field of Interpretable Machine Learning (IML), a significant gap persists between the technical objectives targeted by researchers' methods and the high-level goals stated as consumers' use cases. To address this gap, we argue for the IML community to embrace a diagnostic vision for the field. Instead of viewing IML methods as a panacea for a variety of overly broad use cases, we emphasize the need to systematically connect IML methods to narrower, yet better-defined, target use cases. To formalize this vision, we propose a taxonomy including both methods and use cases, helping to conceptualize the current gaps between the two. Then, to connect these two sides, we describe a three-step workflow to enable researchers and consumers to define and validate IML methods as useful diagnostics. Eventually, by applying this workflow, a more complete version of the taxonomy will allow consumers to find relevant methods for their target use cases and researchers to identify motivating use cases for their proposed methods.
Poster | Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment | Angie Boggust · Benjamin Hoover · Arvind Satyanarayan · Hendrik Strobelt
Saliency methods—techniques to identify the importance of input features on a model's output—are a common first step in understanding neural network behavior. However, interpreting saliency requires tedious manual inspection to identify and aggregate patterns in model behavior, resulting in ad hoc or cherry-picked analysis. To address these concerns, we present Shared Interest: a set of metrics for comparing saliency with human-annotated ground truths. By providing quantitative descriptors, Shared Interest allows ranking, sorting, and aggregation of inputs, thereby facilitating large-scale systematic analysis of model behavior. We use Shared Interest to identify eight recurring patterns in model behavior including focusing on a sufficient subset of ground truth features or being distracted by contextual features. Working with representative real-world users, we show how Shared Interest can be used to rapidly develop or lose trust in a model's reliability, uncover issues that are missed in manual analyses, and enable interactive probing of model behavior.
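The kind of quantitative descriptor the abstract refers to can be sketched as simple set-overlap scores between a binarized saliency map and a human-annotated ground-truth mask. The three scores below (ground-truth coverage, saliency coverage, IoU) are illustrative stand-ins for this style of metric, not necessarily the exact Shared Interest definitions.

```python
# Illustrative overlap scores between a saliency mask and a human ground-truth
# mask (stand-ins for the kind of metrics Shared Interest computes).
import numpy as np

def overlap_scores(saliency, ground_truth, threshold=0.5):
    s = saliency >= threshold           # binarize the saliency map
    g = ground_truth.astype(bool)       # human-annotated region
    inter = np.logical_and(s, g).sum()
    gt_coverage = inter / max(g.sum(), 1)      # how much of the annotation is salient
    sal_coverage = inter / max(s.sum(), 1)     # how much saliency falls inside it
    iou = inter / max(np.logical_or(s, g).sum(), 1)
    return gt_coverage, sal_coverage, iou
```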
Poster | CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks | Ana Lucic · Maartje ter Hoeve · Gabriele Tolomei · Maarten de Rijke · Fabrizio Silvestri
Given the increasing promise of Graph Neural Networks (GNNs) in real-world applications, several methods have been developed for explaining their predictions. So far, these methods have primarily focused on generating subgraphs that are especially relevant for a particular prediction. However, such methods do not provide a clear opportunity for recourse: given a prediction, we want to understand how the prediction can be changed in order to achieve a more desirable outcome. In this work, we propose a method for generating counterfactual (CF) explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes. Using only edge deletions, we find that our method can generate CF explanations for the majority of instances across three widely used datasets for GNN explanations, while removing less than 3 edges on average, with at least 94% accuracy. This indicates that our method primarily removes edges that are crucial for the original predictions, resulting in minimal CF explanations.
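As a rough illustration of the edge-deletion idea, one can repeatedly remove the single edge whose deletion most reduces the original class probability until the prediction flips. This greedy search is only a stand-in: CF-GNNExplainer itself optimizes a perturbation rather than searching greedily, and `predict_proba(adj, feats)` is a hypothetical wrapper around a trained GNN.

```python
# Greedy edge-deletion illustration of counterfactual explanations for a GNN.
# NOT CF-GNNExplainer's optimization; a simple stand-in for the concept.
# Assumption: predict_proba(adj, feats) returns class probabilities for the
# prediction of interest given a dense adjacency matrix and node features.
import numpy as np

def greedy_counterfactual(adj, feats, predict_proba, max_deletions=3):
    orig_class = int(np.argmax(predict_proba(adj, feats)))
    adj = adj.copy()
    removed = []
    for _ in range(max_deletions):
        edges = [(i, j) for i, j in zip(*np.nonzero(adj)) if i < j]
        best_edge, best_prob = None, np.inf
        for i, j in edges:
            trial = adj.copy()
            trial[i, j] = trial[j, i] = 0          # delete one undirected edge
            p = predict_proba(trial, feats)[orig_class]
            if p < best_prob:
                best_edge, best_prob = (i, j), p
        if best_edge is None:
            break
        i, j = best_edge
        adj[i, j] = adj[j, i] = 0
        removed.append(best_edge)
        if int(np.argmax(predict_proba(adj, feats))) != orig_class:
            return removed, adj                     # prediction changed: a CF explanation
    return None, adj                                # no counterfactual found within budget
```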
Poster | Personalizing Pretrained Models | Mina Khan · Advait Rane · Pattie Maes
Self-supervised or weakly supervised models trained on large-scale datasets have shown sample-efficient transfer to diverse datasets in few-shot settings. We consider how upstream pretrained models can be leveraged for downstream few-shot, multi-label, and continual learning tasks. Our model CLIPPER (CLIP-PERsonalized) uses image representations from CLIP, a large-scale image representation learning model trained using weak natural language supervision. We developed a technique, called Multi-label Weight Imprinting (MWI), for multi-label, continual, and few-shot learning, and CLIPPER uses MWI with image representations from CLIP. We evaluated CLIPPER on 10 single-label and 5 multi-label datasets. Our model shows robust and competitive performance, and we set new benchmarks for few-shot, multi-label, and continual learning. Our lightweight technique is also compute-efficient and enables privacy-preserving applications as the data is not sent to the upstream model for fine-tuning. Thus, we enable few-shot, multi-label, and continual learning in compute-efficient and privacy-preserving settings.
Poster | Convergence of a Human-in-the-Loop Policy-Gradient Algorithm With Eligibility Trace Under Reward, Policy, and Advantage Feedback | Ishaan Shah · David Halpern · Michael L. Littman · Kavosh Asadi
Fluid human–agent communication is essential for the future of human-in-the-loop reinforcement learning. An agent must respond appropriately to feedback from its human trainer even before they have significant experience working together. Therefore, it is important that learning agents respond well to various feedback schemes human trainers are likely to provide. This work analyzes the COnvergent Actor–Critic by Humans (COACH) algorithm under three different types of feedback—policy feedback, reward feedback, and advantage feedback. For these three feedback types, we find that COACH can behave sub-optimally. We propose a variant of COACH, episodic COACH (E-COACH), which we prove converges for all three types. We compare our COACH variant with two other reinforcement learning algorithms: Q-learning and TAMER.
Poster | Effect of Combination of HBM and Certainty Sampling on Workload of Semi-Automated Grey Literature Screening | JINGHUI LU · Brian Mac Namee
With the rapid increase of unstructured text data, grey literature has become an important source of information to support research and innovation activities. In this paper, we propose a novel semi-automated grey literature screening approach that combines a Hierarchical BERT Model (HBM) with active learning to reduce the human workload in grey literature screening. Evaluations over three real-world grey literature datasets demonstrate that the proposed approach can save up to 64.88% of the human screening workload, while maintaining high screening accuracy. We also demonstrate how the use of HBM allows salient sentences within grey literature documents to be selected and highlighted to support workers in screening tasks.
Poster | A Simple Baseline for Batch Active Learning with Stochastic Acquisition Functions | Andreas Kirsch · Sebastian Farquhar · Yarin Gal
In active learning, new labels are commonly acquired in batches. However, common acquisition functions are only meant for one-sample acquisition rounds at a time, and when their scores are used naively for batch acquisition, they result in batches lacking diversity, which deteriorates performance. On the other hand, state-of-the-art batch acquisition functions are costly to compute. In this paper, we present a novel class of stochastic acquisition functions that extend one-sample acquisition functions to the batch setting by observing how one-sample acquisition scores change as additional samples are acquired and modelling this difference for additional batch samples. We simply acquire new samples by sampling from the pool set using a Gibbs distribution based on the acquisition scores. Our acquisition functions are both vastly cheaper to compute and outperform other batch acquisition functions.
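The core of the stochastic acquisition idea is easy to sketch: instead of taking the top-k pool points by acquisition score, sample k points from a Gibbs (softmax) distribution over the scores. A minimal sketch; the temperature parameter and the without-replacement sampling are assumptions for illustration, not the paper's exact parameterization.

```python
# Minimal sketch of stochastic batch acquisition: sample a batch from a
# Gibbs/softmax distribution over per-point acquisition scores instead of
# taking the deterministic top-k. Temperature handling is illustrative.
import numpy as np

def stochastic_batch(scores, batch_size, temperature=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    logits = np.asarray(scores, dtype=float) / temperature
    logits -= logits.max()                     # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    # Sample without replacement so the batch contains distinct pool points.
    return rng.choice(len(probs), size=batch_size, replace=False, p=probs)
```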
Poster | Active Learning under Pool Set Distribution Shift and Noisy Data | Andreas Kirsch · Tom Rainforth · Yarin Gal
Active Learning is essential for more label-efficient deep learning. Bayesian Active Learning has focused on BALD, which reduces model parameter uncertainty. However, we show that BALD gets stuck on out-of-distribution or junk data that is not relevant for the task. We examine a novel Expected Predictive Information Gain (EPIG) to deal with distribution shifts of the pool set. EPIG reduces the uncertainty of predictions on an unlabelled evaluation set sampled from the test data distribution, whose distribution might be different from the pool set distribution. Based on this, our new EPIG-BALD acquisition function for Bayesian Neural Networks selects samples to improve the performance on the test data distribution instead of selecting samples that reduce model uncertainty everywhere, including for out-of-distribution regions with low density in the test data distribution. Our method outperforms state-of-the-art Bayesian active learning methods on high-dimensional datasets and avoids out-of-distribution junk data in cases where current state-of-the-art methods fail.
Poster | Explaining Reinforcement Learning Policies through Counterfactual Trajectories | Julius Frost · Olivia Watkins · Eric Weiner · Pieter Abbeel · Trevor Darrell · Bryan Plummer · Kate Saenko
In order for humans to confidently decide where to employ RL agents for real-world tasks, a human developer must validate that the agent will perform well at test-time. Some policy interpretability methods facilitate this by capturing the policy's decision making in a set of agent rollouts. However, even the most informative trajectories of training time behavior may give little insight into the agent's behavior out of distribution. In contrast, our method conveys how the agent performs under distribution shifts by showing the agent's behavior across a wider trajectory distribution. We generate these trajectories by guiding the agent to more diverse unseen states and showing the agent's behavior there. In a user study, we demonstrate that our method enables users to score better than baseline methods on one of two agent validation tasks.
Poster | Differentially Private Active Learning with Latent Space Optimization | Senching Cheung · Xiaoqing Zhu · Herb Wildfeuer · Chongruo Wu · Wai-tian Tan
Existing Active Learning (AL) schemes typically address privacy in the narrow sense of furnishing a differentially private classifier. Private data are exposed to both the labeling and learning functions, a limitation that necessarily restricts their applicability to a single entity. In this paper, we propose an AL framework that allows the use of untrusted parties for both labeling and learning, thereby allowing joint use of data from multiple entities without trust relationships. Our method is based on differentially private generative models and an associated novel latent space optimization scheme that is more flexible than the traditional ranking method. Our experiments on three datasets (MNIST, CIFAR10, CelebA) show that our proposed scheme produces better or comparable results than state-of-the-art techniques on two different acquisition functions (VAR and BALD).
Poster | Explicable Policy Search via Preference-Based Learning under Human Biases | Ze Gong · Yu Zhang
As intelligent agents become pervasive in our lives, they are expected to not only achieve tasks alone but also engage in tasks that require close collaboration with humans. In such a context, the optimal agent behavior without considering the humans in the loop may be viewed as inexplicable, resulting in degraded team performance and loss of trust. Consequently, to be seen as good team players, such agents are required to learn about human idiosyncrasies and preferences for their behaviors based on human feedback and respect them during decision-making. On the other hand, human biases can skew the feedback and cause such learning agents to deviate from their original design purposes, leading to severe consequences. Therefore, it is critical for these agents to be aware of human biases and trade off optimality with human preferences for their behaviors appropriately. In this paper, we formulate the problem of Explicable Policy Search (EPS). We assume that human biases arise from the human's belief about the agent's domain dynamics and the human's reward function. Directly learning the human's belief and reward function is possible but largely inefficient and unnecessary. We demonstrate that they can be encoded by a single surrogate reward function that is learned in a preference-based framework. With this reward function, the agent then learns a stochastic policy via maximum entropy reinforcement learning to recover an explicable policy. We evaluate our method for EPS in a set of continuous navigation domains with synthetic human models and in an autonomous driving domain with a human subject study. The results suggest that our method can effectively generate explicable behaviors that are more desirable under various human biases.
Poster | Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap | Gokul Swamy · Sanjiban Choudhury · James Bagnell · Steven Wu
We provide a unifying view of a large family of previous imitation learning algorithms through the lens of moment matching. At its core, our classification scheme is based on whether the learner attempts to match (1) reward or (2) action-value moments of the expert's behavior, with each option leading to differing algorithmic approaches. By considering adversarially chosen divergences between learner and expert behavior, we are able to derive bounds on policy performance that apply for all algorithms in each of these classes, the first to our knowledge. We also introduce the notion of moment recoverability, implicit in many previous analyses of imitation learning, which allows us to cleanly delineate how well each algorithmic family is able to mitigate compounding errors. We derive three novel algorithm templates (AdVIL, AdRIL, and DAeQuIL) with strong guarantees, simple implementation, and competitive empirical performance.
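For the reward-moment variant, the matching objective can be written as an adversarial, integral-probability-metric-style distance between learner and expert behavior. The rendering below is a standard way of writing this idea and is assumed for illustration, not copied from the paper:

$$\min_{\pi} \; \sup_{f \in \mathcal{F}} \; \Big| \, \mathbb{E}_{(s,a)\sim \rho_{\pi}}\big[f(s,a)\big] \;-\; \mathbb{E}_{(s,a)\sim \rho_{\pi_E}}\big[f(s,a)\big] \, \Big|$$

where $\rho_{\pi}$ and $\rho_{\pi_E}$ denote the state-action occupancy measures of the learner and the expert, and $\mathcal{F}$ is the class of test functions (moments) over which the divergence is taken adversarially.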
Poster | On The State of Data In Computer Vision: Human Annotations Remain Indispensable for Developing Deep Learning Models | Zeyad Emam · Sasha Harrison · Felix Lau · Ah Young Kim
High-quality labeled datasets play a crucial role in fueling the development of machine learning (ML), and in particular the development of deep learning (DL). However, since the emergence of the ImageNet dataset and the AlexNet model in 2012, the size of new open-source labeled vision datasets has remained roughly constant. Consequently, only a minority of publications in the computer vision community tackle supervised learning on datasets that are orders of magnitude larger than ImageNet. In this paper, we survey computer vision research domains that study the effects of such large datasets on model performance across different vision tasks. We summarize the community's current understanding of those effects, and highlight some open questions related to training with massive datasets. In particular, we tackle: (a) The largest datasets currently used in computer vision research and the interesting takeaways from training on such datasets; (b) The effectiveness of pre-training on large datasets; (c) Recent advancements and hurdles facing synthetic datasets; (d) An overview of double descent and sample nonmonotonicity phenomena; and finally, (e) A brief discussion of lifelong/continual learning and how it fares compared to learning from huge labeled datasets in an offline setting. Overall, our findings are that research on optimization for deep learning focuses on perfecting the training routine and thus making DL models less data-hungry, while research on synthetic datasets aims to offset the cost of data labeling. However, for the time being, acquiring non-synthetic labeled data remains indispensable to boost performance.
Poster | ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind | Yuanfei Wang · Fangwei Zhong · Jing Xu · Yizhou Wang
Being able to predict the mental states of others is a key factor to effective social interaction. It is also crucial to distributed multi-agent systems, where agents are required to communicate and cooperate with others. In this paper, we introduce such an important social-cognitive skill, i.e., Theory of Mind (ToM), to build socially intelligent agents who are able to communicate and cooperate effectively to accomplish challenging tasks. With ToM, each agent is able to infer the mental states and intentions of others according to its (local) observation. Based on the inferred states, the agents decide "when" and with "whom" to share their intentions. With the information observed, inferred, and received, the agents decide their sub-goals and reach a consensus among the team. In the end, the low-level executors independently take primitive actions according to the sub-goals. We demonstrate the idea in a typical target-oriented multi-agent task, namely the multi-sensor target coverage problem. The experiments show that the proposed model not only outperforms the state-of-the-art methods in target coverage rate and communication efficiency, but also shows good generalization across different scales of the environment.
Poster | Accelerating the Convergence of Human-in-the-Loop Reinforcement Learning with Counterfactual Explanations | Jakob Karalus · Felix Lindner
The capability to interactively learn from human feedback would enable robots in new social settings. For example, novice users could train service robots in new tasks naturally and interactively.
Poster | Less is more: An Empirical Analysis of Model Compression for Dialogue | Ahmed Baruwa
Large language models have achieved near human performance across a wide range of Natural Language Generation tasks such as Question Answering and Open-Domain Conversation. These large models have large memory footprints and long inference times. Compressed models with fewer parameters are easily deployable on FPGAs and low-end devices with limited storage memory and processing power. In this work, we carry out an empirical evaluation of three model compression techniques on conversational agents specifically pre-trained on large language transformer networks. Using the OpenAI GPT-2 transformer network, we evaluate and compare the performance of open-domain dialogue models before and after undergoing compression. When trained and tested on the DailyDialog corpus, the compressed models achieve state-of-the-art results on the corpus while maintaining human likeness.
Poster | Mitigating Sampling Bias and Improving Robustness in Active Learning | Ranganath Krishnan · Alok Sinha · Nilesh Ahuja · Mahesh Subedar · Omesh Tickoo · Ravi Iyer
This paper presents simple and efficient methods to mitigate sampling bias in active learning while achieving state-of-the-art accuracy and model robustness. We introduce supervised contrastive active learning by leveraging the contrastive loss for active learning under a supervised setting. We propose an unbiased query strategy that selects informative data samples of diverse feature representations with our methods: supervised contrastive active learning (SCAL) and deep feature modeling (DFM). We empirically demonstrate that our proposed methods reduce sampling bias and achieve state-of-the-art accuracy and model calibration in an active learning setup, with query computation 26x faster than Bayesian active learning by disagreement and 11x faster than CoreSet. The proposed SCAL method outperforms alternatives by a large margin in robustness to dataset shift and out-of-distribution data.
Poster | GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks | Lucie Charlotte Magister · Dmitry Kazhdan · Vikash Singh · Pietro Lió
While graph neural networks (GNNs) have been shown to perform well on graph-based data from a variety of fields, they suffer from a lack of transparency and accountability, which hinders trust and consequently the deployment of such models in high-stakes and safety-critical scenarios. Even though recent research has investigated methods for explaining GNNs, these methods are limited to single-instance explanations, also known as local explanations. Motivated by the aim of providing global explanations, we adapt the well-known Automated Concept-based Explanation approach (Ghorbani et al., 2019) to GNN node and graph classification, and propose GCExplainer. GCExplainer is an unsupervised approach for post-hoc discovery and extraction of global concept-based explanations for GNNs, which puts the human in the loop. We demonstrate the success of our technique on five node classification datasets and two graph classification datasets, showing that we are able to discover and extract high-quality concept representations by putting the human in the loop. We achieve a maximum completeness score of 1 and an average completeness score of 0.753 across the datasets. Finally, we show that the concept-based explanations provide an improved insight into the datasets and GNN models compared to the state-of-the-art explanations produced by GNNExplainer (Ying et al., 2019).
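The ACE-style adaptation the abstract describes can be pictured as clustering node activations from a trained GNN and presenting each cluster to a human as a candidate concept. A minimal sketch, assuming `node_embeddings` are the activations of some GNN layer; the choice of k-means follows the ACE recipe in spirit, but the details here are illustrative rather than GCExplainer's exact procedure.

```python
# Minimal sketch of concept discovery by clustering GNN node activations
# (ACE-style, as GCExplainer adapts it); details are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def discover_concepts(node_embeddings, n_concepts=10, examples_per_concept=5):
    km = KMeans(n_clusters=n_concepts, n_init=10).fit(node_embeddings)
    concepts = []
    for c in range(n_concepts):
        # Nodes closest to each centroid serve as representative examples
        # that a human in the loop can inspect and name.
        dists = np.linalg.norm(node_embeddings - km.cluster_centers_[c], axis=1)
        members = np.where(km.labels_ == c)[0]
        order = members[np.argsort(dists[members])]
        concepts.append(order[:examples_per_concept])
    return km, concepts
```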
Poster | Interpretable Video Transformers in Imitation Learning of Human Driving | Andrew Dai
Transformers applied to high-level vision tasks showcase impressive performance due to the use of self-attention sublayers for computing affinity weights across tokens corresponding to image patches. A simple Vision Transformer encoder can also be trained with video clip inputs from popular driving datasets in a weakly supervised imitation learning task, framed as predicting future human driving actions as a time series sequence over a prediction horizon. In this paper, we propose this task as a simple, scalable method for autonomous vehicle planning to match human driving behaviour. We demonstrate initial results for this method, along with model visualizations for interpreting features in video inputs that contribute to sequence predictions.
Author Information
Trevor Darrell (University of California at Berkeley)
Xin Wang (UC Berkeley)
Li Erran Li (AWS AI, Amazon)
Fisher Yu (University of California, Berkeley)
Zeynep Akata (University of Tübingen)
Zeynep Akata is a professor of Computer Science (W3) within the Cluster of Excellence Machine Learning at the University of Tübingen. After completing her PhD at the INRIA Rhone Alpes with Prof Cordelia Schmid (2014), she worked as a post-doctoral researcher at the Max Planck Institute for Informatics with Prof Bernt Schiele (2014-17) and at University of California Berkeley with Prof Trevor Darrell (2016-17). Before moving to Tübingen in October 2019, she was an assistant professor at the University of Amsterdam with Prof Max Welling (2017-19). She received a Lise-Meitner Award for Excellent Women in Computer Science from Max Planck Society in 2014, a young scientist honour from the Werner-von-Siemens-Ring foundation in 2019 and an ERC-2019 Starting Grant from the European Commission. Her research interests include multimodal learning and explainable AI.
Wenwu Zhu (Tsinghua University)
Wenwu Zhu is currently a Professor in the Department of Computer Science at Tsinghua University and Vice Dean of the National Research Center on Information Science and Technology. Prior to his current post, he was a Senior Researcher and Research Manager at Microsoft Research Asia. He was the Chief Scientist and Director at Intel Research China from 2004 to 2008. He worked at Bell Labs New Jersey as a Member of Technical Staff during 1996-1999. He has been serving as the chair of the steering committee for IEEE T-MM since January 1, 2020. He served as the Editor-in-Chief for the IEEE Transactions on Multimedia (T-MM) from 2017 to 2019, and as Vice EiC for the IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) from 2020 to 2021. He served as co-Chair for ACM MM 2018 and co-Chair for ACM CIKM 2019. His current research interests are in the areas of multimodal big data and intelligence, and multimedia networking. He received 10 Best Paper Awards. He is a member of Academia Europaea, an IEEE Fellow, AAAS Fellow, and SPIE Fellow.
Pradeep Ravikumar (CMU)
Shiji Zhou (Tsinghua University)
Shanghang Zhang (UC Berkeley)
Kalesha Bullard (Facebook AI Research)