Workshop
ICML Workshop on Imitation, Intent, and Interaction (I3)
Nicholas Rhinehart · Sergey Levine · Chelsea Finn · He He · Ilya Kostrikov · Justin Fu · Siddharth Reddy

Sat Jun 15 08:30 AM -- 06:00 PM (PDT) @ 201

Website: https://sites.google.com/view/icml-i3

Abstract: A key challenge for deploying interactive machine learning systems in the real world is the ability for machines to understand human intent. Techniques such as imitation learning and inverse reinforcement learning are popular data-driven paradigms for modeling agent intentions and controlling agent behaviors, and have been applied to domains ranging from robotics and autonomous driving to dialogue systems. Such techniques provide a practical solution to specifying objectives to machine learning systems when they are difficult to program by hand.

While significant progress has been made in these areas, most research effort has concentrated on modeling and controlling single agents from dense demonstrations or feedback. However, the real world has multiple agents, and dense expert data collection can be prohibitively expensive. Surmounting these obstacles requires progress in frontiers such as:
1) the ability to infer intent from multiple modes of data, such as language or observation, in addition to traditional demonstrations.
2) the ability to model multiple agents and their intentions, both in cooperative and adversarial settings.
3) handling partial or incomplete information from the expert, such as demonstrations that lack dense action annotations or consist only of raw video.

The workshop on Imitation, Intent, and Interaction (I3) seeks contributions at the intersection of these frontiers, and will bring together researchers from multiple disciplines such as robotics, imitation and reinforcement learning, cognitive science, AI safety, and natural language understanding. Our aim is to reexamine the assumptions in standard imitation learning problem statements (e.g., inverse reinforcement learning) and to connect distinct application disciplines, such as robotics and NLP, with researchers developing core imitation learning algorithms. In this way, we hope to arrive at new problem formulations, new research directions, and new connections across the disciplines that interact with imitation learning methods.

Sat 8:45 a.m. - 9:00 a.m.
Welcoming Remarks
Sat 9:00 a.m. - 9:20 a.m.

Title: Beyond demonstrations: Learning behavior from higher-level supervision

Sat 9:20 a.m. - 9:40 a.m.

Title: Collaboration in Situated Language Communication

Sat 9:40 a.m. - 10:00 a.m.

Title: Multi-agent Imitation and Inverse Reinforcement Learning

Sat 10:00 a.m. - 10:20 a.m.

Title: Nested Reasoning About Autonomous Agents Using Probabilistic Programs

Sat 10:20 a.m. - 11:30 a.m.
Poster session and coffee
Sat 11:30 a.m. - 11:50 a.m.
Changyou Chen (Contributed talk)
Sat 11:50 a.m. - 12:10 p.m.
Faraz Torabi (Contributed talk)
Sat 12:10 p.m. - 12:30 p.m.
Seyed Kamyar Seyed Ghasemipour (Contributed talk)
Sat 12:30 p.m. - 2:00 p.m.
Lunch break
Sat 2:05 p.m. - 2:25 p.m.

Title: Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning

Abstract: We propose a unified mechanism for achieving coordination and communication in Multi-Agent Reinforcement Learning (MARL): rewarding agents for having causal influence over other agents' actions. Causal influence is assessed using counterfactual reasoning. At each timestep, an agent simulates alternate actions that it could have taken and computes their effect on the behavior of other agents. Actions that lead to bigger changes in other agents' behavior are considered influential and are rewarded. We show that this is equivalent to rewarding agents for having high mutual information between their actions. Empirical results demonstrate that influence leads to enhanced coordination and communication in challenging social dilemma environments, dramatically improving the learning curves of the deep RL agents and leading to more meaningful learned communication protocols. The influence rewards for all agents can be computed in a decentralized way by enabling agents to learn a model of other agents using deep neural networks. In contrast, key previous work on emergent communication in the MARL setting was unable to learn diverse policies in a decentralized manner and had to resort to centralized training. Consequently, the influence reward opens up a window of new opportunities for research in this area.
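To make the counterfactual computation concrete, here is a minimal sketch for discrete action spaces. The names (`cond_probs`, `policy_k`), tensor shapes, and use of plain NumPy are illustrative assumptions, not the paper's implementation; in the decentralized setting described above, `cond_probs` would come from each agent's learned neural-network model of the other agents.

```python
import numpy as np

def influence_reward(cond_probs, policy_k):
    """Counterfactual social-influence reward for one agent pair (sketch).

    cond_probs: array of shape (A_k, A_j), where cond_probs[a_k] is the
        influencee's predicted action distribution p(a_j | s, a_k) under
        each counterfactual action a_k of the influencer.
    policy_k: array of shape (A_k,), the influencer's own policy pi(a_k | s).

    Returns an array of shape (A_k,): the influence credited to each
    possible action a_k; the reward at this timestep is the entry for the
    action actually taken.
    """
    # Marginalize out the influencer's action to get the influencee's
    # baseline behavior: p(a_j | s) = sum_k pi(a_k | s) * p(a_j | s, a_k).
    marginal = policy_k @ cond_probs  # shape (A_j,)

    def kl(p, q):
        eps = 1e-12  # numerical safety for zero-probability actions
        return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

    # An action is influential to the extent it shifts the other agent's
    # action distribution away from that baseline.
    return np.array([kl(cond_probs[a], marginal) for a in range(len(policy_k))])
```

Averaging the returned vector under pi(a_k | s) recovers the mutual information between the two agents' actions, which is the equivalence the abstract notes.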

Sat 2:25 p.m. - 2:45 p.m.

Title: Self-Supervision and Play

Abstract: Real-world robotics is too complex to supervise with labels or through reward functions. While some amount of supervision is necessary, a more scalable approach instead is to bootstrap learning through self-supervision by first learning general task-agnostic representations. Specifically, we argue that we should learn from large amounts of unlabeled play data. Play serves as a way to explore and learn the breadth of what is possible in an undirected way. This strategy is widely used in nature to prepare oneself to achieve future tasks without knowing in advance which ones. In this talk, we present methods for learning vision and control representations entirely from unlabeled sequences. We demonstrate these representations self-arrange semantically and functionally and can be used for downstream tasks, without ever using labels or rewards.
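As one concrete illustration of this kind of self-supervision, a widely used recipe for learning from play is hindsight goal relabeling: sample a window of unlabeled play and treat its final observation as the goal the window achieved, yielding (observation, goal, action) tuples for goal-conditioned behavioral cloning. The sketch below assumes that recipe; its names and shapes are hypothetical, and the talk's actual method may differ.

```python
import numpy as np

def sample_play_batch(play_log, window=32, batch_size=64, rng=None):
    """Build a self-supervised batch from an unlabeled play log (sketch).

    play_log: dict with arrays 'obs' of shape (T, obs_dim) and 'act' of
        shape (T, act_dim), recorded during undirected play; no labels or
        rewards are required.

    Returns (obs, goals, acts) for goal-conditioned behavioral cloning:
    the final observation of each sampled window is relabeled, in
    hindsight, as the goal that the window "achieved".
    """
    rng = rng or np.random.default_rng()
    T = len(play_log["obs"])
    starts = rng.integers(0, T - window, size=batch_size)   # window starts
    offsets = rng.integers(0, window, size=batch_size)      # step inside window

    obs = play_log["obs"][starts + offsets]        # current observation
    goals = play_log["obs"][starts + window - 1]   # hindsight goal
    acts = play_log["act"][starts + offsets]       # action actually taken
    return obs, goals, acts
```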

Sat 2:45 p.m. - 3:05 p.m.
Nicholas R Waytowich (Contributed talk)
Sat 3:05 p.m. - 4:30 p.m.
Poster session and coffee
Sat 4:30 p.m. - 4:50 p.m.

Title: Multi-modal trajectory forecasting

Sat 4:50 p.m. - 5:10 p.m.
Abhishek Das (Contributed talk)

Author Information

Nicholas Rhinehart (Carnegie Mellon University)

Nick Rhinehart is a Ph.D. student at Carnegie Mellon University, focusing on understanding, forecasting, and controlling the behavior of agents through computer vision and machine learning. He is particularly interested in systems that learn to reason about the future. He has conducted research with Sergey Levine at UC Berkeley, Paul Vernaza at NEC Labs, and Drew Bagnell at Uber ATG. His first-person forecasting work received the Marr Prize (Best Paper) Honorable Mention Award at ICCV 2017. Nick co-organized the Tutorial on Inverse RL for Computer Vision at CVPR 2018 and is the primary organizer of the ICML 2019 Workshop on Imitation, Intent, and Interaction.

Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.

Chelsea Finn (Stanford, Google, UC Berkeley)

Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University. Finn's research interests lie in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction. To this end, her work has included deep learning algorithms for concurrently learning visual perception and control for robotic manipulation skills, inverse reinforcement learning methods for inferring the reward functions underlying behavior, and meta-learning algorithms that enable fast, few-shot adaptation in both visual perception and deep reinforcement learning. Finn received her Bachelor's degree in Electrical Engineering and Computer Science from MIT and her PhD in Computer Science from UC Berkeley. Her research has been recognized through the ACM Doctoral Dissertation Award, the Microsoft Research Faculty Fellowship, the C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review 35 Under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg. Throughout her career, she has sought to increase the representation of underrepresented minorities within CS and AI by developing an AI outreach camp at Berkeley for underprivileged high school students, creating a mentoring program for underrepresented undergraduates across four universities, and leading efforts within the WiML and Berkeley WiCSE communities of women researchers.

He He (NYU)
Ilya Kostrikov (NYU)
Justin Fu (University of California, Berkeley)
Siddharth Reddy (University of California, Berkeley)
