
ICML Workshop on Imitation, Intent, and Interaction (I3)
Nicholas Rhinehart · Sergey Levine · Chelsea Finn · He He · Ilya Kostrikov · Justin Fu · Siddharth Reddy

Sat Jun 15 08:30 AM -- 06:00 PM (PDT) @ 201

Website: https://sites.google.com/view/icml-i3

Abstract: A key challenge for deploying interactive machine learning systems in the real world is the ability for machines to understand human intent. Techniques such as imitation learning and inverse reinforcement learning are popular data-driven paradigms for modeling agent intentions and controlling agent behaviors, and have been applied to domains ranging from robotics and autonomous driving to dialogue systems. Such techniques provide a practical way to specify objectives for machine learning systems when those objectives are difficult to program by hand.

While significant progress has been made in these areas, most research effort has concentrated on modeling and controlling single agents from dense demonstrations or feedback. However, the real world has multiple agents, and dense expert data collection can be prohibitively expensive. Surmounting these obstacles requires progress in frontiers such as:
1) the ability to infer intent from multiple modes of data, such as language or observation, in addition to traditional demonstrations.
2) the ability to model multiple agents and their intentions, both in cooperative and adversarial settings.
3) handling partial or incomplete information from the expert, such as demonstrations that lack dense action annotations, raw videos, etc.

The workshop on Imitation, Intent, and Interaction (I3) seeks contributions at the interface of these frontiers, and will bring together researchers from multiple disciplines such as robotics, imitation and reinforcement learning, cognitive science, AI safety, and natural language understanding. Our aim is to reexamine the assumptions in standard imitation learning problem statements (e.g., inverse reinforcement learning) and to connect distinct application disciplines, such as robotics and NLP, with researchers developing core imitation learning algorithms. In this way, we hope to arrive at new problem formulations, new research directions, and new connections across the distinct disciplines that interact with imitation learning methods.

Author Information

Nicholas Rhinehart (Carnegie Mellon University)

Nick Rhinehart is a Ph.D. student at Carnegie Mellon University, focusing on understanding, forecasting, and controlling the behavior of agents through computer vision and machine learning. He is particularly interested in systems that learn to reason about the future. He has conducted research with Sergey Levine at UC Berkeley, Paul Vernaza at NEC Labs, and Drew Bagnell at Uber ATG. His First-Person Forecasting work received the Marr Prize (Best Paper) Honorable Mention Award at ICCV 2017. Nick co-organized the Tutorial on Inverse RL for Computer Vision at CVPR 2018 and is the primary organizer of the ICML 2019 Workshop on Imitation, Intent, and Interaction.

Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.

Chelsea Finn (Stanford, Google, UC Berkeley)

Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University. Finn's research interests lie in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction. To this end, her work has included deep learning algorithms for concurrently learning visual perception and control in robotic manipulation skills, inverse reinforcement learning methods for learning the reward functions underlying behavior, and meta-learning algorithms that enable fast, few-shot adaptation in both visual perception and deep reinforcement learning. Finn received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley. Her research has been recognized through the ACM doctoral dissertation award, the Microsoft Research Faculty Fellowship, the C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review 35 under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg. Throughout her career, she has sought to increase the representation of underrepresented minorities within CS and AI by developing an AI outreach camp at Berkeley for underprivileged high school students and a mentoring program for underrepresented undergraduates across four universities, and by leading efforts within the WiML and Berkeley WiCSE communities of women researchers.

He He (NYU)
Ilya Kostrikov (NYU)
Justin Fu (University of California, Berkeley)
Siddharth Reddy (University of California, Berkeley)
