

Invited talk in Workshop: Foundations of Reinforcement Learning and Control: Connections and Perspectives

Max Simchowitz: Provable Guarantees for Generative Behavior Cloning

Max Simchowitz

Sat 27 Jul 2:55 a.m. PDT — 3:30 a.m. PDT

Abstract:

Behavior cloning — teaching a robot to imitate example demonstrations — lies at the heart of many of today's most promising robot learning endeavors due to its simplicity and intuitive data collection. In this short talk, we explore how behavior cloning in continuous state/action spaces differs from how we might think of the problem through a more discrete lens. By combining ideas from control-theoretic stability with generative sampling oracles, we introduce a framework for behavior cloning that enables an agent to imitate nearly arbitrary behavior with provable guarantees, even when the dynamics governing the agent's interaction with its environment are nonlinear.

This is based on joint work with Adam Block, Daniel Pfrommer, Ali Jadbabaie and Russ Tedrake. https://arxiv.org/abs/2307.14619
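For context, a minimal sketch of the vanilla behavior-cloning setup in a continuous state/action space is given below. The linear dynamics, expert feedback gain, and least-squares policy fit are hypothetical illustrations of the supervised-regression formulation only; they are not the generative framework of the linked paper.

    import numpy as np

    # Hypothetical setup: linear dynamics x' = A x + B u, expert policy u = -K x.
    # This illustrates plain behavior cloning (regress actions on states),
    # not the generative-sampling framework discussed in the talk.
    rng = np.random.default_rng(0)
    d_x, d_u, horizon, n_demos = 4, 2, 30, 50
    A = np.eye(d_x) + 0.05 * rng.standard_normal((d_x, d_x))
    B = 0.1 * rng.standard_normal((d_x, d_u))
    K = 0.5 * rng.standard_normal((d_u, d_x))  # expert gain, fixed for illustration

    # Collect expert demonstrations: trajectories of (state, action) pairs.
    states, actions = [], []
    for _ in range(n_demos):
        x = rng.standard_normal(d_x)
        for _ in range(horizon):
            u = -K @ x
            states.append(x)
            actions.append(u)
            x = A @ x + B @ u + 0.01 * rng.standard_normal(d_x)

    X = np.stack(states)   # shape (N, d_x)
    U = np.stack(actions)  # shape (N, d_u)

    # Behavior cloning as ordinary least squares: find K_hat with u approx x @ K_hat.
    K_hat, *_ = np.linalg.lstsq(X, U, rcond=None)

    # Roll out the cloned policy next to the expert from a fresh initial state.
    x_exp = x_bc = rng.standard_normal(d_x)
    gap = 0.0
    for _ in range(horizon):
        x_exp = A @ x_exp + B @ (-K @ x_exp)
        x_bc = A @ x_bc + B @ (x_bc @ K_hat)
        gap = max(gap, np.linalg.norm(x_exp - x_bc))
    print(f"max state deviation between expert and cloned rollouts: {gap:.3e}")

The talk concerns what happens beyond this simple picture: continuous spaces, nonlinear dynamics, and policies represented by generative samplers rather than a single regression fit.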

Bio: Max Simchowitz is currently a postdoctoral researcher in the Robot Locomotion Group at MIT CSAIL and an incoming assistant professor in the Machine Learning Department at Carnegie Mellon University. He studies the theoretical foundations of machine learning problems with a sequential or dynamical component; he currently focuses on robotics and out-of-distribution learning, with past work ranging broadly across control, reinforcement learning, optimization, and algorithmic fairness. He received his PhD from the University of California, Berkeley in 2021 under Ben Recht and Michael I. Jordan, and his work has been recognized with an ICML 2018 Best Paper Award, an ICML 2022 Outstanding Paper Award, and RSS 2023 and ICRA 2024 Best Paper Finalist designations.
