

Poster

Action-Constrained Imitation Learning

Chia-Han Yeh · Tse-Sheng Nan · Risto Vuorio · Wei Hung · Hung-Yen Wu · Shao-Hua Sun · Ping-Chun Hsieh

West Exhibition Hall B2-B3 #W-602
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Policy learning under action constraints plays a central role in ensuring safe behaviors in various robot control and resource allocation applications. In this paper, we study a new problem setting termed Action-Constrained Imitation Learning (ACIL), where an action-constrained imitator aims to learn from a demonstrative expert with a larger action space. The fundamental challenge of ACIL lies in the unavoidable mismatch in occupancy measure between the expert and the imitator caused by the action constraints. We tackle this mismatch through trajectory alignment and propose DTWIL, which replaces the original expert demonstrations with a surrogate dataset that follows similar state trajectories while adhering to the action constraints. Specifically, we recast trajectory alignment as a planning problem and solve it via Model Predictive Control, which aligns the surrogate trajectories with the expert trajectories based on the Dynamic Time Warping (DTW) distance. Through extensive experiments, we demonstrate that learning from the dataset generated by DTWIL significantly enhances performance across multiple robot control tasks and outperforms various benchmark imitation learning algorithms in terms of sample efficiency.
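
The abstract names two concrete ingredients: Dynamic Time Warping as the alignment cost and Model Predictive Control as the planner that generates the surrogate dataset. The sketch below illustrates how the two could fit together. It is a minimal toy reconstruction, not the authors' implementation; `dynamics`, `project_action`, the sampling-based planner, and all hyperparameters are assumptions for illustration.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic Time Warping distance between two state trajectories
    of shapes (T1, d) and (T2, d)."""
    t1, t2 = len(traj_a), len(traj_b)
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            # Standard DTW recursion: match, insertion, or deletion.
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return cost[t1, t2]

def mpc_align(expert_traj, dynamics, project_action, s0,
              horizon=5, n_samples=256, action_dim=2, seed=0):
    """Sampling-based MPC that rolls out action-constrained candidates and
    keeps the one closest (in DTW distance) to the expert state trajectory.
    `dynamics(s, a)` and `project_action(a)` are hypothetical interfaces:
    the environment model and the projection onto the constraint set."""
    rng = np.random.default_rng(seed)
    states, actions = [np.asarray(s0, dtype=float)], []
    s = states[0]
    for t in range(len(expert_traj) - 1):
        best_a, best_cost = None, np.inf
        for seq in rng.normal(size=(n_samples, horizon, action_dim)):
            s_roll, rollout = s, []
            for a in seq:
                a = project_action(a)          # enforce the action constraints
                s_roll = dynamics(s_roll, a)
                rollout.append(s_roll)
            # Score the rollout against the upcoming window of expert states.
            window = expert_traj[t + 1 : t + 1 + horizon]
            c = dtw_distance(np.array(rollout), window)
            if c < best_cost:
                best_cost, best_a = c, project_action(seq[0])
        s = dynamics(s, best_a)                # execute only the first action (MPC)
        states.append(s)
        actions.append(best_a)
    return np.array(states), np.array(actions)
```

The returned state-action pairs would form the surrogate dataset: trajectories that track the expert's states while every action satisfies the imitator's constraints.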

Lay Summary:

Robots often need to learn from expert demonstrations, and in many real-world situations they must operate under strict constraints, such as avoiding certain actions for safety. This creates a challenge when the expert has more flexibility than the learning robot: the robot cannot simply copy the expert’s behavior. Our research addresses this issue in a setting we call Action-Constrained Imitation Learning (ACIL).

We propose a method called DTWIL. Instead of using the expert data directly, we generate new “surrogate” trajectories that follow similar paths but respect the robot’s constraints. We do this by framing the alignment as a planning problem and solving it with Model Predictive Control, using Dynamic Time Warping to match the expert’s trajectory.

Experiments show that learning from these aligned trajectories leads to better performance and sample efficiency across various robotic control tasks, enabling safer learning in action-constrained environments.
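
To make the surrogate-trajectory idea concrete, here is a hypothetical usage of the `mpc_align` sketch above (assumed to be in scope), with stand-in single-integrator dynamics and a box action constraint; none of these specifics come from the paper.

```python
import numpy as np

# Stand-in expert demonstration: a 2-D random-walk state trajectory.
expert = np.cumsum(0.2 * np.random.default_rng(0).normal(size=(50, 2)), axis=0)
dynamics = lambda s, a: s + a              # toy single-integrator dynamics
project = lambda a: np.clip(a, -0.1, 0.1)  # box action constraint (imitator's limit)

states, actions = mpc_align(expert, dynamics, project, s0=expert[0])
print(states.shape, actions.shape)         # (50, 2), (49, 2)
```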
