

Poster in Workshop: Interactive Learning with Implicit Human Feedback

Legible Robot Motion from Conditional Generative Models

Matthew Bronars · Danfei Xu


Abstract:

In human-robot collaboration, legible motion that clearly conveys a robot's intentions and goals is essential: being able to forecast the robot's next move improves user experience, safety, and task efficiency. Current methods for generating legible motion rely on hand-designed cost functions and classical motion planners, but there is a need for data-driven policies that are trained end-to-end on demonstration data. In this paper we propose Generative Legible Motion Models (GLMM), a framework that uses conditional generative models to learn legible trajectories from human demonstrations. We find that GLMM produces motion that is 76% more legible than standard goal-conditioned generative models and 83% more legible than generative models without goal conditioning.
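To illustrate the kind of model the abstract refers to, the sketch below shows a goal-conditioned generative trajectory model. The abstract does not specify the generative architecture, so the choice of a conditional VAE, the network sizes, and the trajectory horizon here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's method): a conditional VAE
# that maps a (start state, goal) pair to a trajectory of waypoints,
# trained on demonstration trajectories.
import torch
import torch.nn as nn


class GoalConditionedTrajectoryCVAE(nn.Module):
    def __init__(self, state_dim=2, goal_dim=2, horizon=32, latent_dim=8, hidden=128):
        super().__init__()
        self.horizon = horizon
        self.state_dim = state_dim
        self.latent_dim = latent_dim
        traj_dim = horizon * state_dim
        cond_dim = state_dim + goal_dim
        # Encoder: (flattened trajectory, conditioning) -> latent mean and log-variance
        self.encoder = nn.Sequential(
            nn.Linear(traj_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),
        )
        # Decoder: (latent sample, conditioning) -> reconstructed trajectory
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, traj_dim),
        )

    def forward(self, traj, start, goal):
        cond = torch.cat([start, goal], dim=-1)
        h = self.encoder(torch.cat([traj.flatten(1), cond], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        return recon.view(-1, self.horizon, self.state_dim), mu, logvar

    @torch.no_grad()
    def sample(self, start, goal):
        # At test time, sample a latent and decode a trajectory conditioned
        # on the start state and the intended goal.
        cond = torch.cat([start, goal], dim=-1)
        z = torch.randn(start.shape[0], self.latent_dim, device=start.device)
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        return recon.view(-1, self.horizon, self.state_dim)
```

Conditioning the decoder on the goal is what allows sampled trajectories to signal intent early, which is the property the abstract's legibility comparison against unconditioned generative models measures.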
