

Poster

Learning Calibratable Policies using Programmatic Style-Consistency

Eric Zhan · Albert Tseng · Yisong Yue · Adith Swaminathan · Matthew Hausknecht

Keywords: [ Deep Learning - Generative Models and Autoencoders ] [ Deep Generative Models ]


Abstract: We study the problem of controllable generation of long-term sequential behaviors, where the goal is to calibrate to multiple behavior styles simultaneously. In contrast to the well-studied areas of controllable generation of images, text, and speech, there are two questions that pose significant challenges when generating long-term behaviors: how should we specify the factors of variation to control, and how can we ensure that the generated behavior faithfully demonstrates combinatorially many styles? We leverage programmatic labeling functions to specify controllable styles, and derive a formal notion of style-consistency as a learning objective, which can then be solved using conventional policy learning approaches. We evaluate our framework using demonstrations from professional basketball players and agents in the MuJoCo physics environment, and show that existing approaches that do not explicitly enforce style-consistency fail to generate diverse behaviors, whereas our learned policies can be calibrated for up to $4^5$ (1024) distinct style combinations.
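To make the idea of programmatic labeling functions concrete, the following is a minimal illustrative sketch (not taken from the paper): a hand-written function, here hypothetically named speed_label, maps a trajectory to one of a few discrete style classes, and combining several such labeling functions yields combinatorially many style combinations (e.g., five functions with four classes each gives $4^5$ = 1024). Style-consistency then asks that a policy calibrated to a label generate trajectories the labeling function maps back to that same label.

import numpy as np

# Illustrative labeling function (assumed example, not the paper's exact code):
# classify a trajectory by its average speed into one of four style classes.
def speed_label(trajectory, thresholds=(1.0, 2.0, 3.0)):
    """trajectory: array of shape (T, 2) with 2D positions per timestep."""
    displacements = np.diff(trajectory, axis=0)           # per-step movement
    avg_speed = np.linalg.norm(displacements, axis=1).mean()
    return int(np.digitize(avg_speed, thresholds))        # label in {0, 1, 2, 3}

# Style-consistency, informally: if a policy is conditioned on label y,
# its generated trajectory tau should satisfy speed_label(tau) == y.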
