Workshop
Programmatic Representations for Agent Learning
Shao-Hua Sun · Levi Lelis · Xinyun Chen · Shreyas Kapur · Jiayuan Mao · Ching-An Cheng · Anqi Li · Kuang-Huei Lee
This workshop explores programmatic representations as a means to improve the interpretability, generalizability, efficiency, and scalability of agent learning frameworks. Structured representations, such as symbolic programs, code-based policies, and rule-based abstractions, can explicitly encode policies, reward functions, task structures, and environment dynamics, providing human-understandable reasoning while reducing reliance on massive data-driven models. They also enable modularity and compositionality, allowing agents to reuse knowledge across tasks and to adapt with minimal retraining. By bringing together the sequential decision-making community (including researchers in reinforcement learning, imitation learning, planning, search, and optimal control) with experts in program synthesis and code generation, this workshop aims to tackle the fundamental challenges of agent learning at scale and to drive progress toward interpretable, generalizable, verifiable, robust, and safe autonomous systems across domains ranging from virtual agents to robotics.