

Poster

INSIGHT: End-to-End Neuro-Symbolic Visual Reinforcement Learning with Language Explanations

Lirui Luo · Guoxi Zhang · Hongming Xu · Yaodong Yang · Cong Fang · Qing Li


Abstract:

Neuro-symbolic reinforcement learning (NS-RL) has emerged as a promising paradigm for explainable decision-making, characterized by the interpretability of symbolic policies. For tasks with visual observations, NS-RL requires structured state representations, but previous algorithms cannot refine these structured states with reward signals because doing so is too inefficient. Accessibility is also an issue, as extensive domain knowledge is required to interpret current symbolic policies. In this paper, we present a framework that learns structured states and symbolic policies simultaneously; its key idea is to overcome the efficiency bottleneck by distilling vision foundation models into a scalable perception module. Moreover, we design a pipeline that uses large language models to generate concise and readable language explanations for policies and decisions. In experiments on nine Atari tasks, our approach demonstrates substantial performance gains over existing NS-RL methods. We also showcase explanations for policies and decisions.
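To make the distillation idea concrete, the following is a minimal PyTorch sketch of training a small perception module to regress structured object-coordinate states produced by a frozen vision foundation model; the network architecture, the `SmallPerception` and `distill_step` names, and the choice of coordinate regression as the distillation target are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the authors' code): distilling a frozen vision
# foundation model into a lightweight perception module that predicts
# structured (object-coordinate) states. Shapes and names are assumptions.
import torch
import torch.nn as nn

class SmallPerception(nn.Module):
    """Lightweight student network mapping frames to K object coordinates."""
    def __init__(self, num_objects: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(num_objects * 2)  # (x, y) per object
        self.num_objects = num_objects

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(frames)).view(-1, self.num_objects, 2)

def distill_step(student, teacher_coords, frames, optimizer):
    """One distillation step: regress the teacher's object coordinates."""
    pred = student(frames)
    loss = nn.functional.mse_loss(pred, teacher_coords)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    student = SmallPerception()
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
    frames = torch.rand(16, 3, 84, 84)      # batch of Atari-like frames
    teacher_coords = torch.rand(16, 8, 2)   # placeholder teacher targets
    print(distill_step(student, teacher_coords, frames, optimizer))
```

Because the student is small, its outputs can cheaply be refined further with reward signals during policy learning, which is the efficiency argument the abstract makes; the placeholder teacher targets above would be replaced by outputs of an actual vision foundation model.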
