

Adversarial Learning of Distributional Reinforcement Learning

Yang Sui · Yukun Huang · Hongtu Zhu · Fan Zhou

Exhibit Hall 1 #632


Reinforcement learning (RL) has driven significant advances in artificial intelligence. However, its real-world applications are limited by differences between simulated environments and the actual world. Consequently, it is crucial to systematically analyze how each component of an RL system affects final model performance. In this study, we propose an adversarial learning framework for distributional reinforcement learning that adapts the concept of an influence measure from the statistics literature. This framework enables us to detect performance loss caused by either the internal policy structure or the external state observation. The proposed influence measure is based on information geometry and has desirable invariance properties. We demonstrate that the influence measure is useful for three diagnostic tasks: identifying fragile states in trajectories, determining the instability of the policy architecture, and pinpointing anomalously sensitive policy parameters.
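To make the idea of an observation-side influence measure concrete, the following is a minimal sketch, not the paper's actual construction: it scores the influence of a state perturbation as the KL divergence (an information-geometric quantity) between the categorical return distributions a distributional policy outputs before and after the perturbation. The names `toy_policy` and `influence_of_perturbation` are hypothetical, and the softmax-over-atoms policy is a stand-in for a trained distributional network such as C51.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two categorical distributions (clipped for stability)."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def influence_of_perturbation(policy, state, delta):
    """Influence of an observation perturbation `delta`, measured as the KL
    divergence between the return distributions before and after perturbing."""
    return kl_divergence(policy(state), policy(state + delta))

def toy_policy(state):
    """Hypothetical distributional 'policy': a softmax over three fixed return
    atoms whose logits depend on the state (stand-in for a trained network)."""
    logits = np.array([1.0, 2.0, 0.5]) * state.sum()
    e = np.exp(logits - logits.max())
    return e / e.sum()

state = np.array([0.5, -0.2])
small = influence_of_perturbation(toy_policy, state, np.array([0.01, 0.0]))
large = influence_of_perturbation(toy_policy, state, np.array([0.5, 0.0]))
```

States where a tiny `delta` already yields a large influence score would be flagged as fragile; the same perturb-and-compare pattern can be applied to policy parameters instead of observations to locate anomalously sensitive weights.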
