

Afternoon Poster
in
Workshop: Artificial Intelligence & Human Computer Interaction

State trajectory abstraction and visualization method for explainability in reinforcement learning

Yoshiki Takagi · Roderick Tabalba · Jason Leigh


Abstract:

Explainable AI (XAI) has demonstrated the potential to help reinforcement learning (RL) practitioners understand how RL models work. However, XAI for users who have considerable domain knowledge but lack machine learning (ML) expertise is understudied. Solving this problem would enable RL experts to communicate with domain experts in producing ML solutions that better meet their intentions. This study examines a trajectory-based approach to the problem. Trajectory-based XAI appears promising for enabling non-RL experts to understand an RL model's behavior by viewing a visual representation of that behavior, consisting of trajectories that depict the transitions between the model's major states. This paper proposes a framework to create and evaluate a visual representation of RL models' behavior that is easy for both RL and non-RL experts to understand.
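The paper's own method is not detailed in the abstract, but the core idea, abstracting a state trajectory into a small set of major states and counting transitions between them, can be sketched as follows. This is a minimal illustration under assumed choices (a coarse rounding-based abstraction with a hypothetical `bucket` granularity parameter), not the authors' implementation:

```python
from collections import Counter

def abstract_trajectory(states, bucket):
    """Map each continuous state to a coarse abstract state id
    (here simply by rounding each coordinate to a grid of size
    `bucket`, an assumption of this sketch), then count the
    transitions between consecutive abstract states. The counts
    form the weighted edges of a transition graph that could be
    visualized for non-RL experts."""
    abstract = [tuple(round(x / bucket) for x in s) for s in states]
    transitions = Counter(zip(abstract, abstract[1:]))
    return abstract, transitions

# Toy 1-D trajectory: an agent drifting from 0.0 toward 1.0.
traj = [(0.05,), (0.12,), (0.48,), (0.55,), (0.95,)]
abstract, transitions = abstract_trajectory(traj, bucket=0.5)
print(abstract)     # [(0,), (0,), (1,), (1,), (2,)]
print(transitions)  # edge counts, e.g. ((0,), (1,)) -> 1
```

In practice the abstraction step would typically use clustering over visited states rather than fixed rounding, but the output is the same kind of object: a small graph of major states and transition frequencies.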
