Poster in Workshop: “Could it have been different?” Counterfactuals in Minds and Machines

Why Don’t We Focus on Episodic Future Reasoning, Not Only Counterfactual?

Dongsu Lee · Minhae Kwon


Abstract:

Understanding cognitive processes in multi-agent interactions is a primary goal of cognitive science, and it can guide artificial intelligence (AI) research toward social decision-making in heterogeneous multi-agent systems. In this paper, we introduce an episodic future thinking (EFT) mechanism for a reinforcement learning (RL) agent, modeled after the cognitive processes of animals. To enable future thinking, we first train a multi-character policy, an ensemble of policies that reflects heterogeneous characters. An agent's character is defined as its weight combination over reward components, which explains its behavioral preferences. The future thinking agent collects observation-action trajectories of target agents and uses the pre-trained multi-character policy to infer their characters. Once a character is inferred, the agent predicts the targets' upcoming actions and simulates the future. This capability allows the agent to adaptively select the optimal action while accounting for the upcoming behavior of others in multi-agent scenarios. To evaluate the proposed mechanism, we consider a multi-agent autonomous driving scenario in which autonomous vehicles with different driving traits share the road. Simulation results demonstrate that the EFT mechanism with accurate character inference yields a higher reward than existing multi-agent solutions, and that this reward improvement holds across societies with different levels of character diversity.
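To make the loop described above concrete, the following is a minimal Python/NumPy sketch under toy assumptions. The environment dynamics, the softmax form of the policy ensemble, and all names (`component_rewards`, `MultiCharacterPolicy`, `infer_character`, `plan_with_eft`) are illustrative placeholders, not the authors' implementation.

```python
# A minimal sketch of the EFT loop from the abstract, under toy assumptions:
# characters are weight vectors over reward components, the multi-character
# policy is a softmax ensemble, and the dynamics are a one-dimensional stand-in
# for the driving scenario. All names and forms here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 3       # e.g., slow down / keep speed / speed up
N_COMPONENTS = 2    # e.g., a safety component and a speed component


def component_rewards(state, action):
    """Per-component reward vector r(s, a); a toy stand-in."""
    return np.array([-abs(action - 1) * state, action * 0.5])


class MultiCharacterPolicy:
    """Ensemble of softmax policies, one per character (reward-weight vector)."""

    def __init__(self, characters):
        self.characters = characters  # list of weight vectors over components

    def action_probs(self, state, char_idx):
        w = self.characters[char_idx]
        # Score each action by its character-weighted reward (proxy for Q-values).
        scores = np.array([w @ component_rewards(state, a) for a in range(N_ACTIONS)])
        e = np.exp(scores - scores.max())
        return e / e.sum()


def infer_character(policy, trajectory):
    """Maximum-likelihood character given an observation-action trajectory."""
    log_liks = np.zeros(len(policy.characters))
    for k in range(len(policy.characters)):
        for state, action in trajectory:
            log_liks[k] += np.log(policy.action_probs(state, k)[action] + 1e-12)
    return int(np.argmax(log_liks))


def plan_with_eft(policy, my_weights, state, target_char, horizon=3):
    """Pick the ego action by simulating the target's predicted future actions."""
    best_a, best_ret = 0, -np.inf
    for a in range(N_ACTIONS):
        ret, s = my_weights @ component_rewards(state, a), state
        for _ in range(horizon):
            # Predict the target's most likely next action, then roll forward.
            ta = int(np.argmax(policy.action_probs(s, target_char)))
            s = 0.9 * s + 0.1 * ta          # toy transition dynamics
            ret += my_weights @ component_rewards(s, a)
        if ret > best_ret:
            best_a, best_ret = a, ret
    return best_a


characters = [np.array([0.8, 0.2]), np.array([0.2, 0.8])]  # cautious vs. fast
policy = MultiCharacterPolicy(characters)

# Observe a target acting under the "fast" character, then infer its character.
traj = [(s, int(np.argmax(policy.action_probs(s, 1)))) for s in rng.uniform(0, 1, 5)]
inferred = infer_character(policy, traj)
print("inferred character:", inferred)                      # expect 1
print("chosen ego action:", plan_with_eft(policy, characters[0], 0.5, inferred))
```

The sketch mirrors the three stages named in the abstract: maximum-likelihood character inference from an observation-action trajectory, prediction of the target's upcoming actions under the inferred character, and selection of the ego action that maximizes the simulated future return.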
