
Graph Neural Network Reinforcement Learning for Autonomous Mobility-on-Demand Systems
Daniele Gammelli · Kaidi Yang · James Harrison · Filipe Rodrigues · Francisco Pereira · Marco Pavone

Autonomous mobility-on-demand (AMoD) systems represent a rapidly developing mode of transportation wherein travel requests are dynamically handled by a coordinated fleet of robotic, self-driving vehicles. Given a graph representation of the transportation network (one where, for example, nodes represent areas of the city and edges the connectivity between them), we argue that the AMoD control problem is naturally cast as a node-wise decision-making problem. In this paper, we propose a deep reinforcement learning framework to control the rebalancing of AMoD systems through graph neural networks. Crucially, we demonstrate that graph neural networks enable reinforcement learning agents to recover behavior policies that are significantly more transferable, generalizable, and scalable than policies learned through other approaches. Empirically, we show that the learned policies exhibit promising zero-shot transfer capabilities when faced with critical portability tasks such as inter-city generalization, service area expansion, and adaptation to potentially complex urban topologies.
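To make the node-wise formulation concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a single graph-convolution layer could map per-area features, such as idle vehicles and open requests, to a desired rebalancing distribution over city areas. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def gcn_policy(adj, feats, weights):
    """One GCN layer + softmax -> desired share of the fleet per node.

    adj:     binary adjacency matrix of the city graph (n x n)
    feats:   per-node features, e.g. [idle vehicles, open requests] (n x d)
    weights: learnable layer weights (d x h); random here for illustration
    """
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                      # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    h = np.maximum(a_norm @ feats @ weights, 0)  # ReLU message passing
    logits = h.sum(axis=1)                       # scalar score per node
    e = np.exp(logits - logits.max())
    return e / e.sum()                           # fleet distribution over areas

# Tiny 3-area city graph: areas 0-1 and 1-2 are connected.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.array([[4., 1.],    # [idle vehicles, open requests] per area
                  [0., 3.],
                  [2., 2.]])
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 4))    # untrained weights, purely for illustration
dist = gcn_policy(adj, feats, w)
print(dist)  # a probability vector: share of the fleet to send to each area
```

Because the layer operates on node neighborhoods rather than a fixed-size input, the same weights apply to graphs of any size or topology, which is the property underlying the transfer and generalization claims above.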

Author Information

Daniele Gammelli (Technical University of Denmark)
Kaidi Yang (Stanford University)
James Harrison (Stanford University)
Filipe Rodrigues (Technical University of Denmark (DTU))
Francisco Pereira (DTU)
Marco Pavone (Stanford University)
