Autonomous mobility-on-demand (AMoD) systems represent a rapidly developing mode of transportation wherein travel requests are dynamically handled by a coordinated fleet of robotic, self-driving vehicles. Given a graph representation of the transportation network (one where, for example, nodes represent areas of the city and edges the connectivity between them), we argue that the AMoD control problem is naturally cast as a node-wise decision-making problem. In this paper, we propose a deep reinforcement learning framework to control the rebalancing of AMoD systems through graph neural networks. Crucially, we demonstrate that graph neural networks enable reinforcement learning agents to recover behavior policies that are significantly more transferable, generalizable, and scalable than policies learned through other approaches. Empirically, we show that the learned policies exhibit promising zero-shot transfer capabilities when faced with critical portability tasks such as inter-city generalization, service area expansion, and adaptation to potentially complex urban topologies.
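To make the node-wise formulation concrete, the following is a minimal, hypothetical sketch (not the authors' code) of a graph-convolutional policy: a single GCN layer maps per-node features (e.g. idle vehicles and open requests in each city area) to a desired rebalancing distribution over areas. All function and variable names here are illustrative assumptions, and the weights are fixed rather than trained.

```python
import numpy as np

def gcn_policy(adj, x, w):
    """One symmetrically normalized GCN layer followed by a softmax over nodes.

    adj: (n, n) adjacency matrix of the area graph
    x:   (n, d) per-node features (e.g. [idle vehicles, open requests])
    w:   (d, 1) weight matrix (illustrative, assumed already learned)
    Returns an (n,) vector of per-node rebalancing fractions summing to 1.
    """
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)  # D^{-1/2}
    h = d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ w      # message passing + linear map
    logits = h.ravel()
    p = np.exp(logits - logits.max())                # numerically stable softmax
    return p / p.sum()

# Toy 3-area network: a line graph 0 - 1 - 2
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.array([[2., 1.],   # area 0: 2 idle vehicles, 1 open request
              [0., 3.],   # area 1: 0 idle vehicles, 3 open requests
              [1., 0.]])  # area 2: 1 idle vehicle, 0 open requests
w = np.array([[0.5], [-0.2]])
probs = gcn_policy(adj, x, w)
```

Because the same layer weights are shared across all nodes, a policy of this form can in principle be evaluated on graphs of different sizes, which is what underlies the transfer and scalability claims in the abstract.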
Author Information
Daniele Gammelli (Technical University of Denmark)
Kaidi Yang (Stanford University)
James Harrison (Stanford University)
Filipe Rodrigues (Technical University of Denmark (DTU))
Francisco Pereira (DTU)
Marco Pavone (Stanford University)
More from the Same Authors
- 2021: Recurrent Flow Networks: A Recurrent Latent Variable Model for Density Modelling of Urban Mobility »
  Daniele Gammelli
- 2021 Poster: Deep Reinforcement Learning amidst Continual Structured Non-Stationarity »
  Annie Xie · James Harrison · Chelsea Finn
- 2021 Spotlight: Deep Reinforcement Learning amidst Continual Structured Non-Stationarity »
  Annie Xie · James Harrison · Chelsea Finn