

Poster

Momentum for the Win: Collaborative Federated Reinforcement Learning across Heterogeneous Environments

Han Wang · Sihong He · Zhili Zhang · Fei Miao · James Anderson


Abstract: We explore a Federated Reinforcement Learning (FRL) problem in which $N$ agents collaboratively learn a common policy without sharing their trajectory data. To date, existing FRL work has primarily focused on agents operating in the same or ``similar'' environments. In contrast, our problem setup allows for arbitrarily large levels of environment heterogeneity. To obtain the optimal policy that maximizes the average performance across all \emph{potentially completely different} environments, we propose two algorithms: \textsc{FedSVRPG-M} and \textsc{FedHAPG-M}. In contrast to existing results, we demonstrate that both algorithms, which leverage momentum mechanisms, converge exactly to a stationary point of the average performance function, regardless of the magnitude of environment heterogeneity. Furthermore, by incorporating variance-reduction techniques or Hessian approximation, both algorithms achieve state-of-the-art convergence results, characterized by a sample complexity of $\mathcal{O}\left(\epsilon^{-\frac{3}{2}}/N\right)$. Notably, our algorithms enjoy linear convergence speedups with respect to the number of agents, highlighting the benefit of collaboration among agents in finding a common policy.
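
To make the setting concrete, here is a minimal sketch of a momentum-based federated policy-gradient round in the spirit described above: each agent runs local momentum-corrected gradient steps in its own (possibly very different) environment, and a server averages the resulting parameters. The function names (`local_update`, `server_round`), the hyperparameters (`beta`, `eta`, `local_steps`), and the FedAvg-style aggregation are illustrative assumptions only; they are not the exact \textsc{FedSVRPG-M} or \textsc{FedHAPG-M} recursions from the paper.

```python
import numpy as np

def local_update(theta, grad_fn, u_prev, beta=0.9, eta=0.01, local_steps=10):
    """Local momentum-corrected policy-gradient steps in one agent's environment.

    theta   : current global policy parameters (np.ndarray)
    grad_fn : callable returning a stochastic policy-gradient estimate at theta
    u_prev  : this agent's previous momentum (direction) estimate
    """
    u = u_prev
    for _ in range(local_steps):
        g = grad_fn(theta)               # stochastic gradient in the agent's own environment
        u = beta * u + (1.0 - beta) * g  # momentum update of the direction estimate
        theta = theta + eta * u          # ascent step on the local performance objective
    return theta, u

def server_round(theta, agents, u_states, beta=0.9, eta=0.01):
    """One communication round: agents adapt locally, server averages parameters."""
    new_thetas, new_us = [], []
    for grad_fn, u_prev in zip(agents, u_states):
        th_i, u_i = local_update(theta.copy(), grad_fn, u_prev, beta, eta)
        new_thetas.append(th_i)
        new_us.append(u_i)
    # Simple averaging across heterogeneous environments (FedAvg-style aggregation).
    theta_next = np.mean(new_thetas, axis=0)
    return theta_next, new_us

# Toy usage: five "environments" whose gradients pull toward different optima,
# mimicking heterogeneity; the averaged policy trades them off.
if __name__ == "__main__":
    agents = [lambda th, c=c: -(th - c) for c in np.linspace(-1.0, 1.0, 5)]
    theta, u_states = np.zeros(3), [np.zeros(3)] * 5
    for _ in range(100):
        theta, u_states = server_round(theta, agents, u_states)
    print(theta)
```

In this toy run the averaged parameters settle near the mean of the per-environment optima, illustrating why optimizing the average performance function is the natural target when environments can differ arbitrarily.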
