Poster
in
Workshop: Foundations of Reinforcement Learning and Control: Connections and Perspectives

Event-Based Federated Q-Learning

Guner Dilsad ER · Michael Muehlebach


Abstract:

This paper introduces an event-based communication mechanism for federated Q-learning algorithms, improving convergence while reducing communication overhead. We present a communication scheme in which Q-table exchanges between agents and a central server are triggered only when an event condition is met. Through theoretical analysis and empirical evaluation, we establish the convergence properties of event-based QAvg and demonstrate its effectiveness in federated reinforcement learning settings.
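The mechanism described above can be illustrated with a small sketch: each agent runs tabular Q-learning locally and uploads its Q-table to the server only when it has drifted past a threshold since the last synchronization; the server averages whatever tables it holds (a QAvg-style step) and broadcasts the result. The toy MDP, the threshold value, and the drift measure below are illustrative assumptions, not the paper's actual trigger rule or analysis setting.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 2
N_AGENTS = 3
ALPHA, GAMMA = 0.1, 0.9
THRESHOLD = 0.05  # event-trigger threshold (illustrative value, not from the paper)

def step(state, action):
    # Toy MDP shared by all agents: a cyclic walk with a single rewarding state.
    next_state = (state + (1 if action == 1 else -1)) % N_STATES
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def local_update(Q, n_steps=50):
    # Ordinary epsilon-greedy tabular Q-learning on the agent's local copy.
    s = int(rng.integers(N_STATES))
    for _ in range(n_steps):
        a = int(rng.integers(N_ACTIONS)) if rng.random() < 0.2 else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
        s = s2
    return Q

# Per-agent tables and the copies last communicated to the server.
Qs = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]
last_sent = [q.copy() for q in Qs]
uploads = 0
n_rounds = 20

for _ in range(n_rounds):
    for i in range(N_AGENTS):
        Qs[i] = local_update(Qs[i])
        # Event trigger: upload only if the table drifted enough since last sync.
        if np.max(np.abs(Qs[i] - last_sent[i])) > THRESHOLD:
            last_sent[i] = Qs[i].copy()
            uploads += 1
    # Server averages the most recent tables it holds (QAvg-style aggregation)
    # and broadcasts the result; agents resume from the averaged table.
    server_Q = np.mean(last_sent, axis=0)
    Qs = [server_Q.copy() for _ in range(N_AGENTS)]
    last_sent = [server_Q.copy() for _ in range(N_AGENTS)]

print(f"uploads: {uploads} of {n_rounds * N_AGENTS} possible")
```

The communication saving comes from rounds in which an agent's table barely changes: those uploads are skipped, yet the server's average stays close to what full-communication QAvg would compute, which is the intuition behind the convergence claim.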