

Poster in Affinity Workshop: LatinX in AI (LXAI) Research at ICML 2021

Model Reference Adaptive Control for Online Policy Adaptation and Network Synchronization

Miguel F. Arevalo-Castiblanco · Cesar Uribe · Eduardo Mojica-Nava


Abstract:

We propose an online adaptive synchronization method for leader-follower networks of heterogeneous agents. Synchronization is achieved using a distributed Model Reference Adaptive Control (DMRAC-RL) scheme that improves the performance of Reinforcement Learning (RL) policies trained on a reference model. The leader observes the performance of the reference model, while the followers observe the states and actions of the agents they are connected to, but not the reference model. Notably, both the leader's and the followers' models may differ from the reference model on which the RL control policy was trained. DMRAC-RL uses an internal loop that adjusts the learned policy via an augmented input, solving the distributed control problem. Numerical examples of the synchronization of a network of inverted pendulums support our theoretical findings.
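The core idea of adjusting a fixed learned policy through an augmented input can be illustrated with a minimal sketch. The snippet below is a generic, one-dimensional model reference adaptive control loop, not the paper's actual DMRAC-RL algorithm: the plant, the reference model, the stand-in "RL" policy, and all gains are illustrative assumptions.

```python
# Minimal 1-D sketch of an MRAC-style augmented input layered on a
# fixed pre-trained policy. All dynamics, gains, and the policy are
# illustrative assumptions, not the models used in the paper.

def simulate(T=4000, dt=0.005):
    a, b = 1.0, 1.0      # plant x' = a*x + b*u (parameters unknown to controller)
    am, bm = -2.0, 2.0   # stable reference model xm' = am*xm + bm*r
    gamma = 5.0          # adaptation gain
    x = xm = 0.0
    kx = kr = 0.0        # adaptive feedback / feedforward gains
    errs = []
    for _ in range(T):
        r = 1.0                       # step reference signal
        u_rl = -x                     # stand-in for a pre-trained RL policy
        u = u_rl + kx * x + kr * r    # augmented input adjusts the policy
        e = x - xm                    # tracking error w.r.t. reference model
        # Lyapunov-rule adaptation (assumes sign(b) > 0)
        kx -= gamma * e * x * dt
        kr -= gamma * e * r * dt
        # Euler integration of plant and reference model
        x += (a * x + b * u) * dt
        xm += (am * xm + bm * r) * dt
        errs.append(abs(e))
    return errs

errs = simulate()
```

Even though the fixed policy alone cannot match the reference model (the closed-loop pole and gain are wrong), the adapted augmentation drives the tracking error toward zero over the simulation horizon.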