Poster

Coprocessor Actor Critic: A Model-Based Reinforcement Learning Approach For Adaptive Deep Brain Stimulation

Michelle Pan · Mariah Schrum · Vivek Myers · Erdem Biyik · Anca Dragan


Abstract:

Deep Brain Stimulation (DBS) can treat neurological conditions such as Parkinson’s disease and post-stroke motor deficits by influencing abnormal neural activity. Because of patient heterogeneity, each patient requires a unique DBS control policy to achieve optimal neural responses. Model-free reinforcement learning (MFRL) holds promise for learning effective policies in a variety of control tasks similar to DBS. However, MFRL requires numerous environment interactions, making it impractical for domains like DBS in which interactions with the patient (i.e., brain stimulations) are costly. In this work, we introduce Coprocessor Actor Critic (CoPAC), a novel model-based reinforcement learning (MBRL) approach for learning neural coprocessor policies for DBS. Our key insight is that coprocessor policy learning combines two problems: learning how to act optimally in the world, and learning how to induce those optimal actions through stimulation of the injured brain. We leverage a physiologically and neurologically realistic model of a stroke patient to learn the former in simulation, enabling us to minimize online interaction while learning the latter. We show that our approach surpasses the limitations of traditional MFRL methods in terms of sample efficiency and task success, and outperforms baseline MBRL approaches in a neurologically realistic model of an injured brain. This work establishes a foundation for improving the understanding and efficacy of RL solutions for DBS.
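The two-part decomposition described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the matrices `W` and `B`, and the functions `optimal_actor`, `brain_response_model`, and `coprocessor_stimulation` are all hypothetical stand-ins for learned components, and the random-shooting search is one simple way a learned forward model could be used to pick stimulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed matrices standing in for learned components.
W = np.array([[0.5, -0.2], [0.1, 0.8]])   # simulator-optimal policy
B = np.array([[0.9, 0.0], [0.0, 1.1]])    # brain's stimulation-to-action response

def optimal_actor(state):
    """Part 1: 'how to act optimally' -- a policy learned offline in simulation."""
    return W @ state

def brain_response_model(state, stimulation):
    """Learned forward model of the injured brain: stimulation -> induced action."""
    return B @ stimulation + 0.05 * state

def coprocessor_stimulation(state, n_candidates=256):
    """Part 2: 'how to induce optimal actions' -- choose the stimulation whose
    model-predicted induced action best matches the simulator-optimal action
    (a simple random-shooting search over candidate stimulations)."""
    target = optimal_actor(state)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, 2))
    predicted = candidates @ B.T + 0.05 * state
    errors = np.linalg.norm(predicted - target, axis=1)
    return candidates[np.argmin(errors)]

state = np.array([0.3, -0.4])
stim = coprocessor_stimulation(state)
induced = brain_response_model(state, stim)
```

Because only the stimulation-to-action mapping must be learned online, the costly patient interactions are spent on Part 2 alone, which is the source of the sample-efficiency gain claimed in the abstract.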