

Poster

Low-Precision Reinforcement Learning: Running Soft Actor-Critic in Half Precision

Johan Björck · Xiangyu Chen · Christopher De Sa · Carla Gomes · Kilian Weinberger

Keywords: [ Reinforcement Learning and Planning ]


Abstract:

Low-precision training has become a popular approach to reduce compute requirements, memory footprint, and energy consumption in supervised learning. In contrast, this promising approach has not yet enjoyed similarly widespread adoption within the reinforcement learning (RL) community, partly because RL agents can be notoriously hard to train even in full precision. In this paper we consider continuous control with the state-of-the-art SAC agent and demonstrate that a naïve adaptation of low-precision methods from supervised learning fails. We propose a set of six modifications, all straightforward to implement, that leave the underlying agent and its hyperparameters unchanged but dramatically improve numerical stability. The resulting modified SAC agent has lower memory and compute requirements while matching full-precision rewards, demonstrating that low-precision training can substantially accelerate state-of-the-art RL without parameter tuning.
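For context, a minimal sketch of the kind of "naïve adaptation" the abstract refers to: wrapping an SAC-style critic update in the standard mixed-precision tooling used in supervised learning (PyTorch autocast with loss scaling). This is not the paper's method and does not include its six modifications; the network sizes, dimensions, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Assumed dimensions for a typical continuous-control task (e.g. MuJoCo).
state_dim, action_dim = 17, 6
critic = nn.Sequential(
    nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
).cuda()
optimizer = torch.optim.Adam(critic.parameters(), lr=3e-4)
scaler = torch.cuda.amp.GradScaler()  # loss scaling, as in supervised mixed-precision training

def critic_update(state, action, td_target):
    """One half-precision critic step; per the abstract, this naive recipe
    is numerically unstable for RL without further modifications."""
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():   # forward pass runs largely in float16
        q = critic(torch.cat([state, action], dim=-1))
        loss = nn.functional.mse_loss(q, td_target)
    scaler.scale(loss).backward()     # scale loss to avoid float16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```

The abstract's claim is that this off-the-shelf recipe fails for RL, motivating the proposed stability modifications that leave the agent and its hyperparameters untouched.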
