Poster
RLlib: Abstractions for Distributed Reinforcement Learning
Eric Liang · Richard Liaw · Robert Nishihara · Philipp Moritz · Roy Fox · Ken Goldberg · Joseph E Gonzalez · Michael Jordan · Ion Stoica

Fri Jul 13 09:15 AM -- 12:00 PM (PDT) @ Hall B #21

Reinforcement learning (RL) algorithms involve the deep nesting of highly irregular computation patterns, each of which typically exhibits opportunities for distributed computation. We argue for distributing RL components in a composable way by adapting algorithms for top-down hierarchical control, thereby encapsulating parallelism and resource requirements within short-running compute tasks. We demonstrate the benefits of this principle through RLlib: a library that provides scalable software primitives for RL. These primitives enable a broad range of algorithms to be implemented with high performance, scalability, and substantial code reuse. RLlib is available as part of the open source Ray project at http://rllib.io/.
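The paper's core principle is top-down hierarchical control: a driver fans out short-running parallel tasks (e.g., environment rollouts) and reduces their results, so parallelism stays encapsulated inside each component. A minimal sketch of that pattern, using Python's standard `concurrent.futures` rather than RLlib's or Ray's actual API (the `rollout` and `train_step` names here are illustrative, not from the library):

```python
from concurrent.futures import ThreadPoolExecutor

def rollout(worker_id):
    # Stand-in for an environment rollout; returns a fake "sample batch".
    return [worker_id * 0.1 + i for i in range(3)]

def train_step(num_workers=4):
    # Top-down control: the driver fans out short-running rollout tasks,
    # then reduces the results -- parallelism is encapsulated in this call.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        batches = list(pool.map(rollout, range(num_workers)))
    # Aggregate step (e.g., average a statistic over all sampled data).
    samples = [x for batch in batches for x in batch]
    return sum(samples) / len(samples)
```

In RLlib itself, the same structure is expressed with Ray's distributed tasks and actors, which also let each component declare its own resource requirements (CPUs/GPUs).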

Author Information

Eric Liang (UC Berkeley)
Richard Liaw (UC Berkeley)
Robert Nishihara (UC Berkeley)
Philipp Moritz (UC Berkeley)
Roy Fox (UC Berkeley)
Ken Goldberg (UC Berkeley)
Joseph E Gonzalez (UC Berkeley)
Michael Jordan (UC Berkeley)
Ion Stoica (UC Berkeley)
