Structured Evolution with Compact Architectures for Scalable Policy Optimization
Krzysztof Choromanski · Mark Rowland · Vikas Sindhwani · Richard E Turner · Adrian Weller

Wed Jul 11 04:50 AM -- 05:00 AM (PDT) @ A1

We present a new method of blackbox optimization via gradient approximation using structured random orthogonal matrices, providing more accurate estimators than baselines, with provable theoretical guarantees. We show that this algorithm can be successfully applied to learn better quality compact policies than standard gradient estimation techniques. The compact policies we learn have several advantages over unstructured ones, including faster training and faster inference. These benefits are important when the policy is deployed on real hardware with limited resources. Further, compact policies provide more scalable architectures for derivative-free optimization (DFO) in high-dimensional spaces. We show that most robotics tasks from the OpenAI Gym can be solved using neural networks with fewer than 300 parameters, up to 13x fewer than required by the Evolution Strategies (ES) algorithm introduced by Salimans et al. (2017), and with almost linear time complexity at inference. We do not need heuristics such as fitness shaping to learn good quality policies, resulting in a simple and theoretically motivated training mechanism.
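To make the abstract's core idea concrete, here is a minimal sketch of an antithetic blackbox gradient estimator that uses orthogonal (rather than i.i.d. Gaussian) perturbation directions, in the spirit of the structured estimators the paper describes. The helper `orthogonal_directions` and the specific orthogonalization via QR are illustrative assumptions, not the authors' exact construction:

```python
import numpy as np

def orthogonal_directions(n_dirs, dim, rng):
    """Sample n_dirs <= dim mutually orthogonal directions in R^dim.

    Illustrative helper: orthogonalize a Gaussian matrix via QR, then
    rescale each row so its norm matches that of a Gaussian vector.
    """
    g = rng.standard_normal((n_dirs, dim))
    q, _ = np.linalg.qr(g.T)          # columns of q are orthonormal
    dirs = q.T                        # rows are orthonormal directions
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    return dirs * norms               # orthogonal rows, Gaussian-like lengths

def es_gradient(f, theta, n_dirs=8, sigma=0.1, rng=None):
    """Antithetic blackbox gradient estimate of f at theta:

    grad ~ (1 / (2 * sigma * n_dirs)) * sum_i
           (f(theta + sigma * e_i) - f(theta - sigma * e_i)) * e_i
    """
    rng = rng or np.random.default_rng(0)
    dirs = orthogonal_directions(n_dirs, theta.size, rng)
    grad = np.zeros_like(theta)
    for e in dirs:
        grad += (f(theta + sigma * e) - f(theta - sigma * e)) * e
    return grad / (2.0 * sigma * n_dirs)
```

In a policy-optimization loop, `f` would be the (noisy) episodic return of a policy with parameters `theta`; enforcing orthogonality among the perturbations is what reduces estimator variance relative to independent Gaussian sampling.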

Author Information

Krzysztof Choromanski (Google Brain Robotics)
Mark Rowland (University of Cambridge)
Vikas Sindhwani (Google)
Richard E Turner (University of Cambridge)

Richard Turner holds a Lectureship (equivalent to US Assistant Professor) in Computer Vision and Machine Learning in the Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, UK. He is a Fellow of Christ's College, Cambridge. Previously, he held an EPSRC Postdoctoral Research Fellowship, which he spent at both the University of Cambridge and the Laboratory for Computational Vision, NYU, USA. He has a PhD in Computational Neuroscience and Machine Learning from the Gatsby Computational Neuroscience Unit, UCL, UK, and an M.Sci. in Natural Sciences (specialism Physics) from the University of Cambridge, UK. His research interests include machine learning, signal processing, and developing probabilistic models of perception.

Adrian Weller (University of Cambridge, Alan Turing Institute)

Adrian Weller is a Senior Research Fellow in the Machine Learning Group at the University of Cambridge, a Faculty Fellow at the Alan Turing Institute for data science, and an Executive Fellow at the Leverhulme Centre for the Future of Intelligence (CFI). He is interested in all aspects of artificial intelligence, its commercial applications, and how it may be used to benefit society. At the CFI, he leads the project on Trust and Transparency. Previously, Adrian held senior roles in finance. He received a PhD in computer science from Columbia University and an undergraduate degree in mathematics from Trinity College, Cambridge.
