Exploration in Reinforcement Learning Workshop
Benjamin Eysenbach · Surya Bhupatiraju · Shixiang Gu · Harrison Edwards · Martha White · Pierre-Yves Oudeyer · Kenneth Stanley · Emma Brunskill

Sat Jun 15 08:30 AM -- 06:00 PM (PDT) @ Hall A
Event URL: https://sites.google.com/view/erl-2019/

Exploration is a key component of reinforcement learning (RL). While RL has begun to solve relatively simple tasks, current algorithms still struggle to complete complex ones. Existing algorithms often endlessly dither, failing to meaningfully explore their environments in search of high-reward states. If we hope to have agents autonomously learn increasingly complex tasks, they must be equipped with mechanisms for efficient exploration.
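As a concrete illustration of the undirected "dithering" mentioned above versus a directed alternative, here is a minimal bandit-style sketch. The function names and the particular count-based bonus are illustrative assumptions, not methods from the workshop:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Undirected exploration: with probability epsilon pick a uniformly
    random action ("dithering"), otherwise act greedily on value estimates."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def count_based(q_values, counts, bonus=1.0):
    """Directed exploration (illustrative): add an exploration bonus that
    shrinks as an action is tried more often, steering the agent toward
    under-explored actions instead of dithering uniformly."""
    return max(range(len(q_values)),
               key=lambda a: q_values[a] + bonus / (1 + counts[a]) ** 0.5)
```

Undirected strategies like epsilon-greedy can take time exponential in the task horizon to reach novel states, which is one motivation for the directed, bonus-based methods discussed at the workshop.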

The goal of this workshop is to present and discuss exploration in RL, including deep RL, evolutionary algorithms, real-world applications, and developmental robotics. Invited speakers will share their perspectives on efficient exploration, and researchers will share recent work in spotlight presentations and poster sessions.

Sat 9:00 a.m. - 9:30 a.m.
Doina Precup (Keynote)
Sat 9:30 a.m. - 10:00 a.m.
Spotlight Talks
Sat 10:00 a.m. - 11:00 a.m.

This is the first poster session and coffee break. All the papers will be presented at both poster sessions.

Adrien Ali Taiga, Aniket Deshmukh, Tabish Rashid, Jonathan Binas, Niko Yasui, Vitchyr Pong, Takahisa Imagawa, Jesse Clifton, Sid Mysore, Shi-Chun Tsai, Caleb Chuck, Giulia Vezzani, Hannes Bengt Eriksson
Sat 11:00 a.m. - 11:30 a.m.
Emo Todorov (Invited Talk)
Sat 11:30 a.m. - 12:00 p.m.
Best Paper Talks
Sat 12:00 p.m. - 12:30 p.m.
Pieter Abbeel (Invited Talk)
Sat 12:30 p.m. - 2:00 p.m.
Sat 2:00 p.m. - 2:30 p.m.
Raia Hadsell (Invited Talk)
Sat 2:30 p.m. - 3:00 p.m.
Lightning Talks
Sat 3:00 p.m. - 4:00 p.m.

This is the second poster session and coffee break. All the papers will be presented at both poster sessions.

Sat 4:00 p.m. - 4:30 p.m.
Martha White - Adapting Behaviour via Intrinsic Rewards to Learn Predictions (Invited Talk)
Sat 4:30 p.m. - 5:30 p.m.

We will hold a panel on exploration with panelists Martha White, Jeff Clune, Pulkit Agrawal, and Pieter Abbeel, moderated by Doina Precup.

Author Information

Benjamin Eysenbach (CMU, Google Brain)
Surya Bhupatiraju (Google Brain)
Shixiang Gu (Google)
Harrison Edwards (OpenAI / University of Edinburgh)
Martha White (University of Alberta)
Pierre-Yves Oudeyer (Inria)

Dr. Pierre-Yves Oudeyer is Research Director (DR1) at Inria and head of the Inria and Ensta-ParisTech FLOWERS team (France). Previously, he was a permanent researcher at Sony Computer Science Laboratory for eight years (1999-2007). After working on computational models of language evolution, he now works on developmental and social robotics, focusing on sensorimotor development, language acquisition, and life-long learning in robots. Strongly inspired by infant development, the mechanisms he studies include artificial curiosity, intrinsic motivation, the role of morphology in learning motor control, human-robot interfaces, joint attention and joint intentional understanding, and imitation learning. He has published a book and more than 80 papers in international journals and conferences, holds 8 patents, has given several invited keynote lectures at international conferences, and has received several prizes for his work in developmental robotics and on the origins of language. In particular, he is a laureate of the ERC Starting Grant EXPLORERS. He is editor of the IEEE CIS Newsletter on Autonomous Mental Development, and associate editor of IEEE Transactions on Autonomous Mental Development, Frontiers in Neurorobotics, and the International Journal of Social Robotics. He also works actively on the diffusion of science to the general public, through popular science articles and participation in radio and TV programs as well as science exhibitions. Web: http://www.pyoudeyer.com and http://flowers.inria.fr

Ken Stanley (Uber AI and University of Central Florida)

Kenneth O. Stanley leads a research team at OpenAI on the challenge of open-endedness. He was previously Charles Millican Professor of Computer Science at the University of Central Florida and was also a co-founder of Geometric Intelligence Inc., which was acquired by Uber to create Uber AI Labs, where he was head of Core AI research. He received a B.S.E. from the University of Pennsylvania in 1997 and a Ph.D. in 2004 from the University of Texas at Austin. He is an inventor of the NeuroEvolution of Augmenting Topologies (NEAT), HyperNEAT, novelty search, and POET algorithms, as well as the CPPN representation, among many others. His main research contributions are in neuroevolution (i.e., evolving neural networks), generative and developmental systems, coevolution, machine learning for video games, interactive evolution, quality diversity, and open-endedness. He has won best paper awards for his work on NEAT, NERO, NEAT Drummer, FSMC, HyperNEAT, novelty search, Galactic Arms Race, and POET. His original 2002 paper on NEAT also received the 2017 ISAL Award for Outstanding Paper of the Decade 2002-2012 from the International Society for Artificial Life. He is a coauthor of the popular science book "Why Greatness Cannot Be Planned: The Myth of the Objective" (published by Springer), and has spoken widely on its subject.

Emma Brunskill (Stanford University)
