Poster
Graph Networks as Learnable Physics Engines for Inference and Control
Alvaro Sanchez-Gonzalez · Nicolas Heess · Jost Springenberg · Josh Merel · Martin Riedmiller · Raia Hadsell · Peter Battaglia

Wed Jul 11 09:15 AM -- 12:00 PM (PDT) @ Hall B #84

Understanding and interacting with everyday physical scenes requires rich knowledge about the structure of the world, represented either implicitly in a value or policy function, or explicitly in a transition model. Here we introduce a new class of learnable models--based on graph networks--which implement an inductive bias for object- and relation-centric representations of complex, dynamical systems. Our results show that as a forward model, our approach supports accurate predictions from real and simulated data, and surprisingly strong and efficient generalization, across eight distinct physical systems which we varied parametrically and structurally. We also found that our inference model can perform system identification. Our models are also differentiable, and support online planning via gradient-based trajectory optimization, as well as offline policy optimization. Our framework offers new opportunities for harnessing and exploiting rich knowledge about the world, and takes a key step toward building machines with more human-like representations of the world.
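Since this page gives only the abstract, the following is a minimal, self-contained sketch of the kind of graph-network forward step the abstract describes: bodies as nodes, joints as edges, an edge update followed by message aggregation and a node update that predicts each body's next state. It is written in plain NumPy with random linear maps standing in for the paper's learned MLPs; all names, dimensions, and update functions here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    """Stand-in for a learned MLP: a fixed random linear map plus ReLU."""
    W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return lambda x: np.maximum(x @ W, 0.0)

NODE_DIM, EDGE_DIM, HIDDEN = 6, 3, 16

# A toy 3-body chain: nodes carry per-body state (e.g. position, velocity),
# edges carry per-joint attributes. Connectivity is given by sender/receiver
# index arrays, as is standard for graph networks.
nodes = rng.normal(size=(3, NODE_DIM))
edges = rng.normal(size=(2, EDGE_DIM))
senders = np.array([0, 1])     # edge i connects senders[i] -> receivers[i]
receivers = np.array([1, 2])

edge_fn = mlp(EDGE_DIM + 2 * NODE_DIM, HIDDEN)  # phi_e(edge, v_sender, v_receiver)
node_fn = mlp(NODE_DIM + HIDDEN, NODE_DIM)      # phi_v(node, aggregated messages)

def gn_forward_step(nodes, edges):
    # 1. Update each edge from its attributes and its endpoint nodes.
    edge_inputs = np.concatenate(
        [edges, nodes[senders], nodes[receivers]], axis=-1)
    messages = edge_fn(edge_inputs)
    # 2. Aggregate incoming messages at each receiving node (sum).
    agg = np.zeros((nodes.shape[0], HIDDEN))
    np.add.at(agg, receivers, messages)
    # 3. Update each node; interpret the output as a predicted state delta.
    delta = node_fn(np.concatenate([nodes, agg], axis=-1))
    return nodes + delta

next_nodes = gn_forward_step(nodes, edges)
print(next_nodes.shape)  # (3, NODE_DIM): predicted next state per body
```

Because every step above is differentiable with respect to the node and edge inputs, a learned version of this forward model can be unrolled and used for the gradient-based trajectory optimization the abstract mentions.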

Author Information

Alvaro Sanchez-Gonzalez (DeepMind)
Nicolas Heess (DeepMind)
Jost Springenberg (DeepMind)
Josh Merel (DeepMind)
Martin Riedmiller (DeepMind)
Raia Hadsell (DeepMind)

Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for over 10 years. Her early research developed the notion of manifold learning using Siamese networks, which has been used extensively for invariant feature learning. After completing a PhD with Yann LeCun, which featured a self-supervised deep learning vision system for a mobile robot, her research continued at Carnegie Mellon’s Robotics Institute and SRI International, and in early 2014 she joined DeepMind in London to study artificial general intelligence. Her current research focuses on the challenge of continual learning for AI agents and robotic systems. While deep RL algorithms are capable of attaining superhuman performance on single tasks, they cannot transfer that performance to additional tasks, especially when tasks are experienced sequentially. She has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to address catastrophic forgetting and improve transfer learning.

Peter Battaglia (DeepMind)
