Iain Murray is a SICSA Lecturer in Machine Learning at the University of Edinburgh. Iain was introduced to machine learning by David MacKay and Zoubin Ghahramani, both previous NIPS tutorial speakers. He obtained his PhD in 2007 from the Gatsby Computational Neuroscience Unit at UCL. His thesis on Monte Carlo methods received an honourable mention for the ISBA Savage Award. He was a Commonwealth Fellow in Machine Learning at the University of Toronto, before moving to Edinburgh in 2010.
Iain's research interests include building flexible probabilistic models of data, and probabilistic inference from indirect and uncertain observations. Iain is passionate about teaching. He has lectured at several summer schools, is listed among the top 15 authors on videolectures.net, and was awarded the EUSA Van Heyningen Award for Teaching in Science and Engineering in 2015.
Maria-Florina Balcan is an Associate Professor in the School of Computer Science at Carnegie Mellon University. Her main research interests are machine learning and theoretical computer science. Her honors include the CMU SCS Distinguished Dissertation Award, an NSF CAREER Award, a Microsoft Faculty Research Fellowship, a Sloan Research Fellowship, and several paper awards. She has served as a Program Committee Co-chair for COLT 2014, a Program Committee Co-chair for ICML 2016, and a board member of the International Machine Learning Society.
Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for over 10 years. Her early research developed the notion of manifold learning using Siamese networks, which has been used extensively for invariant feature learning. After completing a PhD with Yann LeCun, which featured a self-supervised deep learning vision system for a mobile robot, she continued her research at Carnegie Mellon’s Robotics Institute and SRI International, and in early 2014 she joined DeepMind in London to study artificial general intelligence. Her current research focuses on the challenge of continual learning for AI agents and robotic systems. While deep RL algorithms can attain superhuman performance on single tasks, they cannot transfer that performance to additional tasks, especially if the tasks are experienced sequentially. She has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting and improve transfer learning.