
Genomics, Big Data, and Machine Learning: Understanding the Human Wiring Diagram and Driving the Healthcare Revolution

Peter Donnelly (University of Oxford/Genomics Plc)

 


Donnelly is Director of the Wellcome Trust Centre for Human Genetics and Professor of Statistical Science at the University of Oxford, and CEO of Genomics Plc. He grew up in Australia and, after graduating from the University of Queensland, studied for a doctorate at Oxford as a Rhodes Scholar. He held professorships at the Universities of London and Chicago before returning to Oxford in 1996. Peter's early research concerned the development of stochastic models in population genetics, including the coalescent, and later the development of statistical methods for genetic and genomic data. His group developed several widely used statistical algorithms, including STRUCTURE and PHASE, and, in collaboration with colleagues in Oxford, IMPUTE. His current research focuses on understanding the genetic basis of human diseases, human demographic history, and the mechanisms involved in meiosis and recombination.

Peter played a major role in the HapMap project and, more recently, chaired the Wellcome Trust Case Control Consortium (WTCCC) and its successor, WTCCC2, a large international collaboration studying the genetic basis of more than 20 common human diseases and conditions in over 60,000 people. He also led WGS500, an Oxford collaboration with Illumina to sequence 500 individuals with a range of clinical conditions to assess the short-term potential for whole-genome sequencing in clinical medicine, a precursor to the NHS 100,000 Genomes Project. Peter is a Fellow of the Royal Society and of the Academy of Medical Sciences, and an Honorary Fellow of the Institute of Actuaries. He has received numerous awards and honours for his research.

Abstract: Each of our cells carries two copies of our genome, the 3 billion letters of DNA that serve as their instruction manual. The costs of sequencing (reading) a human genome have decreased by more than six orders of magnitude over the last 10-15 years. Globally, perhaps 100,000 whole genomes have been sequenced, with a clear short-term path to several million. In 10-15 years a billion human genomes will have been sequenced, with many of those sequences linked to extensive information about the individuals, from their medical records and wearable devices. The availability of extensive genetic information linked to information about health outcomes and other traits in very large numbers of individuals presents an extraordinary opportunity. Combining genomic information with biological and health measurements on individuals will improve our ability to assess individual health risks, predict outcomes, and personalise medical treatment. But crucially, and perhaps uniquely, genetics also offers the possibility of unravelling causality amongst otherwise highly correlated features. The resulting much deeper understanding of human biology will have a big impact on drug discovery and healthcare delivery. DNA sequence data from different individuals has a complex correlation structure due to our shared evolutionary history. Inference methods that model these correlations have been very successful to date, but the explosion in the scale and nature of available data will require novel approaches. The talk will illustrate the opportunities and challenges in applying ML and other inference tools to genomic data by walking through specific examples. No previous knowledge of genetics will be necessary.
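A minimal sketch (not from the talk) of the point about genetics and causality: because genetic variants are fixed at conception, they are unaffected by later confounders and can act as instruments to separate a causal effect from mere correlation, in the spirit of Mendelian randomisation. The variable names, effect sizes, and data below are purely illustrative and simulated.

```python
# Illustrative only: a genetic variant G used as an instrument to recover
# the causal effect of an exposure X on an outcome Y despite confounding.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# G: genotype (0/1/2 copies of a risk allele), assigned at conception,
# hence independent of the unobserved confounder U.
g = rng.binomial(2, 0.3, size=n)
u = rng.normal(size=n)                       # hidden confounder
x = 0.5 * g + 1.0 * u + rng.normal(size=n)   # exposure (e.g. a biomarker)
y = 0.2 * x + 1.0 * u + rng.normal(size=n)   # outcome; true causal effect is 0.2

# Naive regression of Y on X is biased upwards by the confounder U.
naive = np.cov(x, y)[0, 1] / np.var(x)

# Wald ratio: (effect of G on Y) / (effect of G on X) recovers the causal
# effect, because G is not confounded with Y.
wald = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]

print(f"naive estimate ~ {naive:.2f}, instrumented estimate ~ {wald:.2f}")
```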


How AI Designers will Dictate Our Civic Future

Latanya Sweeney (Harvard University)

 

As Professor of Government and Technology in Residence at Harvard University, my mission is to create and use technology to assess and solve societal, political, and governance problems, and to teach others how to do the same. One focus area is the scientific study of technology's impact on humankind, and I am the Editor-in-Chief of Technology Science. Another focus area is data privacy, and I am the Director of the Data Privacy Lab at Harvard. There are other foci too.

I was formerly the Chief Technology Officer, also called the Chief Technologist, at the U.S. Federal Trade Commission (FTC). It was a fantastic experience! I thank Chairwoman Ramirez for appointing me. One of my goals was to make it easier for others to work on innovative solutions at the intersection of technology, policy and business. Often, I thought of my past students, who primarily came from computer science or governance backgrounds, and who were highly motivated to change the world. I would like to see society harness their energy and get others thinking about innovative solutions to pressing problems. During my time there, I launched the summer research fellows program and blogged on Tech@FTC to facilitate explorations and ignite brainstorming on FTC-related topics.

Abstract: Technology designers are the new policymakers. No one elected them, and most people do not know their names, but the decisions they make when producing the latest gadgets and online innovations dictate the code by which we conduct our daily lives and govern our country. Challenges to the privacy and security of our personal data are part of the first wave of this change; as technology progresses, says Latanya Sweeney, every democratic value and every law comes up for grabs and will likely be redefined by what technology does or does not enable. How will it all fit together or fall apart? Join Sweeney, who, after serving as chief technology officer at the U.S. Federal Trade Commission, has been helping others unearth unforeseen consequences and brainstorm on how to engineer the way forward.


Towards Reinforcement Learning in the Real World

Raia Hadsell (DeepMind)

Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for over 10 years. Her early research developed the notion of manifold learning using Siamese networks, which has been used extensively for invariant feature learning. After completing a PhD with Yann LeCun, which featured a self-supervised deep learning vision system for a mobile robot, her research continued at Carnegie Mellon’s Robotics Institute and SRI International, and in early 2014 she joined DeepMind in London to study artificial general intelligence. Her current research focuses on the challenge of continual learning for AI agents and robotic systems. While deep RL algorithms are capable of attaining superhuman performance on single tasks, they cannot transfer that performance to additional tasks, especially if experienced sequentially. She has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting and improve transfer learning.
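As a rough illustration of the elastic weight consolidation idea mentioned above, the sketch below computes the quadratic penalty that discourages parameters deemed important for an earlier task from drifting while a new task is learned. The parameter values and Fisher estimates are illustrative placeholders, not DeepMind's implementation.

```python
# Illustrative sketch of the EWC penalty: (lambda/2) * sum_i F_i * (theta_i - theta*_A,i)^2
import numpy as np

def ewc_penalty(theta, theta_star_A, fisher_A, lam=1.0):
    """Quadratic penalty anchoring theta to the task-A solution, weighted
    by the (diagonal) Fisher information, i.e. parameter importance."""
    return 0.5 * lam * np.sum(fisher_A * (theta - theta_star_A) ** 2)

theta_star_A = np.array([0.8, -1.2, 0.3])   # parameters after training on task A
fisher_A     = np.array([5.0,  0.1, 2.0])   # estimated importance of each parameter
theta        = np.array([0.5, -0.2, 0.4])   # current parameters while training on task B

# The total objective on task B would be: task_B_loss(theta) + ewc_penalty(...)
print(ewc_penalty(theta, theta_star_A, fisher_A, lam=10.0))
```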

Abstract: Deep reinforcement learning has rapidly grown as a research field with far-reaching potential for artificial intelligence. A large set of ATARI games has been used as the main benchmark domain for many fundamental developments. As the field matures, it is important to develop more sophisticated learning systems with the aim of solving more complex tasks. I will describe some recent research from DeepMind that allows end-to-end learning in challenging environments with real-world variability and complex task structure.

Causal Learning

Bernhard Schölkopf (Max Planck Institute for Intelligent Systems)

Bernhard Schölkopf's scientific interests are in machine learning and causal inference. He has applied his methods to a number of different application areas, ranging from biomedical problems to computational photography and astronomy. Bernhard did research at AT&T Bell Labs, at GMD FIRST in Berlin, and at Microsoft Research Cambridge, UK, before becoming a Max Planck director in 2001. He is a member of the German Academy of Sciences (Leopoldina), and has received the J.K. Aggarwal Prize of the International Association for Pattern Recognition, the Max Planck Research Award (shared with S. Thrun), the Academy Prize of the Berlin-Brandenburg Academy of Sciences and Humanities, and the Royal Society Milner Award.

Abstract: In machine learning, we use data to automatically find dependences in the world, with the goal of predicting future observations. Most machine learning methods build on statistics, but one can also try to go beyond this, assaying causal structures underlying statistical dependences. Can such causal knowledge help prediction in machine learning tasks? We argue that it can, because causal models are more robust to the changes that occur in real-world datasets. We discuss implications of causality for machine learning tasks, and argue that many of the hard issues benefit from the causal viewpoint. These include domain adaptation, semi-supervised learning, transfer, life-long learning, and fairness, as well as an application to the removal of systematic errors in astronomical problems.
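A toy sketch (not from the talk) of the robustness claim: a model fit in the causal direction reuses the same mechanism when the input distribution changes, whereas a model fit in the anticausal direction does not. The structural equations and numbers below are illustrative, with simulated data.

```python
# Illustrative only: causal vs anticausal regression under a shift in P(X),
# for the structural causal model X -> Y with Y := 2X + noise.
import numpy as np

rng = np.random.default_rng(1)

def fit_slope(a, b):
    """Least-squares slope for predicting b from a."""
    return np.cov(a, b)[0, 1] / np.var(a)

def sample(n, x_mean, x_std):
    # The mechanism mapping X to Y is fixed; only the distribution of X varies.
    x = rng.normal(x_mean, x_std, size=n)
    y = 2.0 * x + rng.normal(0.0, 1.0, size=n)
    return x, y

x1, y1 = sample(50_000, 0.0, 1.0)   # training environment
x2, y2 = sample(50_000, 3.0, 0.5)   # shifted environment (intervention on X)

# The causal model (Y given X) keeps its slope of ~2 in both environments;
# the anticausal model (X given Y) changes when P(X) changes.
print("causal  Y|X slope:", fit_slope(x1, y1), "->", fit_slope(x2, y2))
print("anticausal X|Y slope:", fit_slope(y1, x1), "->", fit_slope(y2, x2))
```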