Talk
Neural Message Passing for Quantum Chemistry
Justin Gilmer · Samuel Schoenholz · Patrick F Riley · Oriol Vinyals · George Dahl

Wed Aug 9th 04:24 -- 04:42 PM @ Darling Harbour Theatre

Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels.
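The abstract's message-passing-plus-aggregation pattern can be sketched in a few lines. This is an illustrative NumPy sketch of the general MPNN idea, not the paper's specific variant: the linear message function, tanh update, and sum readout are assumptions chosen for brevity, and all names (`mpnn_forward`, `w_msg`, etc.) are hypothetical.

```python
import numpy as np

def mpnn_forward(node_feats, adjacency, w_msg, w_update, w_readout, num_steps=3):
    """Hypothetical MPNN sketch: message passing, node updates, then a
    permutation-invariant sum readout over all nodes (illustration only)."""
    h = node_feats
    for _ in range(num_steps):
        # Message phase: each node sums linear messages from its neighbors.
        messages = adjacency @ (h @ w_msg)
        # Update phase: combine current node state with aggregated messages.
        h = np.tanh(h @ w_update + messages)
    # Readout phase: summing over nodes makes the graph-level output
    # invariant to node ordering, matching the molecular-symmetry invariance
    # the abstract describes.
    return (h @ w_readout).sum(axis=0)
```

Because the readout sums over nodes, relabeling the atoms of a molecule (permuting rows of the feature matrix and rows/columns of the adjacency matrix together) leaves the output unchanged.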

Author Information

Justin Gilmer (Google Brain)
Samuel Schoenholz (Google Brain)
Patrick F Riley (Google)

Patrick Riley is a principal engineer at Google, where he has worked since 2005. He received his Ph.D. from Carnegie Mellon University, studying artificial intelligence in multi-agent systems. He currently works on Google Accelerated Science, where he collaborates with external scientists to apply Google's knowledge and experience in running complex algorithms over large data sets to important scientific problems. Previously, he led a number of efforts on the collection and analysis of user behavior in web search.

Oriol Vinyals (DeepMind)

Oriol Vinyals is a Research Scientist at Google. He works in deep learning with the Google Brain team. Oriol holds a Ph.D. in EECS from the University of California, Berkeley, and a Master's degree from the University of California, San Diego. He is a recipient of the 2011 Microsoft Research PhD Fellowship. He was an early adopter of the new deep learning wave at Berkeley, and in his thesis he focused on non-convex optimization and recurrent neural networks. At Google Brain he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, language, and vision.

George Dahl (Google Brain)
