The explosion in workload complexity and the recent slowdown in Moore's law scaling call for new approaches towards efficient computing. Researchers are now beginning to apply recent advances in machine learning to software optimization, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network-based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.
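To make the n-gram analogy concrete, the sketch below frames prefetching as next-token prediction: an LSTM is trained to classify the next address *delta* out of a fixed vocabulary of frequently occurring deltas, much like next-word prediction in NLP. This is a minimal illustrative sketch, not the authors' implementation; the class name, vocabulary size, model dimensions, and the synthetic trace are assumptions made for the example.

```python
# Minimal sketch: LSTM over quantized address deltas (illustrative, not the paper's code).
import torch
import torch.nn as nn

class DeltaPrefetcher(nn.Module):
    def __init__(self, num_deltas=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_deltas, embed_dim)  # one id per common delta
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_deltas)     # logits over the next delta

    def forward(self, delta_ids):                          # (batch, seq_len) of delta ids
        h, _ = self.lstm(self.embed(delta_ids))
        return self.head(h)                                 # (batch, seq_len, num_deltas)

# Toy usage: predict each next delta from the preceding ones on a fake trace.
model = DeltaPrefetcher()
trace = torch.randint(0, 1000, (8, 32))                     # synthetic quantized delta trace
logits = model(trace[:, :-1])
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), trace[:, 1:].reshape(-1))
loss.backward()
```

Treating deltas (rather than raw addresses) as the vocabulary keeps the output space small enough for a softmax classifier, which is one way the prefetching problem can be mapped onto standard sequence models.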
Author Information
Milad Hashemi (Google)
Kevin Swersky (Google Brain)
Jamie Smith (Google)
Grant Ayers (Stanford)
Heiner Litz (UC Santa Cruz)
Jichuan Chang (Google)
Christos Kozyrakis (Stanford University)
Christos Kozyrakis is a Professor of Electrical Engineering & Computer Science at Stanford University. He received a BS degree from the University of Crete (Greece) and a PhD degree from the University of California at Berkeley (USA), both in Computer Science. He is a fellow of the ACM and the IEEE. He has received the ACM SIGARCH Maurice Wilkes award, the NSF Career Award, an IBM Faculty Award, the Okawa Foundation Research Grant, the Noyce Family Faculty Scholarship, and the Willard R. and Inez Kerr Bell Faculty Scholarship. Dr. Kozyrakis works on architectures, runtime environments, and programming models for parallel computer systems. At Berkeley, he developed the IRAM architecture, a novel media-processor system that combined vector processing with embedded DRAM technology. At Stanford, he led the Transactional Coherence and Consistency (TCC) project, which developed hardware and software mechanisms for programming with transactional memory. He also led the Raksha project, which developed practical hardware support and security policies to deter high-level and low-level security attacks against deployed software. Dr. Kozyrakis is currently working on resource-efficient computing for warehouse-scale datacenters.
Parthasarathy Ranganathan (Google, USA)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Learning Memory Access Patterns
  Thu. Jul 12th 04:15 -- 07:00 PM Room Hall B #91
More from the Same Authors
- 2021 Poster: Variational Data Assimilation with a Learned Inverse Observation Operator
  Thomas Frerix · Dmitrii Kochkov · Jamie Smith · Daniel Cremers · Michael Brenner · Stephan Hoyer
- 2021 Poster: Oops I Took A Gradient: Scalable Sampling for Discrete Distributions
  Will Grathwohl · Kevin Swersky · Milad Hashemi · David Duvenaud · Chris Maddison
- 2021 Spotlight: Variational Data Assimilation with a Learned Inverse Observation Operator
  Thomas Frerix · Dmitrii Kochkov · Jamie Smith · Daniel Cremers · Michael Brenner · Stephan Hoyer
- 2021 Oral: Oops I Took A Gradient: Scalable Sampling for Discrete Distributions
  Will Grathwohl · Kevin Swersky · Milad Hashemi · David Duvenaud · Chris Maddison
- 2020 Workshop: Graph Representation Learning and Beyond (GRL+)
  Petar Veličković · Michael M. Bronstein · Andreea Deac · Will Hamilton · Jessica Hamrick · Milad Hashemi · Stefanie Jegelka · Jure Leskovec · Renjie Liao · Federico Monti · Yizhou Sun · Kevin Swersky · Rex (Zhitao) Ying · Marinka Zitnik
- 2020 Poster: An Imitation Learning Approach for Cache Replacement
  Evan Liu · Milad Hashemi · Kevin Swersky · Parthasarathy Ranganathan · Junwhan Ahn
- 2020 Poster: Optimizing Long-term Social Welfare in Recommender Systems: A Constrained Matching Approach
  Martin Mladenov · Elliot Creager · Omer Ben-Porat · Kevin Swersky · Richard Zemel · Craig Boutilier
- 2019 Poster: Flexibly Fair Representation Learning by Disentanglement
  Elliot Creager · David Madras · Joern-Henrik Jacobsen · Marissa Weis · Kevin Swersky · Toniann Pitassi · Richard Zemel
- 2019 Oral: Flexibly Fair Representation Learning by Disentanglement
  Elliot Creager · David Madras · Joern-Henrik Jacobsen · Marissa Weis · Kevin Swersky · Toniann Pitassi · Richard Zemel