

Session

Other Applications 2


Thu 12 July 2:00 - 2:20 PDT

Learning Memory Access Patterns

Milad Hashemi · Kevin Swersky · Jamie Smith · Grant Ayers · Heiner Litz · Jichuan Chang · Christos Kozyrakis · Parthasarathy Ranganathan

The explosion in workload complexity and the recent slow-down in Moore's law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimization, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network-based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.
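The n-gram framing above can be made concrete with a toy table-based prefetcher: a bigram model over address *deltas* that predicts the next delta from the most recent one. The class and method names below are illustrative, not from the paper, and a real prefetcher (or the paper's RNN replacement) would operate on hardware-level traces rather than a Python list.

```python
from collections import defaultdict

class BigramDeltaPrefetcher:
    """Toy 'n-gram' prefetcher: predicts the next address delta from the
    single most recent delta, mirroring how table-based prefetchers can
    be viewed as n-gram models over a delta vocabulary."""

    def __init__(self):
        # counts[prev_delta][next_delta] -> frequency
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev_delta = None

    def observe(self, delta):
        if self.prev_delta is not None:
            self.counts[self.prev_delta][delta] += 1
        self.prev_delta = delta

    def predict(self):
        # Return the delta most often seen after the current one.
        nexts = self.counts.get(self.prev_delta)
        if not nexts:
            return None
        return max(nexts, key=nexts.get)

# Feed a simple strided access pattern: constant deltas of 8.
p = BigramDeltaPrefetcher()
for d in [8, 8, 8, 8]:
    p.observe(d)
print(p.predict())  # → 8: a stride prefetcher emerges from the counts
```

An RNN replaces the explicit count table with a learned hidden state, which lets it capture longer and interleaved histories than any fixed n-gram order.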

Thu 12 July 2:20 - 2:40 PDT

Geodesic Convolutional Shape Optimization

Pierre Baque · Edoardo Remelli · Francois Fleuret · Pascal Fua

Aerodynamic shape optimization has many industrial applications. Existing methods, however, are so computationally demanding that typical engineering practice is to either simply try a limited number of hand-designed shapes or restrict oneself to shapes that can be parameterized using only a few degrees of freedom. In this work, we introduce a new way to optimize complex shapes fast and accurately. To this end, we train Geodesic Convolutional Neural Networks to emulate a fluid-dynamics simulator. The key to making this approach practical is remeshing the original shape using a poly-cube map, which makes it possible to perform the computations on GPUs instead of CPUs. The neural net is then used to formulate an objective function that is differentiable with respect to the shape parameters, which can then be optimized using a gradient-based technique. This outperforms state-of-the-art methods by 5 to 20% for standard problems and, even more importantly, our approach applies to cases that previous methods cannot handle.
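The optimization loop described above can be sketched in miniature: a trained surrogate stands in for the expensive simulator, and shape parameters are updated by gradient descent on the surrogate's output. Here `drag_surrogate` is a hypothetical stand-in (a simple quadratic with a known minimum), and the gradient is taken by finite differences; the paper's GCNN surrogate is differentiated analytically via backpropagation instead.

```python
def drag_surrogate(params):
    # Hypothetical stand-in for a trained network mapping shape
    # parameters to predicted drag; minimum at (1.0, -0.5).
    x, y = params
    return (x - 1.0) ** 2 + 2.0 * (y + 0.5) ** 2

def grad(f, params, eps=1e-6):
    # Central finite-difference gradient, for illustration only.
    g = []
    for i in range(len(params)):
        p = list(params)
        p[i] += eps
        hi = f(p)
        p[i] -= 2 * eps
        lo = f(p)
        g.append((hi - lo) / (2 * eps))
    return g

# Gradient descent on the surrogate instead of re-running a simulator.
params = [0.0, 0.0]
for _ in range(200):
    g = grad(drag_surrogate, params)
    params = [p - 0.1 * gi for p, gi in zip(params, g)]
# params converges toward the surrogate's optimum (1.0, -0.5)
```

The point of the design is that each optimization step costs one cheap network evaluation rather than one expensive CFD run.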

Thu 12 July 2:40 - 2:50 PDT

AutoPrognosis: Automated Clinical Prognostic Modeling via Bayesian Optimization with Structured Kernel Learning

Ahmed M. Alaa · Mihaela van der Schaar

Clinical prognostic models derived from large-scale healthcare data can inform critical diagnostic and therapeutic decisions. To enable off-the-shelf usage of machine learning (ML) in prognostic research, we developed AUTOPROGNOSIS: a system for automating the design of predictive modeling pipelines tailored for clinical prognosis. AUTOPROGNOSIS optimizes ensembles of pipeline configurations efficiently using a novel batched Bayesian optimization (BO) algorithm that learns a low-dimensional decomposition of the pipelines' high-dimensional hyperparameter space in concurrence with the BO procedure. This is achieved by modeling the pipelines' performances as a black-box function with a Gaussian process prior, and modeling the "similarities" between the pipelines' baseline algorithms via a sparse additive kernel with a Dirichlet prior. Meta-learning is used to warm-start BO with external data from "similar" patient cohorts by calibrating the priors using an algorithm that mimics the empirical Bayes method. The system automatically explains its predictions by presenting the clinicians with logical association rules that link patients' features to predicted risk strata. We demonstrate the utility of AUTOPROGNOSIS using 10 major patient cohorts representing various aspects of cardiovascular patient care.
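The additive-kernel idea above can be illustrated with a minimal sketch: the high-dimensional hyperparameter space is covered by a weighted sum of low-dimensional RBF kernels, one per group of dimensions. In AUTOPROGNOSIS the groups and weights are learned (with a Dirichlet prior on the weights); here both are fixed by hand, and the function names are illustrative.

```python
import math

def rbf(u, v, lengthscale=1.0):
    # Standard squared-exponential kernel on a low-dimensional slice.
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2 / (2 * lengthscale ** 2))

def additive_kernel(x, y, groups, weights):
    # Sum of low-dimensional kernels over groups of coordinates;
    # each group models one cluster of "similar" baseline algorithms.
    return sum(w * rbf([x[i] for i in g], [y[i] for i in g])
               for g, w in zip(groups, weights))

# Two 4-D pipeline configurations, split into two 2-D groups.
x = [0.1, 0.9, 0.5, 0.2]
y = [0.1, 0.8, 0.4, 0.3]
groups = [(0, 1), (2, 3)]
weights = [0.5, 0.5]
k = additive_kernel(x, y, groups, weights)
```

Because each summand only sees a few coordinates, a Gaussian process with this kernel behaves like a sum of low-dimensional GPs, which is what makes BO tractable in the pipelines' high-dimensional space.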

Thu 12 July 2:50 - 3:00 PDT

TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service

Amartya Sanyal · Matt Kusner · Adria Gascon · Varun Kanade

Machine learning methods are widely used for a variety of prediction problems. Prediction as a service is a paradigm in which service providers with technological expertise and computational resources perform predictions for clients. However, data privacy severely restricts the applicability of such services unless measures are designed to keep client data private, even from the service provider. Equally important is minimizing the computation and communication required between client and server. Fully homomorphic encryption offers a way out: clients may encrypt their data, and the server may perform arithmetic computations directly on the ciphertexts. The main drawback of fully homomorphic encryption is the time required to evaluate large machine learning models on encrypted data. We combine several ideas from the machine learning literature, particularly work on quantization and sparsification of neural networks, with algorithmic tools to speed up and parallelize computation on encrypted data.
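One reason quantization helps, as referenced above, is that binarized weights turn expensive ciphertext multiplications into cheap sign flips and additions. The sketch below shows the plaintext arithmetic only; it is an illustrative binarization scheme, not the exact construction TAPAS uses, and the homomorphic layer is omitted entirely.

```python
def binarize(weights):
    # Map real-valued weights to {-1, +1}. Multiplying an (encrypted)
    # input by such a weight is just a negation, which is far cheaper
    # under homomorphic encryption than a full ciphertext multiply.
    return [1 if w >= 0 else -1 for w in weights]

def binary_dot(bin_weights, inputs):
    # With weights in {-1, +1}, the dot product reduces to signed
    # addition of the inputs -- no multiplications at all.
    return sum(x if w > 0 else -x for w, x in zip(bin_weights, inputs))

w = binarize([0.7, -1.2, 0.05, -0.3])   # -> [1, -1, 1, -1]
out = binary_dot(w, [2.0, 1.0, 3.0, 4.0])  # 2 - 1 + 3 - 4 = 0.0
```

Sparsification compounds the saving: zeroed weights drop their terms from the sum, so fewer homomorphic operations are needed per neuron.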