Making predictions and quantifying their uncertainty when the input data is sequential is a fundamental learning challenge that has recently attracted increasing attention. We develop SigGPDE, a new scalable sparse variational inference framework for Gaussian Processes (GPs) on sequential data. Our contribution is twofold. First, we construct inducing variables underpinning the sparse approximation so that the resulting evidence lower bound (ELBO) does not require any matrix inversion. Second, we show that the gradients of the GP signature kernel are solutions of a hyperbolic partial differential equation (PDE). This theoretical insight allows us to build an efficient back-propagation algorithm to optimize the ELBO. We showcase the significant computational gains of SigGPDE compared to existing methods, while achieving state-of-the-art performance for classification tasks on large datasets of up to 1 million multivariate time series.
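The hyperbolic PDE in question builds on the result that the signature kernel k(s, t) of two paths x and y solves a Goursat problem, ∂²k/∂s∂t = ⟨ẋ(s), ẏ(t)⟩ k(s, t) with k(0, ·) = k(·, 0) = 1. Below is a minimal NumPy sketch of an explicit first-order finite-difference solve of this PDE for piecewise-linear paths; the function name `signature_kernel` and the discretization scheme are illustrative assumptions, not the paper's implementation, and the sketch covers only the forward kernel evaluation, not the gradient PDE that is SigGPDE's contribution.

```python
import numpy as np

def signature_kernel(x, y):
    """Sketch (assumption, not the paper's code): signature kernel of two
    piecewise-linear paths via the Goursat PDE
        d^2 k / ds dt = <dx/ds, dy/dt> * k,   k(0, .) = k(., 0) = 1,
    solved with an explicit first-order finite-difference scheme.

    x : (m, d) array of points on the first path
    y : (n, d) array of points on the second path
    """
    dx = np.diff(x, axis=0)      # path increments, shape (m-1, d)
    dy = np.diff(y, axis=0)      # path increments, shape (n-1, d)
    C = dx @ dy.T                # C[i, j] = <dx_i, dy_j>, absorbs the ds*dt factor
    m, n = C.shape
    k = np.ones((m + 1, n + 1))  # boundary condition: kernel equals 1 on the axes
    for i in range(m):
        for j in range(n):
            # first-order Goursat update on one grid cell
            k[i + 1, j + 1] = k[i + 1, j] + k[i, j + 1] + k[i, j] * (C[i, j] - 1.0)
    return k[-1, -1]             # kernel evaluated at the paths' endpoints

# Toy usage: two random 3-dimensional time series of different lengths
rng = np.random.default_rng(0)
x = rng.standard_normal((50, 3)).cumsum(axis=0) * 0.1
y = rng.standard_normal((80, 3)).cumsum(axis=0) * 0.1
print(signature_kernel(x, y))
```

One could differentiate through such a solver with automatic differentiation, but per the abstract SigGPDE instead obtains the kernel gradients as solutions of a second PDE of the same hyperbolic type, avoiding back-propagation through the solver's intermediate states.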
Author Information
Maud Lemercier (University of Warwick)
Cristopher Salvi (University of Oxford)
Thomas Cass (Imperial College London)
Edwin V Bonilla (CSIRO's Data61)
Theodoros Damoulas (University of Warwick & The Alan Turing Institute)
Terry Lyons (University of Oxford)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: SigGPDE: Scaling Sparse Gaussian Processes on Sequential Data
  Thu. Jul 22nd, 04:00 -- 06:00 PM
More from the Same Authors
- 2023 Poster: Transformed Distribution Matching for Missing Value Imputation
  He Zhao · Ke Sun · Amir Dezfouli · Edwin V Bonilla
- 2023 Poster: Free-Form Variational Inference for Gaussian Process State-Space Models
  Xuhui Fan · Edwin V Bonilla · Terence O'Kane · Scott Sisson
- 2023 Poster: Sampling-based Nyström Approximation and Kernel Quadrature
  Satoshi Hayakawa · Harald Oberhauser · Terry Lyons
- 2022 Poster: Learning Efficient and Robust Ordinary Differential Equations via Invertible Neural Networks
  Weiming Zhi · Tin Lai · Lionel Ott · Edwin V Bonilla · Fabio Ramos
- 2022 Spotlight: Learning Efficient and Robust Ordinary Differential Equations via Invertible Neural Networks
  Weiming Zhi · Tin Lai · Lionel Ott · Edwin V Bonilla · Fabio Ramos
- 2022 Poster: Optimizing Sequential Experimental Design with Deep Reinforcement Learning
  Tom Blau · Edwin V Bonilla · Iadine Chades · Amir Dezfouli
- 2022 Spotlight: Optimizing Sequential Experimental Design with Deep Reinforcement Learning
  Tom Blau · Edwin V Bonilla · Iadine Chades · Amir Dezfouli
- 2021 Poster: Neural SDEs as Infinite-Dimensional GANs
  Patrick Kidger · James Foster · Xuechen Li · Terry Lyons
- 2021 Spotlight: Neural SDEs as Infinite-Dimensional GANs
  Patrick Kidger · James Foster · Xuechen Li · Terry Lyons
- 2021 Poster: Neural Rough Differential Equations for Long Time Series
  James Morrill · Cristopher Salvi · Patrick Kidger · James Foster
- 2021 Spotlight: Neural Rough Differential Equations for Long Time Series
  James Morrill · Cristopher Salvi · Patrick Kidger · James Foster
- 2021 Poster: BORE: Bayesian Optimization by Density-Ratio Estimation
  Louis Chi-Chun Tiao · Aaron Klein · Matthias W Seeger · Edwin V Bonilla · Cedric Archambeau · Fabio Ramos
- 2021 Poster: "Hey, that's not an ODE": Faster ODE Adjoints via Seminorms
  Patrick Kidger · Ricky T. Q. Chen · Terry Lyons
- 2021 Spotlight: "Hey, that's not an ODE": Faster ODE Adjoints via Seminorms
  Patrick Kidger · Ricky T. Q. Chen · Terry Lyons
- 2021 Oral: BORE: Bayesian Optimization by Density-Ratio Estimation
  Louis Chi-Chun Tiao · Aaron Klein · Matthias W Seeger · Edwin V Bonilla · Cedric Archambeau · Fabio Ramos
- 2020 Poster: Non-separable Non-stationary random fields
  Kangrui Wang · Oliver Hamelijnck · Theodoros Damoulas · Mark Steel
- 2018 Poster: Spatio-temporal Bayesian On-line Changepoint Detection with Model Selection
  Jeremias Knoblauch · Theodoros Damoulas
- 2018 Oral: Spatio-temporal Bayesian On-line Changepoint Detection with Model Selection
  Jeremias Knoblauch · Theodoros Damoulas