Oceanographers are interested in predicting ocean currents and identifying divergences in a current vector field based on sparse observations of buoy velocities. Since we expect current dynamics to be smooth but highly non-linear, Gaussian processes (GPs) offer an attractive model. But we show that applying a GP with a standard stationary kernel directly to buoy data can struggle at both current prediction and divergence identification, due to physically unrealistic prior assumptions. To better reflect known physical properties of currents, we propose to instead put a standard stationary kernel on the divergence-free and curl-free components of a vector field obtained through a Helmholtz decomposition. We show that, because this decomposition relates to the original vector field via mixed partial derivatives, we can still perform inference given the original data with only a small constant multiple of additional computational expense. We illustrate the benefits of our method on synthetic and real ocean data.
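To make the construction concrete, the sketch below (not the authors' implementation) shows one way to build the induced velocity-field kernel in JAX: independent scalar GPs with squared-exponential kernels are placed on the curl-free potential Phi and the stream function Psi, and the 2x2 covariance between velocity vectors is obtained from mixed partial derivatives of those scalar kernels. All function names, kernel choices, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of a Helmholtz GP prior over 2D velocities, assuming
# independent RBF-kernel GPs on the two scalar potentials. Illustrative only.
import jax
import jax.numpy as jnp

def rbf(x, xp, lengthscale=1.0, variance=1.0):
    # Scalar squared-exponential kernel on 2D locations (hyperparameters assumed).
    return variance * jnp.exp(-0.5 * jnp.sum((x - xp) ** 2) / lengthscale ** 2)

def cross_hessian(k):
    # Matrix of mixed partials: H[i, j] = d^2 k(x, x') / (dx_i dx'_j).
    return jax.jacfwd(jax.grad(k, argnums=0), argnums=1)

# 90-degree rotation that turns grad(Psi) into the divergence-free field rot(Psi).
R = jnp.array([[0.0, 1.0], [-1.0, 0.0]])

def helmholtz_kernel(x, xp, k_phi=rbf, k_psi=rbf):
    # 2x2 covariance between velocity vectors F(x) and F(x'), where
    # F = grad(Phi) + rot(Psi): Phi contributes the curl-free component,
    # Psi the divergence-free component, each with an independent scalar GP prior.
    grad_part = cross_hessian(k_phi)(x, xp)            # covariance of grad(Phi)
    rot_part = R @ cross_hessian(k_psi)(x, xp) @ R.T   # covariance of rot(Psi)
    return grad_part + rot_part

def gram(X):
    # Full 2N x 2N prior covariance over N buoy locations X of shape (N, 2).
    blocks = jax.vmap(lambda a: jax.vmap(lambda b: helmholtz_kernel(a, b))(X))(X)
    N = X.shape[0]
    return blocks.transpose(0, 2, 1, 3).reshape(2 * N, 2 * N)

# Example: prior covariance for three (hypothetical) buoy locations.
X = jnp.array([[0.0, 0.0], [1.0, 0.5], [2.0, -1.0]])
K = gram(X)  # shape (6, 6)
```

Because the velocity kernel is just derivatives of scalar kernels, standard GP regression on the observed buoy velocities can proceed with this Gram matrix, at a small constant-factor overhead relative to a direct GP on the velocities.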
Author Information
Renato Berlinghieri (Massachusetts Institute of Technology)
Brian Trippe (Columbia University)
David Burt (Massachusetts Institute of Technology)
Ryan Giordano (Massachusetts Institute of Technology)
Kaushik Srinivasan
Tamay Özgökmen
Junfei Xia
Tamara Broderick (Massachusetts Institute of Technology)
More from the Same Authors
- 2021: High-Dimensional Variable Selection and Non-Linear Interaction Discovery in Linear Time
  Raj Agrawal · Tamara Broderick
- 2023: Practical and Asymptotically Exact Conditional Sampling in Diffusion Models
  Brian Trippe · Luhuan Wu · Christian Naesseth · David Blei · John Cunningham
- 2023 Poster: SE(3) diffusion model with application to protein backbone generation
  Jason Yim · Brian Trippe · Valentin De Bortoli · Emile Mathieu · Arnaud Doucet · Regina Barzilay · Tommi Jaakkola
- 2021: High-Dimensional Variable Selection and Non-Linear Interaction Discovery in Linear Time
  Tamara Broderick · Raj Agrawal
- 2021 Poster: Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients
  Artem Artemev · David Burt · Mark van der Wilk
- 2021 Poster: Finite mixture models do not reliably learn the number of components
  Diana Cai · Trevor Campbell · Tamara Broderick
- 2021 Spotlight: Finite mixture models do not reliably learn the number of components
  Diana Cai · Trevor Campbell · Tamara Broderick
- 2021 Oral: Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients
  Artem Artemev · David Burt · Mark van der Wilk
- 2019 Oral: Rates of Convergence for Sparse Variational Gaussian Process Regression
  David Burt · Carl E Rasmussen · Mark van der Wilk
- 2019 Poster: The Kernel Interaction Trick: Fast Bayesian Discovery of Pairwise Interactions in High Dimensions
  Raj Agrawal · Brian Trippe · Jonathan Huggins · Tamara Broderick
- 2019 Oral: The Kernel Interaction Trick: Fast Bayesian Discovery of Pairwise Interactions in High Dimensions
  Raj Agrawal · Brian Trippe · Jonathan Huggins · Tamara Broderick
- 2019 Poster: Rates of Convergence for Sparse Variational Gaussian Process Regression
  David Burt · Carl E Rasmussen · Mark van der Wilk
- 2019 Poster: LR-GLM: High-Dimensional Bayesian Inference Using Low-Rank Data Approximations
  Brian Trippe · Jonathan Huggins · Raj Agrawal · Tamara Broderick
- 2019 Oral: LR-GLM: High-Dimensional Bayesian Inference Using Low-Rank Data Approximations
  Brian Trippe · Jonathan Huggins · Raj Agrawal · Tamara Broderick
- 2018 Poster: Bayesian Coreset Construction via Greedy Iterative Geodesic Ascent
  Trevor Campbell · Tamara Broderick
- 2018 Poster: Minimal I-MAP MCMC for Scalable Structure Discovery in Causal DAG Models
  Raj Agrawal · Caroline Uhler · Tamara Broderick
- 2018 Oral: Minimal I-MAP MCMC for Scalable Structure Discovery in Causal DAG Models
  Raj Agrawal · Caroline Uhler · Tamara Broderick
- 2018 Oral: Bayesian Coreset Construction via Greedy Iterative Geodesic Ascent
  Trevor Campbell · Tamara Broderick
- 2018 Tutorial: Variational Bayes and Beyond: Bayesian Inference for Big Data
  Tamara Broderick