Oral
Differentiable Compositional Kernel Learning for Gaussian Processes
Shengyang Sun · Guodong Zhang · Chaoqi (Alec) Wang · Wenyuan Zeng · Jiaman Li · Roger Grosse

Wed Jul 11 05:10 AM -- 05:20 AM (PDT) @ A4

The generalization properties of Gaussian processes depend heavily on the choice of kernel, and this choice remains a dark art. We present the Neural Kernel Network (NKN), a flexible family of kernels represented by a neural network. The NKN’s architecture is based on the composition rules for kernels, so that each unit of the network corresponds to a valid kernel. It can compactly approximate compositional kernel structures such as those used by the Automatic Statistician (Lloyd et al., 2014), but because the architecture is differentiable, it is end-to-end trainable with gradient-based optimization. We show that the NKN is universal for the class of stationary kernels. Empirically we demonstrate NKN’s pattern discovery and extrapolation abilities on several tasks that depend crucially on identifying the underlying structure, including time series and texture extrapolation, as well as Bayesian optimization.
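To make the composition idea concrete, below is a minimal sketch (not the authors' implementation; the kernel choices, layer shapes, and all names are illustrative assumptions). Nonnegative-weighted sums of kernels and elementwise products of kernels are themselves valid kernels, so stacking such layers yields a network in which every unit is a kernel:

    import numpy as np

    # Primitive kernels on 1-D inputs; hyperparameters are illustrative.
    def rbf(x1, x2, lengthscale=1.0):
        d = x1[:, None] - x2[None, :]
        return np.exp(-0.5 * (d / lengthscale) ** 2)

    def periodic(x1, x2, period=1.0, lengthscale=1.0):
        d = np.abs(x1[:, None] - x2[None, :])
        return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

    def linear_layer(kernels, log_weights):
        # Nonnegative-weighted sums of kernels are kernels;
        # exp() keeps weights positive while gradients stay unconstrained.
        w = np.exp(log_weights)
        return [sum(w[i, j] * K for j, K in enumerate(kernels))
                for i in range(w.shape[0])]

    def product_layer(kernels, pairs):
        # Elementwise (Schur) products of kernels are kernels.
        return [kernels[i] * kernels[j] for i, j in pairs]

    # Forward pass of a tiny two-layer kernel network.
    x = np.linspace(0.0, 5.0, 50)
    primitives = [rbf(x, x), periodic(x, x)]
    log_w = np.zeros((2, 2))  # trainable parameters in a real implementation
    hidden = linear_layer(primitives, log_w)
    K_out = product_layer(hidden, [(0, 1)])[0]  # composite kernel matrix

    # Symmetric PSD by construction, hence a valid GP kernel.
    assert np.allclose(K_out, K_out.T)

Because every operation above is differentiable in log_weights and the base-kernel hyperparameters, the whole composite kernel can in principle be trained end to end by gradient-based marginal-likelihood optimization, as the abstract describes.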

Author Information

Shengyang Sun (University of Toronto)
Guodong Zhang (University of Toronto)
Chaoqi (Alec) Wang (University of Toronto)
Wenyuan Zeng (University of Toronto, Uber ATG)
Jiaman Li (University of Toronto)
Roger Grosse (University of Toronto and Vector Institute)
