Oral
On Variational Bounds of Mutual Information
Ben Poole · Sherjil Ozair · Aäron van den Oord · Alexander Alemi · George Tucker

Thu Jun 13 04:40 PM -- 05:00 PM (PDT) @ Grand Ballroom

Estimating, minimizing, and maximizing mutual information (MI) is core to many objectives in machine learning, but tractably bounding MI in high dimensions is challenging. To establish tractable and scalable objectives, recent work has turned to variational bounds parameterized by neural networks (Alemi et al., 2016; Belghazi et al., 2018; van den Oord et al., 2018). However, the relationships and tradeoffs between these bounds remain unclear. In this work, we unify these recent developments in a single framework. We find that the existing variational lower bounds degrade when the MI is large, exhibiting either high bias or high variance. To address this problem, we introduce a continuum of lower bounds that encompasses previous bounds and flexibly trades off bias and variance. On a suite of high-dimensional, controlled problems, we empirically characterize the bias and variance of both the bounds and their gradients and demonstrate the effectiveness of these new bounds for estimation and representation learning.
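For readers unfamiliar with the bounds the abstract refers to, the sketch below is a minimal NumPy illustration of the InfoNCE lower bound (van den Oord et al., 2018), one of the bounds unified in this work. It is not the authors' implementation; the function name `infonce_lower_bound` and the bilinear toy critic are assumptions made for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def infonce_lower_bound(scores):
    """InfoNCE estimate of a lower bound on I(X;Y).

    scores is a [K, K] critic matrix with scores[i, j] = f(x_i, y_j),
    where (x_i, y_i) are paired samples and off-diagonal columns act
    as negatives. The estimate is capped at log K, which is one source
    of the high-MI bias the abstract describes.
    """
    K = scores.shape[0]
    # Per-row log-softmax of the positive pair against all K candidates.
    log_softmax_diag = np.diag(scores) - logsumexp(scores, axis=1)
    return np.log(K) + log_softmax_diag.mean()

# Toy usage (illustrative): correlated Gaussians with a bilinear critic
# f(x, y) = x * y, which captures the cross term of the optimal critic.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.9 * x + np.sqrt(1 - 0.9**2) * rng.normal(size=1000)
scores = np.outer(x, y)  # scores[i, j] = x_i * y_j
# In expectation this lower-bounds the true MI, -0.5 * log(1 - 0.9^2).
print(infonce_lower_bound(scores))
```

The log K ceiling visible in the docstring is why the paper reports that InfoNCE is low-variance but biased when the true MI is large; the continuum of bounds introduced in the paper trades this bias against variance.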

Author Information

Ben Poole (Google Brain)
Sherjil Ozair (University of Montreal)
Aäron van den Oord (Google DeepMind)
Alexander Alemi (Google)
George Tucker (Google Brain)
