We propose a hierarchical generative model that captures the self-similar structure of image regions as well as how this structure is shared across image collections. Our model is based on a novel, variational interpretation of the popular expected patch log-likelihood (EPLL) method as a model for randomly positioned grids of image patches. While previous EPLL methods modeled image patches with finite Gaussian mixtures, we use nonparametric Dirichlet process (DP) mixtures to create models whose complexity grows as additional images are observed. An extension based on the hierarchical DP then captures repetitive and self-similar structure via image-specific variations in cluster frequencies. We derive a structured variational inference algorithm that adaptively creates new patch clusters to more accurately model novel image textures. On standard benchmarks, our denoising performance is superior to EPLL and comparable to the state of the art, and our approach provides novel statistical justifications for common image processing heuristics. We also show accurate image inpainting results.
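As background for the EPLL interpretation above, the sketch below shows the basic quantity a finite-Gaussian-mixture EPLL prior computes: the summed log-likelihood of an image's patches under a patch-level mixture model. This is a minimal illustration only; the paper replaces the finite mixture with DP/HDP mixtures and averages over randomly shifted patch grids, neither of which is shown here. All function names, the toy image, and the two-component mixture parameters are hypothetical choices for this example.

```python
import numpy as np

def extract_patches(img, p=4, stride=4):
    """Extract p x p patches from one grid position as flattened vectors."""
    H, W = img.shape
    patches = [img[i:i + p, j:j + p].ravel()
               for i in range(0, H - p + 1, stride)
               for j in range(0, W - p + 1, stride)]
    return np.stack(patches)

def gmm_log_likelihood(X, weights, means, variances):
    """Per-row log-likelihood under a diagonal-covariance Gaussian mixture."""
    K = len(weights)
    log_probs = np.empty((X.shape[0], K))
    for k in range(K):
        diff = X - means[k]
        log_probs[:, k] = (np.log(weights[k])
                           - 0.5 * np.sum(np.log(2 * np.pi * variances[k]))
                           - 0.5 * np.sum(diff ** 2 / variances[k], axis=1))
    # Numerically stable log-sum-exp over the K mixture components.
    m = log_probs.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(log_probs - m).sum(axis=1, keepdims=True))).ravel()

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))                 # toy "image" (hypothetical data)
X = extract_patches(img)                        # 16 patches, each of dimension 16
D = X.shape[1]
weights = np.array([0.5, 0.5])                  # hypothetical 2-component mixture
means = np.zeros((2, D))
variances = np.stack([np.full(D, 0.5), np.full(D, 2.0)])
epll = gmm_log_likelihood(X, weights, means, variances).sum()
```

A nonparametric DP mixture, as used in the paper, would let the number of components grow with the data rather than fixing K in advance.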
Geng Ji (Brown University)
Michael C. Hughes (Harvard University)
Erik Sudderth (University of California, Irvine)
Related Events
2017 Talk: From Patches to Images: A Nonparametric Generative Model
Mon Aug 7th, 07:51 -- 08:09 AM, Room C4.9 & C4.10