

Poster

Neuroexplicit Diffusion Models for Inpainting of Optical Flow Fields

Tom Fischer · Pascal Peter · Joachim Weickert · Eddy Ilg


Abstract: Deep learning has revolutionized the field of computer vision by introducing large-scale neural networks with millions of parameters. Training these networks requires massive datasets and leads to opaque models that can fail to generalize. At the other extreme, models designed from partial differential equations (PDEs) embed specialized domain knowledge into mathematical equations and usually rely on only a few manually chosen hyperparameters. This makes them transparent by construction, and if designed and calibrated carefully, they can generalize well to unseen scenarios. In this paper, we show how to bring model- and data-driven approaches together by combining explicit PDE-based approaches with convolutional neural networks to obtain the best of both worlds. We illustrate a joint architecture for the task of inpainting optical flow fields and show that the combination of model- and data-driven modeling leads to an effective architecture. Our model outperforms both fully explicit and fully data-driven baselines in terms of reconstruction quality, robustness, and amount of required training data. Averaging the endpoint error across different mask densities, our method outperforms the explicit baselines by $11-27\%$, the GAN baseline by $47\%$, and the Probabilistic Diffusion baseline by $42\%$. With that, our method sets a new state of the art for inpainting of optical flow fields from random masks.
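To make the explicit, model-driven half of this hybrid concrete, the sketch below shows homogeneous diffusion inpainting of a sparse optical flow field with an explicit Euler scheme: known flow values at masked pixels stay fixed while the PDE fills in the rest. This is a minimal illustration under our own assumptions, not the authors' implementation; the function name and the parameters `num_steps` and `tau` are hypothetical, and the paper's neuroexplicit architecture couples such a PDE solver with learned CNN components rather than using it alone.

```python
import numpy as np

def diffusion_inpaint(flow, mask, num_steps=2000, tau=0.2):
    """Inpaint a flow field (H, W, 2) from values known where mask == 1.

    flow : (H, W, 2) array, valid only at mask == 1
    mask : (H, W) binary array marking known pixels
    tau  : time step; tau <= 0.25 keeps the explicit 2D scheme stable
    """
    u = flow.copy()
    u[mask == 0] = 0.0  # initialize unknown pixels
    for _ in range(num_steps):
        # 5-point Laplacian with reflecting (Neumann) boundaries
        padded = np.pad(u, ((1, 1), (1, 1), (0, 0)), mode="edge")
        lap = (padded[:-2, 1:-1] + padded[2:, 1:-1]
               + padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * u)
        u = u + tau * lap
        u[mask == 1] = flow[mask == 1]  # re-impose known flow values
    return u
```

In a random-mask setting like the one evaluated in the paper, `mask` would contain a small fraction of ones (e.g. 1-10% of pixels), and the quality of the reconstruction is typically measured by the endpoint error against the dense ground-truth flow.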
