

Poster

Input uncertainty propagation through trained neural networks

Paul Monchot · Loic Coquelin · Sébastien J. Petit · Sébastien Marmin · Erwann Le Pennec · Nicolas Fischer

Exhibit Hall 1 #628

Abstract:

When physical sensors are involved, such as image sensors, the uncertainty over the input data is often a major component of the output uncertainty of machine learning models. In this work, we address the problem of input uncertainty propagation through trained neural networks. We do not rely on a Gaussian assumption on the output or on any intermediate layer. Instead, we propagate a Gaussian mixture model (GMM), which offers far more flexibility, using the Split&Merge algorithm. The main contribution of this paper is a Wasserstein criterion that controls the Gaussian splitting procedure, for which theoretical guarantees of convergence of the output distribution estimates are derived. The methodology is tested against a wide range of datasets and networks. It proves robust and generic, and offers highly accurate estimates of the output probability density function while maintaining a reasonable computational cost compared with the standard Monte Carlo (MC) approach.
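To make the idea concrete, below is a minimal sketch (not the authors' Split&Merge implementation) of the general technique the abstract describes: replacing a single input Gaussian with a small Gaussian mixture, propagating each component through the trained network, and recombining the components into an output density estimate. The toy network, the split offsets and weights, and the linearized (delta-method) per-component propagation are all assumptions made for illustration; the paper's contribution is precisely a principled, Wasserstein-based criterion for controlling the splitting, which this sketch does not attempt to reproduce.

```python
# Illustrative sketch, NOT the authors' implementation: split an input
# Gaussian into a 3-component mixture along its leading eigenvector, push
# each component through a toy trained network by local linearization
# (delta method), and compare the output moments with plain Monte Carlo.
# The network, split offsets/weights, and linearized propagation are all
# hypothetical choices made for illustration.
import torch

torch.manual_seed(0)

# Stand-in for a trained network: 2 -> 16 -> 1 with tanh activations.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
f = lambda x: net(x)

# Input uncertainty: x ~ N(mu, Sigma).
mu = torch.tensor([0.5, -1.0])
Sigma = torch.tensor([[0.30, 0.05], [0.05, 0.20]])

# Split along the leading standard-deviation direction v with symmetric,
# moment-matched offsets and weights (illustrative values).
evals, evecs = torch.linalg.eigh(Sigma)
v = evecs[:, -1] * evals[-1].sqrt()
offsets = torch.tensor([-1.0, 0.0, 1.0])
weights = torch.tensor([0.25, 0.50, 0.25])
spread = (weights * offsets**2).sum()            # variance added by offsets
Sigma_comp = Sigma - spread * torch.outer(v, v)  # shrink so mixture cov = Sigma

# Propagate each component: mean = f(m), var = J Sigma_comp J^T (delta method).
means, variances = [], []
for off in offsets:
    m = mu + off * v
    J = torch.autograd.functional.jacobian(f, m)  # shape (1, 2)
    means.append(f(m).detach())
    variances.append(J @ Sigma_comp @ J.T)

mix_mean = sum(w * m for w, m in zip(weights, means))
mix_var = sum(
    w * (s + (m - mix_mean) ** 2) for w, s, m in zip(weights, variances, means)
)

# Monte Carlo reference on the same network.
L = torch.linalg.cholesky(Sigma)
xs = mu + torch.randn(100_000, 2) @ L.T
ys = f(xs).detach()
print("GMM mean/var:", mix_mean.item(), mix_var.item())
print("MC  mean/var:", ys.mean().item(), ys.var().item())
```

The reason splitting helps is that each mixture component is narrower than the original Gaussian, so any local approximation of the network (here a simple linearization) incurs less error per component; refining the split trades computation for accuracy, whereas the MC run serves only as a reference.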
