In one-class learning tasks, only the normal case (foreground) can be modeled with data, whereas the variation of all possible anomalies is too erratic to be described by samples. Thus, due to the lack of representative data, widespread discriminative approaches cannot cover such learning tasks; instead, generative models, which attempt to learn the input density of the foreground, are used. However, generative models suffer from large input dimensionality (as in images) and are typically inefficient learners. We propose to learn the data distribution of the foreground more efficiently with a multi-hypotheses autoencoder. Moreover, the model is criticized by a discriminator, which prevents artificial data modes not supported by the data and enforces diversity across hypotheses. Our multiple-hypotheses-based anomaly detection framework allows the reliable identification of out-of-distribution samples. For anomaly detection on CIFAR-10, it yields up to 3.9 percentage points of improvement over previously reported results. On a real anomaly detection task, the approach reduces the error of the baseline models from 6.8% to 1.5%.
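As a rough illustration of the multi-hypotheses idea, the sketch below shows an autoencoder with several decoder heads, trained with a winner-takes-all reconstruction loss and scored at test time by the error of the best hypothesis. The layer sizes, number of hypotheses, loss, and all names (`MultiHypothesesAutoencoder`, `winner_takes_all_loss`, `anomaly_score`) are assumptions for illustration only; the discriminator that criticizes the hypotheses in the paper is omitted here.

```python
import torch
import torch.nn as nn

class MultiHypothesesAutoencoder(nn.Module):
    """Illustrative autoencoder with K decoder heads, each producing one
    reconstruction hypothesis (architecture details are assumed)."""
    def __init__(self, in_dim=784, latent_dim=32, num_hypotheses=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # One decoder head per hypothesis
        self.decoders = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, in_dim))
            for _ in range(num_hypotheses)
        ])

    def forward(self, x):
        z = self.encoder(x)
        # Stack hypotheses into shape (K, batch, in_dim)
        return torch.stack([dec(z) for dec in self.decoders])


def winner_takes_all_loss(hypotheses, x):
    """Back-propagate only through the best-matching hypothesis per sample."""
    # Per-hypothesis, per-sample reconstruction error: shape (K, batch)
    errors = ((hypotheses - x.unsqueeze(0)) ** 2).mean(dim=-1)
    best, _ = errors.min(dim=0)   # best hypothesis for each sample
    return best.mean()


def anomaly_score(model, x):
    """Score a sample by the reconstruction error of its best hypothesis;
    high scores indicate out-of-distribution (anomalous) inputs."""
    with torch.no_grad():
        hyp = model(x)
        errors = ((hyp - x.unsqueeze(0)) ** 2).mean(dim=-1)
        return errors.min(dim=0).values
```

In this sketch, multiple heads let the model cover several plausible reconstructions of a foreground sample, while the winner-takes-all objective keeps each head specialized rather than averaging them into a blurry mean.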
Tam Nguyen (University of Freiburg)
Zhongyu Lou (Bosch)
Michael Klar (Bosch)
Thomas Brox (University of Freiburg)
Related Events (a corresponding poster, oral, or spotlight)
2019 Oral: Anomaly Detection With Multiple-Hypotheses Predictions »
Wed Jun 12th 09:35 -- 09:40 PM Room Seaside Ballroom