Selecting informative data points for expert feedback can significantly improve the performance of anomaly detection (AD) in various contexts, such as medical diagnostics or fraud detection. In this paper, we determine a set of theoretical conditions under which anomaly scores generalize from labeled queries to unlabeled data. Motivated by these results, we propose a data labeling strategy with optimal data coverage under labeling budget constraints. In addition, we propose a new learning framework for semi-supervised AD. Extensive experiments on image, tabular, and video data sets show that our approach results in state-of-the-art semi-supervised AD performance under labeling budget constraints.
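The labeling strategy summarized above revolves around covering the data well with a limited number of expert queries. As a rough illustration of that coverage idea only (not the paper's exact algorithm), the sketch below uses a greedy farthest-point (k-center) heuristic to pick a budget-sized set of query indices from detector embeddings; all function and variable names here are illustrative assumptions.

```python
# Minimal sketch, assuming embeddings from some anomaly detector are available.
# Greedy farthest-point (k-center) selection is one generic way to maximize
# coverage of the data under a fixed labeling budget.
import numpy as np


def select_queries(embeddings: np.ndarray, budget: int, seed: int = 0) -> list[int]:
    """Pick `budget` indices whose embeddings cover the data set well."""
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    selected = [int(rng.integers(n))]  # start from a random point
    # Distance of every point to its nearest selected point so far.
    dists = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    for _ in range(budget - 1):
        idx = int(np.argmax(dists))  # farthest point from current selection
        selected.append(idx)
        new_d = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        dists = np.minimum(dists, new_d)  # update coverage distances
    return selected


# Example: request 10 expert labels for 1,000 points in a 32-d embedding space.
X = np.random.randn(1000, 32)
queries = select_queries(X, budget=10)
print(queries)  # indices to send to the expert for labeling
```

In a semi-supervised AD pipeline of the kind described in the abstract, the returned indices would be labeled by an expert and the labels fed back into the detector's training objective; the heuristic above is only meant to make the notion of "data coverage under a labeling budget" concrete.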
Author Information
Aodong Li (University of California, Irvine)
Chen Qiu (Bosch Center for AI, USA)
Marius Kloft (TU Kaiserslautern)
Padhraic Smyth (University of California, Irvine)
Stephan Mandt (University of California, Irvine)
Stephan Mandt is an Assistant Professor of Computer Science at the University of California, Irvine. From 2016 until 2018, he was a Senior Researcher and head of the statistical machine learning group at Disney Research, first in Pittsburgh and later in Los Angeles. He previously held postdoctoral positions at Columbia University and Princeton University. Stephan holds a PhD in Theoretical Physics from the University of Cologne. He is a Fellow of the German National Merit Foundation, a Kavli Fellow of the U.S. National Academy of Sciences, and was a visiting researcher at Google Brain. Stephan serves regularly as an Area Chair for NeurIPS, ICML, AAAI, and ICLR, and is a member of the Editorial Board of JMLR. His research is currently supported by NSF, DARPA, IBM, and Qualcomm.
Maja Rudolph (Bosch Center for AI)
More from the Same Authors
- 2023: Computing non-vacuous PAC-Bayes generalization bounds for Models under Adversarial Corruptions
  Waleed Mustafa · Philipp Liznerski · Dennis Wagner · Puyu Wang · Marius Kloft
- 2023: Lossy Image Compression with Conditional Diffusion Model
  Ruihan Yang · Stephan Mandt
- 2023: Estimating the Rate-Distortion Function by Wasserstein Gradient Descent
  Yibo Yang · Stephan Eckstein · Marcel Nutz · Stephan Mandt
- 2023: Autoencoding Implicit Neural Representations for Image Compression
  Tuan Pham · Yibo Yang · Stephan Mandt
- 2023 Workshop: Neural Compression: From Information Theory to Applications
  Berivan Isik · Yibo Yang · Daniel Severo · Karen Ullrich · Robert Bamler · Stephan Mandt
- 2023 Poster: Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes
  Ba-Hien Tran · Babak Shahbaba · Stephan Mandt · Maurizio Filippone
- 2023 Poster: Training Normalizing Flows from Dependent Data
  Matthias Kirchler · Christoph Lippert · Marius Kloft
- 2022 Poster: Structured Stochastic Gradient MCMC
  Antonios Alexos · Alex Boyd · Stephan Mandt
- 2022 Poster: Fair Generalized Linear Models with a Convex Penalty
  Hyungrok Do · Preston Putzel · Axel Martin · Padhraic Smyth · Judy Zhong
- 2022 Spotlight: Fair Generalized Linear Models with a Convex Penalty
  Hyungrok Do · Preston Putzel · Axel Martin · Padhraic Smyth · Judy Zhong
- 2022 Spotlight: Structured Stochastic Gradient MCMC
  Antonios Alexos · Alex Boyd · Stephan Mandt
- 2022 Poster: Modeling Irregular Time Series with Continuous Recurrent Units
  Mona Schirmer · Mazin Eltayeb · Stefan Lessmann · Maja Rudolph
- 2022 Poster: Latent Outlier Exposure for Anomaly Detection with Contaminated Data
  Chen Qiu · Aodong Li · Marius Kloft · Maja Rudolph · Stephan Mandt
- 2022 Poster: On the Generalization Analysis of Adversarial Learning
  Waleed Mustafa · Yunwen Lei · Marius Kloft
- 2022 Spotlight: Modeling Irregular Time Series with Continuous Recurrent Units
  Mona Schirmer · Mazin Eltayeb · Stefan Lessmann · Maja Rudolph
- 2022 Spotlight: Latent Outlier Exposure for Anomaly Detection with Contaminated Data
  Chen Qiu · Aodong Li · Marius Kloft · Maja Rudolph · Stephan Mandt
- 2022 Spotlight: On the Generalization Analysis of Adversarial Learning
  Waleed Mustafa · Yunwen Lei · Marius Kloft
- 2021 Poster: Neural Transformation Learning for Deep Anomaly Detection Beyond Images
  Chen Qiu · Timo Pfrommer · Marius Kloft · Stephan Mandt · Maja Rudolph
- 2021 Spotlight: Neural Transformation Learning for Deep Anomaly Detection Beyond Images
  Chen Qiu · Timo Pfrommer · Marius Kloft · Stephan Mandt · Maja Rudolph
- 2020 Poster: The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks
  Jakub Swiatkowski · Kevin Roth · Bastiaan Veeling · Linh Tran · Joshua V Dillon · Jasper Snoek · Stephan Mandt · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin
- 2020 Poster: How Good is the Bayes Posterior in Deep Neural Networks Really?
  Florian Wenzel · Kevin Roth · Bastiaan Veeling · Jakub Swiatkowski · Linh Tran · Stephan Mandt · Jasper Snoek · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin
- 2020 Poster: Variational Bayesian Quantization
  Yibo Yang · Robert Bamler · Stephan Mandt
- 2019 Poster: Dropout as a Structured Shrinkage Prior
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Oral: Dropout as a Structured Shrinkage Prior
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2018 Poster: Iterative Amortized Inference
  Joe Marino · Yisong Yue · Stephan Mandt
- 2018 Poster: Disentangled Sequential Autoencoder
  Yingzhen Li · Stephan Mandt
- 2018 Oral: Disentangled Sequential Autoencoder
  Yingzhen Li · Stephan Mandt
- 2018 Oral: Iterative Amortized Inference
  Joe Marino · Yisong Yue · Stephan Mandt
- 2018 Poster: Quasi-Monte Carlo Variational Inference
  Alexander Buchholz · Florian Wenzel · Stephan Mandt
- 2018 Poster: Improving Optimization in Models With Continuous Symmetry Breaking
  Robert Bamler · Stephan Mandt
- 2018 Poster: Deep One-Class Classification
  Lukas Ruff · Nico Görnitz · Lucas Deecke · Shoaib Ahmed Siddiqui · Robert Vandermeulen · Alexander Binder · Emmanuel Müller · Marius Kloft
- 2018 Oral: Quasi-Monte Carlo Variational Inference
  Alexander Buchholz · Florian Wenzel · Stephan Mandt
- 2018 Oral: Improving Optimization in Models With Continuous Symmetry Breaking
  Robert Bamler · Stephan Mandt
- 2018 Oral: Deep One-Class Classification
  Lukas Ruff · Nico Görnitz · Lucas Deecke · Shoaib Ahmed Siddiqui · Robert Vandermeulen · Alexander Binder · Emmanuel Müller · Marius Kloft
- 2017 Poster: Dynamic Word Embeddings
  Robert Bamler · Stephan Mandt
- 2017 Talk: Dynamic Word Embeddings
  Robert Bamler · Stephan Mandt