Data transformations (e.g. rotations, reflections, and cropping) play an important role in self-supervised learning. Typically, images are transformed into different views, and neural networks trained on tasks involving these views produce useful feature representations for downstream tasks, including anomaly detection. However, for anomaly detection beyond image data, it is often unclear which transformations to use. Here we present a simple end-to-end procedure for anomaly detection with learnable transformations. The key idea is to embed the transformed data into a semantic space such that the transformed data still resemble their untransformed form, while different transformations are easily distinguishable. Extensive experiments on time series show that our proposed method outperforms existing approaches in the one-vs.-rest setting and is competitive in the more challenging n-vs.-rest anomaly-detection task. On medical and cyber-security tabular data, our method learns domain-specific transformations and detects anomalies more accurately than previous work.
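To make the idea in the abstract concrete, below is a minimal PyTorch-style sketch of learnable transformations trained with a contrastive-style objective: each transformed view is pulled toward the embedding of the untransformed sample and pushed away from the other views, and the resulting per-sample loss can double as an anomaly score. The MLP transformations, the encoder, the cosine-similarity loss, and all names here are illustrative assumptions, not the paper's exact architecture or objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableTransforms(nn.Module):
    """A bank of K learnable transformations, each a small MLP on the raw input (illustrative)."""
    def __init__(self, dim, num_transforms=4, hidden=64):
        super().__init__()
        self.transforms = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_transforms)
        )

    def forward(self, x):
        # Returns K transformed views of x, one per learnable transformation.
        return [t(x) for t in self.transforms]

def transformation_contrastive_loss(encoder, transforms, x, temperature=0.1):
    """Per-sample loss: each transformed view should stay close (in embedding space)
    to the untransformed sample and far from the other transformed views.
    The batch mean is minimized during training on normal data; the per-sample
    value can be reused as an anomaly score at test time."""
    z = F.normalize(encoder(x), dim=-1)                               # (B, D) original embedding
    views = [F.normalize(encoder(v), dim=-1) for v in transforms(x)]  # K tensors of shape (B, D)
    scores = torch.zeros(x.shape[0], device=x.device)
    for k, zk in enumerate(views):
        pos = torch.exp((zk * z).sum(-1) / temperature)               # similarity to the original
        neg = sum(torch.exp((zk * zj).sum(-1) / temperature)          # similarity to other views
                  for j, zj in enumerate(views) if j != k)
        scores = scores - torch.log(pos / (pos + neg))
    return scores / len(views)

# Example usage on toy tabular data (dimensions are arbitrary):
# encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
# transforms = LearnableTransforms(dim=16)
# x = torch.randn(8, 16)
# loss = transformation_contrastive_loss(encoder, transforms, x).mean()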
Author Information
Chen Qiu (TU Kaiserslautern/Bosch Center for Artificial Intelligence)
Timo Pfrommer (Bosch Center for Artificial Intelligence)
Marius Kloft (TU Kaiserslautern)
Stephan Mandt (University of California, Irvine)
Stephan Mandt is an Assistant Professor of Computer Science at the University of California, Irvine. From 2016 to 2018, he was a Senior Researcher and head of the statistical machine learning group at Disney Research, first in Pittsburgh and later in Los Angeles. He previously held postdoctoral positions at Columbia University and Princeton University. Stephan holds a PhD in Theoretical Physics from the University of Cologne. He is a Fellow of the German National Merit Foundation, a Kavli Fellow of the U.S. National Academy of Sciences, and was a visiting researcher at Google Brain. Stephan serves regularly as an Area Chair for NeurIPS, ICML, AAAI, and ICLR, and is a member of the Editorial Board of JMLR. His research is currently supported by NSF, DARPA, IBM, and Qualcomm.
Maja Rudolph (BCAI)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Neural Transformation Learning for Deep Anomaly Detection Beyond Images »
  Thu. Jul 22nd, 04:00 -- 06:00 PM, Room: Virtual
More from the Same Authors
- 2023: Computing non-vacuous PAC-Bayes generalization bounds for Models under Adversarial Corruptions »
  Waleed Mustafa · Philipp Liznerski · Dennis Wagner · Puyu Wang · Marius Kloft
- 2023: Lossy Image Compression with Conditional Diffusion Model »
  Ruihan Yang · Stephan Mandt
- 2023: Estimating the Rate-Distortion Function by Wasserstein Gradient Descent »
  Yibo Yang · Stephan Eckstein · Marcel Nutz · Stephan Mandt
- 2023: Autoencoding Implicit Neural Representations for Image Compression »
  Tuan Pham · Yibo Yang · Stephan Mandt
- 2023 Workshop: Neural Compression: From Information Theory to Applications »
  Berivan Isik · Yibo Yang · Daniel Severo · Karen Ullrich · Robert Bamler · Stephan Mandt
- 2023 Poster: Deep Anomaly Detection under Labeling Budget Constraints »
  Aodong Li · Chen Qiu · Marius Kloft · Padhraic Smyth · Stephan Mandt · Maja Rudolph
- 2023 Poster: Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes »
  Ba-Hien Tran · Babak Shahbaba · Stephan Mandt · Maurizio Filippone
- 2023 Poster: Training Normalizing Flows from Dependent Data »
  Matthias Kirchler · Christoph Lippert · Marius Kloft
- 2022 Poster: Structured Stochastic Gradient MCMC »
  Antonios Alexos · Alex Boyd · Stephan Mandt
- 2022 Spotlight: Structured Stochastic Gradient MCMC »
  Antonios Alexos · Alex Boyd · Stephan Mandt
- 2022 Poster: Latent Outlier Exposure for Anomaly Detection with Contaminated Data »
  Chen Qiu · Aodong Li · Marius Kloft · Maja Rudolph · Stephan Mandt
- 2022 Poster: On the Generalization Analysis of Adversarial Learning »
  Waleed Mustafa · Yunwen Lei · Marius Kloft
- 2022 Spotlight: Latent Outlier Exposure for Anomaly Detection with Contaminated Data »
  Chen Qiu · Aodong Li · Marius Kloft · Maja Rudolph · Stephan Mandt
- 2022 Spotlight: On the Generalization Analysis of Adversarial Learning »
  Waleed Mustafa · Yunwen Lei · Marius Kloft
- 2020 Poster: The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks »
  Jakub Swiatkowski · Kevin Roth · Bastiaan Veeling · Linh Tran · Joshua V Dillon · Jasper Snoek · Stephan Mandt · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin
- 2020 Poster: How Good is the Bayes Posterior in Deep Neural Networks Really? »
  Florian Wenzel · Kevin Roth · Bastiaan Veeling · Jakub Swiatkowski · Linh Tran · Stephan Mandt · Jasper Snoek · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin
- 2020 Poster: Variational Bayesian Quantization »
  Yibo Yang · Robert Bamler · Stephan Mandt
- 2018 Poster: Iterative Amortized Inference »
  Joe Marino · Yisong Yue · Stephan Mandt
- 2018 Poster: Disentangled Sequential Autoencoder »
  Yingzhen Li · Stephan Mandt
- 2018 Oral: Disentangled Sequential Autoencoder »
  Yingzhen Li · Stephan Mandt
- 2018 Oral: Iterative Amortized Inference »
  Joe Marino · Yisong Yue · Stephan Mandt
- 2018 Poster: Quasi-Monte Carlo Variational Inference »
  Alexander Buchholz · Florian Wenzel · Stephan Mandt
- 2018 Poster: Improving Optimization in Models With Continuous Symmetry Breaking »
  Robert Bamler · Stephan Mandt
- 2018 Poster: Deep One-Class Classification »
  Lukas Ruff · Nico Görnitz · Lucas Deecke · Shoaib Ahmed Siddiqui · Robert Vandermeulen · Alexander Binder · Emmanuel Müller · Marius Kloft
- 2018 Oral: Quasi-Monte Carlo Variational Inference »
  Alexander Buchholz · Florian Wenzel · Stephan Mandt
- 2018 Oral: Improving Optimization in Models With Continuous Symmetry Breaking »
  Robert Bamler · Stephan Mandt
- 2018 Oral: Deep One-Class Classification »
  Lukas Ruff · Nico Görnitz · Lucas Deecke · Shoaib Ahmed Siddiqui · Robert Vandermeulen · Alexander Binder · Emmanuel Müller · Marius Kloft
- 2017 Poster: Dynamic Word Embeddings »
  Robert Bamler · Stephan Mandt
- 2017 Talk: Dynamic Word Embeddings »
  Robert Bamler · Stephan Mandt