

Poster in Workshop: ICML Workshop on Human in the Loop Learning (HILL)

IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance

Ruixuan Liu · Changliu Liu


Abstract: Neural networks (NNs) are widely used for classification tasks owing to their remarkable performance. However, the robustness and accuracy of NNs heavily depend on the training data. In many applications, massive amounts of training data are not available. To address this challenge, this paper proposes an iterative adversarial data augmentation (IADA) framework to learn neural network models from an insufficient amount of training data. The method uses formal verification to identify the most ``confusing'' input samples and leverages human guidance to safely and iteratively augment the training data with these samples. The proposed framework is applied to an artificial 2D dataset, the MNIST dataset, and a human motion dataset. By applying IADA to fully-connected NN classifiers, we show that our training method can improve the robustness and accuracy of the learned model. Compared to regular supervised training on the MNIST dataset, the average perturbation bound improves by $107.4\%$. The classification accuracy improves by $1.77\%$, $3.76\%$, and $10.85\%$ on the 2D dataset, the MNIST dataset, and the human motion dataset, respectively.
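
The abstract describes a train-verify-query-augment loop: train a classifier, use formal verification to find the samples with the smallest certified perturbation bounds (the most ``confusing'' ones), perturb them toward the decision boundary, have a human expert label the perturbed samples, and retrain on the augmented set. The sketch below illustrates one possible reading of that loop on a 2D toy problem; it is not the authors' implementation. A linear least-squares classifier stands in for the fully-connected NN, an exact distance-to-hyperplane computation stands in for the formal verifier, and the ground-truth rule stands in for the human expert.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y):
    """Least-squares linear classifier; returns weights w (last entry is the bias)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, 2.0 * y - 1.0, rcond=None)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

def perturbation_bound(w, x):
    """Toy stand-in for formal verification: for a linear model, the exact
    l2 radius within which the prediction provably cannot flip."""
    return abs(np.dot(w[:-1], x) + w[-1]) / np.linalg.norm(w[:-1])

def expert_label(x):
    """Toy stand-in for human guidance: the ground-truth labeling rule."""
    return int(x[0] + x[1] > 0)

def iada(X, y, n_iters=5, k=8):
    """Sketch of the iterative loop: train, verify, query the expert, augment."""
    for _ in range(n_iters):
        w = train(X, y)
        # The least-robust samples (smallest verified bound) are the
        # most "confusing"; perturb each one toward the decision boundary.
        bounds = np.array([perturbation_bound(w, x) for x in X])
        for i in np.argsort(bounds)[:k]:
            direction = -np.sign(X[i] @ w[:-1] + w[-1]) * w[:-1]
            x_adv = X[i] + bounds[i] * direction / np.linalg.norm(w[:-1])
            # The expert assigns a safe label before the sample is added.
            X = np.vstack([X, x_adv])
            y = np.append(y, expert_label(x_adv))
    return train(X, y), X, y

# Usage on a small initial training set, mimicking the data-scarce setting:
X0 = rng.normal(size=(20, 2))
y0 = np.array([expert_label(x) for x in X0])
w, X_aug, y_aug = iada(X0.copy(), y0.copy())
print("augmented set size:", len(X_aug),
      "train accuracy:", (predict(w, X_aug) == y_aug).mean())
```

In this sketch the adversarial samples land exactly on the current decision boundary, which is where the model is least certain; the paper's actual verifier, NN architecture, and expert interface are not specified in the abstract, so all three are replaced here by labeled stand-ins.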
