Poster

Compositional Curvature Bounds for Deep Neural Networks

Taha Entesari · Sina Sharifi · Mahyar Fazlyab


Abstract:

A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks. In this paper, we study the second-order behavior of deep neural networks, focusing on robustness against adversarial perturbations. First, we provide a theoretical analysis of robustness and attack certificates for deep classifiers using bounds on their first-order derivative (Lipschitz constant) and second-order derivative (curvature constant). Next, we introduce an iterative algorithm to analytically compute these bounds for continuously differentiable neural networks. This algorithm leverages the compositional structure of the model to propagate the estimated curvature from input to output, giving rise to a scalable and modular approach. Finally, we demonstrate the effectiveness of our method on classification problems using the MNIST and CIFAR-10 datasets. We will publicly release our source code after the review process.
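To give a flavor of how compositional propagation of such bounds can work, the sketch below applies the standard chain-rule estimates for a composition h = g ∘ f: Lip(h) ≤ Lip(g)·Lip(f) and Curv(h) ≤ Curv(g)·Lip(f)² + Lip(g)·Curv(f). This is a minimal illustration of the general principle, not the authors' algorithm; the layer bounds used below are made-up numbers.

```python
# Hedged sketch of compositional bound propagation (not the paper's method).
# For h = g ∘ f with per-layer Lipschitz bound L_i and curvature (Hessian
# norm) bound C_i, the chain rule gives:
#   Lip(h)  <= L_g * L_f
#   Curv(h) <= C_g * L_f**2 + L_g * C_f

def compose_bounds(layers):
    """Propagate (Lipschitz, curvature) bounds from input to output.

    `layers` is a list of (L_i, C_i) pairs, one per layer, ordered from
    input to output. The values here are illustrative assumptions.
    """
    L, C = 1.0, 0.0  # the identity map: Lipschitz 1, curvature 0
    for L_i, C_i in layers:
        # The layer's own curvature is amplified by the squared Lipschitz
        # constant of the preceding composition; the accumulated curvature
        # is scaled by the layer's Lipschitz constant.
        C = C_i * L ** 2 + L_i * C
        L = L_i * L
    return L, C

# Example: a linear layer (curvature 0, Lipschitz 2) followed by a smooth
# activation with Lipschitz 1 and curvature 0.5.
L, C = compose_bounds([(2.0, 0.0), (1.0, 0.5)])
# L = 1.0 * 2.0 = 2.0;  C = 0.5 * 2.0**2 + 1.0 * 0.0 = 2.0
```

Because each layer contributes only its own two scalars, the propagation is modular: adding or swapping a layer only changes one step of the loop, which reflects the scalability the abstract highlights.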
