Formal verification of machine learning-based building blocks is important for complex and critical systems such as autonomous vehicles, medical devices, or cybersecurity systems, where guarantees on safety, fault tolerance, and correctness are essential. Formal verification of machine learning is an emerging and interdisciplinary field, intersecting with the fields of computer-aided verification, programming languages, robotics, computer security, and optimization, with many challenging open problems. This workshop aims to raise awareness of the importance of formal verification methods in the machine learning community and to bring together researchers and practitioners interested in this emerging field from a broad range of disciplines and backgrounds. The workshop is organized by pioneering proponents of machine learning verification and features six confirmed invited speakers with established work in this field and diverse research and demographic backgrounds. The program includes posters, contributed talks, and a panel to encourage novel contributed work and interdisciplinary discussion of open challenges.
Fri 5:45 a.m. - 6:00 a.m. | Opening Remark (Introduction)
Fri 6:00 a.m. - 6:30 a.m. | Invited Talk 1 (Gagandeep Singh): Proof Sharing and Transfer for Boosting Neural Network Verification (Talk)
The deployment pipeline of deep neural networks (DNNs) in safety-critical settings involves formally verifying their trustworthiness. Domain experts design a large number of specifications (~10-100K), typically defined for inputs in the test set, to cover DNN behavior under diverse real-world scenarios. If the DNN is proved to be trustworthy, then it can be deployed. Otherwise, it is repaired or retrained. While there has been a lot of work in recent years on developing precise and scalable DNN verifiers, existing verifiers are inefficient when applied inside the deployment pipeline. This inefficiency is because the verifier needs to be run from scratch for every new specification and DNN.
Gagandeep Singh
Fri 6:30 a.m. - 6:45 a.m. | Backward Reachability for Neural Feedback Loops (Poster)
The increasing prevalence of neural networks (NNs) in safety-critical applications calls for methods to certify their behavior and guarantee safety. This paper presents a backward reachability approach for safety verification of neural feedback loops (NFLs), i.e., closed-loop systems with NN control policies. While recent works have focused on forward reachability as a strategy for safety certification of NFLs, backward reachability offers advantages over the forward strategy, particularly in obstacle avoidance scenarios. Prior works have developed techniques for backward reachability analysis for systems without NNs, but the presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible. To overcome these challenges, we use existing forward NN analysis tools to find affine bounds on the control inputs and solve a series of linear programs (LPs) to efficiently find an approximation of the backprojection (BP) set, i.e., the set of states for which the NN control policy will drive the system to a given target set. We present an algorithm to iteratively find BP set estimates over a given time horizon and demonstrate the ability to reduce conservativeness in the BP set estimates by up to 88\% with low additional computational cost. We use numerical results from a double integrator model to verify the efficacy of these algorithms and demonstrate the ability to certify safety for a linearized ground robot model in a collision avoidance scenario where forward reachability fails.
Nicholas Rober · Michael Everett · Jonathan How
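For intuition, the object being approximated above can be written down directly. Assuming linear dynamics $x_{t+1} = A x_t + B u_t$ with NN policy $u_t = \pi(x_t)$ and a polytopic target set $\mathcal{X}_T$ (notation introduced here for illustration, not taken from the paper), the one-step backprojection set is
$$ \mathcal{P}_{-1}(\mathcal{X}_T) = \{\, x : A x + B\,\pi(x) \in \mathcal{X}_T \,\}. $$
Given affine relaxations $\underline{C}x + \underline{d} \le \pi(x) \le \overline{C}x + \overline{d}$ from a forward NN analysis tool, one sketch of the LP-based over-approximation is to bound $\mathcal{P}_{-1}(\mathcal{X}_T)$ along chosen directions $c$ by solving
$$ \max_{x,\,u} \; c^\top x \quad \text{s.t.} \quad A x + B u \in \mathcal{X}_T, \;\; \underline{C}x + \underline{d} \le u \le \overline{C}x + \overline{d}, $$
which is a linear program whenever $\mathcal{X}_T$ is a polytope.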
Fri 6:45 a.m. - 7:00 a.m. | Characterizing Neural Network Verification for Systems with NN4SysBench (Talk)
We present NN4SysBench, a benchmark suite for neural network verification, comprising benchmarks from neural networks for systems (or NN4Sys). NN4Sys is booming: there are hundreds of proposals for using neural networks in computer systems—databases, OSes, and networked systems—that are safety critical. We observe that NN4Sys has some unique characteristics that today's neural network verification tools overlook. This benchmark aims at bridging the gap between NN4Sys and NN-verification by tailoring impactful NN4Sys instances to benchmarks that today's NN-verification tools can work on.
Haoyu He · Tianhao Wei · Huan Zhang · Changliu Liu · Cheng Tan
Fri 7:00 a.m. - 7:30 a.m. | Break
Fri 7:30 a.m. - 8:00 a.m. | Invited Talk 2 (Anton Dahbura): Undeterminism and the AI Uncertainty Principle (Talk)
As AI/ML-based systems are deployed in increasingly autonomous and safety-critical applications, there is growing concern, for many reasons, about the behavior of the ML in the wild, perhaps most of all because the problems to which ML is applied are so complex that it is not possible to know in all cases what the ML will do. In this talk I will introduce the concept of undeterminism, extracted from my experience in developing techniques for communication protocol conformance testing, and argue that undeterminism yields inherent uncertainty. I show why systems cannot have certainty and ML simultaneously, and that there is a clear set of tradeoffs that should guide research in formal methods for ML for the foreseeable future.
Anton Dahbura
Fri 8:00 a.m. - 8:30 a.m. | Invited Talk 3 (M. Pawan Kumar): Neural Networks for Neural Network Verification (Talk)
Can neural networks help us better exploit the inherent structure in the neural network verification problem? In this talk, I will present some recent efforts to speed up formal verification of neural networks using deep learning. Specifically, we use deep learning to make branching decisions for branch-and-bound, compute descent directions for bound computation, and reduce the search space for counterexample generation. The common methodology behind all these approaches is a graph neural network (GNN) whose architecture closely resembles the network we wish to verify. Using shared parameters, the GNN can be trained on smaller networks and tested on larger ones. Using standard verification tasks, I will show some promising results for our approach.
M. Pawan Kumar
Fri 8:30 a.m. - 9:00 a.m. | Morning Poster Session (Poster Session)
Fri 9:00 a.m. - 10:30 a.m. | Lunch Break
Fri 10:30 a.m. - 11:00 a.m. | Invited Talk 4 (Suman Jana): Efficient Neural Network Verification using Branch and Bound (Talk)
In this talk, I will describe two recent Branch and Bound (BaB) verifiers developed by our group to ensure different safety properties of neural networks. The BaB verifiers involve two main steps: (1) recursively splitting the original verification problem into easier independent subproblems by splitting input or hidden neurons; and (2) for each split subproblem, using fast but incomplete bound propagation techniques to compute sound estimated bounds for the outputs of the target neural network. One of the key limitations of existing BaB verifiers is computing tight relaxations of activation function (e.g., ReLU) nonlinearities. Our recent works (α-CROWN and β-CROWN) introduce a primal-dual approach and jointly optimize the corresponding Lagrangian multipliers for each ReLU with gradient ascent. Such an approach is highly parallelizable and avoids calls to expensive LP solvers. Our verifiers not only provide tighter output estimations than existing bound propagation methods but also can fully leverage GPUs with massive parallelization. Our verifier, α,β-CROWN (alpha-beta-CROWN), won the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score.
Suman Jana
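As a rough illustration of the two steps described in the abstract above (recursive splitting plus incomplete bound propagation), a generic branch-and-bound verification loop can be sketched as follows. The function names (bound_lower, split) and the domain object are placeholders for exposition, not the actual α,β-CROWN interface.

def branch_and_bound(root_domain, bound_lower, split, min_size=1e-6, max_domains=10_000):
    # Sketch of BaB for proving f(x) > 0 over an input region.
    # bound_lower(domain) is assumed to return a sound (possibly loose) lower bound
    # on min_x f(x) over the domain, e.g. via CROWN-style bound propagation;
    # split(domain) returns a list of subdomains covering it.
    queue = [root_domain]
    explored = 0
    while queue:
        explored += 1
        if explored > max_domains:
            return "unknown"              # resource budget exceeded
        domain = queue.pop()
        lb = bound_lower(domain)          # step (2): fast, incomplete bounding
        if lb > 0:
            continue                      # property certified on this subdomain
        if domain.size() < min_size:
            return "possible violation"   # cannot refine further in this sketch
        queue.extend(split(domain))       # step (1): split into easier subproblems
    return "verified"

A complete verifier would additionally run an adversarial attack on undecided subdomains so that it can return a concrete counterexample instead of "possible violation".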
Fri 11:00 a.m. - 11:30 a.m. | Invited Talk 5 (Somayeh Sojoudi): Computational Methods for Non-convex Machine Learning Problems (Talk)
We need efficient computational methods with provable guarantees that can cope with the complex nature and high nonlinearity of many real-world systems. Practitioners often design heuristic algorithms tailored to specific applications, but the theoretical underpinnings of these methods remain a mystery, and this limits their usage in safety-critical systems. In this talk, we aim to address the above issue for some machine learning problems. First, we study the problem of certifying the robustness of neural networks against adversarial inputs. Then we study when simple local search algorithms could solve a class of nonlinear problems to global optimality. We discuss our recent results in addressing these problems and demonstrate them on tensor decomposition with outliers and video processing.
Somayeh Sojoudi
Fri 11:30 a.m. - 11:45 a.m. | IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound (Talk)
Recent works have tried to increase the verifiability of adversarially trained networks by running the attacks over domains larger than the original perturbations and adding various regularization terms to the objective. However, these algorithms either underperform or require complex and expensive stage-wise training procedures, hindering their practical applicability. We present IBP-R, a novel verified training algorithm that is both simple and effective. IBP-R induces network verifiability by coupling adversarial attacks on enlarged domains with a regularization term, based on inexpensive interval bound propagation, that minimizes the gap between the non-convex verification problem and its approximations. By leveraging recent branch-and-bound frameworks, we show that IBP-R obtains state-of-the-art verified robustness-accuracy trade-offs for small perturbations on CIFAR-10 while training significantly faster than relevant previous work. Additionally, we present UPB, a novel branching strategy that, relying on a simple heuristic based on $\beta$-CROWN, reduces the cost of state-of-the-art branching algorithms while yielding splits of comparable quality.
Alessandro De Palma · Rudy Bunel · Krishnamurthy Dvijotham · M. Pawan Kumar · Robert Stanforth
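For reference, the inexpensive interval bound propagation that the regularizer above builds on follows the standard interval-arithmetic rule for an affine layer (this is the generic IBP rule, not the IBP-R loss itself): for $z' = W z + b$ with elementwise bounds $\underline{z} \le z \le \overline{z}$, writing $\mu = (\overline{z} + \underline{z})/2$ and $r = (\overline{z} - \underline{z})/2$,
$$ \underline{z}' = W\mu + b - |W|\,r, \qquad \overline{z}' = W\mu + b + |W|\,r, $$
and monotone activations such as ReLU are bounded by applying them directly to $\underline{z}'$ and $\overline{z}'$. Propagating these bounds through the network gives the cheap outer approximation whose gap to the exact verification problem the regularizer penalizes.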
Fri 11:45 a.m. - 12:00 p.m. | Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation (Talk)
Data poisoning attacks aim at manipulating model behaviors through distorting training data. Previously, an aggregation-based certified defense, Deep Partition Aggregation (DPA), was proposed to mitigate this threat. DPA predicts through an aggregation of base classifiers trained on disjoint subsets of data, thus restricting its sensitivity to dataset distortions. In this work, we propose an improved certified defense against general poisoning attacks, namely Finite Aggregation. In contrast to DPA, which directly splits the training set into disjoint subsets, our method first splits the training set into smaller disjoint subsets and then combines duplicates of them to build larger (but not disjoint) subsets for training base classifiers. This reduces the worst-case impacts of poison samples and thus improves certified robustness bounds. In addition, we offer an alternative view of our method, bridging the designs of deterministic and stochastic aggregation-based certified defenses. Empirically, our proposed Finite Aggregation consistently improves certificates on MNIST, CIFAR-10, and GTSRB, boosting certified fractions by up to 3.05%, 3.87% and 4.77%, respectively, while keeping the same clean accuracies as DPA's, effectively establishing a new state of the art in (pointwise) certified robustness against data poisoning.
Wenxiao Wang · Alexander Levine · Soheil Feizi
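To make the contrast with DPA concrete, the sketch below illustrates the partitioning idea as described in the abstract: samples are first hashed into many small disjoint buckets, and each base classifier then trains on a union of several buckets, so that any single poisoned sample can influence only a bounded number of classifiers. The function names and the particular spreading rule are illustrative assumptions, not the paper's exact construction.

import hashlib

def bucket_of(sample_id: str, num_buckets: int) -> int:
    # Deterministically hash a sample into one of num_buckets small disjoint buckets.
    return int(hashlib.sha256(sample_id.encode()).hexdigest(), 16) % num_buckets

def training_sets(sample_ids, num_buckets=100, spread=4):
    # spread=1 recovers DPA-style disjoint subsets; spread>1 builds the larger,
    # overlapping subsets (here: consecutive buckets mod num_buckets) used to train
    # the base classifiers in a Finite Aggregation-style defense.
    buckets = [[] for _ in range(num_buckets)]
    for sid in sample_ids:
        buckets[bucket_of(sid, num_buckets)].append(sid)
    return [
        [sid for k in range(spread) for sid in buckets[(j + k) % num_buckets]]
        for j in range(num_buckets)
    ]

At prediction time the base classifiers' votes are aggregated, and the certificate follows from bounding how many of those training sets a fixed number of poisoned samples can touch.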
Fri 12:00 p.m. - 12:30 p.m. | Break
Fri 12:30 p.m. - 1:00 p.m. | Invited Talk 6 (Changliu Liu): Applications of Neural Verification on Robotics (Talk)
In this talk, I will discuss how algorithms for neural network verification can be used to solve robotics problems. I will first introduce our toolbox NeuralVerification.jl, followed by the discussion of two applications. The first is to build verified models for human behavior prediction in human-robot systems. The second is to develop safe controllers for dynamic models encoded in deep neural networks. To support real-time computation in robotics tasks, I will also discuss potential approaches to speed up the computation of the verification algorithms.
Changliu Liu
Fri 1:00 p.m. - 1:15 p.m. | Towards Optimal Randomized Smoothing: A Semi-Infinite Linear Programming Approach (Poster)
Randomized smoothing is a leading approach to producing certifiably robust classifiers. The goal of optimal randomized smoothing is to maximize the average certified radius over the space of smoothing distributions. We theoretically study this problem through the lens of infinite-dimensional optimization over measure spaces, and prove that the nonconvex infinite program is lower-bounded by a conic linear program wherein the classifier's confidence acts as a surrogate objective to optimize. A semi-infinite linear programming approximation to the problem is presented, whose sub-problems are proven to attain nontrivial strong duality. A proof-of-concept experiment demonstrates the effectiveness of the proposed approach.
Brendon G. Anderson · Samuel Pfrommer · Somayeh Sojoudi
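For context, the pointwise certified radius that is averaged in the objective above has a closed form in the standard Gaussian-smoothing case (Cohen et al., 2019):
$$ R(x) = \frac{\sigma}{2}\left( \Phi^{-1}(p_A(x)) - \Phi^{-1}(p_B(x)) \right), $$
where $\Phi^{-1}$ is the inverse standard normal CDF and $p_A, p_B$ are the probabilities of the top two classes under the smoothing noise. The optimization studied in this paper ranges over more general smoothing distributions, for which this formula is only the Gaussian special case.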
Fri 1:15 p.m. - 1:30 p.m. | Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis (Talk)
A variety of methods have been proposed to try to explain how deep neural networks make their decisions. Key to those approaches is the need to sample the pixel space efficiently in order to derive importance maps. However, it has been shown that the sampling methods used to date introduce biases and other artifacts, leading to inaccurate estimates of the importance of individual pixels and severely limiting the reliability of current explainability methods. Unfortunately, the alternative -- exhaustively sampling the image space -- is computationally prohibitive. In this paper, we introduce EVA (Explaining using Verified perturbation Analysis) -- the first explainability method guaranteed to explore a perturbation space exhaustively. Specifically, we leverage the beneficial properties of verified perturbation analysis -- time efficiency, tractability and guaranteed complete coverage of a manifold -- to efficiently characterize the input variables that are most likely to drive the model decision. We evaluate the approach systematically and demonstrate state-of-the-art results on multiple benchmarks.
Melanie Ducoffe · David Vigouroux · Thomas Serre · Remi Cadene · Thomas FEL · Mikael Capelle
Fri 1:30 p.m. - 2:00 p.m. | Afternoon Poster Session (Poster Session)
Fri 2:00 p.m. - 2:45 p.m. | Panel (Discussion Panel)
Fri 2:45 p.m. - 3:00 p.m. | Closing Remarks (Closing)
- On Quantum Computing for Neural Network Robustness Verification (Poster)
In recent years, a multitude of approaches to certify the prediction of neural networks have been proposed. Classically, complete verification techniques struggle with large networks as the combinatorial space grows exponentially, implying that realistic networks are difficult to verify. In this paper, we propose a new hybrid approach to the robustness verification of ReLU networks with quantum computers. By applying Benders decomposition, the verification problem is split into a quadratic unconstrained binary optimization and a linear program, which are solved by quantum and classical computers, respectively. Further, we improve existing hybrid methods based on the Benders decomposition by reducing the overall number of iterations and placing a limit on the maximum number of qubits required. We show that, in a simulated environment, our certificate is sound, and provide bounds on the minimum number of qubits necessary to obtain a reasonable approximation.
Nicola Franco · Tom Wollschläger · Jeanette Lorenz · Stephan Günnemann
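For background on where the binary and continuous parts come from: complete ReLU verification is commonly written as a mixed-integer program in which each unstable unit $y = \max(0, x)$ with pre-activation bounds $l \le x \le u$ ($l < 0 < u$) is encoded with one binary phase variable,
$$ y \ge 0, \quad y \ge x, \quad y \le u\,a, \quad y \le x - l\,(1 - a), \quad a \in \{0, 1\}. $$
A Benders-style decomposition can then separate the binary variables (the part mapped to a QUBO for the quantum solver) from the remaining linear program solved classically; the exact decomposition used in the paper may differ from this generic encoding.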
- Verification-friendly Networks: the Case for Parametric ReLUs (Poster)
It has increasingly been recognised that verification can contribute to the validation and debugging of neural networks before deployment, particularly in safety-critical areas. While considerable progress has been made in recent years, present techniques still do not scale to large architectures used in many applications. In this paper we show that substantial gains can be obtained by employing Parametric ReLU activation functions in lieu of plain ReLU functions. We give training procedures that produce networks which achieve one order of magnitude gain in verification overheads and 30-100% fewer timeouts with a state-of-the-art Symbolic Interval Propagation-based verification toolkit, while not compromising the resulting accuracy. Furthermore, we show that adversarial training combined with our approach improves certified robustness by up to 36% compared to adversarial training performed on baseline ReLU networks.
Patrick Henriksen · Francesco Leofante · Alessio Lomuscio
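For reference, the Parametric ReLU referred to above replaces the flat zero branch of ReLU with a learnable slope $a$:
$$ \mathrm{PReLU}_a(x) = \begin{cases} x, & x \ge 0, \\ a\,x, & x < 0, \end{cases} $$
which keeps the activation piecewise linear, and hence compatible with symbolic interval propagation, while intuitively narrowing the gap between the linear relaxations of the two branches for unstable neurons.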
- Formal Privacy Guarantees for Neural Network queries by estimating local Lipschitz constant (Poster)
Cloud based machine learning inference is an emerging paradigm where users share their data with a service provider. Due to increased concerns over data privacy, several recent works have proposed using Adversarial Representation Learning (ARL) to learn a privacy-preserving encoding of sensitive user data before it is shared with an untrusted service provider. Traditionally, the privacy of these encodings is evaluated empirically as they lack formal guarantees. In this work, we develop a framework that provides formal privacy guarantees for an arbitrarily trained neural network by linking its local Lipschitz constant with its local sensitivity. To use local sensitivity for guaranteeing privacy, we extend the Propose-Test-Release (PTR) framework to make it compatible and tractable for neural network based queries. We verify the efficacy of our framework on real world datasets, and elucidate the role of ARL in improving the privacy-utility tradeoff.
Abhishek Singh · Praneeth Vepakomma · Vivek Sharma · Ramesh Raskar
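The link between the local Lipschitz constant and local sensitivity used above can be stated generically (this is a standard inequality, not the paper's exact definitions): if the query $f$ is $L(x)$-Lipschitz on a neighborhood containing the input $x$ and any admissible perturbed input $x'$, then
$$ \| f(x) - f(x') \| \le L(x)\, \| x - x' \|, $$
so an upper bound on the local Lipschitz constant bounds how much the query output can change between neighboring inputs, which is the quantity that Propose-Test-Release-style mechanisms calibrate their noise against.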
- Sound randomized smoothing in floating-point arithmetics (Poster)
Randomized smoothing is sound when using infinite precision. However, we show that randomized smoothing is no longer sound for limited floating-point precision. We present a simple example where randomized smoothing certifies a radius of $1.26$ around a point, even though there is an adversarial example at distance $0.8$, and we extend this example further to provide false certificates for CIFAR10. We discuss the implicit assumptions of randomized smoothing and show that they do not apply to generic image classification models whose smoothed versions are commonly certified. In order to overcome this problem, we propose a sound approach to randomized smoothing when using floating-point precision, with essentially equal speed and with certificates matching those of the standard, unsound practice for the standard classifiers tested so far. Our only assumption is that we have access to a fair coin.
Václav Voráček · Matthias Hein
- Optimized Symbolic Interval Propagation for Neural Network Verification (Poster)
Neural networks are increasingly applied in safety critical domains, and their verification is thus gaining importance. A large class of recent algorithms for proving input-output relations of feed-forward neural networks are based on linear relaxations and symbolic interval propagation. However, due to variable dependencies, the approximations deteriorate with increasing depth of the network. In this paper we present DPNeurifyFV, a novel branch-and-bound solver for ReLU networks with low dimensional input space that is based on symbolic interval propagation with fresh variables and input splitting. A new heuristic for choosing the fresh variables allows us to ameliorate the dependency problem, while our novel splitting heuristic, in combination with several other improvements, speeds up the branch-and-bound procedure. We evaluate our approach on the airborne collision avoidance networks ACAS Xu and demonstrate runtime improvements compared to state-of-the-art tools.
Philipp Kern · Marko Kleine Büning · Carsten Sinz
- Sound and Complete Verification of Polynomial Networks (Poster)
Polynomial Networks (PNs) have recently demonstrated promising performance on face and image recognition. However, the robustness of PNs is unclear, and thus obtaining certificates becomes imperative for enabling their adoption in real-world applications. Existing verification algorithms on ReLU neural networks (NNs) based on branch and bound (BaB) techniques cannot be trivially applied to PN verification. In this work, we devise a new bounding method, equipped with BaB for global convergence guarantees, called VPN. One key insight is that we obtain much tighter bounds than the interval bound propagation baseline. This enables sound and complete PN verification with empirical validation on the MNIST, CIFAR10 and STL10 datasets. We believe our method is of independent interest for NN verification.
Elias Abad Rocamora · Mehmet Fatih Sahin · Fanghui Liu · Grigorios Chrysos · Volkan Cevher
- Safety Verification and Repair of Deep Neural Networks (Poster)
This paper presents the Veritex tool for reachability analysis and repair of deep neural networks (DNNs). Veritex includes methods for exact (sound and complete) analysis and over-approximative (sound and incomplete) reachability analysis of DNNs using novel set representations, including the facet-vertex incidence matrix, face lattice, and $\mathcal{V}$-zono. In addition to sound and complete safety verification of DNNs, these methods can also efficiently compute the exact output reachable domain, as well as the exact unsafe input space that causes safety violations of DNNs in the output. More importantly, based on the exact unsafe input-output reachable domain, Veritex can repair unsafe DNNs on multiple safety properties with negligible performance degradation. The repair is conducted by updating the DNN parameters through retraining. The approach also works in the absence of a safe model reference and the original dataset for learning. Veritex primarily addresses the issue of constructing provably safe DNNs, which is not yet significantly addressed in most of the current formal methods for trustworthy artificial intelligence (AI). The utility of Veritex is evaluated for these two aspects, specifically safety verification and DNN repair. Benchmarks for verification include the ACAS Xu networks, and benchmarks for the repair include unsafe ACAS Xu networks and an unsafe agent trained in deep reinforcement learning (DRL).
Xiaodong Yang · Tomoya Yamaguchi · Bardh Hoxha · Danil Prokhorov · Taylor Johnson
- Robustness Verification for Contrastive Learning (Poster)
Contrastive adversarial training has successfully improved the robustness of contrastive learning (CL). However, the robustness metric used in these methods is linked to attack algorithms, image labels and downstream tasks, all of which may affect the consistency and reliability of the robustness metric for CL. To address these problems, this paper proposes a novel Robustness Verification framework for Contrastive Learning (RVCL). Furthermore, we use extreme value theory to reveal the relationship between the robust radius of the CL encoder and that of the supervised downstream task. Extensive experimental results on various benchmark models and datasets verify our theoretical findings, and further demonstrate that our proposed RVCL is able to evaluate the robustness of both models and images.
Zekai Wang · Weiwei Liu
- CertiFair: A Framework for Certified Global Fairness of Neural Networks (Poster)
We consider the problem of whether a Neural Network (NN) model satisfies global individual fairness. Individual fairness suggests that similar individuals with respect to a certain task are to be treated similarly by the decision model. In this work, we have two main objectives. The first is to construct a verifier which checks whether the fairness property holds for a given NN in a classification task, or provides a counterexample if it is violated, i.e., the model is fair if all similar individuals are classified the same, and unfair if a pair of similar individuals are classified differently. To that end, we construct a sound and complete verifier that verifies global individual fairness properties of ReLU NN classifiers using distance-based similarity metrics. The second objective of this paper is to provide a method for training provably fair NN classifiers from unfair (biased) data. We propose a fairness loss that can be used during training to enforce fair outcomes for similar individuals. We then provide provable bounds on the fairness of the resulting NN. We run experiments on commonly used fairness datasets that are publicly available and show that global individual fairness can be improved by 96% without a significant drop in test accuracy.
Haitham Khedr · Yasser Shoukry
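The global individual fairness property being verified can be formalized as follows (formalization ours, matching the description above): for a classifier $f$, a distance-based similarity metric $d$, and a similarity threshold $\epsilon$,
$$ \forall x, x' :\; d(x, x') \le \epsilon \;\Rightarrow\; f(x) = f(x'), $$
and a counterexample returned by the verifier is any pair of similar individuals that the model classifies differently.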
- Neural Network Compression of ACAS Xu Early Prototype is Unsafe: Closed-Loop Verification through Quantized State Backreachability (Poster)
ACAS Xu is an air-to-air collision avoidance system designed for unmanned aircraft that issues horizontal turn advisories to avoid an intruder aircraft. Analysis of this system has spurred a significant body of research in the formal methods community on neural network verification. While many powerful methods have been developed, most work focuses on open-loop properties of the networks, rather than the main point of the system—collision avoidance—which requires closed-loop analysis. In this work, we develop a technique to verify a closed-loop approximation of the system using state quantization and back-reachability. We use favorable assumptions for the analysis—perfect sensor information, instant following of advisories, ideal aircraft maneuvers, and an intruder that only flies straight. When the method fails to prove the system is safe, we refine the quantization parameters until generating counterexamples where the original (non-quantized) system also has collisions.
Stanley Bak · Dung Tran
- Programmatic Reinforcement Learning with Formal Verification (Poster)
We present Marvel, a verification-based reinforcement learning framework that synthesizes safe programmatic controllers for environments with continuous state and action spaces. The key idea is the integration of program reasoning techniques into reinforcement learning training loops. Marvel performs abstraction-based program verification to reason about a programmatic controller and its environment as a closed-loop system. Based on a novel verification-guided synthesis loop for training, Marvel minimizes the amount of safety violation in the system abstraction, which approximates the worst-case safety loss, using gradient-descent style optimization. Experimental results demonstrate the substantial benefits of leveraging verification feedback for the safe-by-construction synthesis of programmatic controllers.
Yuning Wang · He Zhu
- Toward Certified Robustness Against Real-World Distribution Shifts (Poster)
We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts. To do so, we bridge the gap between hand-crafted specifications and realistic deployment settings by proposing a novel neural-symbolic verification framework, in which we train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model. A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations, which are fundamental to many state-of-the-art generative models. To address this challenge, we propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement. The key idea is to "lazily" refine the abstraction of sigmoid functions to exclude spurious counter-examples found in the previous abstraction, thus guaranteeing progress in the verification process while keeping the state-space small. Experiments on the MNIST and CIFAR-10 datasets show that our framework significantly outperforms existing methods on a range of challenging distribution shifts.
Haoze Wu · TERUHIRO TAGOMORI · Alex Robey · Fengjun Yang · Nikolai Matni · George J. Pappas · Hamed Hassani · Corina Pasareanu · Clark Barrett
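The "lazy" refinement of sigmoid abstractions follows the classical counter-example-guided abstraction refinement (CEGAR) pattern, sketched below in generic form; the verifier interface and the refinement routine are placeholders for exposition, not the paper's implementation.

def cegar_verify(spec, abstraction, verify, is_real_counterexample, refine, max_iters=50):
    # Generic CEGAR loop: verify against an abstraction and refine it only where
    # spurious counterexamples are found, keeping the abstract state space small.
    for _ in range(max_iters):
        result, cex = verify(spec, abstraction)   # sound with respect to the abstraction
        if result == "safe":
            return ("verified", None)             # safe abstraction implies safe system
        if is_real_counterexample(cex):
            return ("falsified", cex)             # genuine violation of the specification
        abstraction = refine(abstraction, cex)    # exclude the spurious counterexample
    return ("unknown", None)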
- Verification of Neural Ordinary Differential Equations using Reachability Analysis (Poster)
Continuous deep learning models, referred to as Neural Ordinary Differential Equations (Neural ODEs), have received considerable attention over the last several years. Despite their burgeoning impact, there is a lack of formal analysis techniques for these systems. In this paper, we consider a general class of neural ODEs with varying architectures and layers, and introduce a novel reachability framework that allows for the formal analysis of their behavior. The methods developed for the reachability analysis of neural ODEs are implemented in a new tool called NNVODE. Specifically, our work extends an existing neural network verification tool to support neural ODEs. We demonstrate the capabilities of our methods through the analysis of a set of benchmarks that include neural ODEs used for classification, and in control and dynamical systems, including an evaluation of the efficacy and capabilities of our approach with respect to existing software tools within the continuous-time systems reachability literature, when it is possible to do so.
Diego Manzanas Lopez · Patrick Musau · Nathaniel Hamilton · Taylor T Johnson
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach (Poster)
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks based upon non-Euclidean contraction theory. The basic idea is to cast the robustness analysis of a neural network as a reachability problem and use (i) the $\ell_{\infty}$-norm input-output Lipschitz constant and (ii) the tight inclusion function of the network to over-approximate its reachable sets. First, for a given implicit neural network, we use $\ell_{\infty}$-matrix measures to propose sufficient conditions for its well-posedness, design an iterative algorithm to compute its fixed points, and provide upper bounds for its $\ell_\infty$-norm input-output Lipschitz constant. Second, we introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network. Moreover, we use the embedded network to design an iterative algorithm for computing the upper bounds of the original system's tight inclusion function. Third, we use the upper bounds of the Lipschitz constants and the upper bounds of the tight inclusion functions to design two algorithms for the training and robustness verification of implicit neural networks. Finally, we apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with models trained via existing approaches in the literature.
Saber Jafarpour · Alexander Davydov · Matthew Abate · Francesco Bullo · Samuel Coogan
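For readers unfamiliar with implicit models: an implicit neural network is defined by a fixed-point equation rather than a feed-forward composition, in its common generic form (notation ours, not necessarily the paper's)
$$ z = \phi(W z + U x + b), \qquad \hat{y} = C z + c, $$
where the hidden state $z$ is a fixed point of the layer map for input $x$. Well-posedness asks that this fixed point exist and be unique, which the paper certifies via $\ell_\infty$-matrix-measure conditions; the embedded network mentioned above then yields componentwise (box) bounds on $z$, and hence on the outputs, for a box of inputs.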
- Improving adversarial robustness via joint classification and multiple explicit detection classes (Poster)
This work concerns the development of deep networks that are certifiably robust to adversarial attacks. Joint robust classification-detection was recently introduced as a certified defense mechanism, where adversarial examples are either correctly classified or assigned to the …
Sina Baharlouei · Fatemeh Sheikholeslami · Meisam Razaviyayn · Zico Kolter
- ReCIPH: Relational Coefficients for Input Partitioning Heuristic (Poster)
With the rapidly advancing improvements to the already successful Deep Learning artifacts, Neural Networks (NN) are poised to permeate a growing number of everyday applications, including ones where safety is paramount and, therefore, formal guarantees are a precious commodity. To this end, Formal Methods, a long-standing, mathematically-inspired field of research, saw an effervescent outgrowth targeting NN and advancing almost as rapidly as AI itself. Without a doubt, the most challenging problem facing this new research direction is the scalability to the ever-growing NN models. This paper stems from this need and introduces Relational Coefficients for Input partitioning Heuristic (ReCIPH), accelerating NN analysis. Extensive experimentation is supplied to assert the added value to two different solvers handling several models and properties (coming, in part, from two industrial use-cases).
Serge Durand · Augustin Lemesle
- Certified Robustness Against Natural Language Attacks by Causal Intervention (Poster)
Deep learning models have achieved great success in many fields, yet they are vulnerable to adversarial examples. This paper follows a causal perspective to look into the adversarial vulnerability and proposes Causal Intervention by Semantic Smoothing (CISS), a novel framework towards robustness against natural language attacks. Instead of merely fitting observational data, CISS learns causal effects p(y|do(x)) by smoothing in the latent semantic space to make robust predictions, which scales to deep architectures and avoids tedious construction of noise customized for specific attacks. CISS is provably robust against word substitution attacks, as well as empirically robust even when perturbations are strengthened by unknown attack algorithms. For example, on YELP, CISS surpasses the runner-up by 6.7% in terms of certified robustness against word substitutions, and achieves 79.4% empirical robustness when syntactic attacks are integrated.
Haiteng Zhao · Chang Ma · Xinshuai Dong · Anh Tuan Luu · Zhi-Hong Deng · Hanwang Zhang
- Towards Optimal Randomized Smoothing: A Semi-Infinite Linear Programming Approach (Poster)
Randomized smoothing is a leading approach to producing certifiably robust classifiers. The goal of optimal randomized smoothing is to maximize the average certified radius over the space of smoothing distributions. We theoretically study this problem through the lens of infinite-dimensional optimization over measure spaces, and prove that the nonconvex infinite program is lower-bounded by a conic linear program wherein the classifier's confidence acts as a surrogate objective to optimize. A semi-infinite linear programming approximation to the problem is presented, whose sub-problems are proven to attain nontrivial strong duality. A proof-of-concept experiment demonstrates the effectiveness of the proposed approach.
Brendon G. Anderson · Samuel Pfrommer · Somayeh Sojoudi
- Backward Reachability for Neural Feedback Loops (Poster)
The increasing prevalence of neural networks (NNs) in safety-critical applications calls for methods to certify their behavior and guarantee safety. This paper presents a backward reachability approach for safety verification of neural feedback loops (NFLs), i.e., closed-loop systems with NN control policies. While recent works have focused on forward reachability as a strategy for safety certification of NFLs, backward reachability offers advantages over the forward strategy, particularly in obstacle avoidance scenarios. Prior works have developed techniques for backward reachability analysis for systems without NNs, but the presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible. To overcome these challenges, we use existing forward NN analysis tools to find affine bounds on the control inputs and solve a series of linear programs (LPs) to efficiently find an approximation of the backprojection (BP) set, i.e., the set of states for which the NN control policy will drive the system to a given target set. We present an algorithm to iteratively find BP set estimates over a given time horizon and demonstrate the ability to reduce conservativeness in the BP set estimates by up to 88\% with low additional computational cost. We use numerical results from a double integrator model to verify the efficacy of these algorithms and demonstrate the ability to certify safety for a linearized ground robot model in a collision avoidance scenario where forward reachability fails.
Nicholas Rober · Michael Everett · Jonathan How
- IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound (Poster)
Recent works have tried to increase the verifiability of adversarially trained networks by running the attacks over domains larger than the original perturbations and adding various regularization terms to the objective. However, these algorithms either underperform or require complex and expensive stage-wise training procedures, hindering their practical applicability. We present IBP-R, a novel verified training algorithm that is both simple and effective. IBP-R induces network verifiability by coupling adversarial attacks on enlarged domains with a regularization term, based on inexpensive interval bound propagation, that minimizes the gap between the non-convex verification problem and its approximations. By leveraging recent branch-and-bound frameworks, we show that IBP-R obtains state-of-the-art verified robustness-accuracy trade-offs for small perturbations on CIFAR-10 while training significantly faster than relevant previous work. Additionally, we present UPB, a novel branching strategy that, relying on a simple heuristic based on $\beta$-CROWN, reduces the cost of state-of-the-art branching algorithms while yielding splits of comparable quality.
Alessandro De Palma · Rudy Bunel · Krishnamurthy Dvijotham · M. Pawan Kumar · Robert Stanforth
- Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation (Poster)
Data poisoning attacks aim at manipulating model behaviors through distorting training data. Previously, an aggregation-based certified defense, Deep Partition Aggregation (DPA), was proposed to mitigate this threat. DPA predicts through an aggregation of base classifiers trained on disjoint subsets of data, thus restricting its sensitivity to dataset distortions. In this work, we propose an improved certified defense against general poisoning attacks, namely Finite Aggregation. In contrast to DPA, which directly splits the training set into disjoint subsets, our method first splits the training set into smaller disjoint subsets and then combines duplicates of them to build larger (but not disjoint) subsets for training base classifiers. This reduces the worst-case impacts of poison samples and thus improves certified robustness bounds. In addition, we offer an alternative view of our method, bridging the designs of deterministic and stochastic aggregation-based certified defenses. Empirically, our proposed Finite Aggregation consistently improves certificates on MNIST, CIFAR-10, and GTSRB, boosting certified fractions by up to 3.05%, 3.87% and 4.77%, respectively, while keeping the same clean accuracies as DPA's, effectively establishing a new state of the art in (pointwise) certified robustness against data poisoning.
Wenxiao Wang · Alexander Levine · Soheil Feizi
- Characterizing Neural Network Verification for Systems with NN4SysBench (Poster)
We present NN4SysBench, a benchmark suite for neural network verification, comprising benchmarks from neural networks for systems (or NN4Sys). NN4Sys is booming: there are hundreds of proposals for using neural networks in computer systems—databases, OSes, and networked systems—that are safety critical. We observe that NN4Sys has some unique characteristics that today's neural network verification tools overlook. This benchmark aims at bridging the gap between NN4Sys and NN-verification by tailoring impactful NN4Sys instances to benchmarks that today's NN-verification tools can work on.
Haoyu He · Cheng Tan
- Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis (Poster)
A variety of methods have been proposed to try to explain how deep neural networks make their decisions. Key to those approaches is the need to sample the pixel space efficiently in order to derive importance maps. However, it has been shown that the sampling methods used to date introduce biases and other artifacts, leading to inaccurate estimates of the importance of individual pixels and severely limiting the reliability of current explainability methods. Unfortunately, the alternative -- exhaustively sampling the image space -- is computationally prohibitive. In this paper, we introduce EVA (Explaining using Verified perturbation Analysis) -- the first explainability method guaranteed to explore a perturbation space exhaustively. Specifically, we leverage the beneficial properties of verified perturbation analysis -- time efficiency, tractability and guaranteed complete coverage of a manifold -- to efficiently characterize the input variables that are most likely to drive the model decision. We evaluate the approach systematically and demonstrate state-of-the-art results on multiple benchmarks.
Melanie Ducoffe · David Vigouroux · Thomas Serre · Remi Cadene · Mikael Capelle · Thomas FEL
Author Information
Huan Zhang (CMU)
Leslie Rice (Carnegie Mellon University)
Kaidi Xu (Drexel University)
Aditi Raghunathan (Stanford University)
Wan-Yi Lin (Robert Bosch LLC)
Cho-Jui Hsieh (UCLA)
Clark Barrett (Stanford University)
Martin Vechev (ETH Zurich)
Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
More from the Same Authors
-
2021 : Empirical robustification of pre-trained classifiers »
Mohammad Sadegh Norouzzadeh · Wan-Yi Lin · Leonid Boytsov · Leslie Rice · Huan Zhang · Filipe Condessa · Zico Kolter -
2021 : Certified robustness against adversarial patch attacks via randomized cropping »
Wan-Yi Lin · Fatemeh Sheikholeslami · jinghao shi · Leslie Rice · Zico Kolter -
2021 : Fast Certified Robust Training with Short Warmup »
Zhouxing Shi · Yihan Wang · Huan Zhang · Jinfeng Yi · Cho-Jui Hsieh -
2021 : Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification »
Shiqi Wang · Huan Zhang · Kaidi Xu · Xue Lin · Suman Jana · Cho-Jui Hsieh · Zico Kolter -
2021 : Assessing Generalization of SGD via Disagreement Rates »
YiDing Jiang · Vaishnavh Nagarajan · Zico Kolter -
2021 : Automated Discovery of Adaptive Attacks on Adversarial Defenses »
Chengyuan Yao · Pavol Bielik · Petar Tsankov · Martin Vechev -
2021 : Automating Power Networks: Improving RL Agent Robustness with Adversarial Training »
Alexander Pan · Yongkyun Lee · Huan Zhang -
2022 : Characterizing Datapoints via Second-Split Forgetting »
Pratyush Maini · Saurabh Garg · Zachary Lipton · Zico Kolter -
2022 : Toward Certified Robustness Against Real-World Distribution Shifts »
Haoze Wu · TERUHIRO TAGOMORI · Alex Robey · Fengjun Yang · Nikolai Matni · George J. Pappas · Hamed Hassani · Corina Pasareanu · Clark Barrett -
2022 : Improving adversarial robustness via joint classification and multiple explicit detection classes »
Sina Baharlouei · Fatemeh Sheikholeslami · Meisam Razaviyayn · Zico Kolter -
2022 : Agreement-on-the-Line: Predicting the Performance of Neural Networks under Distribution Shift »
Christina Baek · Yiding Jiang · aditi raghunathan · Zico Kolter -
2023 Poster: Can Neural Network Memorization Be Localized? »
Pratyush Maini · Michael Mozer · Hanie Sedghi · Zachary Lipton · Zico Kolter · Chiyuan Zhang -
2023 Poster: Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory »
Justin Cui · Ruochen Wang · Si Si · Cho-Jui Hsieh -
2023 Poster: TabLeak: Tabular Data Leakage in Federated Learning »
Mark Vero · Mislav Balunovic · Dimitar I. Dimitrov · Martin Vechev -
2023 Poster: FARE: Provably Fair Representation Learning with Practical Certificates »
Nikola Jovanović · Mislav Balunovic · Dimitar I. Dimitrov · Martin Vechev -
2023 Poster: Low-Variance Gradient Estimation in Unrolled Computation Graphs with ES-Single »
Paul Vicol · Zico Kolter · Kevin Swersky -
2023 Poster: Representer Point Selection for Explaining Regularized High-dimensional Models »
Che-Ping Tsai · Jiong Zhang · Hsiang-Fu Yu · Eli Chien · Cho-Jui Hsieh · Pradeep Ravikumar -
2023 Poster: Mimetic Initialization of Self-Attention Layers »
Asher Trockman · Zico Kolter -
2023 Poster: Towards Robust and Safe Reinforcement Learning with Benign Off-policy Data »
Zuxin Liu · Zijian Guo · Zhepeng Cen · Huan Zhang · Yihang Yao · Hanjiang Hu · Ding Zhao -
2023 Poster: Abstracting Imperfect Information Away from Two-Player Zero-Sum Games »
Samuel Sokota · Ryan D'Orazio · Chun Kai Ling · David Wu · Zico Kolter · Noam Brown -
2023 Poster: PINA: Leveraging Side Information in eXtreme Multi-label Classification via Predicted Instance Neighborhood Aggregation »
Eli Chien · Jiong Zhang · Cho-Jui Hsieh · Jyun-Yu Jiang · Wei-Cheng Chang · Olgica Milenkovic · Hsiang-Fu Yu -
2023 Oral: Mimetic Initialization of Self-Attention Layers »
Asher Trockman · Zico Kolter -
2023 Workshop: 2nd Workshop on Formal Verification of Machine Learning »
Mark Müller · Brendon G. Anderson · Leslie Rice · Zhouxing Shi · Shubham Ugare · Huan Zhang · Martin Vechev · Zico Kolter · Somayeh Sojoudi · Cho-Jui Hsieh -
2022 Affinity Workshop: Queer in AI @ ICML 2022 Affinity Workshop »
Huan Zhang · Arjun Subramonian · Sharvani Jha · William Agnew · Krunoslav Lehman Pavasovic -
2022 : Paper 15: On the Robustness of Safe Reinforcement Learning under Observational Perturbations »
Zuxin Liu · Zhepeng Cen · Huan Zhang · Jie Tan · Bo Li · Ding Zhao -
2022 : Characterizing Neural Network Verification for Systems with NN4SysBench »
Haoyu He · Tianhao Wei · Huan Zhang · Changliu Liu · Cheng Tan -
2022 Poster: A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks »
Huan Zhang · Shiqi Wang · Kaidi Xu · Yihan Wang · Suman Jana · Cho-Jui Hsieh · Zico Kolter -
2022 Poster: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness »
Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang -
2022 Poster: On Distribution Shift in Learning-based Bug Detectors »
Jingxuan He · Luca Beurer-Kellner · Martin Vechev -
2022 Spotlight: On Distribution Shift in Learning-based Bug Detectors »
Jingxuan He · Luca Beurer-Kellner · Martin Vechev -
2022 Spotlight: A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks »
Huan Zhang · Shiqi Wang · Kaidi Xu · Yihan Wang · Suman Jana · Cho-Jui Hsieh · Zico Kolter -
2022 Spotlight: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness »
Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang -
2022 Poster: Communicating via Markov Decision Processes »
Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster -
2022 Spotlight: Communicating via Markov Decision Processes »
Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster -
2022 Social: Black in AI and Queer in AI Joint Social Event »
Victor Silva · Huan Zhang · Nathaniel Rose · Arjun Subramonian · Krunoslav Lehman Pavasovic · Ana Da Hora -
2021 : Contributed Talk #5 »
Wan-Yi Lin -
2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning »
Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian -
2021 Poster: Overcoming Catastrophic Forgetting by Bayesian Generative Regularization »
PEI-HUNG Chen · Wei Wei · Cho-Jui Hsieh · Bo Dai -
2021 Spotlight: Overcoming Catastrophic Forgetting by Bayesian Generative Regularization »
PEI-HUNG Chen · Wei Wei · Cho-Jui Hsieh · Bo Dai -
2021 Poster: TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer »
Berkay Berabi · Jingxuan He · Veselin Raychev · Martin Vechev -
2021 Poster: Scalable Certified Segmentation via Randomized Smoothing »
Marc Fischer · Maximilian Baader · Martin Vechev -
2021 Spotlight: TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer »
Berkay Berabi · Jingxuan He · Veselin Raychev · Martin Vechev -
2021 Spotlight: Scalable Certified Segmentation via Randomized Smoothing »
Marc Fischer · Maximilian Baader · Martin Vechev -
2021 Poster: DORO: Distributional and Outlier Robust Optimization »
Runtian Zhai · Chen Dan · Zico Kolter · Pradeep Ravikumar -
2021 Poster: RATT: Leveraging Unlabeled Data to Guarantee Generalization »
Saurabh Garg · Sivaraman Balakrishnan · Zico Kolter · Zachary Lipton -
2021 Spotlight: DORO: Distributional and Outlier Robust Optimization »
Runtian Zhai · Chen Dan · Zico Kolter · Pradeep Ravikumar -
2021 Oral: RATT: Leveraging Unlabeled Data to Guarantee Generalization »
Saurabh Garg · Sivaraman Balakrishnan · Zico Kolter · Zachary Lipton -
2021 Poster: On Proximal Policy Optimization's Heavy-tailed Gradients »
Saurabh Garg · Joshua Zhanson · Emilio Parisotto · Adarsh Prasad · Zico Kolter · Zachary Lipton · Sivaraman Balakrishnan · Ruslan Salakhutdinov · Pradeep Ravikumar -
2021 Poster: Stabilizing Equilibrium Models by Jacobian Regularization »
Shaojie Bai · Vladlen Koltun · Zico Kolter -
2021 Spotlight: On Proximal Policy Optimization's Heavy-tailed Gradients »
Saurabh Garg · Joshua Zhanson · Emilio Parisotto · Adarsh Prasad · Zico Kolter · Zachary Lipton · Sivaraman Balakrishnan · Ruslan Salakhutdinov · Pradeep Ravikumar -
2021 Spotlight: Stabilizing Equilibrium Models by Jacobian Regularization »
Shaojie Bai · Vladlen Koltun · Zico Kolter -
2021 Poster: PODS: Policy Optimization via Differentiable Simulation »
Miguel Angel Zamora Mora · Momchil Peychev · Sehoon Ha · Martin Vechev · Stelian Coros -
2021 Spotlight: PODS: Policy Optimization via Differentiable Simulation »
Miguel Angel Zamora Mora · Momchil Peychev · Sehoon Ha · Martin Vechev · Stelian Coros -
2020 : Invited Talk: Zico Kolter (Q&A) »
Zico Kolter -
2020 : Invited Talk: Zico Kolter »
Zico Kolter -
2020 Poster: On Lp-norm Robustness of Ensemble Decision Stumps and Trees »
Yihan Wang · Huan Zhang · Hongge Chen · Duane Boning · Cho-Jui Hsieh -
2020 Poster: Learning to Encode Position for Transformer with Continuous Dynamical Model »
Xuanqing Liu · Hsiang-Fu Yu · Inderjit Dhillon · Cho-Jui Hsieh -
2020 Poster: Adversarial Robustness Against the Union of Multiple Perturbation Models »
Pratyush Maini · Eric Wong · Zico Kolter -
2020 Poster: Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction »
Filipe de Avila Belbute-Peres · Thomas Economon · Zico Kolter -
2020 Poster: Adversarial Robustness for Code »
Pavol Bielik · Martin Vechev -
2020 Poster: Adversarial Attacks on Probabilistic Autoregressive Forecasting Models »
Raphaël Dang-Nhu · Gagandeep Singh · Pavol Bielik · Martin Vechev -
2020 Poster: Certified Robustness to Label-Flipping Attacks via Randomized Smoothing »
Elan Rosenfeld · Ezra Winston · Pradeep Ravikumar · Zico Kolter -
2020 Poster: Stabilizing Differentiable Architecture Search via Perturbation-based Regularization »
Xiangning Chen · Cho-Jui Hsieh -
2020 Poster: An Investigation of Why Overparameterization Exacerbates Spurious Correlations »
Shiori Sagawa · aditi raghunathan · Pang Wei Koh · Percy Liang -
2020 Poster: Overfitting in adversarially robust deep learning »
Leslie Rice · Eric Wong · Zico Kolter -
2019 Poster: Certified Adversarial Robustness via Randomized Smoothing »
Jeremy Cohen · Elan Rosenfeld · Zico Kolter -
2019 Poster: Wasserstein Adversarial Examples via Projected Sinkhorn Iterations »
Eric Wong · Frank R Schmidt · Zico Kolter -
2019 Oral: Wasserstein Adversarial Examples via Projected Sinkhorn Iterations »
Eric Wong · Frank R Schmidt · Zico Kolter -
2019 Oral: Certified Adversarial Robustness via Randomized Smoothing »
Jeremy Cohen · Elan Rosenfeld · Zico Kolter -
2019 Poster: SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver »
Po-Wei Wang · Priya Donti · Bryan Wilder · Zico Kolter -
2019 Poster: Robust Decision Trees Against Adversarial Examples »
Hongge Chen · Huan Zhang · Duane Boning · Cho-Jui Hsieh -
2019 Poster: Adversarial camera stickers: A physical camera-based attack on deep learning systems »
Juncheng Li · Frank R Schmidt · Zico Kolter -
2019 Poster: DL2: Training and Querying Neural Networks with Logic »
Marc Fischer · Mislav Balunovic · Dana Drachsler-Cohen · Timon Gehr · Ce Zhang · Martin Vechev -
2019 Oral: DL2: Training and Querying Neural Networks with Logic »
Marc Fischer · Mislav Balunovic · Dana Drachsler-Cohen · Timon Gehr · Ce Zhang · Martin Vechev -
2019 Oral: SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver »
Po-Wei Wang · Priya Donti · Bryan Wilder · Zico Kolter -
2019 Oral: Robust Decision Trees Against Adversarial Examples »
Hongge Chen · Huan Zhang · Duane Boning · Cho-Jui Hsieh -
2019 Oral: Adversarial camera stickers: A physical camera-based attack on deep learning systems »
Juncheng Li · Frank R Schmidt · Zico Kolter -
2018 Poster: Training Neural Machines with Trace-Based Supervision »
Matthew Mirman · Dimitar Dimitrov · Pavle Djordjevic · Timon Gehr · Martin Vechev -
2018 Oral: Training Neural Machines with Trace-Based Supervision »
Matthew Mirman · Dimitar Dimitrov · Pavle Djordjevic · Timon Gehr · Martin Vechev -
2018 Poster: Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope »
Eric Wong · Zico Kolter -
2018 Poster: Differentiable Abstract Interpretation for Provably Robust Neural Networks »
Matthew Mirman · Timon Gehr · Martin Vechev -
2018 Oral: Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope »
Eric Wong · Zico Kolter -
2018 Oral: Differentiable Abstract Interpretation for Provably Robust Neural Networks »
Matthew Mirman · Timon Gehr · Martin Vechev -
2017 Poster: Input Convex Neural Networks »
Brandon Amos · Lei Xu · Zico Kolter -
2017 Poster: OptNet: Differentiable Optimization as a Layer in Neural Networks »
Brandon Amos · Zico Kolter -
2017 Poster: A Semismooth Newton Method for Fast, Generic Convex Programming »
Alnur Ali · Eric Wong · Zico Kolter -
2017 Talk: OptNet: Differentiable Optimization as a Layer in Neural Networks »
Brandon Amos · Zico Kolter -
2017 Talk: Input Convex Neural Networks »
Brandon Amos · Lei Xu · Zico Kolter -
2017 Talk: A Semismooth Newton Method for Fast, Generic Convex Programming »
Alnur Ali · Eric Wong · Zico Kolter