Robust perception remains challenging due to both the internal vulnerability of DNNs to adversarial examples and the external uncertainty of sensing data, e.g., sensor placement and motion perturbation. Recent work can only provide probabilistic robustness guarantees, which are insufficient for safety-critical scenarios because of false-positive certificates. To this end, we propose the first deterministic, provable defense framework against camera motion perturbations by extending neural network verification (VNN) from ℓp-bounded perturbations to a parameterized camera motion space for robotics applications. Through dense partitioning of the image projection from a dense 3D point cloud so that all pixels are fully covered, every pixel value can be bounded by linear relaxations using linear programming, which makes camera motion perturbations verifiable and compatible with existing incomplete and complete formal VNN methods for given DNN models. Extensive experiments are conducted on the Metaroom dataset with dense image projection, and our sound and complete method is more computationally efficient than the randomized-smoothing-based method at small perturbation radii.
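To make the bounding idea concrete, the sketch below partitions a one-dimensional camera translation into small sub-intervals, renders a depth image from a 3D point cloud at poses inside each sub-interval, and collects per-pixel lower/upper bounds that could then be handed to an off-the-shelf neural network verifier. This is a minimal, hypothetical illustration only: it uses dense sampling in place of the paper's exact linear relaxations, and all function names, pinhole parameters, and the motion radius are assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code): bound every pixel of a projected
# depth image over a partitioned 1-D camera translation along the optical axis.

def project(points, tz, fx=320.0, fy=320.0, cx=160.0, cy=120.0, h=240, w=320):
    """Render a depth image of a 3D point cloud from a camera translated by tz
    along the z-axis (simple pinhole model, nearest-depth z-buffering)."""
    img = np.full((h, w), np.inf)
    z = points[:, 2] - tz
    valid = z > 1e-3
    u = np.round(fx * points[valid, 0] / z[valid] + cx).astype(int)
    v = np.round(fy * points[valid, 1] / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], z[valid][inside]):
        img[vi, ui] = min(img[vi, ui], zi)
    img[np.isinf(img)] = 0.0  # background pixels
    return img

def pixel_bounds_over_motion(points, radius=0.01, n_partitions=10, n_samples=5):
    """Split the motion interval [-radius, radius] into sub-intervals and bound
    each pixel by the min/max over sampled camera poses in every sub-interval.
    The resulting (lower, upper) images per sub-interval are the kind of input
    an interval- or linear-relaxation-based verifier would consume."""
    edges = np.linspace(-radius, radius, n_partitions + 1)
    bounds = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        imgs = np.stack([project(points, tz) for tz in np.linspace(lo, hi, n_samples)])
        bounds.append((imgs.min(axis=0), imgs.max(axis=0)))
    return bounds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform([-1, -1, 2], [1, 1, 5], size=(5000, 3))  # toy point cloud
    lb, ub = pixel_bounds_over_motion(cloud)[0]
    print("first sub-interval: max per-pixel gap =", float((ub - lb).max()))
```

Finer partitions shrink the per-pixel gap between lower and upper bounds, which is what lets the certified radius be verified piecewise over the whole camera motion space.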
Author Information
Hanjiang Hu (Carnegie Mellon University)
Changliu Liu (Carnegie Mellon University)
Ding Zhao (Carnegie Mellon University)
More from the Same Authors
- 2021 : IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance »
  Ruixuan Liu · Changliu Liu
- 2022 : An Optical Pulse Stacking Environment and Reinforcement Learning Benchmarks »
  Abulikemu Abuduweili · Changliu Liu
- 2022 : Group Distributionally Robust Reinforcement Learning with Hierarchical Latent Variables »
  Mengdi Xu · Peide Huang · Visak Kumar · Jielin Qiu · Chao Fang · Kuan-Hui Lee · Xuewei Qi · Henry Lam · Bo Li · Ding Zhao
- 2022 : Paper 22: Multimodal Unsupervised Car Segmentation via Adaptive Aerial Image-to-Image Translation »
  Haohong Lin · Zhepeng Cen · Peide Huang · Hanjiang Hu
- 2022 : Paper 2: SeasonDepth: Cross-Season Monocular Depth Prediction Dataset and Benchmark under Multiple Environments »
  Ding Zhao · Hitesh Arora · Jiacheng Zhu · Zuxin Liu · Wenhao Ding
- 2022 : Paper 10: CausalAF: Causal Autoregressive Flow for Safety-Critical Scenes Generation »
  Wenhao Ding · Haohong Lin · Bo Li · Ding Zhao · Hitesh Arora
- 2023 : DiffScene: Diffusion-Based Safety-Critical Scenario Generation for Autonomous Vehicles »
  Chejian Xu · Ding Zhao · Alberto Sangiovanni Vincentelli · Bo Li
- 2023 : Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation »
  Wenhao Ding · Laixi Shi · Yuejie Chi · Ding Zhao
- 2023 : Learning from Sparse Offline Datasets via Conservative Density Estimation »
  Zhepeng Cen · Zuxin Liu · Zitong Wang · Yihang Yao · Henry Lam · Ding Zhao
- 2023 : Offline Reinforcement Learning with Imbalanced Datasets »
  Li Jiang · Sijie Cheng · Jielin Qiu · Victor Chan · Ding Zhao
- 2023 : Semantically Adversarial Scene Generation with Explicit Knowledge Guidance for Autonomous Driving »
  Wenhao Ding · Haohong Lin · Bo Li · Ding Zhao
- 2023 : Learning Shared Safety Constraints from Multi-task Demonstrations »
  Konwoo Kim · Gokul Swamy · Zuxin Liu · Ding Zhao · Sanjiban Choudhury · Steven Wu
- 2023 : Visual-based Policy Learning with Latent Language Encoding »
  Jielin Qiu · Mengdi Xu · William Han · Bo Li · Ding Zhao
- 2023 : Can Brain Signals Reveal Inner Alignment with Human Languages? »
  Jielin Qiu · William Han · Jiacheng Zhu · Mengdi Xu · Douglas Weber · Bo Li · Ding Zhao
- 2023 : Multimodal Representation Learning of Cardiovascular Magnetic Resonance Imaging »
  Jielin Qiu · Peide Huang · Makiya Nakashima · Jaehyun Lee · Jiacheng Zhu · Wilson Tang · Pohao Chen · Christopher Nguyen · Byung-Hak Kim · Debbie Kwon · Douglas Weber · Ding Zhao · David Chen
- 2023 Poster: Constrained Decision Transformer for Offline Safe Reinforcement Learning »
  Zuxin Liu · Zijian Guo · Yihang Yao · Zhepeng Cen · Wenhao Yu · Tingnan Zhang · Ding Zhao
- 2023 Poster: Towards Robust and Safe Reinforcement Learning with Benign Off-policy Data »
  Zuxin Liu · Zijian Guo · Zhepeng Cen · Huan Zhang · Yihang Yao · Hanjiang Hu · Ding Zhao
- 2023 Poster: Bayesian Reparameterization of Reward-Conditioned Reinforcement Learning with Energy-based Models »
  Wenhao Ding · Tong Che · Ding Zhao · Marco Pavone
- 2023 Poster: Interpolation for Robust Learning: Data Augmentation on Wasserstein Geodesics »
  Jiacheng Zhu · Jielin Qiu · Aritra Guha · Zhuolin Yang · XuanLong Nguyen · Bo Li · Ding Zhao
- 2022 : Invited Talk 6 (Changliu Liu): Applications of Neural Verification on Robotics »
  Changliu Liu
- 2022 : Paper 15: On the Robustness of Safe Reinforcement Learning under Observational Perturbations »
  Zuxin Liu · Zhepeng Cen · Huan Zhang · Jie Tan · Bo Li · Ding Zhao
- 2022 : New adversarial ML applications on safety-critical human-robot systems »
  Changliu Liu
- 2022 : Paper 16: Constrained Model-based Reinforcement Learning via Robust Planning »
  Zuxin Liu · Ding Zhao
- 2022 : Characterizing Neural Network Verification for Systems with NN4SysBench »
  Haoyu He · Tianhao Wei · Huan Zhang · Changliu Liu · Cheng Tan
- 2022 Poster: Constrained Variational Policy Optimization for Safe Reinforcement Learning »
  Zuxin Liu · Zhepeng Cen · Vladislav Isenbaev · Wei Liu · Steven Wu · Bo Li · Ding Zhao
- 2022 Spotlight: Constrained Variational Policy Optimization for Safe Reinforcement Learning »
  Zuxin Liu · Zhepeng Cen · Vladislav Isenbaev · Wei Liu · Steven Wu · Bo Li · Ding Zhao