

Search All 2023 Events
 

51 Results

Page 2 of 5
Poster · Thu 13:30
How Many Perturbations Break This Model? Evaluating Robustness Beyond Adversarial Accuracy
Raphaël Olivier · Bhiksha Raj
Workshop
RODEO: Robust Out-of-distribution Detection via Exposing Adaptive Outliers
Workshop
Unsupervised Adversarial Detection without Extra Model: Training Loss Should Change
Workshop
DiffScene: Diffusion-Based Safety-Critical Scenario Generation for Autonomous Vehicles
Workshop
Towards Modular Learning of Deep Causal Generative Models
Md Musfiqur Rahman · Murat Kocaoglu
Workshop
Large Language Models for Code: Security Hardening and Adversarial Testing
Jingxuan He · Martin Vechev
Workshop
Why do universal adversarial attacks work on large language models?: Geometry might be the answer
Workshop
How Can Neuroscience Help Us Build More Robust Deep Neural Networks?
Workshop
Introducing Vision into Large Language Models Expands Attack Surfaces and Failure Implications
Workshop
FACADE: A Framework for Adversarial Circuit Anomaly Detection and Evaluation
Poster · Wed 14:00
PAC-Bayesian Generalization Bounds for Adversarial Generative Models
Sokhna Diarra Mbacke · Florence Clerc · Pascal Germain
Workshop
Scoring Black-Box Models for Adversarial Robustness
Jian Vora · Pranay Reddy Samala