Workshop
Fri Jun 14 08:30 AM -- 06:00 PM (PDT) @ 104 B
Workshop on the Security and Privacy of Machine Learning
Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song

As machine learning has increasingly been deployed in critical real-world applications, the dangers of manipulation and misuse of these models have become of paramount importance to public safety and user privacy. Applications ranging from online content recognition to financial analytics to autonomous vehicles have all been shown to be vulnerable to adversaries wishing to manipulate or mislead models to their malicious ends.
This workshop will focus on recent research and future directions for security and privacy problems in real-world machine learning systems. We aim to bring together experts from the machine learning, security, and privacy communities to highlight recent work in these areas and to clarify the foundations of secure and private machine learning strategies. We seek to come to a consensus on a rigorous framework for formulating adversarial attacks targeting machine learning models, and to characterize the properties that ensure the security and privacy of machine learning systems. Finally, we hope to chart out important directions for future work and cross-community collaborations.

Patrick McDaniel (Talk)
Una-May O'Reilly (Talk)
Enhancing Gradient-based Attacks with Symbolic Intervals (contributed talk)
Adversarial Policies: Attacking Deep Reinforcement Learning (spotlight)
Le Song (Talk)
Allen Qi (Talk)
Private vqSGD: Vector-Quantized Stochastic Gradient Descent (contributed talk)
Zico Kolter (Talk)
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes (contributed talk)
Poster Session #1 (poster session)
Aleksander Madry (Talk)
Been Kim (Talk)
Theoretically Principled Trade-off between Robustness and Accuracy (contributed talk)
Model weight theft with just noise inputs: The curious case of the petulant attacker (spotlight)
Panel (panel)
Poster Session #2 (poster session)