Workshop on the Security and Privacy of Machine Learning
Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song

Fri Jun 14th 08:30 AM -- 06:00 PM @ 104 B
Event URL: https://icml2019workshop.github.io/

As machine learning has increasingly been deployed in critical real-world applications, the dangers of manipulation and misuse of these models have become of paramount importance to public safety and user privacy. Applications ranging from online content recognition to financial analytics to autonomous vehicles have all been shown to be vulnerable to adversaries wishing to manipulate models or mislead them to malicious ends.
This workshop will focus on recent research and future directions for the security and privacy problems in real-world machine learning systems. We aim to bring together experts from the machine learning, security, and privacy communities in an attempt to highlight recent work in these areas as well as to clarify the foundations of secure and private machine learning strategies. We seek to come to a consensus on a rigorous framework to formulate adversarial attacks targeting machine learning models, and to characterize the properties that ensure the security and privacy of machine learning systems. Finally, we hope to chart out important directions for future work and cross-community collaborations.

09:00 AM Patrick McDaniel (Talk) Video
09:30 AM Una-May O'Reilly (Talk) Video
10:00 AM Enhancing Gradient-based Attacks with Symbolic Intervals (contributed talk)
10:20 AM Adversarial Policies: Attacking Deep Reinforcement Learning (spotlight)
10:45 AM Le Song (Talk) Video
11:15 AM Allen Qi (Talk) Video
11:45 AM Private vqSGD: Vector-Quantized Stochastic Gradient Descent (contributed talk) Video
01:15 PM Zico Kolter (Talk) Video
01:45 PM Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes (contributed talk) Video
02:05 PM Poster Session #1 (poster session)
02:45 PM Aleksander Madry (Talk) Video
03:15 PM Been Kim (Talk) Video
03:45 PM Theoretically Principled Trade-off between Robustness and Accuracy (contributed talk) Video
04:05 PM Model weight theft with just noise inputs: The curious case of the petulant attacker (spotlight) Video
04:15 PM Panel (panel) Video
05:15 PM Poster Session #2 (poster session)

Author Information

Nicolas Papernot (Google Brain)
Florian Tramer (Stanford University)
Bo Li (UIUC)
Dan Boneh (Stanford University)
David Evans (University of Virginia)
Somesh Jha (University of Wisconsin, Madison)
Percy Liang (Stanford University)
Patrick McDaniel (The Pennsylvania State University)
Jacob Steinhardt (University of California, Berkeley)
Dawn Song (University of California, Berkeley)