
Workshop on the Security and Privacy of Machine Learning
Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song

Fri Jun 14 08:30 AM -- 06:00 PM (PDT) @ 104 B
Event URL: https://icml2019workshop.github.io/

As machine learning is increasingly deployed in critical real-world applications, the dangers of manipulation and misuse of these models have become of paramount importance to public safety and user privacy. Applications ranging from online content recognition to financial analytics to autonomous vehicles have all been shown to be vulnerable to adversaries who wish to manipulate or mislead models to their malicious ends.
This workshop will focus on recent research and future directions for security and privacy problems in real-world machine learning systems. We aim to bring together experts from the machine learning, security, and privacy communities to highlight recent work in these areas and to clarify the foundations of secure and private machine learning strategies. We seek to reach consensus on a rigorous framework for formulating adversarial attacks against machine learning models, and to characterize the properties that ensure the security and privacy of machine learning systems. Finally, we hope to chart out important directions for future work and cross-community collaborations.

Fri 9:00 a.m. - 9:30 a.m.
Patrick McDaniel (Talk)
Fri 9:30 a.m. - 10:00 a.m.
Una-May O'Reilly (Talk)
Fri 10:00 a.m. - 10:20 a.m.
Enhancing Gradient-based Attacks with Symbolic Intervals (contributed talk)
Fri 10:20 a.m. - 10:30 a.m.
Adversarial Policies: Attacking Deep Reinforcement Learning (spotlight)
Fri 10:45 a.m. - 11:15 a.m.
Le Song (Talk)
Fri 11:15 a.m. - 11:45 a.m.
Allen Qi (Talk)
Fri 11:45 a.m. - 12:05 p.m.
Private vqSGD: Vector-Quantized Stochastic Gradient Descent (contributed talk)
Fri 1:15 p.m. - 1:45 p.m.
Zico Kolter (Talk)
Fri 1:45 p.m. - 2:05 p.m.
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes (contributed talk)
Fri 2:05 p.m. - 2:45 p.m.
Poster Session #1 (poster session)
Fri 2:45 p.m. - 3:15 p.m.
Aleksander Madry (Talk)
Fri 3:15 p.m. - 3:45 p.m.
Been Kim (Talk)
Fri 3:45 p.m. - 4:05 p.m.
Theoretically Principled Trade-off between Robustness and Accuracy (contributed talk)
Fri 4:05 p.m. - 4:15 p.m.
Model weight theft with just noise inputs: The curious case of the petulant attacker (spotlight)
Fri 4:15 p.m. - 5:15 p.m.
Panel (panel)
Fri 5:15 p.m. - 6:00 p.m.
Poster Session #2 (poster session)

Author Information

Nicolas Papernot (Google Brain)
Florian Tramer (Stanford University)
Bo Li (UIUC)
Dan Boneh (Stanford University)
David Evans (University of Virginia)
Somesh Jha (University of Wisconsin, Madison)
Percy Liang (Stanford University)
Patrick McDaniel (The Pennsylvania State University)
Jacob Steinhardt (University of California, Berkeley)
Dawn Song (University of California, Berkeley)