Workshop | Like Oil and Water: Group Robustness and Poisoning Defenses Don’t Mix
Workshop | An Interactive Human-Machine Learning Interface for Collecting and Learning from Complex Annotations
Jonathan Erskine · Raul Santos-Rodriguez · Alexander Hepburn · Matt Clifford
Workshop | When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks?
Workshop | Accurate, Explainable, and Private Models: Providing Recourse While Minimizing Training Data Leakage
Workshop | Unsupervised Adversarial Detection without Extra Model: Training Loss Should Change
Workshop | On feasibility of intent obfuscating attacks
ZhaoBin Li · Patrick Shafto
Workshop | How Can Neuroscience Help Us Build More Robust Deep Neural Networks?
Workshop | Adversarial Attacks and Defenses in Explainable Artificial Intelligence: A Survey
Hubert Baniecki · Przemyslaw Biecek
Poster | Tue 17:00 | Auto-Differentiation of Relational Computations for Very Large Scale Machine Learning
Yuxin Tang · Zhimin Ding · Dimitrije Jankov · Binhang Yuan · Daniel Bourgeois · Chris Jermaine
Poster | Wed 17:00 | Forget Unlearning: Towards True Data-Deletion in Machine Learning
Rishav Chourasia · Neil Shah