
Certifying Ensembles: A General Certification Theory with S-Lipschitzness
Aleksandar Petrov · Francisco Eiras · Amartya Sanyal · Phil Torr · Adel Bibi
Event URL: https://openreview.net/forum?id=7MHKSZX6uE

Improving and guaranteeing the robustness of deep learning models has been a topic of intense research. Ensembling, which combines several classifiers into a better model, has been shown to benefit generalisation, uncertainty estimation, and calibration, and to mitigate the effects of concept drift. However, the impact of ensembling on certified robustness is less well understood. In this work, we generalise Lipschitz continuity by introducing S-Lipschitz classifiers, which we use to analyse the theoretical robustness of ensembles. Our results give precise conditions under which ensembles of robust classifiers are more robust than any constituent classifier, as well as conditions under which they are less robust.
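To make the abstract's claim concrete, here is a minimal illustrative sketch, not the paper's S-Lipschitz framework: for a binary classifier f with Lipschitz constant L, a standard certified radius at input x is |f(x)| / L (an L-Lipschitz function cannot change sign within that radius), and an averaging ensemble of L_i-Lipschitz classifiers is Lipschitz with constant at most the mean of the L_i. The numbers below are hypothetical.

```python
import numpy as np

def certified_radius(score: float, lipschitz: float) -> float:
    """Radius within which the prediction sign(f(x)) cannot flip,
    for an L-Lipschitz score function f."""
    return abs(score) / lipschitz

# Two hypothetical robust classifiers evaluated at the same input x.
scores = np.array([2.0, 0.5])     # f_1(x), f_2(x)
lipschitz = np.array([1.0, 1.0])  # their Lipschitz constants

individual = [certified_radius(s, L) for s, L in zip(scores, lipschitz)]

# The average ensemble g = (f_1 + f_2) / 2 has score mean(f_i(x)) and
# Lipschitz constant at most mean(L_i).
ensemble = certified_radius(scores.mean(), lipschitz.mean())

print(individual)  # [2.0, 0.5]
print(ensemble)    # 1.25
```

With these numbers the ensemble's certified radius (1.25) lies strictly between those of its constituents: it is more robust than the weaker classifier but less robust than the stronger one, illustrating why ensembling can either help or hurt certified robustness depending on the constituents.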

Author Information

Aleksandar Petrov (University of Oxford)
Francisco Eiras (University of Oxford)
Amartya Sanyal (Max Planck Institute for Intelligent Systems Tuebingen)

Postdoc at Max Planck Institute for Intelligent Systems Tuebingen · Postdoc at ETH Zurich · D.Phil student at University of Oxford · Research Intern at Facebook AI Research

Phil Torr (University of Oxford)
Adel Bibi (University of Oxford)
