Invited Talk in Workshop: 2nd Workshop on Formal Verification of Machine Learning
Prof. Gagandeep Singh (UIUC): Trust and Safety with Certified AI
Gagandeep Singh
Real-world adoption of deep neural networks (DNNs) in critical applications requires ensuring strong generalization beyond test datasets. Unfortunately, the standard practice of measuring DNN performance on a finite set of test inputs cannot guarantee DNN safety on inputs in the wild. In this talk, I will focus on how certified AI can be leveraged as a service to bridge this gap by building DNNs that generalize reliably over an infinite set of unseen inputs. In the process, I will discuss some of our recent work on building trust and safety in diverse domains such as vision, systems, finance, and more. I will also describe a path toward making certified AI more scalable, easier to develop, and accessible to DNN developers without a background in formal methods.