

Poster
in
Workshop: Principles of Distribution Shift (PODS)

Noisy Learning for Neural ODEs Acts as a Robustness Locus Widening

Martin Gonzalez · Loic Cantat


Abstract:

We investigate several problems and challenges in evaluating the robustness of Differential Equation-based (DE) networks against synthetic shifts. We propose a novel and simple accuracy metric that can be used to evaluate intrinsic robustness and to validate dataset corruption simulators. We also propose methodology recommendations for evaluating the many facets of neural DEs' robustness and for rigorously comparing them with their discrete counterparts. We then use these criteria to evaluate a cheap data augmentation technique as a reliable way to demonstrate the natural robustness of neural ODEs against simulated image corruptions across multiple datasets.
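To make the setting concrete, the sketch below illustrates the two ingredients the abstract refers to: a neural-ODE-style forward pass (here a toy fixed-step Euler integration of a hand-set linear vector field, standing in for a learned one) and cheap input-noise augmentation. The specific noise scheme, the `sigma` value, and the dynamics are illustrative assumptions, not the authors' exact method.

```python
import random

def euler_odeint(f, x, t0=0.0, t1=1.0, steps=20):
    """Fixed-step Euler integration of dx/dt = f(t, x): the forward pass
    of a (toy) neural ODE block. Real implementations typically use an
    adaptive solver instead of fixed-step Euler."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = [xi + h * fi for xi, fi in zip(x, f(t, x))]
        t += h
    return x

def add_gaussian_noise(batch, sigma, rng):
    """Cheap noise augmentation (an assumption for illustration):
    perturb each input feature with i.i.d. N(0, sigma^2) noise."""
    return [[v + rng.gauss(0.0, sigma) for v in x] for x in batch]

# Toy linear dynamics standing in for a learned vector field.
W = [[-0.5, 0.2], [0.1, -0.3]]
def dynamics(t, x):
    return [sum(W[i][j] * x[j] for j in range(2)) for i in range(2)]

rng = random.Random(0)
clean = [[1.0, 0.0], [0.0, 1.0]]
noisy = add_gaussian_noise(clean, sigma=0.05, rng=rng)
outputs = [euler_odeint(dynamics, x) for x in noisy]
```

During training, such noisy copies would be fed through the ODE block in place of (or alongside) the clean inputs; the paper's specific accuracy metric for quantifying the resulting robustness is not reproduced here.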
