

Poster in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Adversarial Training in Continuous-Time Models and Irregularly Sampled Time-Series

Alvin Li · Mathias Lechner · Alexander Amini · Daniela Rus

Keywords: [ continuous-time models ] [ adversarial training ] [ irregularly sampled time series ]


Abstract:

This study presents a first exploration of the effects of adversarial training on continuous-time models and irregularly sampled time-series data. Historically, these models and sampling regimes have been largely neglected in adversarial learning research, leaving a significant gap in our understanding of their performance under adversarial conditions. To address this, we conducted an empirical study of adversarial training techniques applied to continuous-time model architectures and sampling methods. Our findings suggest that while standard continuous-time models tend to outperform their discrete counterparts, especially on irregularly sampled datasets, this performance advantage diminishes almost entirely when adversarial training is employed. This indicates that adversarial training may interfere with the continuous-time representation, effectively neutralizing the benefits typically associated with these models. We believe these insights will be critical in guiding further advancements in adversarial learning research for continuous-time models.
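To make the setting concrete, the sketch below shows a toy continuous-time recurrent cell consuming an irregularly sampled series, plus an FGSM-style worst-case input perturbation of the kind used in adversarial training. Everything here is an illustrative assumption: the exponential-decay cell, the timestamps, and the perturbation direction are not the architectures or attacks evaluated in the paper, which studies full continuous-time models and adversarial training pipelines.

```python
import math

def ct_cell(h, x, dt, tau=1.0):
    """One step of a toy continuous-time recurrent cell.

    The hidden state h decays toward the input x with time constant
    tau, integrated in closed form over an arbitrary elapsed time dt.
    Handling a variable dt is what lets continuous-time models consume
    irregularly sampled series without resampling to a fixed grid.
    """
    decay = math.exp(-dt / tau)
    return h * decay + (1.0 - decay) * x

def run(xs, ts):
    """Roll the cell over an irregularly sampled series of (x_i, t_i)."""
    h, t_prev = 0.0, ts[0]
    for x, t in zip(xs, ts):
        h = ct_cell(h, x, t - t_prev)
        t_prev = t
    return h

# Irregular timestamps: gaps of 0.1, 1.9, and 0.5 time units.
ts = [0.0, 0.1, 2.0, 2.5]
xs = [1.0, 1.0, 1.0, 1.0]
clean = run(xs, ts)

# FGSM-style inner maximization under an L-infinity budget eps: each
# input moves by eps in the sign of the gradient of the output w.r.t.
# that input. For this monotone toy cell every such sign is +1, so the
# worst case is simply +eps on every input; a real attack would compute
# the gradients with automatic differentiation.
eps = 0.1
adv = run([x + eps for x in xs], ts)
```

In adversarial training, the perturbed sequence (here `[x + eps for x in xs]`) replaces the clean one in the training loss, so the model is optimized against its own worst-case inputs; the abstract's finding is that doing so erodes the advantage continuous-time models otherwise show on irregular sampling.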
