Oral in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Adversarial Training in Continuous-Time Models and Irregularly Sampled Time-Series: A First Look

Keywords: [ continuous-time models ] [ adversarial training ] [ irregularly sampled time series ]


Abstract:

This study presents a first exploration of the effects of adversarial training on continuous-time models and irregularly sampled time-series data. Historically, these models and sampling regimes have been largely neglected in adversarial learning research, leaving a significant gap in our understanding of their performance under adversarial conditions. To address this, we conduct an empirical study of adversarial training techniques applied to continuous-time model architectures and sampling methods. Our findings suggest that while continuous-time models tend to outperform their discrete counterparts when trained conventionally, this performance advantage diminishes almost entirely under adversarial training. This indicates that adversarial training may interfere with the continuous-time representation, effectively neutralizing the benefits typically associated with these models. We believe these first insights will be important for guiding further study and advances in the understanding of adversarial learning for continuous-time models.
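
The abstract does not specify which attack, solver, or architecture was used. The sketch below illustrates one common instantiation of the setup it describes: L-infinity PGD adversarial training of a simple neural-ODE-style classifier that integrates the hidden state over the irregular gaps between observation times. The model, the fixed-step Euler solver, and all hyperparameters are illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch: PGD adversarial training of a continuous-time
# (neural-ODE-style) classifier on irregularly sampled time series.
# Architecture, attack budget, and data shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ODEFunc(nn.Module):
    """Vector field f(h) parameterizing the hidden-state dynamics."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h):
        return self.net(h)


class ContinuousTimeClassifier(nn.Module):
    """Folds in each observation, then evolves the hidden state with a
    fixed-step Euler solver over the (possibly irregular) gap to the next
    observation time."""
    def __init__(self, input_dim, hidden_dim, num_classes, euler_steps=5):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.func = ODEFunc(hidden_dim)
        self.readout = nn.Linear(hidden_dim, num_classes)
        self.euler_steps = euler_steps

    def forward(self, x, times):
        # x: (batch, seq_len, input_dim); times: (batch, seq_len), irregular.
        h = torch.zeros(x.size(0), self.encoder.out_features, device=x.device)
        prev_t = torch.zeros(x.size(0), device=x.device)
        for i in range(x.size(1)):
            dt = (times[:, i] - prev_t).clamp(min=0).unsqueeze(-1)
            step = dt / self.euler_steps
            # Euler integration of dh/dt = f(h) across the observation gap.
            for _ in range(self.euler_steps):
                h = h + step * self.func(h)
            h = h + self.encoder(x[:, i])  # incorporate the new observation
            prev_t = times[:, i]
        return self.readout(h)


def pgd_perturb(model, x, times, y, eps=0.1, alpha=0.02, steps=10):
    """L_inf PGD on the observation values; timestamps are left unperturbed."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta, times), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()


def adversarial_training_step(model, optimizer, x, times, y):
    """One step of adversarial training: attack, then fit the perturbed batch."""
    delta = pgd_perturb(model, x, times, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta, times), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, the comparison in the paper would contrast such a continuous-time classifier with a discrete-time baseline (e.g., a GRU over the same observations), trained with and without the adversarial step above; only the observation values are attacked here, which is one of several possible threat models for irregularly sampled data.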
