

Tutorial

Machine Learning with Signal Processing

Arno Solin


Abstract:

Many ML tasks share practical goals and theoretical foundations with signal processing (consider, e.g., spectral and kernel methods, differential equation systems, sequential sampling techniques, and control theory). Signal processing methods are an integral part of many sub-fields in ML, with links to, for example, reinforcement learning, Hamiltonian Monte Carlo, Gaussian process (GP) models, Bayesian optimization, and neural ODEs/SDEs.

This tutorial aims to cover aspects of machine learning that link to both discrete-time and continuous-time signal processing methods. Special focus is put on introducing stochastic differential equations (SDEs), state space models, and recursive estimation (Bayesian filtering and smoothing) for Gaussian process models. The goals are to (i) teach basic principles of the direct links between signal processing and machine learning, (ii) provide an intuitive hands-on understanding of what stochastic differential equations are all about, and (iii) show how these methods bring real benefits in speeding up learning, improving inference, and building models, with illustrative and practical application examples. The aim is to show how ML can leverage existing theory to improve and accelerate research, and to provide a unifying overview for the ICML community members working at the intersection of these methods.
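
As a concrete illustration of the kind of link described above, the sketch below (not taken from the tutorial material; the kernel choice, hyperparameters, and function name are assumptions) rewrites a GP with a Matérn-1/2 (Ornstein-Uhlenbeck) covariance as a one-dimensional linear SDE and runs a Kalman filter over the observations, turning cubic-cost GP regression into a linear-time recursion.

```python
# Minimal sketch (illustrative, not the tutorial's code): GP regression with a
# Matern-1/2 (Ornstein-Uhlenbeck) kernel written as a 1-D linear SDE and
# solved by a Kalman filter in O(n) time.
import numpy as np

def kalman_gp_regression(t, y, ell=1.0, sigma2=1.0, noise=0.1):
    """Filtered posterior mean/variance of an OU-kernel GP at sorted inputs t."""
    lam = 1.0 / ell                      # SDE drift: dx = -lam * x dt + dW
    m, P = 0.0, sigma2                   # stationary prior state
    means, variances = [], []
    t_prev = t[0]
    for tk, yk in zip(t, y):
        # Predict: discretize the SDE over the time gap since the last input
        dt = tk - t_prev
        A = np.exp(-lam * dt)            # state transition
        Q = sigma2 * (1.0 - A**2)        # process-noise variance
        m, P = A * m, A * P * A + Q
        # Update with measurement y_k = x_k + Gaussian noise
        S = P + noise
        K = P / S                        # Kalman gain
        m = m + K * (yk - m)
        P = (1.0 - K) * P
        means.append(m)
        variances.append(P)
        t_prev = tk
    return np.array(means), np.array(variances)

# Toy usage example
t = np.linspace(0.0, 10.0, 200)
y = np.sin(t) + 0.3 * np.random.randn(t.size)
mean, var = kalman_gp_regression(t, y, ell=0.5, sigma2=1.0, noise=0.09)
```

Running a Rauch-Tung-Striebel smoother backwards over the filtered estimates would recover the full GP posterior; the filter alone conditions each point only on past observations.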
