
State and parameter learning with PARIS particle Gibbs
Gabriel Cardoso · Yazid Janati el idrissi · Sylvain Le Corff · Eric Moulines · Jimmy Olsson

Tue Jul 25 02:00 PM -- 04:30 PM (PDT) @ Exhibit Hall 1 #739

Non-linear state-space models, also known as general hidden Markov models (HMMs), are ubiquitous in statistical machine learning, being the most classical generative models for serial data and sequences. Learning in HMMs, whether via Maximum Likelihood Estimation (MLE) or Markov Score Climbing (MSC), requires estimating smoothing expectations of additive functionals. Controlling the bias and the variance of these estimates is crucial for establishing the convergence of learning algorithms. Our first contribution is a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS (Olsson, Westerborn 2017) algorithm driven by conditional SMC moves, resulting in bias-reduced estimates of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on bias and variance as well as deviation inequalities. We then establish, in the learning context and under standard assumptions, non-asymptotic bounds highlighting the value of bias reduction and the implicit Rao--Blackwellization of PPG. These are the first non-asymptotic results of this kind in this setting. We illustrate our theoretical results with numerical experiments supporting our claims.
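To make the smoothing problem concrete, here is a minimal sketch of on-the-fly estimation of a smoothing expectation of an additive functional with PaRIS-style backward sampling inside a bootstrap particle filter. The linear-Gaussian model, the parameter values, and the functional (the smoothed sum of states) are illustrative choices, not the paper's experimental setup, and the sketch shows plain PaRIS rather than the conditional-SMC-driven PPG variant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian HMM (illustrative, not the paper's setup):
#   x_t = a * x_{t-1} + sigma_x * eps_t,   y_t = x_t + sigma_y * eta_t
a, sigma_x, sigma_y = 0.9, 0.5, 1.0
T = 50

# Simulate a trajectory and observations.
x = np.zeros(T)
y = np.zeros(T)
x[0] = rng.normal()
y[0] = x[0] + sigma_y * rng.normal()
for t in range(1, T):
    x[t] = a * x[t - 1] + sigma_x * rng.normal()
    y[t] = x[t] + sigma_y * rng.normal()

def transition_logpdf(x_prev, x_new):
    # log q(x_prev, x_new) up to an additive constant.
    return -0.5 * ((x_new - a * x_prev) / sigma_x) ** 2

N, N_tilde = 200, 2  # number of particles, backward samples per particle

# Bootstrap particle filter estimating, on the fly, the smoothing
# expectation of the additive functional h(x_{0:T}) = sum_t x_t.
particles = rng.normal(size=N)
logw = -0.5 * ((y[0] - particles) / sigma_y) ** 2
w = np.exp(logw - logw.max()); w /= w.sum()
stats = particles.copy()  # T_0^i = x_0^i

for t in range(1, T):
    # Resample and propagate through the transition kernel.
    idx = rng.choice(N, size=N, p=w)
    new_particles = a * particles[idx] + sigma_x * rng.normal(size=N)
    # PaRIS update: for each new particle, draw N_tilde backward indices
    # with probability proportional to w_j * q(x_{t-1}^j, x_t^i) and
    # average the inherited statistics.
    new_stats = np.empty(N)
    for i in range(N):
        logb = np.log(w) + transition_logpdf(particles, new_particles[i])
        b = np.exp(logb - logb.max()); b /= b.sum()
        js = rng.choice(N, size=N_tilde, p=b)
        new_stats[i] = np.mean(stats[js]) + new_particles[i]
    particles, stats = new_particles, new_stats
    logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()

# Self-normalized estimate of E[sum_t x_t | y_{0:T}].
estimate = float(np.sum(w * stats))
print(estimate)
```

In a learning loop (MLE or MSC), `estimate` would be replaced by the score-related additive functional of the current parameter, recomputed at each iteration; the PPG sampler of the paper wraps such updates in conditional SMC moves to reduce the bias of the resulting estimates.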

Author Information

Gabriel Cardoso (École Polytechnique)
Yazid Janati el idrissi (Télécom SudParis)
Sylvain Le Corff (Sorbonne Université, LPSM)
Eric Moulines (École Polytechnique)
Jimmy Olsson