

Talk in Workshop: Self-supervision in Audio and Speech

Invited Talk: Representation learning on sequential data with latent priors

Jan Chorowski


Abstract:

Unsupervised learning of data representations remains an open problem in machine learning. However, data often have a latent structure that can be exploited to improve the learned representations. We will consider two domains with a rich latent structure: speech and handwriting. Both can be interpreted as time signals that encode a natural-language message. We show how matching certain properties of the implied latent representation, such as using discrete latent units, explicitly modeling duration, or learning latent dynamics, can improve representations obtained with deep neural autoencoders.
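To make the "discrete latent units" idea concrete, the following is a minimal sketch, not the speaker's code, of a sequence autoencoder whose bottleneck snaps each encoder frame to the nearest entry of a learned codebook (in the style of a vector-quantized autoencoder) and trains through it with a straight-through gradient. All module names, dimensions, loss weights, and the toy spectrogram-like input are illustrative assumptions.

# Minimal sketch (assumptions throughout): an autoencoder for frame sequences
# whose latent frames are quantized to a learned codebook of discrete units.
import torch
import torch.nn as nn


class DiscreteBottleneckAE(nn.Module):
    def __init__(self, input_dim=80, latent_dim=64, num_codes=256):
        super().__init__()
        self.encoder = nn.GRU(input_dim, latent_dim, batch_first=True)
        self.codebook = nn.Embedding(num_codes, latent_dim)
        self.decoder = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        z, _ = self.encoder(x)                                  # (B, T, latent_dim)
        # Nearest-codebook lookup per time step yields discrete latent units.
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))
        codes = dists.argmin(dim=-1)                            # (B, T) code indices
        z_q = self.codebook(codes)
        # Straight-through estimator: decoder sees quantized latents,
        # but gradients are copied back to the continuous encoder output.
        z_st = z + (z_q - z).detach()
        recon, _ = self.decoder(z_st)
        recon = self.out(recon)
        # Codebook and commitment terms pull codes and encoder outputs together.
        vq_loss = ((z_q - z.detach()) ** 2).mean() \
            + 0.25 * ((z - z_q.detach()) ** 2).mean()
        return recon, codes, vq_loss


if __name__ == "__main__":
    model = DiscreteBottleneckAE()
    frames = torch.randn(2, 100, 80)                            # toy spectrogram frames
    recon, codes, vq_loss = model(frames)
    loss = ((recon - frames) ** 2).mean() + vq_loss
    loss.backward()
    print(recon.shape, codes.shape, loss.item())

In this sketch the discrete codes play the role of phone- or character-like latent units; explicit duration modeling and learned latent dynamics, as discussed in the talk, would be additional priors imposed on top of such a bottleneck.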

Link to the video: https://slideslive.com/38930728/representation-learning-on-sequential-data-with-latent-priors
