

Poster

Adversarial Generation of Time-Frequency Features with application in audio synthesis

Andrés Marafioti · Nathanaël Perraudin · Nicki Holighaus · Piotr Majdak

Pacific Ballroom #73

Keywords: [ Unsupervised Learning ] [ Speech Processing ] [ Generative Adversarial Networks ] [ Deep Learning Theory ] [ Deep Generative Models ]


Abstract:

Time-frequency (TF) representations provide powerful and intuitive features for the analysis of time series such as audio. Nevertheless, generative modeling of audio in the TF domain remains a subtle matter. Consequently, neural audio synthesis widely relies on directly modeling the waveform, and previous attempts at unconditionally synthesizing audio from neurally generated invertible TF features still struggle to produce audio of satisfying quality. In this article, focusing on the short-time Fourier transform, we discuss the challenges that arise in audio synthesis based on generated invertible TF features and how to overcome them. We demonstrate the potential of deliberate generative TF modeling by training a generative adversarial network (GAN) on short-time Fourier features. We show that by applying our guidelines, our TF-based network was able to outperform a state-of-the-art GAN generating waveforms directly, despite the two networks having similar architectures.
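As background to the abstract's notion of "invertible TF features", the sketch below (not the authors' pipeline, and the sample rate, window length, and overlap are illustrative assumptions) shows that the complex short-time Fourier transform can be mapped back to a waveform. A GAN operating in the TF domain generates arrays shaped like these coefficients (e.g. log-magnitudes), and synthesis quality then hinges on how faithfully phase-consistent audio can be recovered from them.

```python
# Minimal illustration of invertible short-time Fourier features using SciPy.
# This is a hedged sketch, not the method from the paper.
import numpy as np
from scipy.signal import stft, istft

fs = 16000                                  # assumed sample rate (Hz)
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)       # 1-second test tone

# Forward STFT: complex time-frequency coefficients.
_, _, Z = stft(x, fs=fs, nperseg=512)

# A TF-domain generator would emit features with this shape; here we simply
# invert the original coefficients to recover the waveform.
_, x_rec = istft(Z, fs=fs, nperseg=512)

n = min(len(x), len(x_rec))
print("max reconstruction error:", np.max(np.abs(x[:n] - x_rec[:n])))
```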
