

Poster

Creative Text-to-Audio Generation via Synthesizer Programming

Manuel Cherep · Nikhil Singh · Jessica Shand

Hall C 4-9 #117
[ Project Page ] [ Paper PDF ]
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Neural audio synthesis methods now allow specifying ideas in natural language. However, these methods produce results that cannot be easily tweaked, as they are based on large latent spaces and up to billions of uninterpretable parameters. We propose a text-to-audio generation method that leverages a virtual modular sound synthesizer with only 78 parameters. Synthesizers have long been used by skilled sound designers for media like music and film due to their flexibility and intuitive controls. Our method, CTAG, iteratively updates a synthesizer's parameters to produce high-quality audio renderings of text prompts that can be easily inspected and tweaked. Sounds produced this way are also more abstract, capturing essential conceptual features over fine-grained acoustic details, akin to how simple sketches can vividly convey visual concepts. Our results show how CTAG produces sounds that are distinctive, perceived as artistic, and yet as identifiable as those from recent neural audio synthesis models, positioning it as a valuable and complementary tool.
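To make the core idea concrete, below is a minimal conceptual sketch of generating audio by iteratively updating synthesizer parameters to better match a text prompt. It is not CTAG's actual code: the toy 3-parameter FM synthesizer stands in for the paper's 78-parameter virtual modular synthesizer, the score() function is a placeholder for a learned text-audio similarity model, and the simple random-search loop stands in for whatever optimizer the method actually uses.

import numpy as np

SR = 16_000   # sample rate (Hz)
DUR = 1.0     # seconds of audio per render

def render(params: np.ndarray) -> np.ndarray:
    """Toy FM synthesizer: params = [carrier, modulator, mod_index], each in [0, 1]."""
    t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)
    carrier = 100.0 + 900.0 * params[0]   # 100-1000 Hz carrier frequency
    mod = 1.0 + 49.0 * params[1]          # 1-50 Hz modulator frequency
    index = 10.0 * params[2]              # modulation depth
    return np.sin(2 * np.pi * carrier * t + index * np.sin(2 * np.pi * mod * t))

def score(audio: np.ndarray, prompt: str) -> float:
    """Placeholder objective: a real system would embed `audio` and `prompt`
    with a text-audio model and return their similarity. Here we just reward
    spectral brightness so the loop has something to optimize."""
    spectrum = np.abs(np.fft.rfft(audio))
    centroid = (spectrum * np.arange(len(spectrum))).sum() / (spectrum.sum() + 1e-9)
    return float(centroid)

def optimize(prompt: str, iters: int = 200, pop: int = 16,
             sigma: float = 0.1, seed: int = 0) -> np.ndarray:
    """Simple (1+lambda) random-search loop over the synthesizer parameters."""
    rng = np.random.default_rng(seed)
    best = rng.uniform(size=3)
    best_score = score(render(best), prompt)
    for _ in range(iters):
        candidates = np.clip(best + sigma * rng.standard_normal((pop, 3)), 0.0, 1.0)
        scores = [score(render(c), prompt) for c in candidates]
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best, best_score = candidates[i], scores[i]
    return best  # a small, inspectable parameter vector the user can tweak by hand

params = optimize("a bright, buzzing insect")
print("best parameters:", params)

The point of the sketch is the output: unlike a latent vector from a large neural model, the result is a handful of named synthesizer parameters that can be inspected and adjusted directly.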
