

Oral

Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion

Yujia Huang · Adishree Ghatare · Yuanzhe Liu · Ziniu Hu · Qinsheng Zhang · Chandramouli Shama Sastry · Siddharth Gururani · Sageev Oore · Yisong Yue

Hall A8
Oral 2D: Music and Audio
Tue 23 Jul 7:30 a.m. — 7:45 a.m. PDT

Abstract:

We study the problem of symbolic music generation (e.g., generating piano rolls), with a technical focus on non-differentiable rule guidance. Musical rules are often expressed in symbolic form as constraints on note characteristics, such as note density or chord progression; many of these rules are non-differentiable, which poses a challenge when using them for guided diffusion. We propose Stochastic Control Guidance (SCG), a novel guidance method that requires only forward evaluation of rule functions and works with pre-trained diffusion models in a plug-and-play way, thus achieving training-free guidance for non-differentiable rules for the first time. Additionally, we introduce a latent diffusion architecture for symbolic music generation with high time resolution, which can be composed with SCG in a plug-and-play fashion. Compared to standard strong baselines in symbolic music generation, this framework demonstrates marked advancements in music quality and rule-based controllability, outperforming current state-of-the-art generators in a variety of settings. For detailed demonstrations, code and model checkpoints, please visit our project website.
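To illustrate the core idea of guidance that needs only forward evaluation of a rule function, here is a minimal, hypothetical sketch: at each stochastic reverse-diffusion step, sample several candidate next states from the pre-trained model's transition and keep the one the non-differentiable rule scores best. All names here (`rule_loss`, `guided_denoise_step`, `toy_denoise_sample`) are illustrative assumptions, and the toy bit-flip "model" stands in for a real diffusion sampler; the actual SCG method is derived from stochastic control and is described in the paper.

```python
import random

def rule_loss(piano_roll, target_density=0.3):
    """Hypothetical non-differentiable rule: distance of the note
    density (fraction of active cells in a binary piano roll) from a
    target. Only forward evaluation is needed -- no gradients."""
    density = sum(piano_roll) / len(piano_roll)
    return abs(density - target_density)

def guided_denoise_step(x, denoise_sample, rule, n_candidates=8):
    """Draw several candidate next states from the (stochastic)
    reverse transition and keep the one with the lowest rule loss.
    The current state is kept as a fallback candidate so the rule
    score never worsens in this toy setting."""
    candidates = [x] + [denoise_sample(x) for _ in range(n_candidates)]
    return min(candidates, key=rule)

def toy_denoise_sample(x):
    """Toy stand-in for a diffusion model's reverse transition:
    randomly flip each cell with small probability."""
    return [b if random.random() > 0.2 else 1 - b for b in x]

if __name__ == "__main__":
    random.seed(0)
    x = [random.randint(0, 1) for _ in range(64)]
    for _ in range(20):
        x = guided_denoise_step(x, toy_denoise_sample, rule_loss)
    print(f"final rule loss: {rule_loss(x):.3f}")
```

Because selection uses only the rule's output value, the same loop works unchanged for any symbolic rule (chord progression match, pitch histogram, etc.), which is what makes the guidance plug-and-play with a fixed pre-trained model.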
