

Invited Talk in Workshop: Machine Learning for Music Discovery

Making Efficient use of Musical Annotations

Brian McFee

2019 Invited Talk

Abstract:

Many tasks in audio-based music analysis require building mappings between complex representational spaces, such as the input audio signal (or its spectral representation), and structured, time-varying outputs such as pitch, harmony, instrumentation, rhythm, or structure. These mappings encode musical domain knowledge and involve processing and integrating information at multiple scales simultaneously. It typically takes humans years of training and practice to master these concepts, and as a result, data collection for sophisticated musical analysis tasks is often costly and time-consuming. With limited reliably annotated data available, it can be difficult to build robust models that automate music annotation by computational means. However, musical problems often exhibit a great deal of structure, either in the input or output representations, or even between related tasks, which can be leveraged effectively to reduce data requirements. In this talk, I will survey several recent manifestations of this phenomenon across different music and audio analysis problems, drawing on recent work from the NYU Music and Audio Research Lab.
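As a concrete illustration of the kind of spectral input representation the abstract refers to, here is a minimal sketch using the librosa library. The example clip and parameter choices are illustrative assumptions, not details taken from the talk.

    # Sketch: computing a log-scaled mel spectrogram, a typical
    # time-frequency input representation for music analysis models.
    # The clip and parameters are illustrative, not from the talk.
    import librosa
    import numpy as np

    # Load a short example clip bundled with librosa
    # (downloaded on first use).
    y, sr = librosa.load(librosa.example("trumpet"))

    # Map the raw waveform to a mel spectrogram and convert
    # power values to decibels for a log-scaled representation.
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    S_db = librosa.power_to_db(S, ref=np.max)

    print(S_db.shape)  # (n_mels, n_frames): frequency bins x time frames

A model for pitch, harmony, or structure analysis would then learn a mapping from this time-frequency input to the corresponding time-varying musical annotation.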
