Journal articles

Lyrics segmentation via bimodal text–audio representation

Abstract: Song lyrics contain repeated patterns that have been shown to facilitate automated lyrics segmentation, with the final goal of detecting the building blocks (e.g., chorus, verse) of a song text. Our contribution in this article is twofold. First, we introduce a convolutional neural network (CNN)-based model that learns to segment the lyrics based on their repetitive text structure. We experiment with novel features to reveal different kinds of repetitions in the lyrics, for instance based on phonetic and syntactic properties. Second, using a novel corpus in which the song text is synchronized to the audio of the song, we show that the text and audio modalities capture complementary structure of the lyrics and that combining both is beneficial for lyrics segmentation performance. For purely text-based lyrics segmentation on a dataset of 103k lyrics, we achieve an F-score of 67.4%, improving on the state of the art (59.2% F-score). On the synchronized text–audio dataset of 4.8k songs, we show that the additional audio features improve segmentation performance to 75.3% F-score, significantly outperforming the purely text-based approaches.
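The repetition-based text features mentioned in the abstract can be illustrated with a self-similarity matrix (SSM) over lyric lines, a common representation for repetition detection in song structure analysis. The sketch below is a minimal illustration, not the authors' implementation: the `line_similarity` helper (simple string similarity via `difflib`) is an assumed stand-in for the paper's text, phonetic, and syntactic repetition features.

```python
from difflib import SequenceMatcher

def line_similarity(a: str, b: str) -> float:
    # Normalized string similarity between two lyric lines.
    # Hypothetical stand-in for the paper's repetition features.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def self_similarity_matrix(lines):
    # ssm[i][j] is high where line i repeats line j; repeated sections
    # such as choruses appear as high-similarity stripes in the matrix.
    n = len(lines)
    return [[line_similarity(lines[i], lines[j]) for j in range(n)]
            for i in range(n)]

lyrics = [
    "we will rock you",
    "sing it loud tonight",
    "we will rock you",
]
ssm = self_similarity_matrix(lyrics)
print(round(ssm[0][2], 2))  # identical lines -> 1.0
```

In the paper's setting, a CNN would consume such matrices (one per feature type, plus audio-derived ones) and predict, for each line, whether a segment border follows it.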
Contributor: Elena Cabrio
Submitted on: Thursday, October 14, 2021 - 1:44:14 PM
Last modification on: Thursday, August 4, 2022 - 4:55:02 PM
Long-term archiving on: Saturday, January 15, 2022 - 6:54:55 PM

Michael Fell, Yaroslav Nechaev, Gabriel Meseguer-Brocal, Elena Cabrio, Fabien Gandon, et al. Lyrics segmentation via bimodal text–audio representation. Natural Language Engineering, Cambridge University Press (CUP), 2021, 28 (3), pp. 1-20. ⟨10.1017/S1351324921000024⟩. ⟨hal-03295581⟩


