
Deep Learning Techniques for Music Generation -- A Survey

Abstract: This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:

• Objective
  – What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint.
  – For what destination and for what use? To be performed by human(s) (in the case of a musical score) or by a machine (in the case of an audio file).
• Representation
  – What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat.
  – What format is to be used? Examples are: MIDI, piano roll or text.
  – How will the representation be encoded? Examples are: scalar, one-hot or many-hot.
• Architecture
  – What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks.
• Challenges
  – What are the limitations and open challenges? Examples are: variability, interactivity and creativity.
• Strategy
  – How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation.

For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described in this survey and are used to exemplify the various choices of objective, representation, architecture, challenges and strategy. The last part of the paper includes some discussion and prospects.
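To make the Representation dimension concrete, here is a minimal sketch (in Python, using NumPy; the function name and melody are hypothetical, not from the survey) of one of the encodings mentioned above: a monophonic melody one-hot encoded as a piano roll over the 128 MIDI pitches.

```python
import numpy as np

PITCH_RANGE = 128  # MIDI pitch numbers 0-127

def melody_to_piano_roll(pitches):
    """One-hot encode a monophonic melody: roll[t, p] == 1
    iff MIDI pitch p sounds at time step t."""
    roll = np.zeros((len(pitches), PITCH_RANGE), dtype=np.int8)
    for t, p in enumerate(pitches):
        roll[t, p] = 1
    return roll

# C4, E4, G4, C5 as MIDI pitch numbers
roll = melody_to_piano_roll([60, 64, 67, 72])
print(roll.shape)     # (4, 128): one time step per row
print(roll[0].sum())  # 1: exactly one active pitch per step (one-hot)
```

A polyphonic ("many-hot") variant would simply allow several entries per row to be 1, one per simultaneous note.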
This paper is a simplified (weak DRM) version of the following book [13]: Jean-Pierre Briot, Gaëtan Hadjeres and François-David Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 2019. Hardcover ISBN: 978-3-319-70162-2. eBook ISBN: 978-3-319-70163-9. Series ISSN: 2509-6575.

Cited literature: [223 references]
Contributor: Jean-Pierre Briot
Submitted on: Thursday, April 9, 2020 - 7:43:49 PM
Last modification on: Tuesday, March 23, 2021 - 9:28:02 AM
  • HAL Id : hal-01660772, version 4
  • ARXIV : 1709.01620


Jean-Pierre Briot, Gaëtan Hadjeres, François-David Pachet. Deep Learning Techniques for Music Generation -- A Survey. 2019. ⟨hal-01660772v4⟩

