L. A. Gatys, A. S. Ecker, and M. Bethge, Image style transfer using convolutional neural networks, CVPR, 2016.

D. Chen, J. Liao, L. Yuan, N. Yu, and G. Hua, Coherent online video style transfer, ICCV, 2017.

R. J. Skerry-Ryan, E. Battenberg, Y. Xiao, Y. Wang, D. Stanton et al., Towards end-to-end prosody transfer for expressive speech synthesis with Tacotron, ICML, 2018.

N. Mor, L. Wolf, A. Polyak, and Y. Taigman, A universal music translation network, ICLR, 2019.

T. Shen, T. Lei, R. Barzilay, and T. S. Jaakkola, Style transfer from non-parallel text by cross-alignment, NIPS, 2017.

P. Isola, J. Zhu, T. Zhou, and A. A. Efros, Image-to-image translation with conditional adversarial networks, CVPR, 2017.

J. Zhu, T. Park, P. Isola, and A. A. Efros, Unpaired image-to-image translation using cycle-consistent adversarial networks, ICCV, 2017.

I. Malik and C. H. Ek, Neural translation of musical style, ArXiv, 2017.

E. Nakamura, K. Shibata, R. Nishikimi, and K. Yoshii, Unsupervised melody style conversion, ICASSP, 2019.

O. Cífka, U. Şimşekli, and G. Richard, Supervised symbolic music style translation using synthetic data, ISMIR, 2019.
DOI: 10.5281/zenodo.3527878

L. Fei-Fei, R. Fergus, and P. Perona, One-shot learning of object categories, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.28, pp.594-611, 2006.

D. J. Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra, One-shot generalization in deep generative models, ICML, 2016.

S. Dai, Z. Zhang, and G. Xia, Music style transfer: A position paper, Proceedings of the 6th International Workshop on Musical Metacreation (MUME), 2018.

Y. Hung, I. Chiang, Y. Chen, and Y. Yang, Musical composition style transfer via disentangled timbre representations, IJCAI, 2019.

F. Pachet and P. Roy, Non-conformant harmonization: the Real Book in the style of Take 6, ICCC, 2014.

G. Hadjeres, J. Sakellariou, and F. Pachet, Style imitation and chord invention in polyphonic music with exponential families, ArXiv, 2016.

G. Brunner, A. Konrad, Y. Wang, and R. Wattenhofer, MIDI-VAE: Modeling dynamics and instrumentation of music with applications to style transfer, ISMIR, 2018.

G. Brunner, Y. Wang, R. Wattenhofer, and S. Zhao, Symbolic music genre transfer with CycleGAN, ICTAI, 2018.

W. Lu and L. Su, Transferring the style of homophonic music using recurrent neural networks and autoregressive models, ISMIR, 2018.

C. Lu, M. Xue, C. Chang, C. Lee, and L. Su, Play as you like: Timbre-enhanced multi-modal music style transfer, AAAI, 2018.

S. Huang, Q. Li, C. Anil, X. Bao, S. Oore et al., TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) pipeline for musical timbre transfer, ICLR, 2019.

G. Hadjeres and F. Pachet, DeepBach: a steerable model for Bach chorales generation, ICML, 2017.

C. A. Huang, T. Cooijmans, A. Roberts, A. C. Courville, and D. Eck, Counterpoint by convolution, ISMIR, 2017.

K. Choi, C. Hawthorne, I. Simon, M. Dinculescu, and J. Engel, Encoding musical style with transformer autoencoders, ArXiv, 2019.

S. Lattner and M. Grachten, High-level control of drum track generation using learned patterns of rhythmic interaction, 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2019.

E. Grinstein, N. Q. Duong, A. Ozerov, and P. Pérez, Audio style transfer, ICASSP, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01626389

J. Driedger, T. Prätzlich, and M. Müller, Let it Bee - towards NMF-inspired audio mosaicing, ISMIR, 2015.

C. J. Tralie, Cover song synthesis by analogy, ISMIR, 2018.

A. Zils and F. Pachet, Musical mosaicing, COST G-6 Conference on Digital Audio Effects (DAFX-01), 2001.

Y. Zhang, W. Cai, and Y. Zhang, Separating style and content for generalized style transfer, CVPR, 2018.

Y. Broze and D. Shanahan, Diachronic changes in jazz harmony, Music Perception: An Interdisciplinary Journal, vol.31, no.1, pp.32-45, 2013.

I. Simon and S. Oore, Performance RNN: Generating music with expressive timing and dynamics, Magenta Blog, 2017.

C. Payne, MuseNet, OpenAI Blog, 2019.

D. Clevert, T. Unterthiner, and S. Hochreiter, Fast and accurate deep network learning by exponential linear units (ELUs), ICLR, 2016.

K. Cho, B. van Merriënboer, Ç. Gülçehre, D. Bahdanau, F. Bougares et al., Learning phrase representations using RNN encoder-decoder for statistical machine translation, EMNLP, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01433235

D. Bahdanau, K. Cho, and Y. Bengio, Neural machine translation by jointly learning to align and translate, ICLR, 2015.

D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, ICLR, 2015.

C. McKay, Automatic genre classification of MIDI recordings, Master's thesis, McGill University, 2004.

J. Sakellariou, F. Tria, V. Loreto, and F. Pachet, Maximum entropy models capture melodic styles, Scientific Reports, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01585581

C. McKay and I. Fujinaga, The Bodhidharma system and the results of the MIREX 2005 symbolic genre classification contest, ISMIR, 2005.

MIDI Manufacturers Association, General MIDI system level 1, 1991.

L. van der Maaten and G. Hinton, Visualizing data using t-SNE, Journal of Machine Learning Research, vol.9, pp.2579-2605, 2008.

A. Roberts, J. Engel, C. Raffel, C. Hawthorne, and D. Eck, A hierarchical latent vector model for learning long-term structure in music, ICML, 2018.

J. Thickstun, Z. Harchaoui, D. P. Foster, and S. M. Kakade, Coupled recurrent models for polyphonic music composition, ISMIR, 2018.