But this methodology is more limited when the objective is not to generate music highly conformant to a relatively narrow style (and corpus), as in the case of J. S. Bach chorales, but to generate more creative music. Moreover, if we consider as a general objective for a system the capacity to assist composers and musicians, rather than to autonomously generate music (see Section 1.1.2), we should perhaps consider as an evaluation criterion the satisfaction of the composer (notably, whether the assistance of the computer allowed them to compose and create music that they consider would not have been possible otherwise).

This will unfortunately favor the quality (actually, the conformance) of the generated musical content with regard to the learnt style, rather than its intrinsic quality (interest). Current experiments and directions to promote creativity rely mostly on constraints to avoid plagiarism and/or on heuristics to incentivize generation outside the "comfort zone" that the deep architecture has learnt from the corpus, while balancing elements of surprise with predictability/understandability. Such creativity control may be applied during the training phase (the case of the CAN architecture, see Section 6.13.2) or (for most types of control, see Sections 6.10 and 6.13) during the generation phase. An alternative (and complementary) direction to better model such an element of surprise could be to include a model of an artificial listener with some model of expectation.

Last, an additional fundamental limitation is that current deep learning techniques for learning and generating music are based on artefacts (actual musical data), independently of the processes and the culture that have led to them. If we want to envision more profound systems, it is likely that we will have to incorporate some modeling of the context and the process leading to musical artefacts, and not only the artefacts themselves. Indeed, art history shows that styles have repeatedly evolved by extending or breaking established conventions.

About the possibility of more systematic objective criteria for evaluation, we can for example look at the analysis by Theis et al. for the case of image generation [187]. The authors state that the evaluation of image generative models is multicriteria, via different possible metrics such as log-likelihood, Parzen window estimates, or qualitative visual fidelity, and that a good result with respect to one criterion does not necessarily imply a good result with respect to another.
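To make the Parzen window criterion concrete, the following is a minimal sketch in plain Python (the toy data, the bandwidth sigma and the helper name are illustrative assumptions, not from the cited paper): it scores held-out data under a mixture of Gaussian kernels centred on the generated samples.

```python
import math
import random

def parzen_log_likelihood(samples, test_points, sigma=0.2):
    """Average log-density of a Gaussian Parzen window (KDE) fitted on
    `samples`, evaluated at `test_points` (lists of equal-length tuples)."""
    d = len(samples[0])
    norm = d * math.log(sigma * math.sqrt(2 * math.pi))  # Gaussian constant
    total = 0.0
    for x in test_points:
        # Squared distance of x to every kernel centre, scaled by sigma.
        exps = [-0.5 * sum(((si - xi) / sigma) ** 2 for si, xi in zip(s, x))
                for s in samples]
        m = max(exps)  # log-sum-exp shift for numerical stability
        log_mean = m + math.log(sum(math.exp(e - m) for e in exps) / len(exps))
        total += log_mean - norm
    return total / len(test_points)

rng = random.Random(0)
gen = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(500)]   # "generated"
near = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(100)]  # in-distribution
far = [(rng.gauss(3, 1), rng.gauss(3, 1)) for _ in range(100)]   # off-distribution
print(parzen_log_likelihood(gen, near) > parzen_log_likelihood(gen, far))  # True
```

A model can score well here while producing perceptually poor samples (and vice versa), which is exactly the multicriteria caveat raised by Theis et al.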

Bach chorales, and more generally speaking Bach's music, are often used for experiments and evaluation, because the corpus is quite homogeneous regarding a given style (e.g., preludes, chorales, etc.) as well as regarding quality. It also fits particularly well with algorithmic composition.

As, for instance, pioneered by the FlowComposer prototype.

Constructing a corpus from the musical pieces considered "best", independently of the style (classical, jazz, pop, etc.), as a museum or an exhibition could do by presenting in a single room its best artefacts of different natures and origins, is not likely to produce interesting results, because such a corpus is too sparse and heterogeneous.

E.g., the extension of classical harmony based on triads (only root, third and fifth) to extended chords.
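The triad-to-extended-chord footnote above can be illustrated with a small sketch (the semitone offsets follow common conventions, but the chord spellings and helper name are illustrative assumptions):

```python
# Chords represented as semitone offsets from the root.
TRIAD = [0, 4, 7]                 # root, major third, perfect fifth
EXTENSIONS = {"7": 10, "maj7": 11, "9": 14, "11": 17, "13": 21}

def extend(triad, *degrees):
    """Stack further thirds on a triad to form an extended chord."""
    return triad + [EXTENSIONS[d] for d in degrees]

c_major = TRIAD                    # C E G (root = 0)
c9 = extend(TRIAD, "7", "9")       # C E G Bb D
print(c9)                          # [0, 4, 7, 10, 14]
```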

E.g., movements like dodecaphonism or free jazz.

The modeling of the context is one of the limitations of current deep learning architectures and is a topic of ongoing research. An illustrative real counterexample is the case of a Chinese woman (the chairwoman of China's biggest maker of air conditioners) who in 2018 found her face displayed on a huge screen in the port city of Ningbo that shows images of people caught jaywalking by surveillance cameras.

References

M. Allan and C. K. I. Williams, Harmonising chorales by probabilistic inference, Advances in Neural Information Processing Systems, vol.17, pp.25-32, 2005.

G. Amato, M. Behrmann, F. Bimbot, B. Caramiaux, F. Falchi et al., AI in the media and creative industries, 2019.

G. Assayag, C. Rueda, M. Laurson, C. Agon, and O. Delerue, Computer assisted composition at IRCAM: From PatchWork to OpenMusic, Computer Music Journal (CMJ), vol.23, issue.3, pp.59-72, 1999.

L. J. Ba and R. Caruana, Do deep nets really need to be deep?, 2014.

J. S. Bach, 389 Chorales (Choral-Gesange), 1985.

D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen et al., How to explain individual classification decisions, Journal of Machine Learning Research, issue.11, pp.1803-1831, 2010.

G. Barbieri, F. Pachet, P. Roy, and M. Esposti, Markov constraints for generating lyrics with style, Proceedings of the 20th European Conference on Artificial Intelligence (ECAI 2012), pp.115-120, 2012.

Y. Bengio, A. Courville, and P. Vincent, Representation learning: A review and new perspectives, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol.35, issue.8, pp.1798-1828, 2013.

P. Bojanowski, A. Joulin, D. Lopez-paz, and A. Szlam, Optimizing the latent space of generative networks, 2017.

D. Bouchacourt, E. Denton, T. Kulkarni, H. Lee, and S. Narayanaswamy, NIPS 2017 Workshop on Learning Disentangled Representations: from Perception to Control, 2017.

N. Boulanger-lewandowski, Y. Bengio, and P. Vincent, Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription, Proceedings of the 29th International Conference on Machine Learning (ICML-12), pp.1159-1166, 2012.

N. Boulanger-lewandowski, Y. Bengio, and P. Vincent, Chapter 14th -Modeling and generating sequences of polyphonic music with the RNN-RBM, Deep Learning Tutorial -Release 0.1, pp.149-158, 2015.

M. Bretan, G. Weinberg, and L. Heck, A unit selection methodology for music generation using deep neural networks, Proceedings of the 8th International Conference on Computational Creativity (ICCC 2017), pp.72-79, 2017.

J. Briot, G. Hadjeres, and F. Pachet, Deep learning techniques for music generation -A survey, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01660772

J. Briot, G. Hadjeres, and F. Pachet, Deep Learning Techniques for Music Generation. Computational Synthesis and Creative Systems, 2019.
URL : https://hal.archives-ouvertes.fr/hal-01840918

J.-P. Briot and F. Pachet, Music generation by deep learning -Challenges and directions, Neural Computing and Applications (NCAA), 2018.
URL : https://hal.archives-ouvertes.fr/hal-01660753

S. Carter, Z. Armstrong, L. Schubert, I. Johnson, and C. Olah, Activation atlas. Distill, 2019.

D. Castelvecchi, The black box of AI, Nature, vol.538, pp.20-23, 2016.

E. C. Cherry, Some experiments on the recognition of speech, with one and two ears, The Journal of the Acoustical Society of America, vol.25, issue.5, pp.975-979, 1953.

K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares et al., Learning phrase representations using RNN Encoder-Decoder for statistical machine translation, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01433235

K. Choi, G. Fazekas, K. Cho, and M. Sandler, A tutorial on deep learning for music information retrieval, 2017.

K. Choi, G. Fazekas, and M. Sandler, Text-based LSTM networks for automatic music composition, 1st Conference on Computer Simulation of Musical Creativity (CSMC 16), 2016.

F. Chollet, Building autoencoders in Keras, 2016.

A. Choromanska, M. Henaff, M. Mathieu, G. Ben Arous, and Y. LeCun, The loss surfaces of multilayer networks, 2015.

J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, Empirical evaluation of gated recurrent neural networks on sequence modeling, 2014.

Y. Chung, C. Wu, C. Shen, H. Lee, and L. Lee, Audio Word2Vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder, 2016.

D. Cope, The Algorithmic Composer. A-R Editions, 2000.

D. Cope, Computer Models of Musical Creativity, 2005.

F. Costa, T. Gärtner, A. Passerini, and F. Pachet, Constructive Machine Learning -Workshop Proceedings, 2016.

M. Dahia, H. Santana, E. Trajano, C. Sandroni, and G. Ramalho, Generating rhythmic accompaniment for guitar: the Cyber-João case study, Proceedings of the IX Brazilian Symposium on Computer Music (SBCM 2003), pp.7-13, 2003.

S. Dai, Z. Zhang, and G. G. Xia, Music style transfer issues: A position paper, 2018.

E. Trajano de Lima and G. Ramalho, On rhythmic pattern extraction in bossa nova music, Proceedings of the 9th International Conference on Music Information Retrieval (ISMIR 2008), pp.641-646, 2008.

R. T. Dean and A. McLean, The Oxford Handbook of Algorithmic Music. Oxford Handbooks, 2018.

J. Deltorn, Deep creations: Intellectual property and the automata, Frontiers in Digital Humanities, vol.4, 2017.

M. Denil, L. Bazzani, H. Larochelle, and N. de Freitas, Learning where to attend with deep architectures for image tracking, 2011.

G. Desjardins, A. Courville, and Y. Bengio, Disentangling factors of variation via generative entangling, 2012.

R. Dipietro, A friendly introduction to cross-entropy loss

C. Doersch, Tutorial on variational autoencoders, 2016.

P. Domingos, A few useful things to know about machine learning, Communications of the ACM (CACM), vol.55, issue.10, pp.78-87, 2012.

K. Doya and E. Uchibe, The Cyber Rodent project: Exploration of adaptive mechanisms for self-preservation and selfreproduction, Adaptive Behavior, vol.13, issue.2, pp.149-160, 2005.

S. Dubnov and G. Surges, Chapter 6 -Delegating creativity: Use of musical algorithms in machine listening and composition, Digital Da Vinci -Computers in Music, pp.127-158, 2014.

K. Ebcioglu, An expert system for harmonizing four-part chorales, Computer Music Journal (CMJ), vol.12, issue.3, pp.43-51, 1988.

D. Eck and J. Schmidhuber, A first look at music composition using LSTM recurrent neural networks, Technical report, IDSIA, Manno, Switzerland, 2002.

R. Eldan and O. Shamir, The power of depth for feedforward neural networks, 2016.

A. Elgammal, B. Liu, M. Elhoseiny, and M. Mazzone, CAN: Creative adversarial networks, generating "art" by learning about styles and deviating from style norms, 2017.

Deep learning machine solves the cocktail party problem, MIT Technology Review, 2015.

D. Erhan, Y. Bengio, A. Courville, P. Manzagol, and P. Vincent, Why does unsupervised pre-training help deep learning, Journal of Machine Learning Research, issue.11, pp.625-660, 2010.

D. Eck,

F. Pachet, Flow Machines -Artificial Intelligence for the future of music, 2012.

O. Fabius, R. Joost, and . Van-amersfoort, Variational recurrent auto-encoders, 2015.

J. D. Fernández and F. Vico, AI methods in algorithmic composition: A comprehensive survey, Journal of Artificial Intelligence Research, issue.48, pp.513-582, 2013.

R. Fiebrink and B. Caramiaux, The machine learning algorithm as creative musical tool, 2016.

D. Foote, D. Yang, and M. Rohaninejad, Audio style transfer -Do androids dream of electric beats?, 2016.

E. Foxley, Nottingham Database

E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Professional Computing Series, 1995.

L. A. Gatys, A. S. Ecker, and M. Bethge, A neural algorithm of artistic style, 2015.

L. A. Gatys, A. S. Ecker, and M. Bethge, Image style transfer using convolutional neural networks, Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.2414-2423, 2016.

R. Gauldin, A Practical Approach to Eighteenth-Century Counterpoint, 1988.

M. Genesereth and Y. Björnsson, The international general game playing competition. AI Magazine, pp.107-111, 2013.

F. A. Gers and J. Schmidhuber, Recurrent nets that time and count, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000), Neural Computing: New Challenges and Perspectives for the New Millennium, vol.3, pp.189-194, 2000.

K. Goel, R. Vohra, and J. K. Sahoo, Polyphonic music generation by modeling temporal dependencies using a RNN-DBN, Proceedings of the International Conference on Artificial Neural Networks, number 8681 in Theoretical Computer Science and General Issues, pp.217-224, 2014.

M. Good, MusicXML for notation and analysis, The Virtual Score: Representation, Retrieval, Restoration, pp.113-124, 2001.

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, 2016.

I. J. Goodfellow, J. Pouget-abadie, M. Mirza, B. Xu, D. Warde-farley et al., Generative adversarial nets, 2014.

A. Graves, Generating sequences with recurrent neural networks, 2014.

A. Graves, G. Wayne, and I. Danihelka, Neural Turing machines, 2014.

G. Hadjeres, Interactive Deep Generative Models for Symbolic Music, 2018.
URL : https://hal.archives-ouvertes.fr/tel-02108994

G. Hadjeres and F. Nielsen, Interactive music generation with positional constraints using Anticipation-RNN, 2017.

G. Hadjeres, F. Nielsen, and F. Pachet, GLSR-VAE: Geodesic latent space regularization for variational autoencoder architectures, 2017.

G. Hadjeres, F. Pachet, and F. Nielsen, DeepBach: a steerable model for Bach chorales generation, 2017.

J. Hao, Hao staff piano roll sheet music

D. Hörnel, ChordNet: Learning and producing voice leading with neural networks and dynamic programming, Journal of New Music Research (JNMR), vol.33, issue.4, pp.387-397, 2004.

T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer Series in Statistics, 2009.

K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, 2015.

D. Herremans and C. Chuan, Deep Learning for Music -Workshop Proceedings, 2017.

D. Herremans and C. Chuan, The emergence of deep learning: new opportunities for music and audio technologies. Neural Computing and Applications (NCAA), 2019.

W. Hewlett, F. Bennion, E. Correia, and S. Rasmussen, MuseData -an electronic library of classical music scores

L. A. Hiller and L. M. Isaacson, Experimental Music: Composition with an Electronic Computer, 1959.

G. E. Hinton, Training products of experts by minimizing contrastive divergence, Neural Computation, vol.14, issue.8, pp.1771-1800, 2002.

G. E. Hinton, S. Osindero, and Y. Teh, A fast learning algorithm for deep belief nets, Neural Computation, vol.18, issue.7, pp.1527-1554, 2006.

G. E. Hinton and R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science, vol.313, issue.5786, pp.504-507, 2006.

G. E. Hinton and T. J. Sejnowski, Learning and relearning in Boltzmann machines, Parallel Distributed Processing -Explorations in the Microstructure of Cognition: Volume 1 Foundations, pp.282-317, 1986.

S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Computation, vol.9, issue.8, pp.1735-1780, 1997.

D. Hofstadter, Staring Emmy straight in the eye -and doing my best not to flinch, Virtual Music: Computer Synthesis of Musical Style, pp.33-82, 2001.

Hooktheory, Theorytabs.

K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks, vol.4, issue.2, pp.251-257, 1991.

A. Huang and R. Wu, Deep learning for music, 2016.

C. Huang, D. Duvenaud, and K. Z. Gajos, ChordRipple: Recommending chords to help novice composers go beyond the ordinary, Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI 16), pp.241-250, 2016.

C.-Z. A. Huang, A. Vaswani, J. Uszkoreit, N. Shazeer, I. Simon et al., Music Transformer: Generating music with long-term structure, 2018.

E. J. Humphrey, J. P. Bello, and Y. LeCun, Feature learning and deep architectures: New directions for music informatics, Journal of Intelligent Information Systems, vol.41, issue.3, pp.461-481, 2013.

P. Hutchings and J. Mccormack, Using autonomous agents to improvise music compositions in real-time, Computational Intelligence in Music, Sound, Art and Design -6th International Conference, pp.114-127, 2017.

S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, 2015.

N. Jaques, S. Gu, R. E. Turner, and D. Eck, Tuning recurrent neural networks with reinforcement learning, 2016.

D. Johnson, Composing music with recurrent neural networks, 2015.

D. D. Johnson, Generating polyphonic music using tied parallel networks, Computational Intelligence in Music, Sound, Art and Design -6th International Conference, pp.128-143, 2017.

L. P. Kaelbling, M. L. Littman, and A. W. Moore, Reinforcement learning: A survey, Journal of Artificial Intelligence Research (JAIR), issue.4, pp.237-285, 1996.

U. Karn, An intuitive explanation of convolutional neural networks, 2016.

A. Karpathy, The unreasonable effectiveness of recurrent neural networks, 2015.

J. Keith, The Session

P.-J. Kindermans, S. Hooker, J. Adebayo, M. Alber, K. T. Schütt et al., The (un)reliability of saliency methods, 2017.

P. Kindermans, K. T. Schütt, M. Alber, K. Müller, D. Erhan et al., Learning how to explain neural networks: PatternNet and PatternAttribution, 2017.

D. P. Kingma and M. Welling, Auto-encoding variational Bayes, 2014.

J. Koutník, K. Greff, F. Gomez, and J. Schmidhuber, A Clockwork RNN, 2014.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Proceedings of the 25th International Conference on Neural Information Processing Systems, vol.1, pp.1097-1105, 2012.

B. Krueger, Classical Piano Midi Page

A. Kurenkov, A 'brief' history of neural nets and deep learning, 2015.

P. Lam, MCMC methods: Gibbs sampling and the Metropolis-Hastings algorithm

K. J. Lang, A. H. Waibel, and G. E. Hinton, A time-delay neural network architecture for isolated word recognition, Neural Networks, vol.3, issue.1, pp.23-43, 1990.

S. Lattner, M. Grachten, and G. Widmer, Imposing higher-level structure in polyphonic music generation using convolutional restricted Boltzmann machines and constraints, Journal of Creative Music Systems (JCMS), vol.2, 2018.

Q. V. Le, M. Ranzato, R. Monga, M. Devin, K. Chen et al., Building high-level features using large scale unsupervised learning, 29th International Conference on Machine Learning, 2012.

Y. LeCun and Y. Bengio, Convolutional networks for images, speech, and time-series, The Handbook of Brain Theory and Neural Networks, pp.255-258, 1998.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE, vol.86, pp.2278-2324, 1998.

Y. LeCun, C. Cortes, and C. J. Burges, The MNIST database of handwritten digits, 1998.

Y. LeCun, J. S. Denker, and S. A. Solla, Optimal brain damage, Advances in Neural Information Processing Systems, vol.2, pp.598-605, 1990.

H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng, Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations, Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009), pp.609-616, 2009.

A. Lemme, R. F. Reinhart, and J. Steil, Online learning and generalization of parts-based image representations by non-negative sparse autoencoders, Neural Networks, vol.33, pp.194-203, 2012.

F. Li, A. Karpathy, and J. Johnson, Convolutional neural networks (CNNs / ConvNets) -CS231n Convolutional neural networks for visual recognition Lecture Notes, 2016.

F. Liang, BachBot, 2016.

F. Liang, BachBot: Automatic composition in the style of Bach chorales -Developing, analyzing, and evaluating a deep LSTM model for musical style, Master's thesis, Machine Learning, Speech, and Language Technology, 2016.

H. Lim, S. Ryu, and K. Lee, Chord generation from symbolic melody using BLSTM networks, Proceedings of the 18th International Society for Music Information Retrieval Conference, pp.621-627, 2017.

Q. Lyu, Z. Wu, J. Zhu, and H. Meng, Modelling high-dimensional sequences with LSTM-RTRBM: Application to polyphonic music generation, Proceedings of the 24th International Conference on Artificial Intelligence, pp.4138-4139, 2015.

S. Madjiheurem, L. Qu, and C. Walder, Chord2Vec: Learning musical chord embeddings, Proceedings of the Constructive Machine Learning Workshop at 30th Conference on Neural Information Processing Systems (NIPS 2016), 2016.

D. Makris, M. Kaliakatsos-papakostas, I. Karydis, and K. L. Kermanidis, Combining LSTM and feed forward neural networks for conditional rhythm composition, Engineering Applications of Neural Networks: 18th International Conference, pp.570-582, 2017.

I. Malik and C. H. Ek, Neural translation of musical style, 2017.

S. Mallat, GANs vs VAEs, 2018.

R. Manzelli, V. Thakkar, A. Siahkamari, and B. Kulis, Conditioning deep generative raw audio models for structured automatic music, Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR 2018), pp.182-189, 2018.

H. H. Mao, T. Shin, and G. W. Cottrell, DeepJ: Style-specific music generation, 2018.

J. A. Maurer, A brief history of algorithmic composition, 1999.

S. Mehri, K. Kumar, I. Gulrajani, R. Kumar, S. Jain et al., SampleRNN: An unconditional end-to-end neural audio generation model, 2017.

T. Mikolov, K. Chen, G. Corrado, and J. Dean, Efficient estimation of word representations in vector space, 2013.

M. Minsky and S. Papert, Perceptrons: An Introduction to Computational Geometry, 1969.

T. M. Mitchell, Machine Learning, 1997.

MIDI Manufacturers Association (MMA), MIDI Specifications.

V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou et al., Playing Atari with deep reinforcement learning, 2013.

O. Mogren, C-RNN-GAN: Continuous recurrent neural networks with adversarial training, 2016.

G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K. Müller, Explaining nonlinear classification decisions with deep Taylor decomposition, Pattern Recognition, issue.65, pp.211-222, 2017.

A. Mordvintsev, C. Olah, and M. Tyka, Deep Dream, 2015.

D. Morris, I. Simon, and S. Basu, Exposing parameters of a trained dynamic model for interactive music creation, Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI 2008), pp.784-791, 2008.

M. C. Mozer, Neural network composition by prediction: Exploring the benefits of psychophysical constraints and multiscale processing, Connection Science, vol.6, issue.2-3, pp.247-280, 1994.

K. P. Murphy, Machine Learning: a Probabilistic Perspective, 2012.

A. Ng, Sparse autoencoder -CS294A/CS294W Lecture notes -Deep Learning and Unsupervised Feature Learning Course, 2011.

A. Ng, CS229 Lecture notes -Machine Learning Course -Part I Linear Regression, 2016.

A. Ng, CS229 Lecture notes -Machine Learning Course -Part IV Generative Learning algorithms, Autumn, 2016.

G. Nierhaus, Algorithmic Composition: Paradigms of Automated Music Generation, 2009.

M. J. Osborne and A. Rubinstein, A Course in Game Theory, 1994.

F. Pachet, Beyond the cybernetic jam fantasy: The Continuator, IEEE Computer Graphics and Applications (CG&A), vol.4, issue.1, pp.31-35, 2004.

F. Pachet, J. Suzda, and D. Martín, A comprehensive online database of machine-readable leadsheets for Jazz standards, Proceedings of the 14th International Society for Music Information Retrieval Conference (ISMIR 2013), pp.275-280, 2013.

F. Pachet, A. Papadopoulos, and P. Roy, Sampling variations of sequences for structured music generation, Proceedings of the 18th International Society for Music Information Retrieval Conference, pp.167-173, 2017.

F. Pachet and P. Roy, Markov constraints: Steerable generation of Markov sequences, Constraints, vol.16, issue.2, pp.148-172, 2011.

F. Pachet and P. Roy, Imitative leadsheet generation with user constraints, ECAI 2014 -Proceedings of the 21st European Conference on Artificial Intelligence, Frontiers in Artificial Intelligence and Applications, pp.1077-1078, 2014.

F. Pachet, P. Roy, and G. Barbieri, Finite-length Markov processes with constraints, Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI 2011), pp.635-642, 2011.

A. Papadopoulos, F. Pachet, and P. Roy, Generating non-plagiaristic Markov sequences with max order sampling, Creativity and Universality in Language, 2016.

A. Papadopoulos, P. Roy, and F. Pachet, Assisted lead sheet composition using FlowComposer, Principles and Practice of Constraint Programming: 22nd International Conference, pp.769-785, 2016.

A. Parvat, J. Chavan, and S. Kadam, A survey of deep-learning frameworks, Proceedings of the International Conference on Inventive Systems and Control (ICISC 2017), 2017.

K. Pei, Y. Cao, J. Yang, and S. Jana, DeepXplore: Automated whitebox testing of deep learning systems, 2017.

F. Preiswerk, Shannon entropy in the context of machine learning and AI

M. Ramona, G. Cabral, and F. Pachet, Capturing a musician's groove: Generation of realistic accompaniments from single song recordings, Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015) -Demos Track, pp.4140-4141, 2015.

B. Ramsundar and R. B. Zadeh, TensorFlow for Deep Learning, 2018.

D. Ringach and R. Shapley, Reverse correlation in neurophysiology, Cognitive Science, vol.28, pp.147-166, 2004.

C. Roads, The Computer Music Tutorial, 1996.

A. Roberts, MusicVAE supplementary materials.

A. Roberts, J. Engel, C. Raffel, C. Hawthorne, and D. Eck, A hierarchical latent vector model for learning long-term structure in music, Proceedings of the 35th International Conference on Machine Learning, 2018.

A. Roberts, J. Engel, C. Raffel, C. Hawthorne, and D. Eck, A hierarchical latent vector model for learning long-term structure in music, 2018.

A. Roberts, J. Engel, C. Raffel, I. Simon, and C. Hawthorne, MusicVAE: Creating a palette for musical scores with machine learning, 2018.

S. Ronaghan, Deep learning: Which loss and activation functions should I use?

F. Rosenblatt, The Perceptron -A perceiving and recognizing automaton, 1957.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature, vol.323, issue.6088, pp.533-536, 1986.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh et al., ImageNet Large Scale Visual Recognition Challenge, International Journal of Computer Vision (IJCV), vol.115, issue.3, pp.211-252, 2015.

T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford et al., Improved techniques for training GANs, 2016.

A. M. Sarroff and M. Casey, Musical audio synthesis using autoencoding neural nets, 2014.

M. Schuster and K. K. Paliwal, Bidirectional recurrent neural networks, IEEE Transactions on Signal Processing, issue.11, pp.2673-2681, 1997.

M. Shaw and D. Garlan, Software Architecture: Perspectives on an Emerging Discipline, 1996.

R. N. Shepard, Geometric approximations to the structure of musical pitch, Psychological Review, issue.89, pp.305-333, 1982.

I. Simon and S. Oore, Performance RNN: Generating music with expressive timing and dynamics, 2017.

I. Simon, A. Roberts, C. Raffel, J. Engel, C. Hawthorne et al., Learning a latent space of multitrack measures, 2018.

Spotify for Artists, Innovating for writers and artists.

M. Steedman, A generative grammar for Jazz chord sequences, Music Perception, vol.2, issue.1, pp.52-77, 1984.

B. L. Sturm and J. F. Santos, The endless traditional music session.

B. L. Sturm, J. F. Santos, O. Ben-tal, and I. Korshunova, Music transcription modelling and composition using deep learning, Proceedings of the 1st Conference on Computer Simulation of Musical Creativity (CSCM 16), 2016.

F. Sun, DeepHear -Composing and harmonizing music with neural networks

I. Sutskever, O. Vinyals, and Q. V. Le, Sequence to sequence learning with neural networks, Advances in Neural Information Processing Systems, vol.27, pp.3104-3112, 2014.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed et al., Going deeper with convolutions, 2014.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan et al., Intriguing properties of neural networks, 2014.

L. Tao, Facial recognition snares China's air con queen Dong Mingzhu for jaywalking, but it's not what it seems. South China Morning Post, 2018.

D. Temperley, The Cognition of Basic Musical Structures, 2011.

The International Association for Computational Creativity, International Conferences on Computational Creativity (ICCC).

L. Theis, A. van den Oord, and M. Bethge, A note on the evaluation of generative models, 2015.

J. Thickstun, Z. Harchaoui, and S. Kakade, Learning features of music from scratch, 2016.

A. Tikhonov and I. P. Yamshchikov, Music generation with variational recurrent autoencoder supported by history, 2017.

P. M. Todd, A connectionist approach to algorithmic composition, Computer Music Journal (CMJ), vol.13, issue.4, pp.27-43, 1989.

A. M. Turing, Computing machinery and intelligence, Mind, vol.59, pp.433-460, 1950.

D. Ulyanov and V. Lebedev, Audio texture synthesis and style transfer, 2016.

G. Urban, K. J. Geras, S. E. Kahou, O. Aslan, S. Wang et al., Do deep convolutional nets really need to be deep (or even convolutional)?, 2016.

A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals et al., WaveNet: A generative model for raw audio, 2016.

A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves et al., Conditional image generation with PixelCNN decoders, 2016.

H. van Hasselt, A. Guez, and D. Silver, Deep reinforcement learning with double Q-learning, 2015.

V. N. Vapnik, The Nature of Statistical Learning Theory. Statistics for Engineering and Information Science, 1995.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones et al., Attention is all you need, 2017.

K. Veselý, A. Ghoshal, L. Burget, and D. Povey, Sequence-discriminative training of deep neural networks, Proceedings of the 14th Annual Conference of the International Speech Communication Association, pp.2345-2349, 2013.

C. Walder, Modelling symbolic music: Beyond the piano roll, 2016.

C. Walder, Symbolic Music Data Version 1.0, 2016.

C. Walshaw, ABC notation home page

C. J. C. H. Watkins and P. Dayan, Q-learning, Machine Learning, vol.8, pp.279-292, 1992.

R. P. Whorley and D. Conklin, Music generation from statistical models of harmony, Journal of New Music Research (JNMR), vol.45, issue.2, pp.160-183, 2016.

WikiArt.org, WikiArt -Visual Art Encyclopedia.

C. Wild, What a disentangled net we weave: Representation learning in VAEs (Pt. 1), 2018.

M. Wooldridge, An Introduction to MultiAgent Systems, 2009.

L. Wyse, Audio spectrogram representations for processing with convolutional neural networks, Proceedings of the 1st International Workshop on Deep Learning for Music, pp.37-41, 2017.

I. Xenakis, Formalized Music: Thought and Mathematics in Composition, 1963.

Yamaha.

X. Yan, J. Yang, K. Sohn, and H. Lee, Attribute2Image: Conditional image generation from visual attributes, 2016.

L. Yang, S. Chou, and Y. Yang, MidiNet: A convolutional generative adversarial network for symbolic-domain music generation, Proceedings of the 18th International Society for Music Information Retrieval Conference, pp.324-331, 2017.

J. Georg-zilly, R. Srivastava, J. Koutník, and J. Schmidhuber, Recurrent highway networks, 2017.