R. Calandra, T. Raiko, M. Deisenroth, and F. Pouzols, Learning Deep Belief Networks from Non-stationary Streams, Artificial Neural Networks and Machine Learning – ICANN 2012, pp. 379–386, 2012.
DOI: 10.1007/978-3-642-33266-1_47

C. Fernando, D. Banarse, C. Blundell, Y. Zwols, D. Ha et al., PathNet: Evolution channels gradient descent in super neural networks, arXiv preprint arXiv:1701.08734, 2017.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley et al., Generative adversarial nets, Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.

S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, arXiv preprint arXiv:1502.03167, 2015.

D. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980, 2014.

Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol. 521, no. 7553, pp. 436–444, 2015.
DOI: 10.1038/nature14539

A. Radford, L. Metz, and S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv preprint arXiv:1511.06434, 2015.

A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick et al., Progressive neural networks, arXiv preprint arXiv:1606.04671, 2016.

N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, Dropout: A simple way to prevent neural networks from overfitting, Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

G. I. Webb, R. Hyde, H. Cao, H. L. Nguyen, and F. Petitjean, Characterizing concept drift, Data Mining and Knowledge Discovery, vol. 30, no. 4, pp. 964–994, 2016.
DOI: 10.1007/s10618-015-0448-4