M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen et al., TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.

Y. Adi, C. Baum, M. Cisse, B. Pinkas, and J. Keshet, Turning your weakness into a strength: Watermarking deep neural networks by backdooring, 27th USENIX Security Symposium (USENIX Security 18), pp.1615-1631, 2018.

E. van den Berg, Some insights into the geometry and training of neural networks, 2016.

G. W. Braudaway, K. A. Magerlein, and F. C. Mintzer, Color correct digital watermarking of images, United States Patent 5530759, 1996.

N. Carlini and D. A. Wagner, Audio adversarial examples: Targeted attacks on speech-to-text, 2018.

C. Y. Chang and S. J. Su, A neural-network-based robust watermarking scheme, 2005.

F. Chollet et al., Keras, https://keras.io, 2015.

T. Davchev, T. Korres, S. Fotiadis, N. Antonopoulos, and S. Ramamoorthy, An empirical evaluation of adversarial robustness under transfer learning, ICML Workshop on Understanding and Improving Generalization in Deep Learning, 2019.

V. Duddu, D. Samanta, D. V. Rao, and V. E. Balas, Stealing neural networks via timing side channels, 2018.

I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples, 2015.

K. Grosse, P. Manoharan, N. Papernot, M. Backes, and P. D. McDaniel, On the (statistical) detection of adversarial examples, 2017.

J. Guo and M. Potkonjak, Watermarking deep neural networks for embedded systems, 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pp.1-8, 2018.

F. Hartung and M. Kutter, Multimedia watermarking techniques, Proceedings of the IEEE, vol.87, issue.7, pp.1079-1107, 1999.

Q. V. Le, N. Jaitly, and G. E. Hinton, A simple way to initialize recurrent networks of rectified linear units, 2015.

E. Le Merrer, P. Perez, and G. Trédan, Adversarial frontier stitching for remote neural network watermarking, 2017.
URL : https://hal.archives-ouvertes.fr/hal-02043818

E. Le Merrer and G. Trédan, TamperNN: Efficient tampering detection of deployed neural nets, 2019.

Y. LeCun, C. Cortes, and C. J. Burges, The MNIST database of handwritten digits, 1998.

S. Li, A. Neupane, S. Paul, C. Song, S. V. Krishnamurthy et al., Adversarial perturbations against real-time video classification systems, 2018.

Y. Liu, S. Ma, Y. Aafer, W. C. Lee, J. Zhai et al., Trojaning attack on neural networks, NDSS, 2018.

S. M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, Universal adversarial perturbations, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01992067

Y. Nagai, Y. Uchida, S. Sakazawa, and S. Satoh, Digital watermarking for deep neural networks, International Journal of Multimedia Information Retrieval, vol.7, issue.1, pp.3-16, 2018.

S. J. Oh, M. Augustin, M. Fritz, and B. Schiele, Towards reverse-engineering black-box neural networks, International Conference on Learning Representations, 2018.

N. Papernot, N. Carlini, I. Goodfellow, R. Feinman, F. Faghri et al., cleverhans v2.0.0: an adversarial machine learning library, 2017.

N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik et al., Practical black-box attacks against machine learning, ASIA CCS, 2017.

N. Papernot, P. D. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik et al., The limitations of deep learning in adversarial settings, 2015.

B. D. Rouhani, H. Chen, and F. Koushanfar, DeepSigns: A generic watermarking framework for IP protection of deep learning models, 2018.

A. Rozsa, M. Günther, and T. E. Boult, Are accuracy and robustness correlated?, ICMLA, 2016.

T. S. Sethi and M. Kantardzic, Data driven exploratory attacks on black box classifiers in adversarial domains, Neurocomputing, vol.289, pp.129-143, 2018.

A. Shafahi, W. R. Huang, C. Studer, S. Feizi, and T. Goldstein, Are adversarial examples inevitable?, 2018.

H. C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu et al., Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning, IEEE Transactions on Medical Imaging, vol.35, issue.5, pp.1285-1298, 2016.

F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, Stealing machine learning models via prediction APIs, USENIX Security Symposium, 2016.

F. Tramèr, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel, Ensemble adversarial training: Attacks and defenses, 2017.

Y. Uchida, Y. Nagai, S. Sakazawa, and S. Satoh, Embedding watermarks into deep neural networks, ICMR, 2017.

R. G. van Schyndel, A. Z. Tirkel, and C. F. Osborne, A digital watermark, Proceedings of the 1st International Conference on Image Processing, vol.2, pp.86-90, 1994.

B. Wang and N. Z. Gong, Stealing hyperparameters in machine learning, 2018.

X. Yuan, P. He, Q. Zhu, and X. Li, Adversarial examples: Attacks and defenses for deep learning, IEEE Transactions on Neural Networks and Learning Systems, pp.1-20, 2019.

J. Zhang, Z. Gu, J. Jang, H. Wu, M. P. Stoecklin et al., Protecting intellectual property of deep neural networks with watermarking, Proceedings of the 2018 on Asia Conference on Computer and Communications Security, pp.159-172, 2018.

X. Zhao, Q. Liu, H. Zheng, and B. Y. Zhao, Towards graph watermarks, 2015.