J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt et al., Sanity checks for saliency maps, Advances in Neural Information Processing Systems, pp.9505-9515, 2018.

M. Alber, S. Lapuschkin, P. Seegerer, M. Hägele, K. T. Schütt et al., iNNvestigate neural networks!, 2018.

S. Bach, A. Binder, G. Montavon, F. Klauschen, K. R. Müller et al., On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation, PLOS ONE, vol.10, issue.7, 2015.

D. Balduzzi, B. McWilliams, and T. Butler-Yeoman, Neural Taylor approximations: Convergence and exploration in rectifier networks, Proceedings of the 34th International Conference on Machine Learning, vol.70, pp.351-360, 2017.

R. Fong and A. Vedaldi, Interpretable explanations of black boxes by meaningful perturbation, IEEE International Conference on Computer Vision (ICCV), pp.3449-3457, 2017.

K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.770-778, 2016.

T. L. van den Heuvel, D. de Bruijn, C. L. de Korte, and B. van Ginneken, Automated measurement of fetal head circumference using 2D ultrasound images, PLOS ONE, vol.13, issue.8, pp.1-20, 2018.

S. Lathuilière, P. Mesejo, X. Alameda-Pineda, and R. Horaud, A comprehensive analysis of deep regression, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.41, pp.1-17, 2019.

G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K. R. Müller, Explaining nonlinear classification decisions with deep Taylor decomposition, Pattern Recognition, vol.65, pp.211-222, 2017.

N. J. Mørch, U. Kjems, L. K. Hansen, C. Svarer, I. Law et al., Visualization of neural networks using saliency maps, Proceedings of IEEE International Conference on Neural Networks, vol.4, pp.2085-2090, 1995.

W. Samek, A. Binder, G. Montavon, S. Lapuschkin, and K. R. Müller, Evaluating the visualization of what a deep neural network has learned, IEEE Transactions on Neural Networks and Learning Systems, vol.28, issue.11, pp.2660-2673, 2016.

W. Samek and K. R. Müller, Towards explainable artificial intelligence, Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pp.5-22, 2019.

A. Shrikumar, P. Greenside, A. Shcherbina, and A. Kundaje, Not just a black box: Learning important features through propagating activation differences, 2016.

K. Simonyan, A. Vedaldi, and A. Zisserman, Deep inside convolutional networks: Visualising image classification models and saliency maps, 2014.

K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, 2015.

A. Singh, S. Sengupta, and V. Lakshminarayanan, Explainable deep learning models in medical image analysis, Journal of Imaging, vol.6, p.52, 2020.

D. Smilkov, N. Thorat, B. Kim, F. B. Viégas, and M. Wattenberg, SmoothGrad: removing noise by adding noise, Workshop on Visualization for Deep Learning, ICML, 2017.

J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, Striving for simplicity: The all convolutional net, ICLR (workshop track), 2015.

M. Sundararajan, A. Taly, and Q. Yan, Axiomatic attribution for deep networks, Proceedings of the 34th International Conference on Machine Learning, vol.70, pp.3319-3328, 2017.

M. D. Zeiler and R. Fergus, Visualizing and understanding convolutional networks, European Conference on Computer Vision (ECCV), pp.818-833, 2014.

J. Zhang, C. Petitjean, P. Lopez, and S. Ainouz, Direct estimation of fetal head circumference from ultrasound images based on regression CNN, Medical Imaging with Deep Learning (MIDL), 2020.

L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, Visualizing deep neural network decisions: Prediction difference analysis, 2017.