T. Gerkmann and R. C. Hendriks, Unbiased MMSE-based noise power estimation with low complexity and low tracking delay, IEEE Transactions on Audio, Speech and Language Processing, vol.20, issue.4, pp.1383-1393, 2012.

F. Weninger, J. R. Hershey, J. L. Roux, and B. Schuller, Discriminatively trained recurrent neural networks for singlechannel speech separation, IEEE GlobalSIP. IEEE, pp.577-581, 2014.

O. L. Frost, An algorithm for linearly constrained adaptive array processing, Proceedings of the IEEE, vol.60, issue.8, pp.926-935, 1972.

E. Vincent, T. Virtanen, and S. Gannot, Audio source separation and speech enhancement, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01881431

S. Doclo and M. Moonen, GSVD-based optimal filtering for single and multimicrophone speech enhancement, IEEE Transactions on Signal Processing, vol.50, issue.9, pp.2230-2244, 2002.

S. Doclo, A. Spriet, J. Wouters, and M. Moonen, Frequencydomain criterion for the speech distortion weighted multichannel Wiener filter for robust noise reduction, Speech Communication, vol.49, issue.7-8, pp.636-656, 2007.
URL : https://hal.archives-ouvertes.fr/hal-00499178

A. Bertrand, S. Doclo, S. Gannot, N. Ono, and T. Van-waterschoot, Special issue on wireless acoustic sensor networks and ad hoc microphone arrays, Signal Processing, vol.107, issue.C, pp.1-3, 2015.

A. Bertrand, J. Callebaut, and M. Moonen, Adaptive distributed noise reduction for speech enhancement in wireless acoustic sensor networks, Proc. of IWAENC, 2010.

A. Bertrand and M. Moonen, Distributed adaptive nodespecific signal estimation in fully connected sensor networks -Part I: Sequential node updating, IEEE Transactions on Signal Processing, vol.58, issue.10, pp.5277-5291, 2010.

Y. Zeng and R. C. Hendriks, Distributed estimation of the inverse of the correlation matrix for privacy preserving beamforming, Signal Processing, vol.107, pp.109-122, 2015.

R. Heusdens, G. Zhang, R. C. Hendriks, Y. Zeng, and W. B. Kleijn, Distributed mvdr beamforming for (wireless) microphone networks using message passing, IWAENC, pp.1-4, 2012.

M. O'connor and W. B. Kleijn, Diffusion-based distributed MVDR beamformer, IEEE Proc. of ICASSP, pp.810-814, 2014.

S. Gergen, R. Martin, and N. Madhu, Source separation by feature-based clustering of microphones in ad hoc arrays, IWAENC, pp.530-534, 2018.

S. A. Vorobyov, A. B. Gershman, and Z. Q. Luo, Robust adaptive beamforming using worst-case performance optimization: A solution to the signal mismatch problem, IEEE Transactions on Signal Processing, vol.51, issue.2, pp.313-324, 2003.

A. Narayanan and D. Wang, Ideal ratio mask estimation using deep neural networks for robust speech recognition, IEEE ICASSP, pp.7092-7096, 2013.

J. Heymann, L. Drude, and R. Haeb-umbach, Neural network based spectral mask estimation for acoustic beamforming, IEEE ICASSP, pp.196-200, 2016.

L. Perotin, R. Serizel, E. Vincent, and A. Guérin, CRNNbased joint azimuth and elevation localization with the ambisonics intensity vector, 16th International Workshop on Acoustic Signal Enhancement (IWAENC), pp.241-245, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01840453

A. A. Nugraha, A. Liutkus, and E. Vincent, Multichannel audio source separation with deep neural networks, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol.24, issue.10, pp.1652-1664, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01163369

Y. Jiang, D. Wang, R. Liu, and Z. Feng, Binaural classification for reverberant speech segregation using deep neural networks, IEEE/ACM Transactions on Audio, Speech and Language Processing, vol.22, issue.12, pp.2112-2121, 2014.

S. Adavanne, A. Politis, and T. Virtanen, Direction of arrival estimation for multiple sound sources using convolutional recurrent neural network," in EUSIPCO, pp.1462-1466, 2018.

S. Chakrabarty and E. A. Habets, Time-Frequency Masking Based Online Multi-Channel Speech Enhancement With Convolutional Recurrent Neural Networks, IEEE Journal of Selected Topics in Signal Processing, vol.13, issue.4, pp.1-1, 2019.

L. Perotin, R. Serizel, E. Vincent, and A. Guérin, Multichannel speech separation with recurrent neural networks from high-order ambisonics recordings, IEEE Proc. of ICASSP, pp.36-40, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01699759

R. Serizel, M. Moonen, B. Van-dijk, and J. Wouters, Lowrank Approximation Based Multichannel Wiener Filter Algorithms for Noise Reduction with Application in Cochlear Implants, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol.22, issue.4, pp.785-799, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01390918

V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, Librispeech: an ASR corpus based on public domain audio books, IEEE Proc. of ICASSP, pp.5206-5210, 2015.

J. Barker, R. Marxer, E. Vincent, and S. Watanabe, The third CHiME speech separation and recognition challenge: Dataset, task and baselines, IEEE ASRU, pp.504-511, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01211376

G. Hinton, N. Srivastava, and K. Swersky, COURS-ERA: Neural networks for machine learning -lecture 6a, 2012.

E. Vincent, R. Gribonval, and C. Févotte, Performance measurement in blind audio source separation, IEEE Transactions on Audio, Speech, and Language Processing, vol.14, issue.4, pp.1462-1469, 2006.
URL : https://hal.archives-ouvertes.fr/inria-00544230