W. H. Sumby and I. Pollack, Visual Contribution to Speech Intelligibility in Noise, The Journal of the Acoustical Society of America, vol.26, issue.2, pp.212-215, 1954.
DOI : 10.1121/1.1907309

A. MacLeod and Q. Summerfield, Quantifying the contribution of vision to speech perception in noise, British Journal of Audiology, vol.21, issue.2, pp.131-141, 1987.
DOI : 10.3109/03005368709077786

V. Aubanel, C. Davis, and J. Kim, Explaining the visual and masked-visual advantage in speech perception in noise: The role of visual phonetic cues, Proc. of FAAVSP, 2015.

E. Vatikiotis-Bateson and K. G. Munhall, Auditory-Visual Speech Processing, The Handbook of Speech Production, pp.178-199, 2015.

C. G. Fisher, Confusions Among Visually Perceived Consonants, Journal of Speech and Hearing Research, vol.11, issue.4, pp.796-804, 1968.
DOI : 10.1044/jshr.1104.796

M. D. Ramage, Disproving Visemes As The Basic Visual Unit Of Speech, 2013.

M. A. Cathiard, M. Lallouache, S. H. Mohammadi, and C. Abry, Configurational vs. temporal coherence in audio-visual speech perception, Proc. of ICPhS, 1995.

M. A. Cathiard, G. Tiberghien, A. Tseva, M. Lallouache, and P. Escudier, Visual perception of anticipatory rounding during acoustic pauses: A cross-language study, Proc. of ICPhS, 1991.

K. W. Grant and P. Seitz, The use of visible speech cues for improving auditory detection of spoken sentences, The Journal of the Acoustical Society of America, vol.108, issue.3, pp.1197-1208, 2000.
DOI : 10.1121/1.1288668

C. E. Schroeder, P. Lakatos, Y. Kajikawa, S. Partan, and A. Puce, Neuronal oscillations and visual amplification of speech, Trends in Cognitive Sciences, vol.12, issue.3, pp.106-113, 2008.
DOI : 10.1016/j.tics.2008.01.002

E. Zion-Golumbic, D. Poeppel, and C. E. Schroeder, Temporal context in speech processing and attentional stream selection: A behavioral and neural perspective, Brain and Language, vol.122, issue.3, pp.151-161, 2012.
DOI : 10.1016/j.bandl.2011.12.010

L. H. Arnal, B. Morillon, C. A. Kell, and A.-L. Giraud, Dual Neural Routing of Visual Facilitation in Speech Processing, Journal of Neuroscience, vol.29, issue.43, pp.13445-13453, 2009.
DOI : 10.1523/JNEUROSCI.3194-09.2009

URL : https://hal.archives-ouvertes.fr/inserm-00429667

J. J. Stekelenburg and J. Vroomen, Neural Correlates of Multisensory Integration of Ecologically Valid Audiovisual Events, Journal of Cognitive Neuroscience, vol.19, issue.12, pp.1964-1973, 2007.
DOI : 10.1162/jocn.2007.19.12.1964

V. van Wassenhove, K. W. Grant, and D. Poeppel, Visual speech speeds up the neural processing of auditory speech, Proceedings of the National Academy of Sciences, vol.102, issue.4, pp.1181-1186, 2005.
DOI : 10.1073/pnas.0408949102

T. Paris, J. Kim, and C. Davis, Visual form predictions facilitate auditory processing at the N1, Neuroscience, vol.343, pp.157-164, 2017.
DOI : 10.1016/j.neuroscience.2016.09.023

C. Chandrasekaran, A. Trubanova, S. Stillittano, A. Caplier, and A. A. Ghazanfar, The Natural Statistics of Audiovisual Speech, PLoS Computational Biology, vol.5, issue.7, p.e1000436, 2009.
DOI : 10.1371/journal.pcbi.1000436

URL : https://hal.archives-ouvertes.fr/hal-00432777

J. Schwartz and C. Savariaux, No, There Is No 150 ms Lead of Visual Speech on Auditory Speech, but a Range of Audiovisual Asynchronies Varying from Small Audio Lead to Large Audio Lag, PLoS Computational Biology, vol.10, issue.7, p.e1003743, 2014.
DOI : 10.1371/journal.pcbi.1003743

URL : https://hal.archives-ouvertes.fr/hal-01073649

J. Vroomen and M. Keetels, Perception of intersensory synchrony: A tutorial review, Attention, Perception, & Psychophysics, vol.72, issue.4, pp.871-884, 2010.
DOI : 10.3758/APP.72.4.871

J. Kim and C. Davis, Investigating the audio-visual speech detection advantage, Speech Communication, vol.44, issue.1-4, pp.19-30, 2004.
DOI : 10.1016/j.specom.2004.09.008

J. H. Venezia, S. M. Thurman, W. Matchin, S. E. George, and G. Hickok, Timing in audiovisual speech perception: A mini review and new psychophysical data, Attention, Perception, & Psychophysics, vol.78, issue.2, pp.583-601, 2016.
DOI : 10.3758/s13414-015-1026-y

F. Gosselin and P. G. Schyns, Bubbles: a technique to reveal the use of information in recognition tasks, Vision Research, vol.41, issue.17, pp.2261-2271, 2001.
DOI : 10.1016/S0042-6989(01)00097-9

H. McGurk and J. MacDonald, Hearing lips and seeing voices, Nature, vol.264, issue.5588, pp.746-748, 1976.
DOI : 10.1038/264746a0

J. Morton, S. Marcus, and C. Frankish, Perceptual centers (P-centers)., Psychological Review, vol.83, issue.5, p.405, 1976.
DOI : 10.1037/0033-295X.83.5.405

S. K. Scott, P-centers in speech: An acoustic analysis, Ph.D. thesis, University College London, 1993.

C. E. Schroeder and P. Lakatos, Low-frequency neuronal oscillations as instruments of sensory selection, Trends in Neurosciences, vol.32, issue.1, pp.9-18, 2009.
DOI : 10.1016/j.tins.2008.09.012

A.-L. Giraud and D. Poeppel, Cortical oscillations and speech processing: emerging computational principles and operations, Nature Neuroscience, vol.15, issue.4, pp.511-517, 2012.
DOI : 10.1038/nn.3063

URL : https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4461038/pdf

V. Aubanel, C. Davis, and J. Kim, Exploring the Role of Brain Oscillations in Speech Perception in Noise: Intelligibility of Isochronously Retimed Speech, Frontiers in Human Neuroscience, vol.9, article 651, 2016.
DOI : 10.3389/fnhum.2015.00651

C. Kayser, C. I. Petkov, and N. K. Logothetis, Visual Modulation of Neurons in Auditory Cortex, Cerebral Cortex, vol.18, issue.7, pp.1560-1574, 2008.
DOI : 10.1093/cercor/bhm187

J. Schwartz, F. Berthommier, and C. Savariaux, Seeing to hear better: evidence for early audio-visual interactions in speech identification, Cognition, vol.93, issue.2, pp.B69-B78, 2004.
DOI : 10.1016/j.cognition.2004.01.006

URL : https://hal.archives-ouvertes.fr/hal-00186797

IEEE Subcommittee on Subjective Measurements, IEEE Recommended Practice for Speech Quality Measurements, IEEE Transactions on Audio and Electroacoustics, vol.17, issue.3, pp.225-246, 1969.

V. Aubanel, C. Davis, and J. Kim, The MAVA corpus, 2016.

M. Kleiner, D. Brainard, D. Pelli, A. Ingling, R. Murray et al., What's new in Psychtoolbox-3, Perception, vol.36, issue.14, p.1, 2007.

D. Bates, M. Mächler, B. M. Bolker, and S. C. Walker, Fitting Linear Mixed-Effects Models Using lme4, Journal of Statistical Software, vol.67, issue.1, pp.1-48, 2015.
DOI : 10.18637/jss.v067.i01

R. Campbell and B. Dodd, Hearing by eye, Quarterly Journal of Experimental Psychology, vol.32, issue.1, pp.85-99, 1980.