R. Campbell, B. Dodd, and D. Burnham, Hearing by Eye II: The Psychology of Speechreading and Auditory-Visual Speech, Psychology Press, 1998.

T. F. Cootes, G. J. Edwards, and C. J. Taylor, Active appearance models, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.23, issue.6, pp.681-685, 2001.
DOI : 10.1109/34.927467

M. Higashikawa, K. Nakai, A. Sakakura, and H. Takahashi, Perceived pitch of whispered vowels-relationship with formant frequencies: A preliminary study, Journal of Voice, vol.10, issue.2, pp.155-158, 1996.
DOI : 10.1016/S0892-1997(96)80042-7

M. Higashikawa and F. D. Minifie, Acoustical-Perceptual Correlates of "Whisper Pitch" in Synthetically Generated Vowels, Journal of Speech, Language, and Hearing Research, vol.42, issue.3, pp.583-591, 1999.
DOI : 10.1044/jslhr.4203.583

T. Hueber, Continuous-speech phone recognition from ultrasound and optical images of the tongue and lips, Interspeech, 2007.

T. Ito, K. Takeda, and F. Itakura, Analysis and recognition of whispered speech, Speech Communication, vol.45, issue.2, pp.139-152, 2005.
DOI : 10.1016/j.specom.2003.10.005

A. Kain and M. W. Macon, Spectral voice conversion for text-to-speech synthesis, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.285-288, 1998.
DOI : 10.1109/ICASSP.1998.674423

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.3361

M. Nakagiri, Improving body transmitted unvoiced speech with statistical voice conversion, Interspeech, pp.2270-2273, 2006.

Y. Nakajima, Non-Audible Murmur (NAM) Recognition, IEICE Transactions on Information and Systems, vol.E89-D, issue.1, 2006.
DOI : 10.1093/ietisy/e89-d.1.1

L. Revéret, G. Bailly, and P. Badin, MOTHER: a new generation of talking heads providing a flexible articulatory control for video-realistic speech animation, International Conference on Spoken Language Processing, pp.755-758, 2000.

Y. Stylianou, O. Cappé, and E. Moulines, Continuous probabilistic transform for voice conversion, IEEE Transactions on Speech and Audio Processing, vol.6, issue.2, pp.131-142, 1998.
DOI : 10.1109/89.661472

T. Toda, A. W. Black, and K. Tokuda, Statistical mapping between articulatory movements and acoustic spectrum using a Gaussian mixture model, Speech Communication, vol.50, issue.3, pp.215-227, 2008.
DOI : 10.1016/j.specom.2007.09.001

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.157.6833

T. Toda and K. Shikano, NAM-to-Speech conversion with Gaussian mixture models, Interspeech, pp.1957-1960, 2005.

V. Tran, G. Bailly, H. Loevenbruck, and T. Toda, Predicting F0 and voicing from NAM-captured whispered speech, Speech Prosody, 2008.
DOI : 10.1016/j.specom.2009.11.005

URL : https://hal.archives-ouvertes.fr/hal-00333290