O. Arikan, D. A. Forsyth, and J. F. O'Brien, Motion synthesis from annotations, ACM Transactions on Graphics, vol.22, issue.3, pp.402-410, 2003.
DOI : 10.1145/882262.882284

C. Awad, N. Courty, K. Duarte, T. Le Naour, S. Gibet et al., A Combined Semantic and Motion Capture Database for Real-Time Sign Language Synthesis, Proceedings of the 9th International Conference on Intelligent Virtual Agents, pp.432-470, 2009.
DOI : 10.1007/978-3-642-04380-2_47

URL : https://hal.archives-ouvertes.fr/hal-00493426

D. Brentari, A Prosodic Model of Sign Language Phonology, 1999.

Y. Cao, W. C. Tien, P. Faloutsos, and F. Pighin, Expressive speech-driven facial animation, ACM Transactions on Graphics, vol.24, issue.4, pp.1283-1302, 2005.
DOI : 10.1145/1095878.1095881

J. Cassell, J. Sullivan, S. Prevost, and E. F. Churchill, Embodied Conversational Agents, MIT Press, 2000.

J. Chai and J. Hodgins, Constraint-based motion optimization using a statistical dynamic model, ACM Transactions on Graphics, vol.26, issue.3, pp.686-696, 2007.

Y. Chiu, C. Wu, H. Su, and C. Cheng, Joint Optimization of Word Alignment and Epenthesis Generation for Chinese to Taiwanese Sign Synthesis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.29, issue.1, pp.28-39, 2007.
DOI : 10.1109/TPAMI.2007.250597

L. Csató and M. Opper, Sparse On-Line Gaussian Processes, Neural Computation, vol.14, issue.3, pp.641-668, 2002.
DOI : 10.1162/089976602317250933

M. Delorme, M. Filhol, and A. Braffort, An architecture for sign language synthesis, Proc. of Gesture Workshop 2009, 2009.

Z. Deng, P. Chiang, P. Fox, and U. Neumann, Animating blendshape faces by cross-mapping motion capture data, Proceedings of the 2006 symposium on Interactive 3D graphics and games, SI3D '06, pp.43-48, 2006.
DOI : 10.1145/1111411.1111419

Z. Deng, U. Neumann, J. P. Lewis, T. Kim, M. Bulut et al., Expressive Facial Animation Synthesis by Learning Speech Coarticulation and Expression Spaces, IEEE Transactions on Visualization and Computer Graphics, vol.12, issue.6, pp.1523-1557, 2006.
DOI : 10.1109/TVCG.2006.90

K. Duarte and S. Gibet, Heterogeneous data sources for signed language analysis and synthesis: The SignCom project, Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), European Language Resources Association (ELRA), 2010.
URL : https://hal.archives-ouvertes.fr/hal-00503249

R. Elliott, J. Glauert, V. Jennings, and J. Kennaway, An overview of the SiGML notation and SiGMLSigning software system, Workshop on the Representation and Processing of Signed Languages, 4th Int'l Conf. on Language Resources and Evaluation, 2004.

S. Fotinea, E. Efthimiou, G. Caridakis, and K. Karpouzis, A knowledge-based sign synthesis architecture, Universal Access in the Information Society, vol.6, issue.4, pp.405-418, 2008.
DOI : 10.1007/s10209-007-0094-8

S. Gibet, T. Lebourque, and P. Marteau, High-level Specification and Animation of Communicative Gestures, Journal of Visual Languages & Computing, vol.12, issue.6, pp.657-687, 2001.
DOI : 10.1006/jvlc.2001.0202

K. Grochow, S. Martin, A. Hertzmann, and Z. Popovic, Style-based inverse kinematics, ACM Transactions on Graphics, vol.23, issue.3, pp.522-531, 2004.
DOI : 10.1145/1015706.1015755

E. Gu and N. Badler, Visual Attention and Eye Gaze During Multiparty Conversations with Distractions, Proceedings of the 6th International Conference on Intelligent Virtual Agents, pp.193-204, 2006.
DOI : 10.1007/11821830_16

B. Hartmann, M. Mancini, and C. Pelachaud, Implementing expressive gesture synthesis for embodied conversational agents, Gesture in Human-Computer Interaction and Simulation, Lecture Notes in Computer Science, vol.3881, pp.188-199, 2006.

M. Huenerfauth, A Linguistically Motivated Model for Speed and Pausing in Animations of American Sign Language, ACM Transactions on Accessible Computing, vol.2, issue.2, pp.9:1-9:31, 2009.
DOI : 10.1145/1530064.1530067

M. Huenerfauth, L. Zhao, E. Gu, and J. Allbeck, Evaluating American Sign Language generation through the participation of native ASL signers, Proceedings of the 9th international ACM SIGACCESS conference on Computers and accessibility, Assets '07, pp.211-218, 2007.
DOI : 10.1145/1296843.1296879

M. Huenerfauth, L. Zhao, E. Gu, and J. Allbeck, Design and evaluation of an American Sign Language generator, Proceedings of the Workshop on Embodied Language Processing, EmbodiedNLP '07, 2007.
DOI : 10.3115/1610065.1610072

A. Héloir and M. Kipp, Real-time animation of interactive agents: Specification and realization, Applied Artificial Intelligence, vol.24, issue.6, pp.375-391, 2010.

L. Ikemoto, O. Arikan, and D. Forsyth, Generalizing motion edits with Gaussian processes, ACM Transactions on Graphics, vol.28, issue.1, pp.1-12, 2009.
DOI : 10.1145/1477926.1477927

R. E. Johnson and S. K. Liddell, Sign Language Phonetics: Architecture and Description. Forthcoming, 2009.

T. Johnston, The lexical database of Auslan (Australian Sign Language), Proceedings of the First Intersign Workshop: Lexical Databases, 1998.
DOI : 10.1075/sll.4.1-2.11joh

A. Kendon, Human gesture, in Tools, Language and Cognition in Human Evolution, pp.43-62, 1993.

J. R. Kennaway, Experience with and Requirements for a Gesture Description Language for Synthetic Animation, Proc. of Gesture Workshop 2003, 2003.
DOI : 10.1007/978-3-540-24598-8_28

J. R. Kennaway, J. R. Glauert, and I. Zwitserlood, Providing signed content on the Internet by synthesized animation, ACM Transactions on Computer-Human Interaction, vol.14, issue.3, p.15, 2007.
DOI : 10.1145/1279700.1279705

M. Kipp, M. Neff, K. H. Kipp, and I. Albrecht, Toward natural gesture synthesis: Evaluating gesture units in a data-driven approach, Intelligent Virtual Agents (IVA'07), pp.15-28, 2007.

S. Kita, I. van Gijn, and H. van der Hulst, Movement phases in signs and co-speech gestures, and their transcription by human coders, Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction, pp.23-35, 1997.
DOI : 10.1007/BFb0052986

S. Kopp, B. Krenn, S. Marsella, N. Marshall, C. Pelachaud et al., Towards a Common Framework for Multimodal Generation: The Behavior Markup Language, Proc. of Intelligent Virtual Agents, pp.205-217, 2006.
DOI : 10.1007/11821830_17

S. Kopp and I. Wachsmuth, Synthesizing multimodal utterances for conversational agents, Computer Animation and Virtual Worlds, vol.15, issue.1, pp.39-52, 2004.
DOI : 10.1002/cav.6

L. Kovar, M. Gleicher, and F. Pighin, Motion graphs, Proc. of Int. Conf. on Computer Graphics and Interactive Techniques, pp.473-482, 2002.

A. Kranstedt, S. Kopp, and I. Wachsmuth, MURML: A Multimodal Utterance Representation Markup Language for Conversational Agents, Proceedings of the AAMAS'02 Workshop on Embodied Conversational Agents - Let's Specify and Evaluate Them, 2002.

S. P. Lee, J. B. Badler, and N. I. Badler, Eyes alive, ACM Transactions on Graphics, vol.21, issue.3, pp.637-644, 2002.

C. K. Liu and Z. Popović, Synthesis of complex dynamic character motion from simple animations, Proceedings of the 29th annual conference on Computer graphics and interactive techniques, pp.408-424, 2002.

X. Liu, T. Mao, S. Xia, Y. Yu, and Z. Wang, Facial animation by optimized blendshapes from motion capture data, Computer Animation and Virtual Worlds, vol.19, issue.3-4, 2008.
DOI : 10.1002/cav.248

V. Lombardo, F. Nunnari, and R. Damiano, A Virtual Interpreter for the Italian Sign Language, Proceedings of the 10th International Conference on Intelligent Virtual Agents, pp.201-207, 2010.
DOI : 10.1007/978-3-642-15892-6_22

X. Ma and Z. Deng, Natural eye motion synthesis by modeling gaze-head coupling, IEEE Virtual Reality Conference, pp.143-150, 2009.

D. McNeill, Hand and Mind: What Gestures Reveal about Thought, 1992.

T. Mukai and S. Kuriyama, Geostatistical motion interpolation, ACM Transactions on Graphics, vol.24, issue.3, pp.1062-1070, 2005.
DOI : 10.1145/1073204.1073313

M. Neff, M. Kipp, I. Albrecht, and H. Seidel, Gesture modeling and animation based on a probabilistic re-creation of speaker style, ACM Transactions on Graphics, vol.27, issue.1, pp.233-51, 2008.
DOI : 10.1145/1330511.1330516

H. Noot and Z. Ruttkay, Variations in gesturing and speech by GESTYLE, International Journal of Human-Computer Studies, vol.62, issue.2, pp.211-229, 2005.
DOI : 10.1016/j.ijhcs.2004.11.007

S. Prillwitz, R. Leven, H. Zienert, T. Hanke, and J. Henning, Hamburg Notation System for Sign Languages -An Introductory Guide, 1989.

C. E. Rasmussen and C. K. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006.

W. C. Stokoe, Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf, Studies in Linguistics, Occasional Papers 8, 1960.

M. Stone, D. Decarlo, I. Oh, C. Rodriguez, A. Stere et al., Speaking with hands, Proceedings of ACM SIGGRAPH, pp.506-519, 2004.
DOI : 10.1145/1015706.1015753

S. Tak and H. Ko, A physically-based motion retargeting filter, ACM Transactions on Graphics, vol.24, issue.1, pp.98-117, 2005.
DOI : 10.1145/1037957.1037963

D. Tolani, A. Goswami, and N. I. Badler, Real-Time Inverse Kinematics Techniques for Anthropomorphic Limbs, Graphical Models, vol.62, issue.5, pp.353-388, 2000.
DOI : 10.1006/gmod.2000.0528

H. Vilhjálmsson, N. Cantelmo, J. Cassell, N. Chafai, M. Kipp et al., The behavior markup language: Recent developments and challenges, Proceedings of the 7th International Conference on Intelligent Virtual Agents, 2007.

C. Vogler and D. Metaxas, Handshapes and Movements: Multiple-Channel American Sign Language Recognition, Gesture-Based Communication in Human-Computer Interaction, pp.431-432, 2004.
DOI : 10.1007/978-3-540-24598-8_23

J. Wang and B. Bodenheimer, Synthesis and evaluation of linear motion transitions, ACM Transactions on Graphics, vol.27, issue.1, pp.1-15, 2008.
DOI : 10.1145/1330511.1330512

J. Wang, S. M. Drucker, M. Agrawala, and M. F. Cohen, The cartoon animation filter, ACM Transactions on Graphics, vol.25, issue.3, pp.1169-1173, 2006.
DOI : 10.1145/1141911.1142010

T. Warabi, The reaction time of eye-head coordination in man, Neuroscience Letters, vol.6, issue.1, pp.47-51, 1977.
DOI : 10.1016/0304-3940(77)90063-5