Abstract: In this paper, we describe a system for automatically synthesizing sign language animations from motion data captured on deaf subjects. We create a virtual agent endowed with expressive gestures, focusing both on the expressiveness of gesture (e.g., fluidity, tension, anger) and on its semantic representation. Our approach relies on a data-driven animation scheme: from motion data captured with an optical motion-capture system and data gloves, we extract relevant features of communicative gestures and re-synthesize them with style variations. Within this framework, we have built a motion database containing whole-body movements, hand motions, and facial expressions. Signal analysis enables the enrichment of this database with segmentation and annotation descriptors. The analysis and synthesis algorithms are applied to the generation of a set of French Sign Language gestures.

Key words: Communication for deaf people, sign language gestures, virtual signer agent, gesture database.