Learning Voice Representation Using Knowledge Distillation For Automatic Voice Casting
Conference paper, 2020


Abstract

The search for professional voice actors for audiovisual productions is a sensitive task, performed by artistic directors (ADs). ADs have a strong appetite for new talents and voices but cannot conduct large-scale auditions. Automatic tools able to suggest the most suitable voices are therefore of great interest to the audiovisual industry. In previous work, we showed the existence of acoustic information that makes it possible to mimic the ADs' choices. However, the only available supervision is the ADs' choices in already dubbed multimedia productions. In this paper, we propose a representation-learning strategy to build a character/role representation, called the p-vector. In addition, the large variability between audiovisual productions makes it difficult to build homogeneous training datasets. We overcome this difficulty by using knowledge distillation methods to take advantage of external datasets. Experiments are conducted on video-game voice excerpts. Results show a significant improvement using the p-vector, compared to the speaker-based x-vector representation.
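The paper itself details how distillation is applied to voice-casting data; as a hedged, generic illustration only, knowledge distillation is commonly implemented by training a student model to match a teacher's temperature-softened output distribution via a KL-divergence loss. The sketch below (plain NumPy, with an assumed temperature parameter; none of these names come from the paper) shows that standard objective:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across temperatures
    # (the usual convention in distillation objectives).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(temperature ** 2 * np.sum(p * np.log(p / q)))

# The loss is zero when student and teacher agree, positive otherwise.
t = np.array([1.0, 2.0, 3.0])
s = np.array([3.0, 2.0, 1.0])
```

In a distillation setup such as the one the abstract describes, this term would let a student trained on the target (voice-casting) data benefit from a teacher trained on larger external datasets.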

Dates and versions

hal-02572383, version 1 (13-05-2020)

Identifiers

  • HAL Id: hal-02572383, version 1

Cite

Adrien Gresse, Mathias Quillot, Richard Dufour, Jean-François Bonastre. Learning Voice Representation Using Knowledge Distillation For Automatic Voice Casting. Interspeech, Oct 2020, Shanghai, China. ⟨hal-02572383⟩

Collections

UNIV-AVIGNON LIA
