Learning Voice Representation Using Knowledge Distillation For Automatic Voice Casting

Abstract: The search for professional voice actors for audiovisual productions is a sensitive task performed by artistic directors (ADs). ADs have a strong appetite for new talents and voices but cannot run large-scale auditions. Automatic tools able to suggest the best-suited voices are therefore of great interest to the audiovisual industry. In previous work, we showed the existence of acoustic information that makes it possible to mimic the ADs' choices. However, the only available information is the ADs' choices extracted from already-dubbed multimedia productions. In this paper, we propose a representation-learning strategy to build a character/role representation, called the p-vector. In addition, the large variability between audiovisual productions makes it difficult to obtain homogeneous training datasets. We overcome this difficulty by using knowledge distillation to take advantage of external datasets. Experiments are conducted on video-game voice excerpts. Results show a significant improvement with the p-vector compared to the speaker-based x-vector representation.
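The abstract mentions knowledge distillation as the mechanism for exploiting external datasets. As a point of reference only, a minimal sketch of a generic temperature-scaled distillation loss (Hinton-style soft targets) is shown below; the paper's actual distillation setup for p-vectors is not detailed in this record, so function names and the choice of objective here are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge".
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the teacher's and student's soft targets,
    # scaled by T^2 so gradients keep a comparable magnitude across T.
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

In a typical setup, this term is combined with an ordinary supervised loss on the target-domain labels, letting a teacher trained on a large external corpus guide the student on the smaller, heterogeneous in-domain data.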
Document type: Conference papers
Cited literature: 22 references
Contributor: Adrien Gresse
Submitted on: Wednesday, May 13, 2020 - 4:09:59 PM
Last modification on: Friday, November 12, 2021 - 11:50:56 AM


Files produced by the author(s)


  • HAL Id: hal-02572383, version 1



Adrien Gresse, Mathias Quillot, Richard Dufour, Jean-François Bonastre. Learning Voice Representation Using Knowledge Distillation For Automatic Voice Casting. Interspeech, Oct 2020, Shanghai, China. ⟨hal-02572383⟩