
Vector-Quantized Timbre Representation

Abstract: Timbre is a set of perceptual attributes that identifies different types of sound sources. Although its definition is usually elusive, it can be seen from a signal processing viewpoint as all the spectral features that are perceived independently of pitch and loudness. Some works have studied high-level timbre synthesis by analyzing the feature relationships of different instruments, but acoustic properties remain entangled and generation remains bound to individual sounds. This paper targets a more flexible synthesis of an individual timbre by learning an approximate decomposition of its spectral properties with a set of generative features. We introduce an auto-encoder with a discrete latent space that is disentangled from loudness in order to learn a quantized representation of a given timbre distribution. Timbre transfer can be performed by encoding any variable-length input signal into the quantized latent features, which are decoded according to the learned timbre. We detail results for translating audio between orchestral instruments and singing voice, as well as transfers from vocal imitations to instruments as an intuitive modality to drive sound synthesis. Furthermore, we can map the discrete latent space to acoustic descriptors and directly perform descriptor-based synthesis.
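The discrete latent space described above relies on vector quantization: each continuous latent vector produced by the encoder is snapped to its nearest entry in a learned codebook, and the decoder only ever sees codebook entries. A minimal sketch of that lookup step (not the paper's implementation; the codebook and latent values here are illustrative toy data):

```python
import numpy as np

def quantize(latents, codebook):
    """Map each continuous latent vector to its nearest codebook entry
    under Euclidean distance, as in vector-quantized auto-encoders."""
    # Pairwise squared distances, shape (num_latents, num_codes),
    # computed via broadcasting.
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d.argmin(axis=1)           # discrete code assignments
    return codebook[indices], indices    # quantized vectors + their indices

# Toy example: 4 latent frames of dimension 2, codebook of 3 entries.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
latents = np.array([[0.1, -0.1], [0.9, 1.2], [-0.8, 0.9], [0.0, 0.2]])
quantized, codes = quantize(latents, codebook)
# codes → [0, 1, 2, 0]: each frame is replaced by its nearest codebook entry.
```

Because the sequence of indices is discrete and independent of the input's length, any signal can be encoded into such a code sequence and decoded with a decoder trained on the target timbre, which is what enables the timbre-transfer use described in the abstract.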
Document type: Conference papers
Contributor: Philippe Esling
Submitted on: Monday, April 26, 2021 - 11:46:29 AM
Last modification on: Wednesday, April 28, 2021 - 3:36:23 AM

  • HAL Id: hal-03208036, version 1
  • arXiv: 2007.06349


Adrien Bitton, Philippe Esling, Tatsuya Harada. Vector-Quantized Timbre Representation. International Computer Music Conference, Jul 2021, Santiago, Chile. ⟨hal-03208036⟩
