Interactive Sound Texture Synthesis Through Semi-Automatic User Annotations - Institut de Recherche et Coordination Acoustique/Musique
Book chapter, year: 2014


Abstract

We present a way to make environmental recordings controllable again through continuous annotation of the high-level semantic parameter one wishes to control, e.g. wind strength or crowd excitation level. A partial annotation can be propagated to cover the entire recording via cross-modal analysis between gesture and sound using canonical time warping (CTW). The annotations then serve as a descriptor for lookup in corpus-based concatenative synthesis, inverting the sound/annotation relationship. The workflow was evaluated in a preliminary subject test: results from canonical correlation analysis (CCA) show high consistency between subjects' annotations, and a small set of audio descriptors correlates well with them. An experiment on the propagation of annotations shows that CTW outperforms CCA with as little as 20 s of annotated material.

Dates and versions

hal-01161076, version 1 (08-06-2015)

Identifiers

  • HAL Id: hal-01161076, version 1

Cite

Diemo Schwarz, Baptiste Caramiaux. Interactive Sound Texture Synthesis Through Semi-Automatic User Annotations. In: Aramaki, M., Derrien, O., Kronland-Martinet, R., Ystad, S. (eds.), Sound, Music, and Motion. Lecture Notes in Computer Science, vol. 8905, Springer International Publishing, pp. 372-392, 2014. ⟨hal-01161076⟩