Real-Time Corpus-Based Concatenative Synthesis with CataRT

Diemo Schwarz¹˒², Grégory Beller¹˒², Bruno Verbrugghe¹˒², Sam Britton¹˒²
¹ Équipe Interactions musicales temps-réel, STMS - Sciences et Technologies de la Musique et du Son
² Analyse et synthèse sonores [Paris], STMS - Sciences et Technologies de la Musique et du Son
Abstract: The concatenative real-time sound synthesis system CataRT plays grains from a large corpus of segmented and descriptor-analysed sounds according to their proximity to a target position in the descriptor space. This can be seen as a content-based extension to granular synthesis that provides direct access to specific sound characteristics. CataRT is implemented in Max/MSP using the FTM library and an SQL database. Segmentation and MPEG-7 descriptors are loaded from SDIF files or generated on the fly. CataRT allows the user to explore the corpus interactively or via a target sequencer, to resynthesise an audio file or live input with the source sounds, or to experiment with expressive speech synthesis and gestural control.
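The core selection principle described in the abstract can be sketched as a nearest-neighbor lookup in descriptor space: each grain in the corpus carries a descriptor vector, and playback chooses the grain whose vector lies closest to the current target position. The following Python sketch is purely illustrative; the names, descriptors, and data layout are assumptions, not CataRT's actual Max/MSP/FTM implementation or API.

```python
import math

def nearest_grain(corpus, target):
    """Return the corpus entry whose descriptor vector is closest
    (Euclidean distance) to the target descriptor position."""
    return min(corpus, key=lambda g: math.dist(g["descriptors"], target))

# Toy corpus: descriptor vectors of (pitch in MIDI note number,
# spectral centroid in kHz) -- hypothetical values for illustration.
corpus = [
    {"file": "a.wav", "descriptors": (60.0, 1.2)},
    {"file": "b.wav", "descriptors": (64.0, 3.5)},
    {"file": "c.wav", "descriptors": (67.0, 0.8)},
]

# A target position in descriptor space, e.g. set by a mouse or
# gestural controller, selects the nearest grain for playback.
print(nearest_grain(corpus, (63.0, 3.0))["file"])  # -> b.wav
```

In a real-time setting this lookup would run per grain trigger, typically accelerated with a spatial index (e.g. a kd-tree) rather than the linear scan shown here.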
https://hal.archives-ouvertes.fr/hal-01161358
Contributor: Ircam
Submitted on: Tuesday, June 30, 2015 - 1:03:09 PM
Last modification on: Thursday, March 21, 2019 - 2:22:41 PM
Long-term archiving on: Friday, October 9, 2015 - 5:31:31 PM

Identifiers

  • HAL Id: hal-01161358, version 1

Citation

Diemo Schwarz, Grégory Beller, Bruno Verbrugghe, Sam Britton. Real-Time Corpus-Based Concatenative Synthesis with CataRT. 9th International Conference on Digital Audio Effects (DAFx), Sep 2006, Montreal, Canada. pp.279-282. ⟨hal-01161358⟩
