Conference papers

Foreground-Background Ambient Sound Scene Separation

Abstract : Ambient sound scenes typically comprise multiple short events occurring on top of a somewhat stationary background. We consider the task of separating these events from the background, which we call foreground-background ambient sound scene separation. We propose a deep learning-based separation framework with a suitable feature normalization scheme and an optional auxiliary network capturing the background statistics, and we investigate its ability to handle the great variety of sound classes encountered in ambient sound scenes, which have often not been seen in training. To do so, we create single-channel foreground-background mixtures using isolated sounds from the DESED and AudioSet datasets, and we conduct extensive experiments with mixtures of seen or unseen sound classes at various signal-to-noise ratios. Our experimental findings demonstrate the generalization ability of the proposed approach.
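The abstract mentions creating single-channel foreground-background mixtures at various signal-to-noise ratios. As an illustrative sketch (not the authors' actual pipeline; the function name and interface are assumptions), mixing two signals at a target SNR can be done by rescaling the background:

```python
import numpy as np

def mix_at_snr(foreground, background, snr_db):
    """Illustrative sketch: scale the background so that the
    foreground-to-background power ratio equals snr_db (in dB),
    then return the single-channel mixture and the scaled background.
    This is an assumed procedure, not the paper's exact recipe."""
    fg_power = np.mean(foreground ** 2)
    bg_power = np.mean(background ** 2)
    # Gain that brings the background power to fg_power / 10^(snr_db/10).
    gain = np.sqrt(fg_power / (bg_power * 10.0 ** (snr_db / 10.0)))
    scaled_bg = gain * background
    return foreground + scaled_bg, scaled_bg
```

With this scaling, `10 * log10(fg_power / scaled_bg_power)` equals the requested SNR by construction.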


https://hal.archives-ouvertes.fr/hal-02567542
Contributor : Mauricio Michel Olvera Zambrano
Submitted on : Saturday, July 25, 2020 - 8:15:43 PM
Last modification on : Wednesday, July 29, 2020 - 1:41:26 PM

Files

conference_101719.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-02567542, version 2
  • ARXIV : 2005.07006

Citation

Michel Olvera, Emmanuel Vincent, Romain Serizel, Gilles Gasso. Foreground-Background Ambient Sound Scene Separation. 28th European Signal Processing Conference (EUSIPCO), Jan 2021, Amsterdam, Netherlands. ⟨hal-02567542v2⟩


Metrics

  • Record views : 58
  • File downloads : 29