Estimating animal acoustic diversity in tropical environments using unsupervised multiresolution analysis

Abstract: Ecoacoustic monitoring has proved to be a viable approach to capture ecological data related to animal communities. While experts can manually annotate audio samples, the analysis of large datasets can be significantly facilitated by automatic pattern recognition methods. Unsupervised learning methods, which do not require labelled data, are particularly well suited to analysing poorly documented habitats, such as tropical environments. Here we propose a new method, named Multiresolution Analysis of Acoustic Diversity (MAAD), to automate the detection of relevant structure in audio data. MAAD was designed to decompose the acoustic community into a few elementary components (soundtypes) based on their time–frequency attributes. First, we used the short-time Fourier transform to detect regions of interest (ROIs) in the time–frequency domain. Then, we characterised these ROIs by (1) estimating their median frequency and (2) running a 2D wavelet analysis at multiple scales and angles. Finally, we grouped the ROIs using a model-based subspace clustering technique, so that ROIs were automatically annotated and clustered into soundtypes. To test the performance of the automatic method, we applied MAAD to two distinct tropical environments in French Guiana, a lowland high rainforest and a rock savanna, and we compared manual and automatic annotations using the adjusted Rand index. The similarity between the manual and automated partitions was high and consistent, indicating that the clusters found are intelligible and can be used for further analysis. Moreover, the weight of the features estimated by the clustering process revealed important information about the structure of the acoustic communities. In particular, the median frequency had the strongest effect on modelling the clusters and on classification performance, suggesting a role in community organisation.
The number of clusters found in MAAD can be regarded as an estimation of the soundtype richness in a given environment. MAAD is a comprehensive and promising method to automatically analyse passive acoustic recordings. Combining MAAD and manual analysis would maximally exploit the strengths of both human reasoning and computer algorithms. Thereby, the composition of the acoustic community could be estimated accurately, quickly and at large scale.
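The pipeline described in the abstract (STFT-based ROI detection, per-ROI feature extraction, model-based clustering, evaluation with the adjusted Rand index) can be illustrated on synthetic data. The sketch below is not the authors' implementation: it substitutes a single median-frequency feature for the full 2D wavelet analysis, and scikit-learn's `GaussianMixture` for the subspace clustering technique used in the paper; thresholds and event parameters are illustrative choices.

```python
import numpy as np
from scipy import signal
from scipy.ndimage import label, find_objects
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Synthetic "recording": 20 short tonal events (two soundtypes) in noise.
fs = 22050
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = 0.02 * rng.standard_normal(t.size)          # background noise
for k in range(20):
    start = 0.25 + 0.45 * k                     # non-overlapping events
    f0 = 2000 if k % 2 == 0 else 6000           # soundtype A vs soundtype B
    idx = (t >= start) & (t < start + 0.2)
    x[idx] += np.sin(2 * np.pi * f0 * t[idx])

# Step 1: STFT and ROI detection by thresholding the spectrogram.
f, tt, Sxx = signal.spectrogram(x, fs, nperseg=512, noverlap=256)
mask = Sxx > Sxx.max() / 100                    # keep pixels within 20 dB of peak
labelled, n = label(mask)                       # connected components = candidate ROIs

# Step 2: one feature per ROI (median frequency of its pixels).
feats, truth = [], []
for i, sl in enumerate(find_objects(labelled), start=1):
    fr, ti = sl
    if np.count_nonzero(labelled[sl] == i) < 10:
        continue                                # discard tiny specks
    rows, _ = np.nonzero(labelled[sl] == i)
    feats.append([np.median(f[fr][rows])])
    # Ground truth recovered from the event schedule above (centre = 0.35 + 0.45k).
    truth.append(int(round((tt[ti].mean() - 0.35) / 0.45)) % 2)

# Step 3: model-based clustering of the ROIs into soundtypes.
pred = GaussianMixture(n_components=2, random_state=0).fit_predict(np.array(feats))

# Evaluation: agreement between the clusters and the ground-truth partition.
ari = adjusted_rand_score(truth, pred)
print(len(feats), "ROIs, adjusted Rand index =", round(ari, 3))
```

With two well-separated frequency bands the clusters recover the two soundtypes and the adjusted Rand index is close to 1; on real tropical recordings the paper's richer wavelet features and subspace clustering handle the far more overlapping soundtypes.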

https://hal.archives-ouvertes.fr/hal-01830049
Contributor: Alain Perignon
Submitted on: Wednesday, July 4, 2018 - 3:07:45 PM

Identifiers

Citation

Juan Sebastián Ulloa, Thierry Aubin, Diego Llusia, Charles Bouveyron, Jérôme Sueur. Estimating animal acoustic diversity in tropical environments using unsupervised multiresolution analysis. Ecological Indicators, Elsevier, 2018, 90, pp.346-355. ⟨10.1016/j.ecolind.2018.03.026⟩. ⟨hal-01830049⟩
