Conference paper, Year: 2019

Virtual acoustic rendering by state wave synthesis

Abstract

In the context of the class of virtual acoustic simulation techniques that rely on traveling-wave rendering as dictated by path-tracing methods (e.g., image-source, ray-tracing, beam-tracing), we introduce State Wave Synthesis (SWS), a novel framework for the efficient rendering of sound traveling waves exchanged between multiple directional sound sources and multiple directional sound receivers in time-varying conditions.

The proposed virtual acoustic rendering framework represents sound-emitting and sound-receiving objects as multiple-input, multiple-output dynamical systems. Each input or output corresponds to a sound traveling wave received or emitted by the object from/to different orientations or at/from different positions of the object. To allow for multiple arriving/departing waves from/to different orientations and/or positions of an object in dynamic conditions, we introduce a discrete-time state-space system formulation that allows the inputs or the outputs of a system to mutate dynamically. The SWS framework treats virtual source or receiver objects as time-varying dynamical systems in state-space modal form, each allowing for an unlimited number of sound traveling wave inputs and outputs.

To model the sound emission and/or reception behavior of an object, data may be collected from measurements. These measurements, which may comprise real or virtual impulse or frequency responses obtained from a real physical object or from a numerical physical model of an object, are jointly processed to design a multiple-input, multiple-output state-space model with mutable inputs and/or outputs. This mutable state-space model enables the simulation of direction- and/or position-dependent, frequency-dependent sound wave emission or reception by the object. At run time, each of the mutable state-space object models may present any number of inputs or outputs, with each input or output associated with a received/emitted sound traveling wave from/to a specific arrival/departure position or orientation. In a first formulation, the sound wave form, the traveling of sound waves between object models is simulated by means of delay lines of time-varying length. In a second formulation, the state wave form, the traveling of sound waves between object models is simulated by propagating the state variables of source objects along delay lines of time-varying length.

SWS allows the accurate simulation of frequency-dependent source and receiver directivity in time-varying conditions without any explicit time-domain or frequency-domain convolution processing. In addition, the framework enables time-varying, obstacle-induced, frequency-dependent attenuation of traveling waves without any dedicated digital filters. SWS facilitates the implementation of efficient virtual acoustic rendering engines either in software or in dedicated hardware, allowing realizations in which the number of delay lines is independent of the number of traveling wave paths being simulated. Moreover, the method enables a straightforward dynamic coupling between virtual acoustic objects and their physics-based simulation counterparts as computed for animation, virtual reality, video games, music synthesis, or other applications.

In this presentation we will introduce the foundations of SWS and employ a real acoustic violin and a real human head as illustrative examples of a source object and a receiver object, respectively.
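The abstract does not give implementation details, but the notion of a modal state-space object with mutable outputs can be illustrated with a minimal sketch. The Python fragment below is purely illustrative and assumes a diagonal (modal) state matrix with complex poles and a single excitation input; the names (ModalSource, add_output, tick) are hypothetical and not the authors' API. Each attached output plays the role of one row of an output matrix C, encoding the frequency-dependent gain pattern toward one departure direction; outputs can be attached or detached at run time as receivers move around the source.

import numpy as np

class ModalSource:
    """Illustrative sketch only: diagonal (modal) state-space source with mutable outputs."""
    def __init__(self, poles, input_gains):
        # poles: complex modal poles (|pole| < 1 for stability)
        # input_gains: per-mode excitation gains (the B vector for one input)
        self.a = np.asarray(poles, dtype=complex)        # diagonal of A
        self.b = np.asarray(input_gains, dtype=complex)  # single excitation input
        self.x = np.zeros_like(self.a)                   # modal state
        self.outputs = {}                                # key -> per-mode output gains (row of C)

    def add_output(self, key, output_gains):
        # Attach a new departure direction, e.g. gains interpolated from measured directivity data.
        self.outputs[key] = np.asarray(output_gains, dtype=complex)

    def remove_output(self, key):
        self.outputs.pop(key, None)

    def tick(self, u):
        # One sample of x[n+1] = A x[n] + B u[n] (A diagonal), then one
        # traveling-wave output sample per currently attached direction.
        self.x = self.a * self.x + self.b * u
        return {k: float(np.real(c @ self.x)) for k, c in self.outputs.items()}

# Example use (hypothetical values): two modes, two simultaneously rendered directions.
src = ModalSource(poles=[0.99 * np.exp(1j * 0.3), 0.95 * np.exp(1j * 1.1)],
                  input_gains=[1.0, 0.5])
src.add_output("toward_listener", [0.8, 0.2])
src.add_output("toward_wall_reflection", [0.1, 0.9])
waves = src.tick(1.0)   # dict with one traveling-wave sample per direction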
In light of available implementation possibilities, we will examine the basic memory requirements and computational cost of the rendering framework and suggest how to conveniently include minimum-phase diffusive elements to provide additional diffuse-field contributions if necessary. Finally, we will discuss limitations and future opportunities for development.
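On the propagation side, the sound wave formulation described above reduces to delay lines of time-varying length. The sketch below is a generic fractional delay line with linear interpolation, not the paper's specific realization; the class name, the interpolation scheme, and the sample-rate and speed-of-sound values in the example are assumptions. The per-sample delay would be driven by the instantaneous source-receiver distance, which under motion also produces Doppler shift.

import numpy as np

class TimeVaryingDelayLine:
    """Circular buffer read at a fractional, time-varying delay (linear interpolation)."""
    def __init__(self, max_delay_samples):
        self.buf = np.zeros(int(max_delay_samples) + 2)
        self.write = 0

    def tick(self, sample, delay_samples):
        # Write the incoming traveling-wave sample, then read the buffer
        # delay_samples behind the write pointer, interpolating linearly.
        self.buf[self.write] = sample
        read = self.write - delay_samples
        base = int(np.floor(read))
        frac = read - base
        i = base % len(self.buf)
        j = (i + 1) % len(self.buf)
        out = (1.0 - frac) * self.buf[i] + frac * self.buf[j]
        self.write = (self.write + 1) % len(self.buf)
        return out

# Example (assumed fs = 48 kHz, c = 343 m/s): a wave departing a source
# reaches a receiver after a distance-dependent, possibly time-varying delay.
fs, c = 48000.0, 343.0
line = TimeVaryingDelayLine(max_delay_samples=fs * 0.1)   # up to roughly 34 m
distance_m = 3.2                                          # may change every sample
out = line.tick(0.25, delay_samples=distance_m * fs / c)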

Dates and versions

hal-02275169 , version 1 (30-08-2019)

Cite

Esteban Maestre, Gary P. Scavone, Julius O. Smith. Virtual acoustic rendering by state wave synthesis. EAA Spatial Audio Signal Processing Symposium, Sep 2019, Paris, France. pp. 31-36, ⟨10.25836/sasp.2019.05⟩. ⟨hal-02275169⟩