information

Type
Workshop / Training
performance location
Ircam, Salle Igor-Stravinsky (Paris)
date
December 6, 2024

Systems for Augmented and Extended Reality (AR/XR) aim to render virtual content within the user's natural environment, or to seemingly modify the properties of the actual environment. One future vision, for example, is to replace a person's speech with the same text spoken in a foreign language, or to offer users more control over which parts of the actual environment they want to hear. Such ideas require analysing the natural acoustic environment and rendering the content accordingly, ideally without any noticeable delay. Under these circumstances, achieving high physical accuracy of the simulated content remains challenging. How accurate does it need to be?
The expectations and perceptual requirements depend strongly on the specific content and application.
How can we make use of that? Is there a simple technical solution?
This presentation will discuss several technical approaches that seem very promising.


Methodological advances for Audio Augmented Reality and its applications

As part of the project HAIKUS (ANR-19-CE23-0023), funded by the French national research agency, IRCAM, LORIA and IJLRA organized a one-day workshop focusing on methodological advances for Audio Augmented Reality and its applications.




Audio Augmented Reality (AAR) seeks to integrate computer-generated and/or pre-recorded auditory content into the listener's real-world environment. Hearing plays a vital role in understanding and interacting with our spatial environment, and well-integrated audio content significantly enhances the experience and increases user engagement in Augmented Reality (AR) applications, particularly in artistic creation, cultural mediation, entertainment and the communication industries.




Audio-signal processors are a key component of the AAR workflow, as they are required for real-time control of 3D sound spatialisation and of the artificial reverberation applied to virtual sound events. These tools have now reached maturity, supporting large multichannel loudspeaker systems as well as binaural rendering on headphones. However, the accuracy of the spatial processing applied to virtual sound objects is essential to ensure their seamless integration into the listener's real environment, and thereby guarantee a high-quality user experience. To achieve this level of integration, methods are needed to identify the acoustic properties of the environment and adjust the spatialisation engine's parameters accordingly. Ideally, such methods should enable automatic inference of the acoustic channel's characteristics based solely on live recordings of the natural, and often dynamic, sounds present in the real environment (e.g. voices, noise, ambient sounds, moving sources). These topics are gaining increasing attention, especially in light of recent advances in data-driven approaches within the field of acoustics. In parallel, perceptual studies are being conducted to define the level of accuracy required to guarantee a coherent sound experience.
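To make the analysis step concrete, below is a minimal, illustrative sketch (not a method presented at the workshop) of a classical building block: estimating the reverberation time (RT60) from a measured room impulse response via Schroeder backward integration. The blind methods discussed above aim to infer such parameters from natural sounds rather than measured responses; all signals and values here are synthetic.

```python
import numpy as np

def estimate_rt60(rir: np.ndarray, fs: int) -> float:
    """Estimate RT60 from a room impulse response (Schroeder method).

    Backward-integrates the squared RIR to obtain the energy decay
    curve (EDC), fits a line to its -5 dB .. -25 dB range, and
    extrapolates to 60 dB of decay.
    """
    energy = rir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]            # Schroeder backward integral
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)

    i5 = np.argmax(edc_db <= -5.0)                 # start of the fit range
    i25 = np.argmax(edc_db <= -25.0)               # end of the fit range
    t = np.arange(len(edc_db)) / fs
    slope, _ = np.polyfit(t[i5:i25], edc_db[i5:i25], 1)
    return -60.0 / slope                           # seconds per 60 dB of decay

# Toy usage: exponentially decaying noise as a stand-in RIR (~0.69 s analytic RT60).
fs = 16000
t = np.arange(fs) / fs
rir = np.random.randn(fs) * np.exp(-t / 0.1)
print(f"RT60 ~ {estimate_rt60(rir, fs):.2f} s")
```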



Organising committee: Antoine Deleforge (INRIA), François Ollivier (MPIA-IJLRA), Olivier Warusfel (IRCAM)

speakers

From the same archive

- Antoine Deleforge

Estimating acoustic parameters from audio recordings, such as the localization of a sound source or the geometry and acoustical properties of an environment, is a crucial component of audio augmented reality systems. These tasks become es…

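By way of illustration of one such estimation task (a hedged sketch, not material from the talk): the time difference of arrival between two microphones, a standard first step towards source localization, can be estimated with the classical GCC-PHAT method.

```python
import numpy as np

def gcc_phat(x: np.ndarray, y: np.ndarray, fs: int) -> float:
    """Estimate the time difference of arrival of y relative to x using
    the generalized cross-correlation with phase transform (GCC-PHAT)."""
    n = 2 * max(len(x), len(y))                    # zero-pad to avoid wrap-around
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    cross = np.conj(X) * Y
    cross /= np.abs(cross) + 1e-12                 # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))  # centre the zero lag
    lag = np.argmax(np.abs(cc)) - n // 2
    return lag / fs                                # positive if y lags x

# Toy usage: the same noise burst, delayed by 20 samples on the second channel.
fs = 16000
sig = np.random.randn(fs // 4)
delayed = np.concatenate((np.zeros(20), sig))[:len(sig)]
print(f"TDOA ~ {gcc_phat(sig, delayed, fs) * fs:.0f} samples")
```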

- Olivier Warusfel, Benoît Alary


- Toon van Waterschoot

The room impulse response (RIR) provides a fundamental representation of room acoustics for a spatially invariant source-observer combination. Dynamic audio rendering in extended reality (XR) applications, however, requires a room acoustics m…

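For context, a minimal sketch of the static case that the abstract contrasts with: for one fixed source-observer pair, rendering reduces to convolving the dry signal with the RIR. Dynamic 6DoF rendering in XR must relax exactly this assumption; the signals below are synthetic stand-ins.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 16000
dry = np.random.randn(fs)                          # stand-in for a dry source signal
t = np.arange(fs // 2) / fs
rir = np.random.randn(fs // 2) * np.exp(-t / 0.15) # toy exponentially decaying RIR

# Valid only for this one source-observer pair; any 6DoF movement
# of source or listener invalidates the measured RIR.
wet = fftconvolve(dry, rir)[: len(dry)]            # reverberant signal at the listener
wet /= np.max(np.abs(wet))                         # normalise to avoid clipping
```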

Methodological advances for Audio Augmented Reality and its applications: Introduction - Olivier Warusfel


- Cagdas Tuna

Knowledge of the geometric properties of a room can be very beneficial for many audio applications, including sound source localization, sound reproduction, and augmented and virtual reality. Room geometry inference (RGI) deals with the problem…

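As an illustrative aside (not the talk's method): the image-source model links room geometry to early-reflection arrival times in the RIR, and RGI methods essentially invert this relation. A toy forward computation for a shoebox room:

```python
import numpy as np

def first_order_reflections(room, src, mic, c=343.0):
    """Arrival times (s) of the direct path and the six first-order
    reflections in a shoebox room, via the image-source model."""
    src, mic = np.asarray(src, float), np.asarray(mic, float)
    images = [src]
    for axis, size in enumerate(room):
        for wall in (0.0, size):                   # mirror the source across each wall
            img = src.copy()
            img[axis] = 2.0 * wall - src[axis]
            images.append(img)
    return sorted(float(np.linalg.norm(img - mic)) / c for img in images)

# Toy usage: a 5 m x 4 m x 3 m room.
times = first_order_reflections(room=(5.0, 4.0, 3.0),
                                src=(1.0, 1.0, 1.5), mic=(4.0, 3.0, 1.5))
print([f"{t * 1e3:.1f} ms" for t in times])
```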

- François Ollivier

This presentation covers the design, characteristics and implementation of a spherical microphone array using 256 MEMS cells (HOSMA). This HOSMA is designed for directional analysis of room acoustics at order 15. The array uses advanced tec…

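A quick sanity check of the order-15 figure (author's illustration, not from the talk): spherical-harmonic analysis at order N involves (N + 1)² components, so an array needs at least that many capsules.

```python
# A spherical-harmonic decomposition of order N has (N + 1)**2 components,
# so a spherical array needs at least that many capsules.
for order in (1, 4, 15):
    print(f"order {order:>2}: {(order + 1) ** 2} channels")
# order 15 -> 256 channels, matching the HOSMA's 256 MEMS cells
```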

Common-slope modelling for 6DoF Audio - Sebastian Schlecht

In spatial audio, accurately modelling sound field decay is critical for realistic 6DoF audio experiences. This talk introduces the common-slope model, a compact approach that utilizes an energetic sound field description to represent spati…

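As a hedged illustration of the idea (not the talk's implementation): in a common-slope model, the energy decay curve at any position is a mixture of exponential decays whose decay times are shared across the whole room, with only the amplitudes varying per position.

```python
import numpy as np

def common_slope_edc(t, decay_times, amplitudes):
    """Energy decay curve as a sum of exponentials whose decay times are
    shared across the room; only the amplitudes depend on position."""
    t = np.asarray(t)[:, None]                       # shape (time, 1)
    rates = np.log(1e-6) / np.asarray(decay_times)   # -60 dB over each T60
    return (np.asarray(amplitudes) * np.exp(rates * t)).sum(axis=1)

# Two decay times shared room-wide (T60 = 0.3 s and 1.2 s);
# the mixing amplitudes change with listener position.
t = np.linspace(0.0, 1.0, 100)
edc_near = common_slope_edc(t, (0.3, 1.2), (0.9, 0.1))  # e.g. near an absorbing area
edc_far = common_slope_edc(t, (0.3, 1.2), (0.2, 0.8))   # e.g. deep in the coupled hall
```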

- Olivier Warusfel

AAR aims to seamlessly merge virtual sound events into the listener's real environment. To this end, various audio rendering models can be used to spatialise virtual sound events in real time and apply reverberation effects that match the a…

