For sonic interactive systems, defining user-specific mappings between the sensors capturing a performer's gestures and the parameters of a sound engine can be a complex task, especially when a large network of sensors controls a high number of synthesis variables. Generative techniques based on machine learning can compute such mappings only if users provide a sufficient number of examples embedding an underlying learnable model. Instead, the combination of automated listening and unsupervised learning techniques can minimize the effort and expertise required to implement personalized mappings, while raising the perceptual relevance of the control abstraction. The vocal control of sound synthesis is presented as a challenging context for this mapping approach.
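A minimal sketch of this idea, assuming frame-wise vocal descriptors extracted with librosa as the "automated listening" stage and PCA from scikit-learn as the unsupervised model; the descriptor set, the input file name, and the synthesis parameters are illustrative assumptions, not the system described in the abstract:

```python
# Illustrative sketch: unsupervised mapping from vocal descriptors to
# synthesis control parameters. Feature choices and parameter names
# are assumptions for demonstration only.
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

def extract_descriptors(audio, sr):
    """Automated listening: frame-wise perceptual descriptors of the voice."""
    centroid = librosa.feature.spectral_centroid(y=audio, sr=sr)
    flatness = librosa.feature.spectral_flatness(y=audio)
    rms = librosa.feature.rms(y=audio)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return np.vstack([centroid, flatness, rms, mfcc]).T  # (frames, features)

# Fit the unsupervised model on a short unlabeled vocal recording:
# no user-provided example pairs are needed, unlike supervised mapping.
audio, sr = librosa.load("voice_input.wav", sr=None)  # hypothetical file
X = extract_descriptors(audio, sr)

n_synth_params = 4  # e.g. cutoff, resonance, grain size, density (assumed)
pca = PCA(n_components=n_synth_params).fit(X)

# At performance time, project each incoming frame onto the learned
# components and rescale every axis to a normalized control range [0, 1].
scaler = MinMaxScaler().fit(pca.transform(X))
controls = scaler.transform(pca.transform(X))  # (frames, n_synth_params)
```

In this sketch the mapping is learned from the performer's own voice alone, so each axis of the control space follows the directions of greatest variation in that singer's vocal descriptors, which is one way to read the abstract's claim about perceptual relevance with minimal user effort.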