Information

Type
Seminar / Conference
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Duration
1 h 01 min
Date
July 1, 2015

For sonic interactive systems, defining user-specific mappings between the sensors that capture a performer's gestures and the parameters of a sound engine can be a complex task, especially when a large network of sensors controls a high number of synthesis variables. Generative techniques based on machine learning can compute such mappings only if users provide a sufficient number of examples embedding an underlying learnable model. Instead, combining automated listening with unsupervised learning techniques can minimize the effort and expertise required to implement personalized mappings, while raising the perceptual relevance of the control abstraction. Vocal control of sound synthesis is presented as a challenging context for this mapping approach.
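As an illustration of the pipeline the abstract outlines, the following is a minimal sketch in Python, assuming MFCC descriptors stand in for the automated-listening stage and PCA with min-max normalization stands in for the unsupervised-learning stage. The file name, feature choice, and number of synthesis parameters are illustrative assumptions, not the system presented in the talk.

```python
# Sketch: unsupervised mapping from vocal features to synthesis parameters.
# MFCCs and PCA are illustrative stand-ins, not the talk's actual method.
import librosa
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

def extract_features(audio_path, sr=22050):
    """Automated listening: frame-wise timbre descriptors of the voice."""
    y, _ = librosa.load(audio_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.T  # shape: (n_frames, 13)

def learn_mapping(features, n_synth_params=4):
    """Unsupervised learning: reduce the descriptor space to one
    dimension per synthesis parameter, with no labeled examples."""
    pca = PCA(n_components=n_synth_params).fit(features)
    scaler = MinMaxScaler().fit(pca.transform(features))
    return pca, scaler

def map_frame(frame, pca, scaler):
    """Map one feature frame to synthesis parameters in [0, 1]."""
    reduced = pca.transform(frame.reshape(1, -1))
    return scaler.transform(reduced)[0]

# Calibrate on a short vocal recording (hypothetical file), then map.
features = extract_features("voice_calibration.wav")
pca, scaler = learn_mapping(features)
print(map_frame(features[0], pca, scaler))
```

Because the mapping is learned from the performer's own recording rather than from hand-labeled example pairs, the control space is tailored to that voice, and the normalized outputs can be routed to whichever synthesis parameters the user chooses.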
