Information

Type
Conference series, symposium, congress
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Duration
47 min
Date
March 19, 2021

An overview of AI for Music and Audio Generation

I’ll discuss recent advances in AI for music creation, focusing on Machine Learning (ML) and Human-Computer Interaction (HCI) work coming from our Magenta project (g.co/magenta). I’ll argue that generative ML models by themselves are of limited creative value because they are hard to use in current music creation workflows. This motivates research in HCI, and especially good user interface design. I’ll talk about a promising audio-generation project called Differentiable Digital Signal Processing (DDSP; Jesse Engel et al.) and about recent progress in modeling musical scores using Music Transformer (Anna Huang et al.). I’ll also talk about work on designing experimental interfaces for composers and musicians. Time permitting, I’ll relate this to similar work in the domain of creative writing. Overall, my message will be one of restrained enthusiasm: recent research in ML has offered some amazing advances in tools for music creation, but aside from a few outlier examples, we’ve yet to bring these models successfully into creative practice.
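The core idea behind DDSP, mentioned above, is to write classical synthesis elements (oscillators, filters) so that gradients can flow through them, letting a neural network learn to drive a synthesizer. As a rough illustration only — not the authors' code — here is the additive (harmonic) synthesis signal model in plain NumPy; in DDSP itself the control signals (fundamental frequency, per-harmonic amplitudes) are predicted by a network and the synth is implemented in an autodiff framework. All function and parameter names here are invented for this sketch.

```python
import numpy as np

def harmonic_synth(f0, amplitudes, sample_rate=16000, duration=0.1):
    """Additive synthesis: a weighted sum of sinusoids at integer
    multiples of a fundamental frequency f0.

    f0         -- fundamental frequency in Hz
    amplitudes -- array of per-harmonic amplitudes (harmonic 1, 2, ...)
    """
    t = np.arange(int(sample_rate * duration)) / sample_rate
    harmonics = np.arange(1, len(amplitudes) + 1)
    # (n_harmonics, n_samples) bank of sinusoids at f0, 2*f0, 3*f0, ...
    bank = np.sin(2 * np.pi * np.outer(harmonics * f0, t))
    # Weighted sum over harmonics -> mono audio signal
    return amplitudes @ bank

# A 440 Hz tone with three decaying harmonics
audio = harmonic_synth(440.0, np.array([1.0, 0.5, 0.25]))
```

Because every operation here is differentiable, reimplementing it in an autodiff framework lets reconstruction loss on the output audio be backpropagated into whatever network produces `f0` and `amplitudes` — that is the DDSP insight in miniature.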
