Information

Type
Scientific and/or technical conference
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Duration
1 h 00 min
Date
September 4, 2013

Malcolm SLANEY, invited by the Représentation Musicale (Musical Representations) team,
presents in English:

“Pitch-Gesture Modeling Using Subband Autocorrelation Change Detection”

Calculating speaker pitch (or f0) is typically the first computational step in modeling tone and intonation for spoken language understanding. Usually pitch is treated as a fixed, single-valued quantity. The inherent ambiguity in judging the octave of the pitch, as well as spurious values, leads to errors in modeling pitch gestures that propagate through a computational pipeline.
We present an alternative that instead measures changes in the harmonic structure using a subband autocorrelation change detector (SACD).
This approach builds upon new machine-learning ideas for how to integrate autocorrelation information across subbands. Importantly, however, for modeling gestures we preserve multiple hypotheses and integrate information from all harmonics over time. The benefits of SACD over standard pitch approaches include robustness to noise and to the amount of voicing. This matters for real-world data, in terms of both acoustic conditions and speaking style.
We discuss applications in tone and intonation modeling, and demonstrate the efficacy of the approach in a Mandarin Chinese tone-classification experiment. Results suggest that SACD could replace conventional pitch-based methods for modeling gestures in selected spoken-language processing tasks.
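The abstract describes the pipeline only in words; the toy Python below illustrates the general idea of a subband-autocorrelation change signal. It is a minimal sketch, not the SACD system from the talk: the Butterworth band-pass filterbank, the cosine-distance change measure, and all parameter values (band edges, frame and hop sizes, maximum lag) are illustrative assumptions standing in for the auditory filterbank and the learned change detector that the abstract mentions.

```python
# Minimal sketch of a subband-autocorrelation change signal (NOT Slaney's SACD).
# Assumptions: Butterworth band-pass filters stand in for an auditory filterbank;
# cosine distance between consecutive frames stands in for the learned detector.
import numpy as np
from scipy.signal import butter, sosfilt

def subband_autocorr(x, sr, n_bands=8, frame_len=0.025, hop=0.010, max_lag=0.02):
    """Per-subband, per-frame normalized autocorrelation (bands x frames x lags)."""
    frame = int(frame_len * sr)
    hop_n = int(hop * sr)
    lags = int(max_lag * sr)
    # Log-spaced band edges between 100 Hz and sr/4 (an illustrative choice).
    edges = np.geomspace(100.0, sr / 4.0, n_bands + 1)
    n_frames = 1 + (len(x) - frame) // hop_n
    ac = np.zeros((n_bands, n_frames, lags))
    for b in range(n_bands):
        sos = butter(4, [edges[b], edges[b + 1]], btype="band", fs=sr, output="sos")
        y = sosfilt(sos, x)
        for t in range(n_frames):
            seg = y[t * hop_n : t * hop_n + frame]
            full = np.correlate(seg, seg, mode="full")[frame - 1 : frame - 1 + lags]
            ac[b, t] = full / (full[0] + 1e-12)  # normalize by zero-lag energy
    return ac

def sacd_change(ac):
    """Frame-to-frame change: 1 - cosine similarity of stacked subband frames."""
    v = ac.transpose(1, 0, 2).reshape(ac.shape[1], -1)  # (frames, bands * lags)
    num = np.sum(v[1:] * v[:-1], axis=1)
    den = np.linalg.norm(v[1:], axis=1) * np.linalg.norm(v[:-1], axis=1) + 1e-12
    return 1.0 - num / den  # high values mark movement in the harmonic structure

# Example: a slowly rising tone yields a sustained, nonzero change signal.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * (220 + 40 * t) * t)  # pitch glides upward over 1 s
change = sacd_change(subband_autocorr(x, sr))
```

Note that the change signal never commits to a single f0 value: it reacts to movement in the whole harmonic structure, which is the property the abstract credits with robustness to octave ambiguity and weak voicing.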

* * *

Biography

Malcolm Slaney (Fellow, IEEE) is a Principal Scientist in Microsoft Research’s Conversational Systems Research Center in Mountain View, CA.
Before that he held the same title at Yahoo! Research, where he worked on multimedia analysis and music- and image-retrieval algorithms in databases with billions of items. He is also a (consulting) Professor at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA), Stanford, CA, where he has led the Hearing Seminar for the last 20 years.
Before Yahoo!, he worked at Bell Laboratories, Schlumberger Palo Alto Research, Apple Computer, Interval Research, and IBM's Almaden Research Center. For the last several years he has helped lead the auditory and attention groups at the NSF-sponsored Telluride Neuromorphic Cognition Workshop. He is a coauthor, with A. C. Kak, of the IEEE book Principles of Computerized Tomographic Imaging, which was republished by SIAM in its Classics in Applied Mathematics series. He is coeditor, with S. Greenberg, of the book Computational Models of Auditory Function.
Prof. Slaney has served as an Associate Editor of the IEEE Transactions on Audio, Speech, and Signal Processing, IEEE MultiMedia magazine, the Proceedings of the IEEE, and the ACM Transactions on Multimedia Computing, Communications, and Applications.


