Details

Author(s) / Contributors
Title
Audio-Facial Laughter Detection in Naturalistic Dyadic Conversations
Is part of
  • IEEE transactions on affective computing, 2017-10, Vol.8 (4), p.534-545
Place / Publisher
Piscataway: IEEE
Year of publication
2017
Source
IEEE Xplore
Descriptions / Notes
  • We address the problem of continuous laughter detection over audio-facial input streams obtained from naturalistic dyadic conversations. We first present meticulous annotation of laughters, cross-talks and environmental noise in an audio-facial database with explicit 3D facial mocap data. Using this annotated database, we rigorously investigate the utility of facial information, head movement and audio features for laughter detection. We identify a set of discriminative features using mutual information-based criteria, and show how they can be used with classifiers based on support vector machines (SVMs) and time delay neural networks (TDNNs). Informed by the analysis of the individual modalities, we propose a multimodal fusion setup for laughter detection using different classifier-feature combinations. We also effectively incorporate bagging into our classification pipeline to address the class imbalance problem caused by the scarcity of positive laughter instances. Our results indicate that a combination of TDNNs and SVMs leads to superior detection performance, and bagging effectively addresses data imbalance. Our experiments show that our multimodal approach supported by bagging compares favorably to the state of the art in the presence of detrimental factors such as cross-talk, environmental noise, and data imbalance.
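The abstract describes a pipeline of mutual-information-based feature selection, SVM and TDNN classifiers, and bagging to counter class imbalance. As a rough, non-authoritative illustration of that idea (not the authors' implementation, which also uses TDNNs and real audio-facial mocap features), here is a minimal scikit-learn sketch on synthetic, imbalanced stand-in data; the feature dimensions and all parameter values are assumptions.

```python
# Hedged sketch: mutual-information feature selection feeding a bagged SVM,
# loosely mirroring the pipeline described in the abstract. The data, feature
# dimensions and hyperparameters below are placeholders, not from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for frame-level audio-facial features with scarce positive (laughter) frames.
X, y = make_classification(n_samples=5000, n_features=60, n_informative=15,
                           weights=[0.92, 0.08], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    # Keep the k features with the highest mutual information with the laughter label.
    SelectKBest(mutual_info_classif, k=20),
    # Bag SVMs trained on subsamples to soften the effect of class imbalance.
    BaggingClassifier(SVC(kernel="rbf", class_weight="balanced"),
                      n_estimators=10, max_samples=0.5, random_state=0),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["no-laughter", "laughter"]))
```

Such per-frame decisions would still need temporal modeling for continuous detection, which the paper addresses with TDNNs and this sketch does not.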
