Perceptual audio features for emotion detection

Mehmet Cenk Sezgin, Bilge Gunsel*, Gunes Karabulut Kurt

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

60 Citations (Scopus)

Abstract

In this article, we propose a new set of acoustic features for automatic emotion recognition from audio. The features are based on the perceptual quality metrics given in the Perceptual Evaluation of Audio Quality (PEAQ) method, standardized as ITU-R Recommendation BS.1387. Starting from the outer- and middle-ear models of the auditory system, we base our features on the masked perceptual loudness, which defines relatively objective criteria for emotion detection. The features, computed in critical bands based on the reference concept, include the partial loudness of the emotional difference, the emotional difference-to-perceptual mask ratio, measures of alterations of temporal envelopes, measures of harmonics of the emotional difference, the occurrence probability of emotional blocks, and the perceptual bandwidth. A soft-majority voting decision rule that strengthens conventional majority voting is proposed to assess the classifier outputs. Compared to state-of-the-art systems including the Munich Open-Source Emotion and Affect Recognition Toolkit, the Hidden Markov Model Toolkit, and Generalized Discriminant Analysis, the emotion recognition rates are shown to improve by 7-16% on EMO-DB and by 7-11% on VAM for the "all" and "valence" tasks.
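The soft-majority voting rule mentioned above can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact formulation: it assumes each audio block yields per-class classifier scores, which are accumulated per class so that the winning label reflects score magnitudes rather than hard per-block vote counts. The emotion labels and scores are hypothetical.

```python
# Hypothetical sketch of soft-majority voting: accumulate per-class
# classifier scores across blocks and pick the class with the largest
# total, instead of counting hard per-block winners.
from collections import defaultdict

def soft_majority_vote(block_scores):
    """block_scores: list of dicts mapping emotion label -> score."""
    totals = defaultdict(float)
    for scores in block_scores:
        for label, score in scores.items():
            totals[label] += score
    return max(totals, key=totals.get)

# Three blocks of an utterance; hard majority voting would declare a
# 2-1 split, while soft voting weighs confidence: anger 1.7 vs neutral 1.3.
blocks = [
    {"anger": 0.6, "neutral": 0.4},
    {"anger": 0.3, "neutral": 0.7},
    {"anger": 0.8, "neutral": 0.2},
]
print(soft_majority_vote(blocks))  # anger
```

Note that under hard majority voting the example above is decided by vote counts alone (here 2-1 for anger), whereas soft voting also lets a few high-confidence blocks outweigh many low-confidence ones.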

Original language: English
Article number: 16
Journal: EURASIP Journal on Audio, Speech, and Music Processing
Volume: 2012
Issue number: 1
DOIs
Publication status: Published - 2012

Keywords

  • Audio emotion recognition
  • PEAQ
  • Perceptual audio feature extraction
