Exploring the Potential of Multimodal Emotion Recognition for Hearing-Impaired Children Using Physiological Signals and Facial Expressions

Seyma Takir, Elif Toprak, Pinar Uluer, Duygun Erol Barkana, Hatice Kose

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

This study proposes an approach for emotion recognition in children with hearing impairments by utilizing physiological and facial cues and fusing them using machine learning techniques. The study is part of a child-robot interaction project to support children with hearing impairments through affective applications in clinical setups and hospital environments and to improve their social well-being. Physiological signals and facial expressions of the children were collected and annotated by the collaborating psychologists as pleasant, unpleasant, and neutral, using the video recordings of the sessions. Both single-modal and multimodal approaches were used to classify emotions from this data. The model trained using only facial expression features achieved an accuracy of 43.67%. When only physiological data was used, the accuracy increased to 58.68%. Finally, when the features of these two modalities were fused in the feature layer, the accuracy further increased to 74.96%, demonstrating that the multimodal approach significantly improved the recognition of pleasant, unpleasant, and neutral emotions in children with hearing impairments on this data set.
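The abstract describes feature-layer (early) fusion: features from the two modalities are concatenated into a single vector before a classifier is trained. The sketch below illustrates this idea under stated assumptions; the feature dimensions, the synthetic data, and the choice of a Random Forest classifier are illustrative placeholders, not the authors' actual pipeline.

```python
# Minimal sketch of feature-level (early) fusion of facial and physiological
# features for 3-class emotion classification (pleasant / unpleasant / neutral).
# All data here is synthetic; feature sizes and the classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples = 200                                    # placeholder sample count
facial_feats = rng.normal(size=(n_samples, 17))    # e.g. facial-expression features (assumed)
physio_feats = rng.normal(size=(n_samples, 12))    # e.g. physiological-signal statistics (assumed)
labels = rng.integers(0, 3, size=n_samples)        # 0 = pleasant, 1 = unpleasant, 2 = neutral

# Early fusion: concatenate both modalities in the feature layer.
fused = np.concatenate([facial_feats, physio_feats], axis=1)

clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"Fused-modality cross-validation accuracy: {scores.mean():.3f}")
```

Single-modality baselines would be obtained the same way by training the classifier on `facial_feats` or `physio_feats` alone and comparing the resulting accuracies with the fused model.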

Original language: English
Title of host publication: ICMI 2023 Companion - Companion Publication of the 25th International Conference on Multimodal Interaction
Publisher: Association for Computing Machinery
Pages: 398-405
Number of pages: 8
ISBN (Electronic): 9798400703218
DOIs
Publication status: Published - 9 Oct 2023
Event: 25th International Conference on Multimodal Interaction, ICMI 2023 Companion - Paris, France
Duration: 9 Oct 2023 - 13 Oct 2023

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 25th International Conference on Multimodal Interaction, ICMI 2023 Companion
Country/Territory: France
City: Paris
Period: 9/10/23 - 13/10/23

Bibliographical note

Publisher Copyright:
© 2023 Owner/Author.

Keywords

  • child-machine interaction
  • emotion detection
  • facial expression
  • multimodality
  • physiological signals
  • sensor fusion
