Contrastive learning based facial action unit detection in children with hearing impairment for a socially assistive robot platform

Cemal Gurpinar*, Seyma Takir, Erhan Bicer, Pinar Uluer, Nafiz Arica, Hatice Kose

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a contrastive learning-based facial action unit detection system for children with hearing impairment, to be used on a socially assistive humanoid robot platform. Spontaneous facial data of children with hearing impairment were collected during an interaction study with the Pepper humanoid robot and a tablet-based game. Since the collected dataset contains a limited number of instances, a novel domain adaptation extension is applied to improve facial action unit detection performance, using well-known labelled datasets of adults and children. Furthermore, since facial action unit detection is a multi-label classification problem, a new smoothing parameter, β, is introduced to adjust the contribution of similar samples to the contrastive learning loss function. The results show that the domain adaptation approach using children's data (CAFE) performs better than using adults' data (DISFA). In addition, the smoothing parameter β leads to a significant improvement in recognition performance.
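To make the β idea concrete: in a supervised contrastive loss for multi-label targets, two samples can be partial positives when their action unit (AU) label vectors overlap only partially, and β can smooth how strongly such pairs pull together. The PyTorch sketch below shows one plausible formulation; the function name multilabel_supcon_loss, the Jaccard label-similarity weighting, and the use of β as an exponent on pair weights are illustrative assumptions, not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

def multilabel_supcon_loss(embeddings: torch.Tensor,
                           labels: torch.Tensor,
                           beta: float = 0.5,
                           temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss with beta-smoothed multi-label positives.

    embeddings: (N, D) feature vectors from the encoder.
    labels:     (N, K) multi-hot action-unit annotations.
    beta:       smoothing exponent on pairwise label similarity
                (illustrative stand-in for the paper's beta).
    """
    # Cosine similarities between L2-normalised embeddings.
    z = F.normalize(embeddings, dim=1)
    logits = z @ z.t() / temperature

    # Jaccard similarity between multi-hot AU label vectors: pairs that
    # share more active AUs count more strongly as positives.
    y = labels.float()
    inter = y @ y.t()
    union = y.sum(1, keepdim=True) + y.sum(1) - inter
    sim = inter / union.clamp(min=1e-8)

    # Since sim is in [0, 1], beta < 1 boosts partially matching pairs
    # while beta > 1 suppresses them.
    weights = sim.pow(beta)

    # Exclude self-pairs from both the softmax and the positive set.
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    weights = weights.masked_fill(eye, 0.0)
    logits = logits.masked_fill(eye, -1e9)

    # Weighted mean of log-probabilities over soft positives per anchor.
    log_prob = F.log_softmax(logits, dim=1)
    loss = -(weights * log_prob).sum(1) / weights.sum(1).clamp(min=1e-8)
    return loss.mean()
```

In training, this soft weighting would replace the hard positive/negative mask of standard supervised contrastive learning, so samples sharing some but not all active AUs still contribute to the loss in proportion to their label overlap.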

Original language: English
Article number: 104572
Journal: Image and Vision Computing
Volume: 128
DOIs
Publication status: Published - Dec 2022

Bibliographical note

Publisher Copyright:
© 2022 Elsevier B.V.

Keywords

  • Child-robot interaction
  • Contrastive learning
  • Covariate shift
  • Domain adaptation
  • Facial action unit detection
  • Transfer learning
