Abstract
This study presents an assistive robotic system enhanced with emotion recognition capabilities for children with hearing disabilities. The system is designed and developed for the audiometry tests and rehabilitation of children in a clinical setting and includes a social humanoid robot (Pepper), an interactive interface, gamified audiometry tests, a sensory setup and a machine/deep learning based emotion recognition module. Three scenarios involving a conventional setup, a tablet setup and a robot+tablet setup are evaluated with 16 children who have a cochlear implant or a hearing aid. Several machine learning techniques and deep learning models are used to classify the three test setups and the emotions (pleasant, neutral, unpleasant) of the children from the physiological signals recorded by an E4 wristband. The results show that the signals collected during the tests can be separated successfully and that the positive and negative emotions of the children can be distinguished better when they interact with the robot than in the other two setups. In addition, the children’s objective and subjective evaluations as well as their impressions of the robot and its emotional behaviors are analyzed and discussed extensively.
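As context for the emotion recognition module described in the abstract, the following is a minimal, hypothetical sketch of classifying three emotion classes from windowed physiological features such as those an E4 wristband provides (e.g., EDA, BVP, skin temperature). It is not the authors' pipeline; the classifier choice, the synthetic data and the feature layout are illustrative assumptions only.

```python
# Minimal sketch (not the authors' method): three-class emotion classification
# from per-window physiological features, assuming features have already been
# extracted from E4 wristband signals (EDA, BVP, temperature, etc.).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per signal window,
# columns = summary statistics (e.g., mean EDA, BVP variance, mean temperature).
X = rng.normal(size=(300, 6))
y = rng.integers(0, 3, size=300)  # 0 = unpleasant, 1 = neutral, 2 = pleasant

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# In practice, subject-independent splits would be preferable to plain k-fold CV.
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")
```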
Original language | English |
---|---|
Pages (from-to) | 643-660 |
Number of pages | 18 |
Journal | International Journal of Social Robotics |
Volume | 15 |
Issue number | 4 |
DOIs | |
Publication status | Published - Apr 2023 |
Bibliographical note
Publisher Copyright: © 2021, The Author(s), under exclusive licence to Springer Nature B.V.
Funding
We would like to thank collaborating audiologists Dr. Selma Yilar, Talha Cogen and Busra Gokce from Istanbul University Cerrahpasa Medical Faculty for their contributions to this study. This study is supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK) under grant number 118E214. This work is also supported by the Turkish Academy of Sciences under the Outstanding Young Scientist Award scheme (TÜBA-GEBİP).
Funders | Funder number |
---|---|
TÜBA-GEBİP | |
TÜBİTAK | |
Istanbul Üniversitesi | |
Türkiye Bilimsel ve Teknolojik Araştırma Kurumu | 118E214 |
Türkiye Bilimler Akademisi | |