Enabling multimodal human-robot interaction for the Karlsruhe humanoid robot

Rainer Stiefelhagen*, Hazim Kemal Ekenel, Christian Fügen, Petra Gieselmann, Hartwig Holzapfel, Florian Kraft, Kai Nickel, Michael Voit, Alex Waibel

*Corresponding author for this work

Research output: Article › peer-review

114 Citations (Scopus)

Abstract

In this paper, we present our work in building technologies for natural multimodal human-robot interaction. We present our systems for spontaneous speech recognition, multimodal dialogue processing, and visual perception of a user, which includes localization, tracking, and identification of the user, recognition of pointing gestures, as well as the recognition of a person's head orientation. Each of the components is described in the paper and experimental results are presented. We also present several experiments on multimodal human-robot interaction, such as interaction using speech and gestures, the automatic determination of the addressee during human-human-robot interaction, as well as on interactive learning of dialogue strategies. The work and the components presented here constitute the core building blocks for audiovisual perception of humans and multimodal human-robot interaction used for the humanoid robot developed within the German research project (Sonderforschungsbereich) on humanoid cooperative robots.

Original language: English
Pages (from-to): 840-851
Number of pages: 12
Journal: IEEE Transactions on Robotics
Volume: 23
Issue number: 5
DOIs
Publication status: Published - 1 Oct 2007
Externally published: Yes

Funding

Manuscript received October 14, 2006; revised May 23, 2007. This paper was recommended for publication by Associate Editor C. Laschi and Editor H. Arai upon evaluation of the reviewers’ comments. This work was supported in part by the German Research Foundation (DFG) under Sonderforschungsbereich SFB 588—Humanoid Robots. This paper was presented in part at the 13th European Signal Processing Conference, Antalya, Turkey, 2005, in part at the ICSLP, Jeju Island, Korea, 2004, in part at the International Conference on Multimodal Interfaces (ICMI), State College, 2004, in part at the KI 2006, Bremen, Germany, in part at the INTERSPEECH, Pittsburgh, PA, 2006, in part at the Proceedings of the 7th International Conference on Multimodal Interfaces, Trento, Italy, October 4–6, 2005, in part at the Sixth International Conference on Face and Gesture Recognition—FG 2004, May, Seoul, Korea, and in part at the First International CLEAR Evaluation Workshop, Southampton, U.K., April 2006.

Funders: Funder number
Deutsche Forschungsgemeinschaft: SFB 588
