Abstract
In this paper, a real-time multimodal system, an active audio-visual system, is designed to improve a robot's perceptual capability in noisy environments. The system consists of 1) an audition modality, 2) a complementary vision modality, and 3) a motion modality that produces intelligent behaviors from the data obtained by the other two modalities. Audition and vision each independently detect, localize, and track a speaker. The motion modality uses localization results from sensor fusion to give the robot intelligent, human-like behaviors. The system is implemented on a mobile robot platform running in real time, and the fused speaker-tracking performance is confirmed to improve over that of each sensory modality alone.
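The abstract does not specify the fusion rule, but one common way to combine independent audio and vision localization estimates is a confidence-weighted circular average of the two bearing estimates. The sketch below is illustrative only (the function name, confidence weights, and degree-based interface are assumptions, not the paper's method); angles are averaged on the unit circle so that bearings near the 0°/360° wrap fuse correctly.

```python
import math

def fuse_bearings(audio_deg, audio_conf, vision_deg, vision_conf):
    """Confidence-weighted circular average of two bearing estimates.

    Each modality reports a bearing in degrees plus a nonnegative
    confidence weight. Combining on the unit circle ensures that,
    e.g., 350 deg and 10 deg fuse to about 0 deg, not 180 deg.
    """
    w_total = audio_conf + vision_conf
    if w_total == 0:
        raise ValueError("at least one modality must report nonzero confidence")
    x = (audio_conf * math.cos(math.radians(audio_deg)) +
         vision_conf * math.cos(math.radians(vision_deg))) / w_total
    y = (audio_conf * math.sin(math.radians(audio_deg)) +
         vision_conf * math.sin(math.radians(vision_deg))) / w_total
    return math.degrees(math.atan2(y, x)) % 360.0

# Equal confidence in bearings of 0 deg and 90 deg fuses to about 45 deg.
print(fuse_bearings(0.0, 0.5, 90.0, 0.5))
```

In a real system the confidences would typically come from modality-specific cues, for example the audio localizer's signal-to-noise ratio and the face detector's score, so the fusion naturally leans on vision when the room is noisy and on audition when the speaker is out of view.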
Translated title of the contribution | Audio-visual human tracking for active robot perception |
---|---|
Original language | Turkish |
Title of host publication | 2015 23rd Signal Processing and Communications Applications Conference, SIU 2015 - Proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 1264-1267 |
Number of pages | 4 |
ISBN (Electronic) | 9781467373869 |
DOIs | |
Publication status | Published - 19 Jun 2015 |
Event | 2015 23rd Signal Processing and Communications Applications Conference, SIU 2015 - Malatya, Turkey. Duration: 16 May 2015 → 19 May 2015 |
Publication series
Name | 2015 23rd Signal Processing and Communications Applications Conference, SIU 2015 - Proceedings |
---|---|
Conference
Conference | 2015 23rd Signal Processing and Communications Applications Conference, SIU 2015 |
---|---|
Country/Territory | Turkey |
City | Malatya |
Period | 16/05/15 → 19/05/15 |
Bibliographical note
Publisher Copyright: © 2015 IEEE.