Aktif Robot Algılaması İçin Görsel-İşitsel İnsan Takibi

Translated title of the contribution: Audio-visual human tracking for active robot perception

Baris Bayram, Gokhan Ince

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

In this paper, a multimodal system is designed in the form of active audio-vision in order to improve the perceptual capability of a robot in a noisy environment. The real-time system consists of 1) an audition modality, 2) a complementary vision modality, and 3) a motion modality that incorporates intelligent behaviors based on the data obtained from both modalities. The tasks of audition and vision are to detect, localize, and track a speaker independently. The aim of the motion modality is to enable the robot to exhibit intelligent, human-like behaviors by using the localization results from sensor fusion. The system is implemented on a mobile robot platform in a real-time environment, and the speaker tracking performance of the fusion is confirmed to improve on that of each individual sensory modality.
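The paper itself does not publish code; as a purely illustrative sketch of the sensor-fusion idea described in the abstract, one simple approach is a confidence-weighted combination of the speaker direction estimates from the two modalities (function and parameter names are hypothetical, not the authors' implementation):

```python
def fuse_azimuth(audio_az, audio_conf, vision_az, vision_conf):
    """Fuse audio and visual azimuth estimates (degrees) by confidence.

    Each modality reports a direction estimate and a confidence in [0, 1].
    Returns None when neither modality detected the speaker.
    This is a minimal sketch of one fusion strategy, not the paper's method.
    """
    total = audio_conf + vision_conf
    if total == 0:
        return None  # no detection from either modality
    # Weighted average: the more confident modality dominates the estimate.
    return (audio_az * audio_conf + vision_az * vision_conf) / total


# Example: vision is twice as confident as audition, so the fused
# direction lies closer to the visual estimate.
fused = fuse_azimuth(audio_az=30.0, audio_conf=0.3,
                     vision_az=45.0, vision_conf=0.6)
```

In a tracking loop such a fused direction would then drive the motion modality (e.g. turning the robot toward the speaker), with the system falling back to the remaining modality when one detector loses the target.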

Translated title of the contribution: Audio-visual human tracking for active robot perception
Original language: Turkish
Title of host publication: 2015 23rd Signal Processing and Communications Applications Conference, SIU 2015 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1264-1267
Number of pages: 4
ISBN (Electronic): 9781467373869
DOIs
Publication status: Published - 19 Jun 2015
Event: 2015 23rd Signal Processing and Communications Applications Conference, SIU 2015 - Malatya, Turkey
Duration: 16 May 2015 - 19 May 2015

Publication series

Name: 2015 23rd Signal Processing and Communications Applications Conference, SIU 2015 - Proceedings

Conference

Conference: 2015 23rd Signal Processing and Communications Applications Conference, SIU 2015
Country/Territory: Turkey
City: Malatya
Period: 16/05/15 - 19/05/15

Bibliographical note

Publisher Copyright:
© 2015 IEEE.
