Multimodal Detection and Classification of Robot Manipulation Failures

Arda Inceoglu*, Eren Erdal Aksoy, Sanem Sariel

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

An autonomous service robot should be able to interact with its environment safely and robustly without requiring human assistance. Unstructured environments are challenging for robots since the exact prediction of outcomes is not always possible. Even when the robot behaviors are well-designed, the unpredictable nature of the physical robot-object interaction may lead to failures in object manipulation. In this letter, we focus on detecting and classifying both manipulation and post-manipulation phase failures using the same exteroception setup. We cover a diverse set of failure types for primary tabletop manipulation actions. In order to detect these failures, we propose FINO-Net (Inceoglu et al., 2021), a deep multimodal sensor fusion-based classifier network architecture. FINO-Net accurately detects and classifies failures from raw sensory data without any additional information on task description and scene state. In this work, we use our extended FAILURE dataset (Inceoglu et al., 2021) with 99 new multimodal manipulation recordings and annotate them with their corresponding failure types. FINO-Net achieves 0.87 failure detection and 0.80 failure classification F1 scores. Experimental results show that FINO-Net is also appropriate for real-time use.
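The abstract does not spell out FINO-Net's internal architecture, so as a rough illustration of the multimodal sensor fusion idea it describes, the sketch below shows a late-fusion classifier that encodes separate sensory streams and combines their embeddings before predicting a failure class. All modality choices, layer sizes, and the number of failure classes are hypothetical placeholders, not the authors' design.

```python
import torch
import torch.nn as nn


class MultimodalFailureClassifier(nn.Module):
    """Illustrative late-fusion classifier: per-modality encoders for RGB,
    depth, and audio, concatenated embeddings, and a failure-class head.
    Placeholder architecture only; not FINO-Net itself."""

    def __init__(self, num_classes: int = 4, embed_dim: int = 128):
        super().__init__()
        # Per-modality encoders (simple CNN / MLP stand-ins).
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim),
        )
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim),
        )
        self.audio_encoder = nn.Sequential(
            nn.Linear(64, embed_dim), nn.ReLU(),
        )
        # Late fusion: concatenate modality embeddings, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(3 * embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, num_classes),
        )

    def forward(self, rgb, depth, audio):
        fused = torch.cat(
            [self.rgb_encoder(rgb),
             self.depth_encoder(depth),
             self.audio_encoder(audio)],
            dim=-1,
        )
        return self.classifier(fused)  # raw logits over failure classes


if __name__ == "__main__":
    model = MultimodalFailureClassifier()
    rgb = torch.randn(2, 3, 64, 64)    # batch of RGB frames
    depth = torch.randn(2, 1, 64, 64)  # aligned depth frames
    audio = torch.randn(2, 64)         # pooled audio features
    print(model(rgb, depth, audio).shape)  # torch.Size([2, 4])
```

In practice the same fused representation can serve both a binary failure-detection head and a multi-class failure-type head, which matches the detection and classification F1 scores reported separately in the abstract.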

Original language: English
Pages (from-to): 1396-1403
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 9
Issue number: 2
DOIs
Publication status: Published - 1 Feb 2024

Bibliographical note

Publisher Copyright:
© 2016 IEEE.
