TY - GEN
T1 - Scene interpretation for self-aware cognitive robots
AU - Ozturk, Melodi Deniz
AU - Ersen, Mustafa
AU - Kapotoglu, Melis
AU - Koc, Cagatay
AU - Sariel-Talay, Sanem
AU - Yalcin, Hulya
PY - 2014
Y1 - 2014
N2 - We propose a visual scene interpretation system for cognitive robots to maintain a consistent world model of their environments. This interpretation system is designed for our lifelong experimental learning framework, which allows robots to analyze failure contexts to ensure robustness in their future tasks. Efficient analysis of failure contexts requires scenes to be interpreted appropriately. In our system, LINE-MOD and HS histograms are used to recognize objects with and without texture. Moreover, depth-based segmentation is applied to identify unknown objects in the scene; this information is also used to improve recognition performance. The world model includes not only the objects detected in the environment but also their spatial relations, in order to represent contexts efficiently. Extracting unary and binary relations such as on, on-ground, clear, and near is useful for the symbolic representation of scenes. We test the performance of our system on recognizing objects, determining spatial predicates, and maintaining the consistency of the robot's world model in the real world. Our preliminary results reveal that our system can successfully extract spatial relations in a scene and create a consistent world model from the information gathered by the onboard RGB-D sensor as the robot explores its environment.
UR - http://www.scopus.com/inward/record.url?scp=84904858443&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84904858443
SN - 9781577356462
T3 - AAAI Spring Symposium - Technical Report
SP - 89
EP - 96
BT - Qualitative Representations for Robots - Papers from the AAAI Spring Symposium, Technical Report
PB - AI Access Foundation
T2 - 2014 AAAI Spring Symposium
Y2 - 24 March 2014 through 26 March 2014
ER -