Connecting Knowledge-Level Planning and Task Execution on a Humanoid Robot using Object-Action Complexes

Ronald Petrick, Nils Adermann, Tamim Asfour, Mark Steedman, Rüdiger Dillmann

Research output: Contribution to conference › Poster › peer-review

Abstract

We present an approach to robot control that integrates an automated planner and execution monitor on a humanoid robot platform. Our approach uses a formalism called an Object-Action Complex to overcome some of the representational differences that arise between the low-level control mechanisms and high-level reasoning components of the system.

High-level plans are built using PKS, a conditional planner that operates with incomplete information and sensing actions. Unlike traditional planners, PKS works at the "knowledge level" by explicitly modelling what the planner knows, and does not know, during plan generation. PKS is coupled with an execution monitor that compares predicted states against observed states in order to decide whether to continue the current plan, resense an object, or replan.
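The monitor's decision logic can be sketched as follows. This is a minimal illustration of the compare-and-decide step described above, not the actual PKS interface; the state representation (fact-to-value mappings) and the `pos:` naming convention for sensed object poses are assumptions made for this example.

```python
def monitor_step(predicted, observed):
    """Compare a predicted state against an observed state and choose
    a monitoring outcome: continue, resense, or replan.

    Both arguments are dicts mapping fact names to values; this
    representation is illustrative, not the PKS formalism itself.
    """
    mismatches = {f for f in predicted if predicted[f] != observed.get(f)}
    if not mismatches:
        return "continue"   # prediction confirmed: execute the next action
    if all(f.startswith("pos:") for f in mismatches):
        return "resense"    # only object-pose facts differ: sense again
    return "replan"         # the state has diverged: invoke the planner
```

For instance, if only a sensed cup position disagrees with the prediction, the monitor requests resensing rather than a full replan.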

High-level plans are executed on ARMAR, a humanoid robot platform featuring a 7-degree-of-freedom (DOF) head with foveated vision, a 3-DOF torso, two 7-DOF arms, and two 5-finger hands, each with tactile sensors and 8 DOFs. ARMAR includes a number of sensorimotor processes that enable it to act autonomously in unstructured environments.

Task planning and task execution are connected using Object-Action Complexes (OACs), a universal representation usable at all levels of a cognitive architecture. OACs combine the representational and computational efficiency of STRIPS rules with the object- and situation-oriented concept of affordance, and the logical clarity of formalisms like the event calculus. OACs are motivated by the idea that objects and actions are inseparably intertwined in cognitive processing and can thus be viewed as the building blocks of cognition.
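The STRIPS-like character of an OAC can be sketched as a rule tied to an object-action pair, with preconditions and effects over a symbolic state. The field names, the set-based state encoding, and the `grasp_cup` example below are assumptions for illustration only; they are not the PACO-PLUS OAC specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OAC:
    """Illustrative Object-Action Complex: a STRIPS-style rule attached
    to an object/action pair. Field names are hypothetical."""
    action: str
    obj: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state):
        # The rule applies when all preconditions hold in the state.
        return self.preconditions <= state

    def apply(self, state):
        # Standard STRIPS update: delete effects, then add effects.
        return (state - self.del_effects) | self.add_effects

# Hypothetical example from the kitchen domain mentioned below.
grasp_cup = OAC(
    action="grasp", obj="cup",
    preconditions=frozenset({"hand-empty", "cup-on-table"}),
    add_effects=frozenset({"holding-cup"}),
    del_effects=frozenset({"hand-empty", "cup-on-table"}),
)
```

Because the same rule structure can be grounded in sensorimotor data at the robot level and reasoned over symbolically at the planning level, it serves as a shared vocabulary between the two.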

This poster illustrates how OACs arise at both the planning and robot levels, and how they are used for robot-level task execution. We demonstrate our current system with an example from a kitchen environment, requiring movement between multiple work areas, choice of hand for object manipulation, and the ability to manipulate objects of various shapes and configurations.

This work forms part of the EU FP6 PACO-PLUS project, investigating action, cognition, and learning in real-world robot environments.
Original language: English
Number of pages: 1
Publication status: Published - 1 Jan 2010
Event: cogsys - Zurich, Switzerland
Duration: 27 Jan 2010 → …

