Using Kernel Perceptrons to Learn Action Effects for Planning

Kira Mourao, Ron Petrick, Mark Steedman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We investigate the problem of learning action effects in STRIPS and ADL planning domains. Our approach is based on a kernel perceptron learning model, where action and state information is encoded in a compact vector representation as input to the learning mechanism, and the resulting state changes are produced as output. Empirical results indicate efficient training and prediction times, with low average error rates (< 3%) when tested on STRIPS and ADL versions of an object manipulation scenario. This work is part of a project to integrate machine learning techniques with a planning system, within a larger cognitive architecture linking a high-level reasoning component with a low-level robot/vision system.
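The abstract does not give the paper's exact vector encoding or update rule, but the underlying learner it names, a kernel perceptron predicting state changes from an encoded state-action vector, can be sketched roughly as follows. This is an illustrative sketch only: the class, the polynomial kernel, and the toy data are assumptions, not details from the paper. One perceptron predicts a single output bit (e.g. whether one fluent changes); a kernel lets it capture conjunctive interactions between input bits, the kind of pattern a plain linear perceptron cannot represent.

```python
import numpy as np

def poly_kernel(x, y, degree=2):
    """Polynomial kernel; implicitly scores conjunctions of input bits."""
    return (1.0 + np.dot(x, y)) ** degree

class KernelPerceptron:
    """Kernel perceptron for one output bit (e.g. one fluent's change).

    Hypothetical sketch of the kind of learner the paper describes,
    not the authors' actual implementation.
    """

    def __init__(self, kernel=poly_kernel):
        self.kernel = kernel
        self.support = []  # inputs the perceptron made mistakes on
        self.alphas = []   # their labels (+1/-1), acting as dual weights

    def predict(self, x):
        score = sum(a * self.kernel(s, x)
                    for a, s in zip(self.alphas, self.support))
        return 1 if score > 0 else -1

    def fit(self, X, y, epochs=50):
        for _ in range(epochs):
            mistakes = 0
            for xi, yi in zip(X, y):
                if self.predict(xi) != yi:  # mistake-driven update
                    self.support.append(xi)
                    self.alphas.append(yi)
                    mistakes += 1
            if mistakes == 0:  # converged on the training set
                break

# Toy encoding (illustrative, not the paper's): two precondition bits
# whose interaction determines whether a fluent flips -- an XOR-like
# pattern that is non-linear in the inputs but separable under a
# degree-2 polynomial kernel.
X = [np.array(v, dtype=float) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]
y = [-1, 1, 1, -1]

clf = KernelPerceptron()
clf.fit(X, y)
preds = [clf.predict(x) for x in X]
```

The mistake-driven dual form keeps only the examples the learner got wrong, which is what makes training and prediction times low when errors are rare, consistent with the efficiency the abstract reports.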
Original language: English
Title of host publication: Proceedings of the International Conference on Cognitive Systems (CogSys 2008)
Pages: 45-50
Number of pages: 6
Publication status: Published - Apr 2008

