Abstract
A robot coexisting with humans must be able not only to perform physical tasks but also to interact with humans in a socially appropriate manner. In many social settings, this involves the use of social signals such as gaze, facial expression, and language. In this paper, we discuss the problem of planning social and task-based actions for a robot that must interact with multiple human agents in a dynamic domain. We show how social states are inferred from low-level sensors, using vision and speech as input modalities, and use a general-purpose knowledge-level planner to model task, dialogue, and social actions, as an alternative to current mainstream methods of interaction management. The resulting system has been evaluated in a real-world study with human subjects, in a simple bartending scenario.
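As a rough illustration of the approach described in the abstract (not the authors' implementation), the sketch below shows how task, dialogue, and social actions could share a single knowledge-level operator representation over a symbolic social state, with a tiny forward-search planner producing an action sequence for a bartending-style interaction. All predicate and action names (`seeksAttention`, `greet`, `ask-drink`, `serve`) are invented for illustration, and the planner is a minimal stand-in for the general-purpose planner used in the paper.

```python
# Minimal sketch, assuming a symbolic social state inferred from vision/speech.
# Not the authors' system: names and operators are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    """A knowledge-level operator: preconditions and effects over state atoms."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset = frozenset()


# Social, dialogue, and task actions share one uniform representation.
GREET = Action("greet(customer)",
               preconditions=frozenset({"seeksAttention(customer)"}),
               add_effects=frozenset({"greeted(customer)"}))
ASK_DRINK = Action("ask-drink(customer)",
                   preconditions=frozenset({"greeted(customer)"}),
                   add_effects=frozenset({"knows-order(customer)"}))
SERVE = Action("serve(customer)",
               preconditions=frozenset({"knows-order(customer)"}),
               add_effects=frozenset({"served(customer)"}))


def plan(initial, goal, actions):
    """Tiny breadth-first forward planner: returns a list of action names or None."""
    frontier = [(frozenset(initial), [])]
    seen = set()
    while frontier:
        state, steps = frontier.pop(0)
        if goal <= state:
            return steps
        if state in seen:
            continue
        seen.add(state)
        for a in actions:
            if a.preconditions <= state:
                successor = (state - a.delete_effects) | a.add_effects
                frontier.append((successor, steps + [a.name]))
    return None


if __name__ == "__main__":
    # Initial social state as it might be inferred from low-level sensors.
    initial = {"seeksAttention(customer)"}
    goal = {"served(customer)"}
    print(plan(initial, goal, [GREET, ASK_DRINK, SERVE]))
    # -> ['greet(customer)', 'ask-drink(customer)', 'serve(customer)']
```

The point of the sketch is only that social signalling ("greet") and task execution ("serve") can be planned by the same mechanism once both are expressed as operators over a shared state, which is the alternative to hand-crafted interaction management that the abstract refers to.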
| Original language | English |
| --- | --- |
| Title of host publication | Workshop of the UK Planning and Scheduling Special Interest Group (PlanSIG 2012) |
| Number of pages | 8 |
| Publication status | Published - 1 Dec 2012 |