Abstract
Understanding the activities of people in a monitored environment is a topic of active research, motivated by applications requiring context-awareness. Inferring future agent motion is useful not only for improving tracking accuracy, but also for planning in an interactive motion task. Despite rapid advances in the area of activity forecasting, many state-of-the-art methods remain cumbersome to deploy on realistic robots. This is due to the requirement for good semantic scene and map labelling, as well as assumptions made regarding possible goals and types of motion. Many emerging applications require robots with modest sensory and computational ability to robustly perform such activity forecasting in high-density, dynamic environments. We address this by combining a novel multi-camera tracking method, efficient multi-resolution representations of state, and a standard Inverse Reinforcement Learning (IRL) technique, to demonstrate performance that is sometimes better than the state-of-the-art in the literature. In this framework, the IRL method uses agent trajectories from a distributed tracker, and the output reward functions, describing the agent's goal-oriented navigation within a Markov Decision Process (MDP) model, can be used to estimate the agent's set of possible future activities. We conclude with a quantitative evaluation comparing the proposed method against others from the literature.
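To make the forecasting step concrete: once IRL has produced a reward function over states, solving the resulting MDP (e.g. by value iteration) yields a goal-directed value function whose greedy rollout predicts the agent's likely future motion. The sketch below is illustrative only, not the paper's implementation; the grid size, step cost, goal reward, and discount factor are all assumptions.

```python
import numpy as np

def value_iteration(reward, gamma=0.95, eps=1e-6):
    """Compute state values for a 4-connected grid MDP with
    deterministic moves and a per-state reward (e.g. from IRL)."""
    V = np.zeros_like(reward)
    while True:
        # Back each state up over its four neighbour actions;
        # edge padding makes off-grid moves self-loops.
        padded = np.pad(V, 1, mode="edge")
        neigh = np.stack([padded[:-2, 1:-1],   # up
                          padded[2:, 1:-1],    # down
                          padded[1:-1, :-2],   # left
                          padded[1:-1, 2:]])   # right
        V_new = reward + gamma * neigh.max(axis=0)
        if np.abs(V_new - V).max() < eps:
            return V_new
        V = V_new

# Illustrative learned reward: small step cost everywhere,
# high reward at a hypothesised goal state in one corner.
reward = -0.01 * np.ones((5, 5))
reward[4, 4] = 1.0
V = value_iteration(reward)
# Values increase toward the goal, so a greedy rollout of the
# implied policy forecasts motion toward (4, 4).
```

A set of candidate goals can be handled by solving one such MDP per goal and comparing how well each value function explains the observed trajectory prefix.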
Original language | English
---|---
Title of host publication | IEEE International Conference on Robotics and Automation (ICRA) 2015, Workshop on Machine Learning for Social Robotics
Number of pages | 6
Publication status | Published - 2015