IRL-based prediction of goals for dynamic environments

Fabio Previtali, Alejandro (Alex) Bordallo, Subramanian Ramamoorthy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Understanding the activities of people in a monitored environment is a topic of active research, motivated by applications requiring context-awareness. Inferring future agent motion is useful not only for improving tracking accuracy, but also for planning in interactive motion tasks. Despite rapid advances in the area of activity forecasting, many state-of-the-art methods remain cumbersome to use on realistic robots, because they require good semantic scene and map labelling and make restrictive assumptions about possible goals and types of motion. Many emerging applications require robots with modest sensory and computational ability to robustly perform such activity forecasting in high-density, dynamic environments. We address this by combining a novel multi-camera tracking method, efficient multi-resolution representations of state, and a standard Inverse Reinforcement Learning (IRL) technique, demonstrating performance that is sometimes better than the state of the art in the literature. In this framework, the IRL method uses agent trajectories from a distributed tracker, and the output reward functions, describing the agent's goal-oriented navigation within a Markov Decision Process (MDP) model, can be used to estimate the agent's set of possible future activities. We conclude with a quantitative evaluation comparing the proposed method against others from the literature.
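To make the pipeline concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of the final step the abstract describes: given reward functions for a set of candidate goals within an MDP, score an observed trajectory under each goal's policy and obtain a posterior over goals. A simple 1-D chain MDP, a hand-chosen step cost and goal reward, and a Boltzmann policy are all assumptions made here for brevity.

```python
import numpy as np

def q_values(reward, gamma=0.95, iters=300):
    """Value iteration on a 1-D chain MDP with actions {left, stay, right}.

    States clamp at the boundaries; reward depends on state only.
    Returns the action-value table Q with shape (3, n_states).
    """
    n = len(reward)
    V = np.zeros(n)
    for _ in range(iters):
        left = np.concatenate(([V[0]], V[:-1]))    # value of moving left
        right = np.concatenate((V[1:], [V[-1]]))   # value of moving right
        V = reward + gamma * np.stack([left, V, right]).max(axis=0)
    left = np.concatenate(([V[0]], V[:-1]))
    right = np.concatenate((V[1:], [V[-1]]))
    return np.stack([reward + gamma * left,
                     reward + gamma * V,
                     reward + gamma * right])

def goal_posterior(trajectory, goals, n_states, beta=5.0):
    """Posterior over candidate goals from observed (state, action) pairs.

    Each goal induces an assumed reward function (small step cost, bonus at
    the goal state); the agent is modelled as following a Boltzmann policy
    with respect to that goal's Q-values.
    """
    log_lik = np.zeros(len(goals))
    for i, g in enumerate(goals):
        reward = np.full(n_states, -0.1)  # assumed uniform step cost
        reward[g] = 1.0                   # assumed goal reward
        Q = q_values(reward)
        # numerically stable softmax over actions
        expQ = np.exp(beta * (Q - Q.max(axis=0)))
        pi = expQ / expQ.sum(axis=0)
        for s, a in trajectory:           # a in {0: left, 1: stay, 2: right}
            log_lik[i] += np.log(pi[a, s])
    p = np.exp(log_lik - log_lik.max())   # uniform prior over goals
    return p / p.sum()
```

For example, a trajectory that repeatedly moves right on a 10-state chain should assign most of the posterior mass to the rightmost candidate goal rather than the leftmost one.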
Original language: English
Title of host publication: IEEE International Conference on Robotics and Automation (ICRA) 2015, Workshop on Machine Learning for Social Robotics
Number of pages: 6
Publication status: Published - 2015

