Learning Structured Representations of Spatial and Interactive Dynamics for Trajectory Prediction in Crowded Scenes

Todor Davchev, Michael Burke, Subramanian Ramamoorthy

Research output: Contribution to journal › Special issue › Peer-reviewed

Abstract / Description of output

Context plays a significant role in the generation of motion for dynamic agents in interactive environments. This work proposes a modular method that utilises a learned model of the environment for motion prediction. This modularity explicitly allows for unsupervised adaptation of trajectory prediction models to unseen environments and new tasks, relying on unlabelled image data only. We model both the spatial and dynamic aspects of a given environment alongside the per-agent motions. This results in more informed motion prediction and allows for performance comparable to the state of the art. We highlight the model's prediction capability using a benchmark pedestrian prediction problem and a robot manipulation task, and show that we can transfer the predictor across these tasks in a completely unsupervised way. The proposed approach allows for robust and label-efficient forward modelling, and relaxes the need for full model retraining in new environments.
Original language: English
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 2
Early online date: 28 Dec 2020
DOIs
Publication status: Published - 1 Apr 2021

Keywords

  • Representation Learning
  • Novel Deep Learning Methods

