RigidFusion: Robot Localisation and Mapping in Environments with Large Dynamic Rigid Objects

Ran Long, Christian Rauch, Tianwei Zhang, Vladimir Ivan, Sethu Vijayakumar

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

This work presents a novel RGB-D SLAM approach that simultaneously segments, tracks and reconstructs the static background and large dynamic rigid objects that can occlude major portions of the camera view. Previous approaches either treat dynamic parts of a scene as outliers, which limits them to scenes with a small amount of change, or rely on prior information about all objects in the scene to achieve robust camera tracking. Here, we propose to treat all dynamic parts as one rigid body and to simultaneously segment and track both the static and dynamic components. This enables simultaneous localisation and reconstruction of both the static background and the rigid dynamic components in environments where dynamic objects cause large occlusions.

We evaluate our approach on multiple challenging scenes with large dynamic occlusion. The evaluation demonstrates that our approach achieves better motion segmentation, localisation and mapping without requiring prior knowledge of the dynamic object’s shape and appearance.
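To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of two-motion segmentation: 3-D point correspondences between two frames are split into a "static background" motion and a single "dynamic object" motion by alternating a least-squares rigid fit (Kabsch algorithm) with residual-based reassignment, echoing the abstract's proposal to treat all dynamic parts as one rigid body. All function names and the initialisation scheme are illustrative assumptions.

```python
# Hypothetical sketch, not the RigidFusion implementation: segment 3-D point
# correspondences into two rigid motions (static background vs. one dynamic
# rigid object) by alternating rigid fitting and residual-based relabelling.
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def segment_two_motions(P, Q, iters=10):
    """Assign each correspondence to one of two rigid motions (EM-style)."""
    n = len(P)
    labels = np.zeros(n, dtype=int)
    labels[n // 2:] = 1                          # crude initial split (assumption)
    for _ in range(iters):
        models = [fit_rigid(P[labels == k], Q[labels == k]) for k in (0, 1)]
        # per-point residual under each motion model; keep the better fit
        res = np.stack([np.linalg.norm(Q - (P @ R.T + t), axis=1)
                        for R, t in models], axis=1)
        labels = res.argmin(axis=1)
    return labels, models
```

In a full SLAM pipeline the static model's transform would give the camera motion and the dynamic model's transform the object motion; the sketch only shows the segmentation step on already-matched points.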
Original language: English
Pages (from-to): 3703 - 3710
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Issue number: 2
Early online date: 17 Mar 2021
Publication status: Published - 1 Apr 2021

Keywords

  • SLAM
  • visual tracking
  • sensor fusion
