Mapping the MIT Stata Center: Large-scale Integrated Visual and RGB-D SLAM

Maurice Fallon, Hordur Johannsson, Michael Kaess, David M. Rosen, Elias Muggler, John J. Leonard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

This paper describes progress towards an integrated large-scale visual and RGB-D mapping and localization system for the MIT Stata Center. The output of a real-time, temporally scalable 6-DOF visual SLAM system is used to generate low-fidelity maps, which are in turn used by the Kinect Monte Carlo Localization (KMCL) algorithm. This localization algorithm can track the camera pose during aggressive motion and can aid recovery from visual odometry failures. It uses dense depth information to track its location in the map, an approach that is less sensitive to large viewpoint changes than feature-based methods, e.g., traversing the same hallway in opposite directions. The low-fidelity map also makes the system more resilient to clutter and small changes in the environment. Integrating the localization algorithm with the mapping algorithm enables the system to operate in novel environments and allows for robust navigation through the mapped area, even under aggressive motion. A major part of this project has been the collection of a large dataset of the ten-floor MIT Stata Center with a PR2 robot, currently comprising approximately 40 kilometers of distance traveled. This paper describes ongoing efforts to obtain centimeter-level ground truth for the robot motion using prior building models.
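
To make the localization step concrete, the following is a minimal sketch of a Monte Carlo Localization update with a dense depth-based measurement model, in the spirit of the KMCL algorithm described above. It is illustrative only: the particle state layout (x, y, yaw), the noise levels, and the render_depth callable standing in for rendering the low-fidelity map are all assumptions, not the authors' implementation.

    # A minimal, hypothetical MCL update with a dense depth measurement model.
    # Names and parameters here are illustrative assumptions, not the paper's code.
    import numpy as np

    rng = np.random.default_rng(0)

    def mcl_step(particles, odom_delta, observed_depths, render_depth,
                 motion_noise=(0.02, 0.02, 0.01), sigma=0.1):
        """One predict / weight / resample cycle over (x, y, yaw) particles.

        render_depth(pose) stands in for rendering the low-fidelity map from
        `pose` to predict a depth image; that is where the map enters.
        """
        # Predict: propagate each particle by the odometry increment plus noise.
        particles = particles + odom_delta + rng.normal(0.0, motion_noise,
                                                        size=particles.shape)
        # Weight: Gaussian likelihood on the per-pixel depth error between the
        # observed depth image and each particle's predicted depths.
        weights = np.empty(len(particles))
        for i, pose in enumerate(particles):
            err = observed_depths - render_depth(pose)
            weights[i] = np.exp(-0.5 * np.mean(err ** 2) / sigma ** 2)
        weights /= weights.sum()
        # Resample: draw particles with replacement, proportional to weight.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]

    # Toy usage: a stand-in "renderer" whose predicted depth depends on x only.
    if __name__ == "__main__":
        particles = rng.normal(0.0, 0.5, size=(500, 3))         # initial pose guesses
        render = lambda pose: np.full(64, 2.0 + 0.5 * pose[0])  # hypothetical renderer
        observed = np.full(64, 2.0)                             # consistent with x = 0
        for _ in range(10):
            particles = mcl_step(particles, np.zeros(3), observed, render)
        print("estimated x:", round(particles[:, 0].mean(), 3))  # should be near 0

The point the abstract emphasizes is that the measurement model compares whole depth images against the map rather than matching sparse features, which is what makes the approach tolerant of large viewpoint changes and clutter.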
Original language: English
Title of host publication: RSS Workshop on RGB-D: Advanced Reasoning with Depth Cameras
Number of pages: 3
Publication status: Published - Jul 2012
