Learning 3D Scene Semantics and Structure from a Single Depth Image

Bo Yang, Zihang Lai, Xiaoxuan Lu, Shuyu Lin, Hongkai Wen, Andrew Markham, Niki Trigoni

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

In this paper, we aim to understand the semantics and 3D structure of a scene from a single depth image. Recent methods based on deep neural networks aim to simultaneously learn object class labels and infer the 3D shape of a scene represented by a large voxel grid. However, individual objects within the scene are usually represented by only a few voxels, leading to a loss of geometric detail. In addition, significant computational and memory resources are required to process the large-scale voxel grid of a whole scene. To address this, we propose an efficient and holistic pipeline, 3R-Depth, to simultaneously learn the semantics and structure of a scene from a single depth image. Our key idea is to deeply fuse an efficient 3D shape estimator with existing recognition (e.g., ResNets) and segmentation (e.g., Mask R-CNN) techniques. Object-level semantics and latent feature maps are extracted and then fed to a shape estimator to recover each object's 3D shape. Extensive experiments on large-scale synthesized indoor scene datasets quantitatively and qualitatively demonstrate the merits and superior performance of 3R-Depth.
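The abstract's pipeline — a segmenter produces object masks, a backbone produces latent feature maps, and per-object features are pooled and decoded into a 3D shape — can be sketched as below. This is a minimal illustrative skeleton, not the authors' implementation: every function here is a hypothetical stand-in (the real system would use a trained ResNet backbone, a Mask R-CNN segmenter, and a learned voxel decoder).

```python
import numpy as np

def extract_features(depth, n_channels=16):
    """Stand-in for a ResNet-style backbone: depth map -> latent feature map.

    A real backbone is learned; here we simply project the depth values
    onto random channel weights so downstream shapes are realistic.
    """
    rng = np.random.default_rng(0)
    weights = rng.standard_normal(n_channels)
    return depth[..., None] * weights  # (H, W, C)

def segment_objects(depth, threshold=0.5):
    """Stand-in for Mask R-CNN: return one binary mask per detected object.

    Toy rule: treat pixels closer than `threshold` as a single foreground
    object. The real segmenter would return one mask per instance.
    """
    return [depth < threshold]

def estimate_shape(features, mask, res=32):
    """Stand-in shape estimator: object-level features -> voxel grid."""
    pooled = features[mask].mean(axis=0)  # pool mask region to a latent code
    # A real decoder would upsample the code into geometry; here we just
    # fill a slab of the grid so the output has the right type and shape.
    grid = np.zeros((res, res, res), dtype=bool)
    occupied = int(abs(pooled[0]) * 10) % res + 1
    grid[:occupied] = True
    return grid

def three_r_depth(depth):
    """End-to-end sketch: depth image -> one voxel grid per object."""
    features = extract_features(depth)
    masks = segment_objects(depth)
    return [estimate_shape(features, m) for m in masks]
```

The point of the structure is the fusion step: the segmenter's masks index into the backbone's shared feature map, so the shape estimator works on compact object-level codes instead of a whole-scene voxel grid.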
Original language: English
Title of host publication: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 3802
ISBN (Electronic): 978-1-5386-6100-0
ISBN (Print): 978-1-5386-6101-7
Publication status: Published - 17 Dec 2018

Publication series

ISSN (Print): 2160-7508
ISSN (Electronic): 2160-7516


