Learning to Generate and Reconstruct 3D Meshes with only 2D Supervision

Paul Henderson, Vittorio Ferrari

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to these existing approaches, while also supporting weaker supervision scenarios. Importantly, it can be trained purely from 2D images, without ground-truth pose annotations, and with a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach on synthetic data in various settings, showing that (i) it learns to disentangle shape from pose; (ii) using shading in the loss improves performance; (iii) our model is comparable or superior to state-of-the-art voxel-based approaches on quantitative metrics, while producing results that are visually more pleasing; (iv) it still performs well when given supervision weaker than in prior works.
Original language: English
Title of host publication: Proceedings of the 29th British Machine Vision Conference (BMVC 2018)
Number of pages: 13
Publication status: Published - 2018


