Dynamic 3D Reconstruction Improvement via Intensity Video Guided 4D Fusion

Jie Zhang, Christos Maniatis, Luis Horna, Robert Fisher

Research output: Contribution to journal › Article › peer-review


The availability of high-speed 3D video sensors has greatly facilitated 3D shape acquisition of dynamic and deformable objects, but high frame rate 3D reconstruction is always degraded by spatial noise and temporal fluctuations. This paper presents a simple yet powerful dynamic 3D reconstruction improvement algorithm based on intensity video guided multi-frame 4D fusion. Temporal tracking of intensity image points (of moving and deforming objects) allows registration of the corresponding 3D model points, whose 3D noise and fluctuations are then reduced by spatio-temporal multi-frame 4D fusion. We conducted simulated noise tests and real experiments on four 3D objects using a 1000 fps 3D video sensor. The results demonstrate that the proposed algorithm is effective at reducing 3D noise and is robust against intensity noise. It outperforms existing algorithms with good scalability on both stationary and dynamic objects.
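The core idea described in the abstract — registering 3D points across frames via intensity-image tracking and then denoising by temporal fusion — can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes hypothetical inputs: per-frame 3D point maps aligned pixel-wise with the intensity images, and point tracks already computed (e.g. by optical flow), with fusion reduced to a plain temporal average over a sliding window.

```python
import numpy as np

def fuse_tracked_points(tracks, point_maps, window=5):
    """Sketch of intensity-guided multi-frame fusion (illustrative only).

    tracks:     (T, N, 2) integer pixel coordinates (x, y) of N tracked
                intensity points over T frames (assumed precomputed).
    point_maps: (T, H, W, 3) per-frame 3D point maps (XYZ per pixel),
                assumed aligned with the intensity images.
    Returns (T, N, 3): fused 3D trajectories.
    """
    T, N, _ = tracks.shape
    fused = np.empty((T, N, 3))
    half = window // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        # Gather the 3D samples registered via the intensity tracks,
        # then average over the temporal window to suppress noise.
        samples = np.stack([
            point_maps[s, tracks[s, :, 1], tracks[s, :, 0]]
            for s in range(lo, hi)
        ])  # shape: (window_size, N, 3)
        fused[t] = samples.mean(axis=0)
    return fused
```

A real system would weight samples by tracking confidence and handle occlusions; the plain mean here only conveys why fusing registered samples reduces per-frame 3D noise.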
Original language: English
Pages (from-to): 540-547
Number of pages: 8
Journal: Journal of Visual Communication and Image Representation
Early online date: 18 Jul 2018
Publication status: Published - Aug 2018


