Depth Map Fusion with Camera Position Refinement

Radim Tyleček, Radim Šára

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We present a novel algorithm for image-based surface reconstruction from a set of calibrated images. The problem is formulated in a Bayesian framework, where estimates of depth and visibility in a set of selected cameras are iteratively improved. The core of the algorithm is the minimisation of the overall geometric L2 error between the measured 3D points and the depth estimates.
In the visibility estimation task, the algorithm aims at outlier detection and noise suppression, since both types of error are often present in the stereo output. The geometric formulation also allows simultaneous refinement of the external camera parameters, an essential step for obtaining accurate results when the calibration is not precisely known. We show that the results obtained with our method are comparable to those of other state-of-the-art techniques.
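To give a feel for the core idea, here is a minimal, simplified sketch of per-pixel depth fusion under an L2 criterion. It is not the paper's algorithm: the function name, the median-based inlier test standing in for the Bayesian visibility estimation, and the threshold value are all illustrative assumptions. The one faithful ingredient is that minimising the sum of squared residuals over the inlier measurements has a closed-form solution, their mean.

```python
import numpy as np

def fuse_depths(depth_samples, outlier_thresh=0.1):
    """Fuse depth measurements for one pixel from several stereo depth maps.

    Illustrative sketch only: a median-based test stands in for the
    paper's Bayesian visibility/outlier estimation, and the L2 minimiser
    over the surviving inliers is simply their mean.
    """
    d = np.asarray(depth_samples, dtype=float)
    med = np.median(d)                          # robust reference depth
    inliers = d[np.abs(d - med) < outlier_thresh]  # crude visibility test
    if inliers.size == 0:
        return med                              # fall back to the median
    return inliers.mean()                       # argmin of sum of squared residuals
```

For example, fusing the samples `[1.0, 1.02, 0.98, 5.0]` rejects the gross outlier `5.0` and averages the remaining three measurements. The actual method additionally re-estimates the external camera parameters inside the same L2 objective, which a per-pixel sketch like this cannot capture.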
Original language: English
Title of host publication: Computer Vision Winter Workshop 2009
Publisher: Pattern Recognition and Image Processing (PRIP) Group
Number of pages: 5
Publication status: Published - 2009


