Interactive light source position estimation for augmented reality with an RGB-D camera

Bastiaan J. Boom, Sergio Orts-Escolano, Xin X. Ning, Steven McDonagh, Peter Sandilands, Robert B. Fisher

Research output: Contribution to journal › Article › peer-review

Abstract

The first hybrid CPU-GPU method for estimating the position of a point light source in a scene recorded by an RGB-D camera is presented. The image and depth information from the Kinect is sufficient to estimate a light position in the scene, allowing synthetic objects to be rendered into the scene convincingly enough for augmented reality purposes. The method does not require a light probe or any other physical device. To make the method suitable for augmented reality, we developed a hybrid implementation that performs light estimation in under 1 second. This is fast enough for most augmented reality scenarios, because the positions of both the light source and the Kinect are typically fixed. The method estimates the angle to the light source with an average error of 20°. By rendering synthetic objects into the recorded scene, we show that this accuracy is sufficient for the rendered objects to look realistic.
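
To give a concrete sense of how intensity and depth cues can be combined for this kind of task, the sketch below shows a minimal least-squares Lambertian estimate of a dominant light direction from a single RGB-D frame. This is only an illustrative baseline, not the hybrid CPU-GPU algorithm described in the article; the function name, the default Kinect focal lengths, and the depth thresholds are hypothetical placeholders.

    # Illustrative sketch only: estimate a dominant light direction from an
    # RGB-D frame under a simple Lambertian shading model I ~ albedo * (n . l).
    # This is NOT the paper's method; it only shows the kind of cue
    # (intensity plus depth-derived normals) that such methods rely on.
    import numpy as np

    def estimate_light_direction(depth, intensity, fx=525.0, fy=525.0):
        """depth: (h, w) depth map in metres; intensity: (h, w) grayscale image.
        Returns a unit vector pointing towards the estimated light."""
        # Approximate surface normals from depth gradients (central differences).
        z = np.where(depth > 0, depth, 1.0)          # avoid division by zero
        dzdx = np.gradient(depth, axis=1) * fx / z
        dzdy = np.gradient(depth, axis=0) * fy / z
        normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)

        # Keep pixels with plausible depth to reduce noise (thresholds arbitrary).
        valid = (depth > 0.4) & (depth < 4.0)
        n = normals[valid]      # (num_pixels, 3)
        i = intensity[valid]    # (num_pixels,)

        # Solve I = n . (albedo * l) in the least-squares sense, then normalise.
        scaled_l, *_ = np.linalg.lstsq(n, i, rcond=None)
        return scaled_l / np.linalg.norm(scaled_l)

Given a registered depth map in metres and a grayscale Kinect image, the returned unit vector points towards the dominant light; recovering a full 3D light position, as the article does, additionally requires reasoning about shading over the reconstructed scene geometry.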
Original language: English
Article number: e1686
Number of pages: 16
Journal: Computer Animation and Virtual Worlds
Volume: 28
Issue number: 1
Early online date: 8 Dec 2015
DOIs
Publication status: Published - Jan 2017

Keywords

• light source estimation, augmented reality, GPU implementation, RGB-D camera
