Abstract
Image synthesis algorithms are commonly compared on the basis of running times and/or perceived quality of the generated images. In the case of Monte Carlo techniques, assessment often entails a qualitative impression of convergence toward a reference standard and severity of visible noise; these amount to subjective assessments of the mean and variance of the estimators, respectively. In this paper we argue that such assessments should be augmented by well-known statistical hypothesis testing methods. In particular, we show how to perform a number of such tests to assess random variables that commonly arise in image synthesis such as those estimating irradiance, radiance, pixel color, etc. We explore five broad categories of tests: 1) determining whether the mean is equal to a reference standard, such as an analytical value, 2) determining that the variance is bounded by a given constant, 3) comparing the means of two different random variables, 4) comparing the variances of two different random variables, and 5) verifying that two random variables stem from the same parent distribution. The level of significance of these tests can be controlled by a parameter. We demonstrate that these tests can be used for objective evaluation of Monte Carlo estimators to support claims of zero or small bias and to provide quantitative assessments of variance reduction techniques. We also show how these tests can be used to detect errors in sampling or in computing the density of an importance function in MC integrations.
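The paper itself specifies the exact test procedures and significance levels; purely as a hypothetical sketch of the five categories listed in the abstract, the snippet below applies standard off-the-shelf tests from SciPy (a one-sample t-test, a one-sided chi-square test on the variance, Welch's two-sample t-test, Levene's test, and the two-sample Kolmogorov-Smirnov test) to placeholder per-sample estimates `samples_a` and `samples_b`, an assumed analytic reference `ref_mean`, and an assumed variance bound `var_bound`. None of these names, nor the synthetic data, come from the paper.

```python
# Hypothetical sketch: applying textbook hypothesis tests to per-sample Monte
# Carlo estimates (e.g. per-pixel radiance samples). Uses SciPy routines, not
# the paper's own implementation; all data and names below are placeholders.
import numpy as np
from scipy import stats

alpha = 0.05                     # significance level (the controlling parameter)
rng = np.random.default_rng(1)

# Synthetic stand-ins for two MC estimators of the same quantity.
samples_a = rng.normal(loc=1.0, scale=0.10, size=4096)  # e.g. baseline estimator
samples_b = rng.normal(loc=1.0, scale=0.05, size=4096)  # e.g. variance-reduced estimator
ref_mean = 1.0                   # assumed analytic reference value
var_bound = 0.02                 # assumed upper bound on the variance

# 1) Is the mean equal to the analytic reference? (one-sample t-test)
_, p_mean = stats.ttest_1samp(samples_a, popmean=ref_mean)

# 2) Is the variance bounded by var_bound? (one-sided chi-square test,
#    H0: variance <= var_bound vs. H1: variance > var_bound)
n = samples_a.size
chi2_stat = (n - 1) * np.var(samples_a, ddof=1) / var_bound
p_var_bound = stats.chi2.sf(chi2_stat, df=n - 1)

# 3) Do two estimators have the same mean? (Welch's two-sample t-test)
_, p_means_equal = stats.ttest_ind(samples_a, samples_b, equal_var=False)

# 4) Do they have the same variance? (Levene's test; an F-ratio test is
#    another common choice)
_, p_vars_equal = stats.levene(samples_a, samples_b)

# 5) Do they stem from the same parent distribution? (two-sample K-S test)
_, p_same_dist = stats.ks_2samp(samples_a, samples_b)

for name, p in [("mean == reference", p_mean),
                ("variance <= bound", p_var_bound),
                ("means equal", p_means_equal),
                ("variances equal", p_vars_equal),
                ("same distribution", p_same_dist)]:
    verdict = "reject H0" if p < alpha else "fail to reject H0"
    print(f"{name:20s}  p = {p:.4f}  -> {verdict} at alpha = {alpha}")
```

In this spirit, rejecting the null hypothesis in test 1 or 3 would flag possible bias, while tests 4 and 5 give a quantitative handle on whether a variance-reduction technique actually changed the estimator's spread or distribution, with `alpha` playing the role of the significance parameter mentioned in the abstract.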
Original language | English |
---|---|
Title of host publication | 15th Pacific Conference on Computer Graphics and Applications, 2007 (PG '07) |
Publisher | Institute of Electrical and Electronics Engineers |
Pages | 106-115 |
Number of pages | 10 |
ISBN (Print) | 978-0-7695-3009-3 |
DOIs | |
Publication status | Published - 4 Dec 2007 |