Bayesian Inverse Problems with Heterogeneous Variance

Natalia Bochkina, Jenovah Rodrigues

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

We consider inverse problems in Hilbert spaces contaminated by Gaussian noise whose covariance operator is not the identity (i.e. it is not white noise), and use a Bayesian approach to find a regularised smooth solution. We consider the so-called conjugate diagonal setting, where the covariance operator of the noise and the covariance operator of the prior distribution are diagonal in the corresponding orthogonal bases of the Hilbert spaces defined by the forward operator of the inverse problem. Firstly, we derive the minimax rate of convergence in such problems with a known covariance operator of the noise, showing that in the case of heterogeneous variance the ill-posed inverse problem can become self-regularised in some cases when the eigenvalues of the variance operator decay to zero, achieving the parametric rate of convergence; as far as we are aware, this is a striking novel result that has not been observed before in nonparametric problems. Secondly, for a given prior distribution, we give a general expression for the rate of contraction of the posterior distribution when the noise covariance operator is known and the noise level is small. We also investigate when this contraction rate coincides with the optimal rate in the minimax sense, which is typically used as a benchmark for studying posterior contraction rates. As an example, we apply our results to known variance operators with polynomially decreasing or increasing eigenvalues. We also discuss when the plug-in estimator of the eigenvalues of the noise covariance operator does not affect the rate of contraction of the posterior distribution of the signal. The Empirical Bayes estimator of prior smoothness proposed in [Knapik et al. (2012)] applies partially to our setting, when the problem does not have the parametric rate of convergence. We also show that plugging in the maximum marginal likelihood estimator of the prior scaling parameter leads to the optimal posterior contraction rate, adaptively. The effect of the choice of the prior parameters on the contraction in such models is illustrated on simulated (synthetic) data with the Volterra operator.
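
As a rough illustration of the conjugate diagonal setting described in the abstract, the sketch below simulates a sequence-space model with heterogeneous noise variances and computes the coordinate-wise conjugate Gaussian posterior. The eigenvalue decays, prior scaling, noise level and truncation level are illustrative assumptions, not the parametrization used in the paper.

```python
import numpy as np

# Minimal sketch of the conjugate diagonal sequence model (illustrative
# parameter choices, not the paper's exact setup).  In the basis defined by
# the forward operator, observations are
#     Y_i = kappa_i * theta_i + eps * rho_i * xi_i,   xi_i ~ N(0, 1),
# with a diagonal Gaussian prior theta_i ~ N(0, tau^2 * lambda_i).
# Conjugacy gives a coordinate-wise Gaussian posterior.

rng = np.random.default_rng(0)

n = 500                      # number of retained coefficients (truncation level)
i = np.arange(1, n + 1)

kappa = i ** (-1.0)          # eigenvalues of the forward operator (ill-posedness)
rho = i ** (-0.5)            # heterogeneous noise standard deviations (decaying)
lam = i ** (-2.0)            # prior eigenvalues
tau = 1.0                    # prior scaling parameter
eps = 0.01                   # noise level

theta_true = i ** (-1.5) * np.sin(i)            # a smooth "true" signal
y = kappa * theta_true + eps * rho * rng.standard_normal(n)

# Conjugate posterior: theta_i | Y_i ~ N(m_i, v_i) with
#   v_i = 1 / (kappa_i^2 / (eps^2 rho_i^2) + 1 / (tau^2 lambda_i))
#   m_i = v_i * kappa_i * Y_i / (eps^2 rho_i^2)
precision = kappa**2 / (eps**2 * rho**2) + 1.0 / (tau**2 * lam)
post_var = 1.0 / precision
post_mean = post_var * kappa * y / (eps**2 * rho**2)

print("squared estimation error :", np.sum((post_mean - theta_true) ** 2))
print("total posterior variance :", np.sum(post_var))
```

Varying the decay rates of rho and lam in this sketch mimics the regimes discussed in the paper, e.g. noise variances decaying fast enough that the problem effectively self-regularises.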
Original language: English
Pages (from-to): 1116-1151
Journal: Scandinavian Journal of Statistics
Volume: 50
Issue number: 3
Early online date: 30 Nov 2022
DOIs
Publication status: Published - 30 Sept 2023
