Abstract
This paper introduces a new neural-network-based prior for real-valued functions. Each weight and bias of the neural network has an independent Gaussian prior, with the key novelty that the variances decrease with the width of the network in such a way that the resulting function is well defined in the limit of an infinitely wide network. We show that the induced posterior over functions is amenable to Monte Carlo sampling using Hilbert space Markov chain Monte Carlo (MCMC) methods. This type of MCMC is stable under mesh refinement, i.e. the acceptance probability does not degenerate as more parameters of the function's prior are introduced, even ad infinitum. We demonstrate these advantages over other function space priors, for example in Bayesian Reinforcement Learning.
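The two ingredients described above can be illustrated in a short sketch: output weights get prior standard deviations that decay with their index, so the total prior variance stays finite as the width grows, and a preconditioned Crank–Nicolson (pCN) proposal is used, whose acceptance ratio depends only on the likelihood and hence does not collapse as parameters are added. This is a minimal, hypothetical illustration, not the paper's construction: the polynomial decay rate `ALPHA`, the step size `BETA`, sampling only the output layer, and the toy regression data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative decaying-variance prior: output weight j has std j**(-ALPHA),
# so the summed prior variance converges as the width grows (assumption).
WIDTH, ALPHA, BETA = 200, 1.0, 0.25
scales = np.arange(1, WIDTH + 1) ** (-ALPHA)

# Fixed random hidden features; for simplicity only the output weights
# are treated as unknown and sampled below.
x = np.linspace(-1, 1, 50)
w_in = rng.normal(0, 1, size=WIDTH)
b = rng.normal(0, 1, size=WIDTH)
features = np.tanh(np.outer(x, w_in) + b)  # shape (50, WIDTH)

def f(w_out):
    return features @ w_out

# Synthetic data: one prior draw observed with Gaussian noise.
true_w = rng.normal(0, scales)
y = f(true_w) + rng.normal(0, 0.1, size=x.size)

def log_lik(w_out):
    r = y - f(w_out)
    return -0.5 * np.sum(r**2) / 0.1**2

# pCN proposal: theta' = sqrt(1 - beta^2) * theta + beta * xi, xi ~ prior.
# The acceptance ratio involves only the likelihood, which is why this
# sampler remains stable as more weights ("finer mesh") are introduced.
theta = rng.normal(0, scales)
accepts = 0
for _ in range(2000):
    xi = rng.normal(0, scales)
    prop = np.sqrt(1 - BETA**2) * theta + BETA * xi
    if np.log(rng.uniform()) < log_lik(prop) - log_lik(theta):
        theta, accepts = prop, accepts + 1

rate = accepts / 2000
print(f"pCN acceptance rate: {rate:.2f}")
```

Because the prior is Gaussian and the proposal is prior-reversible, no prior term appears in the acceptance ratio; that is the mechanism behind the mesh-refinement robustness claimed in the abstract.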
Original language | English |
---|---|
Pages (from-to) | 46-66 |
Number of pages | 21 |
Journal | Journal of the Royal Statistical Society: Series B (Statistical Methodology) |
Volume | 85 |
Issue number | 1 |
DOIs | |
Publication status | Published - 31 Jan 2023 |