Abstract / Description of output
Convolutional Neural Networks (CNNs) are now a well-established tool for solving computational imaging problems. Modern CNN-based algorithms obtain state-of-the-art performance in diverse image restoration problems. Furthermore, it has recently been shown that, despite being highly overparameterized, networks trained with a single corrupted image can still perform as well as fully trained networks. We introduce a formal link, via the neural tangent kernel (NTK), between such networks and well-known non-local filtering techniques such as non-local means and BM3D. The filtering function associated with a given network architecture can be obtained in closed form without the need to train the network, being fully characterized by the random initialization of the network weights. While the NTK theory accurately predicts the filter associated with networks trained using standard gradient descent, our analysis shows that it falls short of explaining the behaviour of networks trained using the popular Adam optimizer. The latter achieves a larger change of weights in the hidden layers, adapting the non-local filtering function during training. We evaluate our findings via extensive image denoising experiments.
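As a rough illustration of the closed-form filter described above, the sketch below (not the authors' released code; the architecture, layer widths and He-style initialization scale are assumptions made for the example) computes the empirical NTK of a small untrained CNN in JAX and normalizes one of its rows into a pixel-wise filter.

```python
# Minimal sketch: empirical NTK of an untrained CNN denoiser, whose rows act as
# non-local pixel filters determined solely by the random initialization.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def init_params(key, channels=(1, 8, 8, 1), ksize=3):
    """Random Gaussian conv kernels (OIHW layout) with He-style scaling (assumed)."""
    params = []
    for c_in, c_out in zip(channels[:-1], channels[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (c_out, c_in, ksize, ksize))
        params.append(w * jnp.sqrt(2.0 / (c_in * ksize * ksize)))
    return params

def cnn(params, x):
    """Plain conv-ReLU stack; x has shape (1, 1, H, W); output flattened to (H*W,)."""
    h = x
    for i, w in enumerate(params):
        h = jax.lax.conv_general_dilated(
            h, w, window_strides=(1, 1), padding='SAME',
            dimension_numbers=('NCHW', 'OIHW', 'NCHW'))
        if i < len(params) - 1:
            h = jnp.maximum(h, 0.0)  # ReLU on hidden layers only
    return h.reshape(-1)

def empirical_ntk(params, x):
    """NTK at initialization: Theta = J_theta f(x) J_theta f(x)^T, shape (H*W, H*W)."""
    flat, unravel = ravel_pytree(params)
    jac = jax.jacobian(lambda p: cnn(unravel(p), x))(flat)  # (H*W, n_params)
    return jac @ jac.T

H = W = 16                                   # keep the image tiny: Theta has (H*W)**2 entries
key_x, key_w = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(key_x, (1, 1, H, W))   # random input, as in single-image denoisers
theta = empirical_ntk(init_params(key_w), x)

# Each (normalized) row of Theta is the equivalent filter that gradient-descent
# training would apply around the corresponding pixel; no training is needed.
center = (H // 2) * W + W // 2
filter_at_center = theta[center] / theta[center].sum()
print(filter_at_center.shape)                # (256,)
```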
| Original language | English |
| --- | --- |
| Pages | 1-25 |
| Number of pages | 25 |
| Publication status | Published - 3 Jun 2020 |
Keywords
- cs.CV
- eess.IV
- eess.SP
- 68T07
Projects
- C-SENSE: Exploiting low dimensional models in sensing, computation and signal processing (1/09/16 → 31/08/22). Project: Research. Status: Finished.
Research output
- The Neural Tangent Link Between CNN Denoisers and Non-Local Filters
  Tachella, J., Tang, B. & Davies, M. E., 2 Nov 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Institute of Electrical and Electronics Engineers, p. 8618-8627.
  Research output: Chapter in Book/Report/Conference proceeding › Conference contribution. Open Access.