The role of invariance in spectral complexity-based generalization bounds

Konstantinos Pitas, Andreas Loukas, Mike Davies, Pierre Vandergheynst

Research output: Contribution to conference › Paper › peer-review

Abstract

Deep convolutional neural networks (CNNs) can fit a random labeling of the training data while still generalizing well on correctly labeled data. Describing CNN capacity through a posteriori measures of complexity has recently been proposed to resolve this apparent paradox. These complexity measures are usually validated by showing that they correlate empirically with generalization error, being larger for networks trained on random labels than on normal labels. Focusing on the case of spectral complexity, we investigate theoretically and empirically the measure's insensitivity to invariances relevant to CNNs, and show several limitations of spectral complexity that result. For a specific formulation of spectral complexity, we show that it yields the same upper-bound complexity estimates for convolutional and locally connected architectures, even though the latter lack the favorable invariance properties of the former. This is contrary to common intuition and empirical results.
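A common formulation of spectral complexity scales with the product of the per-layer spectral norms of the weight matrices. As a hedged illustration of the insensitivity discussed above (not the paper's exact bound, which includes additional correction terms), the sketch below computes this product for two weight matrices that share the same singular values but different sparsity structure, showing that the estimate cannot distinguish them:

```python
import numpy as np

def spectral_complexity(weights):
    """Product of per-layer spectral norms (largest singular values).

    This is only the leading factor of spectral complexity measures
    such as Bartlett et al.'s; the full bounds include further
    norm-based correction terms, omitted here for brevity.
    """
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

# Two toy "layers" with identical singular values but different
# structure: the product-of-spectral-norms estimate is identical,
# illustrating that the measure is blind to structural differences
# (e.g. convolutional vs locally connected weight sharing).
structured = np.diag([2.0, 1.0, 0.5])          # sparse, diagonal
rotation, _ = np.linalg.qr(np.ones((3, 3)) + np.eye(3))
dense = rotation @ structured                   # dense, same singular values

print(spectral_complexity([structured]))        # 2.0
print(spectral_complexity([dense]))             # 2.0
```

Both calls return the same value because the spectral norm depends only on the singular values, not on how the weights are arranged.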
Original language: English
Publication status: Published - 23 May 2019
Event: NeurIPS 2019 Workshop on Machine Learning with Guarantees
Duration: 9 Dec 2019 → …


Keywords

  • cs.LG
  • stat.ML

