Geometry-aware training of factorized layers in tensor Tucker format

Emanuele Zangrando, Steffen Schotthöfer, Jonas Kusch, Gianluca Ceruti, Francesco Tudisco

Research output: Contribution to conference › Paper › peer-review

Abstract

Reducing parameter redundancy in neural network architectures is crucial for achieving feasible computational and memory requirements during both training and inference. Given its simple implementation and flexibility, one promising approach is layer factorization, which reshapes weight tensors into matrix form and parameterizes them as the product of two low-rank matrices. However, this approach typically requires an initial full-model warm-up phase, prior knowledge of a feasible rank, and is sensitive to parameter initialization. In this work, we introduce a novel approach for training the factors of a Tucker decomposition of the weight tensors. The proposed training scheme is provably optimal in locally approximating the original unfactorized dynamics, independently of the initialization. Furthermore, the rank of each mode is dynamically updated during training. We provide a theoretical analysis of the algorithm, establishing convergence, approximation, and local descent guarantees. The method's performance is further illustrated through a variety of experiments, which show remarkable training compression rates and performance comparable to or better than the full baseline and alternative layer factorization strategies.
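As a rough illustration of the Tucker parameterization the abstract describes, below is a minimal numpy sketch (not the authors' implementation; the shapes, ranks, and function names are hypothetical) of how a weight tensor can be stored as a small core tensor plus one factor matrix per mode and reconstructed via mode-k products, so that only the core and the factors need to be trained:

import numpy as np

def mode_product(tensor, matrix, mode):
    # Multiply `tensor` along axis `mode` by `matrix` (shape n_k x r_k),
    # replacing that axis of length r_k with one of length n_k.
    out = np.tensordot(matrix, tensor, axes=(1, mode))
    return np.moveaxis(out, 0, mode)

def tucker_reconstruct(core, factors):
    # W = core x_1 U_1 x_2 U_2 ... x_d U_d (sequence of mode-k products).
    w = core
    for mode, u in enumerate(factors):
        w = mode_product(w, u, mode)
    return w

# Hypothetical example: a conv weight of shape (64, 32, 3, 3) represented
# with Tucker ranks (8, 8, 3, 3). Only the core and the factor matrices
# are stored and trained, never the full weight tensor.
rng = np.random.default_rng(0)
dims, ranks = (64, 32, 3, 3), (8, 8, 3, 3)
core = rng.standard_normal(ranks)
factors = [rng.standard_normal((n, r)) for n, r in zip(dims, ranks)]
weight = tucker_reconstruct(core, factors)
assert weight.shape == dims

Under these hypothetical ranks, the factorized form stores 1,362 parameters instead of the 18,432 of the full tensor. The two-matrix layer factorization mentioned above corresponds to the d = 2 case, where the core can be absorbed into one factor and the layer reduces to a product of two low-rank matrices. The paper's geometry-aware training dynamics and the per-mode rank adaptation are not shown in this sketch.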
Original language: English
Pages: 129743-129773
DOIs
Publication status: Published - 28 Feb 2025
Event: The Thirty-Eighth Annual Conference on Neural Information Processing Systems, Vancouver Convention Center, Vancouver, Canada
Duration: 10 Dec 2024 - 15 Dec 2024
Conference number: 38
https://neurips.cc/Conferences/2024

Conference

Conference: The Thirty-Eighth Annual Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS 2024
Country/Territory: Canada
City: Vancouver
Period: 10/12/24 - 15/12/24
Internet address: https://neurips.cc/Conferences/2024

