Understanding Learning Dynamics Of Language Models with SVCCA

Naomi Saphra, Adam Lopez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Research has shown that neural models implicitly encode linguistic features, but there has been no research showing how these encodings arise as the models are trained. We present the first study on the learning dynamics of neural language models, using a simple and flexible analysis method called Singular Vector Canonical Correlation Analysis (SVCCA), which enables us to compare learned representations across time and across models, without the need to evaluate directly on annotated data. We probe the evolution of syntactic, semantic, and topic representations and find that part-of-speech is learned earlier than topic; that recurrent layers become more similar to those of a tagger during training; and that embedding layers become less similar. Our results and methods could inform better learning algorithms for NLP models, possibly to incorporate linguistic information more effectively.
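
The abstract only names the analysis method; as a rough illustration of the general SVCCA procedure (SVD-based reduction of each set of activations followed by CCA, as introduced by Raghu et al., 2017), a minimal NumPy sketch follows. The function names, the 99% variance threshold, and the synthetic activations are illustrative assumptions, not the authors' implementation.

import numpy as np

def _svd_reduce(acts, var_fraction=0.99):
    # Keep the top singular directions explaining `var_fraction` of the variance.
    # `acts` has shape (n_datapoints, n_neurons); the threshold is an assumption.
    acts = acts - acts.mean(axis=0, keepdims=True)
    u, s, _ = np.linalg.svd(acts, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(cumulative, var_fraction)) + 1
    return u[:, :k] * s[:k]  # reduced activations: (n_datapoints, k)

def svcca_similarity(acts_a, acts_b, var_fraction=0.99):
    # Mean canonical correlation between two activation matrices recorded on the
    # same datapoints, e.g. one layer at two training checkpoints.
    a = _svd_reduce(acts_a, var_fraction)
    b = _svd_reduce(acts_b, var_fraction)
    # CCA via QR: canonical correlations are the singular values of Qa^T Qb.
    qa, _ = np.linalg.qr(a)
    qb, _ = np.linalg.qr(b)
    corrs = np.linalg.svd(qa.T @ qb, compute_uv=False)
    return float(np.mean(np.clip(corrs, 0.0, 1.0)))

# Toy usage with random stand-ins for real activations (illustrative only).
rng = np.random.default_rng(0)
acts_t1 = rng.normal(size=(1000, 256))
acts_t2 = 0.5 * acts_t1 + rng.normal(size=(1000, 256))
print(svcca_similarity(acts_t1, acts_t2))
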
Original language: English
Title of host publication: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics
Place of publication: Minneapolis, Minnesota
Publisher: Association for Computational Linguistics
Pages: 3257–3267
Number of pages: 12
Volume: 1
DOIs
Publication status: Published - 7 Jun 2019
Event: 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics - Minneapolis, United States
Duration: 2 Jun 2019 – 7 Jun 2019
https://naacl2019.org/

Conference

Conference: 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Abbreviated title: NAACL-HLT 2019
Country: United States
City: Minneapolis
Period: 2/06/19 – 7/06/19
Internet address: https://naacl2019.org/
