Abstract
Classifiers in natural language processing (NLP) often have a large number of output classes. For example, neural language models (LMs) and machine translation (MT) models both predict tokens from a vocabulary of thousands. The Softmax output layer of these models typically receives as input a dense feature representation, which has much lower dimensionality than the output. In theory, this low-rank bottleneck means that some words can never be predicted via argmax, irrespective of the input features, and there is empirical evidence that this happens in small language models (Demeter et al., 2020). In this paper we ask whether it can also happen in practical large language models and translation models. To answer this question, we develop algorithms to detect such unargmaxable tokens in public models. We find that 13 out of 150 models do indeed have such tokens; however, they are very infrequent and unlikely to impact model quality. We release our algorithms and code to the public.
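As a rough illustration of the idea behind unargmaxability (a sketch, not the authors' released detection code), the snippet below tests a single class with a linear-programming feasibility check: a token is unargmaxable when no bounded feature vector makes its logit strictly the largest. The function and parameter names (`is_unargmaxable`, `margin`, `bound`) are ours, chosen for this example, and are not the paper's API.

```python
import numpy as np
from scipy.optimize import linprog

def is_unargmaxable(W, b, class_idx, margin=1e-6, bound=1e3):
    """Test whether a class can ever win the argmax of logits W @ x + b.

    Class c is argmaxable iff some feature vector x satisfies
    (w_c - w_j) . x + (b_c - b_j) >= margin for every other class j.
    We check feasibility of that linear system with an LP (zero objective),
    restricting x to a large box so the solver always terminates.
    """
    n_classes, dim = W.shape
    others = [j for j in range(n_classes) if j != class_idx]
    # linprog expects A_ub @ x <= b_ub, so negate the >= constraints.
    A_ub = -(W[class_idx] - W[others])           # shape (n_classes - 1, dim)
    b_ub = (b[class_idx] - b[others]) - margin   # shape (n_classes - 1,)
    res = linprog(c=np.zeros(dim), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-bound, bound)] * dim, method="highs")
    return not res.success  # infeasible system => class is unargmaxable

# Toy softmax with a 2-d feature space and 3 classes: the third class's
# weight vector lies inside the convex hull of the other two (with equal
# biases), so its logit can never strictly exceed both of theirs.
W = np.array([[1.0, 0.0],
              [-1.0, 0.0],
              [0.0, 0.0]])
b = np.zeros(3)
print([is_unargmaxable(W, b, c) for c in range(3)])  # [False, False, True]
```

Note that the box constraint on the feature vector is a simplifying assumption to keep the LP bounded; it is a coarse stand-in for the exact algorithms the paper develops.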
Original language | English |
---|---|
Title of host publication | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |
Editors | Smaranda Muresan, Preslav Nakov, Aline Villavicencio |
Place of Publication | Dublin, Ireland |
Publisher | Association for Computational Linguistics |
Pages | 6738-6758 |
Number of pages | 21 |
DOIs | |
Publication status | Published - 1 May 2022 |
Event | 60th Annual Meeting of the Association for Computational Linguistics, The Convention Centre Dublin, Dublin, Ireland. Duration: 22 May 2022 → 27 May 2022. https://www.2022.aclweb.org |
Conference
Conference | 60th Annual Meeting of the Association for Computational Linguistics |
---|---|
Abbreviated title | ACL 2022 |
Country/Territory | Ireland |
City | Dublin |
Period | 22/05/22 → 27/05/22 |
Internet address | https://www.2022.aclweb.org |