Abstract
In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf single-vector and multi-sense vector models on a benchmark phrase similarity task and a novel task for word-sense discrimination. We find that single-sense vector models perform as well as or better than multi-sense vector models despite arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition are able to recover sense-specific information from a single-sense vector model remarkably well.
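To make the composition-by-addition idea concrete, the sketch below shows how pointwise addition of a polysemous word's vector and a context word's vector can pull the composed representation toward the contextually appropriate sense. The vectors and vocabulary here are hypothetical toy values, not the authors' models or evaluation code; in practice the embeddings would come from a pre-trained single-sense model such as word2vec or GloVe.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two 1-d vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-d embeddings (hypothetical). "bank" blends its water and
# finance senses, as a single-sense vector model would.
vecs = {
    "bank":    np.array([0.9, 0.8, 0.1, 0.1]),
    "river":   np.array([0.1, 0.9, 0.0, 0.1]),
    "money":   np.array([0.9, 0.0, 0.2, 0.0]),
    "shore":   np.array([0.0, 1.0, 0.1, 0.0]),  # paraphrase of the water sense
    "deposit": np.array([1.0, 0.1, 0.3, 0.0]),  # paraphrase of the finance sense
}

# Pointwise addition as the composition function.
river_bank = vecs["bank"] + vecs["river"]
money_bank = vecs["bank"] + vecs["money"]

# The composed vectors separate the senses: each is now closer to the
# paraphrase of the sense its context selects.
print(cosine(river_bank, vecs["shore"]))    # high: water sense recovered
print(cosine(money_bank, vecs["deposit"]))  # high: finance sense recovered
```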
Original language | English |
---|---|
Title of host publication | Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications |
Publisher | Association for Computational Linguistics |
Pages | 79-90 |
Number of pages | 12 |
DOIs | |
Publication status | Published - 4 Apr 2017 |
Event | SENSE: The first workshop on Sense, Concept and Entity Representations and their Applications, Valencia, Spain. Duration: 4 Apr 2017 → … https://sites.google.com/site/senseworkshop2017/home |
Workshop
Workshop | SENSE: The first workshop on Sense, Concept and Entity Representations and their Applications |
---|---|
Abbreviated title | SENSE 2017 |
Country/Territory | Spain |
City | Valencia |
Period | 4/04/17 → … |
Internet address | https://sites.google.com/site/senseworkshop2017/home |