Abstract
Efficient word representations play an important role in solving various problems related to Natural Language Processing (NLP), data mining, and text mining. Data sparsity poses a great challenge to building an efficient word representation model for the underlying problem, and the challenge is intensified in resource-poor scenarios, where a sufficiently large corpus is absent. In this work we propose to minimize the effect of data sparsity by leveraging bilingual word embeddings learned through a parallel corpus. We train and evaluate a Long Short-Term Memory (LSTM) based architecture for aspect-level sentiment classification. The neural network architecture is further assisted by hand-crafted features for the prediction. We show the efficacy of the proposed model against state-of-the-art methods in two experimental setups, i.e. multi-lingual and cross-lingual.
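The abstract describes an LSTM encoder whose output is combined with hand-crafted features before classification. A minimal NumPy sketch of that idea is below; it is not the authors' code, and all dimensions, weight shapes, and the toy inputs are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): an LSTM encoder over
# word embeddings whose final hidden state is concatenated with a
# hand-crafted feature vector before a softmax sentiment classifier.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(xs, W, b, hidden):
    """Run a single-layer LSTM over a sequence; return the final hidden state."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = W @ np.concatenate([h, x]) + b            # all four gates at once
        i, f, o, g = np.split(z, 4)                   # input/forget/output/candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # cell-state update
        h = sigmoid(o) * np.tanh(c)                   # hidden-state update
    return h

# Assumed sizes: embedding dim, hidden dim, #hand-crafted features, #classes.
emb_dim, hidden, n_feats, n_classes = 8, 16, 5, 3
W = rng.normal(scale=0.1, size=(4 * hidden, hidden + emb_dim))
b = np.zeros(4 * hidden)
W_out = rng.normal(scale=0.1, size=(n_classes, hidden + n_feats))

# Toy input: a 6-token sentence of (bilingual) embeddings plus features.
sentence = rng.normal(size=(6, emb_dim))
handcrafted = rng.normal(size=n_feats)

h_final = lstm_encode(sentence, W, b, hidden)
logits = W_out @ np.concatenate([h_final, handcrafted])  # features assist the net
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                      # softmax over sentiments
```

In the paper's setup the embeddings would be the bilingual embeddings learned from the parallel corpus; here they are random placeholders.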
Original language | English |
---|---|
Title of host publication | Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers) |
Place of Publication | New Orleans, Louisiana |
Publisher | Association for Computational Linguistics |
Pages | 572-582 |
Number of pages | 11 |
ISBN (Electronic) | 978-1-948087-27-8 |
DOIs | |
Publication status | Published - 1 Jun 2018 |
Event | 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Hyatt Regency New Orleans Hotel, New Orleans, United States. Duration: 1 Jun 2018 → 6 Jun 2018. http://naacl2018.org/ |
Conference
Conference | 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies |
---|---|
Abbreviated title | NAACL HLT 2018 |
Country/Territory | United States |
City | New Orleans |
Period | 1/06/18 → 6/06/18 |
Internet address | http://naacl2018.org/ |