Abstract / Description of output
The use of pre-trained language models such as BERT and ULMFiT has become increasingly popular in shared tasks, due to their powerful language modelling capabilities. Our entry to SemEval uses ERNIE 2.0, a language model pre-trained on a large number of tasks to enrich the semantic and syntactic information it learns. ERNIE's knowledge masking pre-training task is a unique method for learning about named entities, and we hypothesise that it may be useful for a dataset built from news headlines, which contains many named entities. We optimise the hyperparameters of a regression model and a classification model and find that the hyperparameters we selected yield larger gains for the classification model than for the regression model.
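The abstract describes fine-tuning ERNIE 2.0 with both a regression head and a classification head. The following is a minimal sketch of that setup, not the authors' original code, assuming the community checkpoint `nghuyong/ernie-2.0-base-en` is available on the HuggingFace hub and using the `transformers` sequence-classification API; the example headline is hypothetical.

```python
# Minimal sketch (not the authors' original implementation): fine-tuning-style
# use of an ERNIE 2.0 checkpoint with a regression head and a classification
# head, as described in the abstract. Checkpoint name is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "nghuyong/ernie-2.0-base-en"  # assumed ERNIE 2.0 checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Regression head (num_labels=1): predicts a real-valued score per headline.
regressor = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=1, problem_type="regression"
)

# Classification head: predicts a discrete label per headline.
classifier = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2
)

# Toy input: a single (hypothetical) news headline.
batch = tokenizer(
    ["Scientists discover new species of deep-sea fish"],
    padding=True, truncation=True, return_tensors="pt",
)

with torch.no_grad():
    score = regressor(**batch).logits.squeeze(-1)   # real-valued prediction
    label = classifier(**batch).logits.argmax(-1)   # predicted class index

print(score, label)

# Hyperparameters such as learning rate, batch size and number of epochs
# (the quantities tuned in the paper) would be supplied to the training
# loop, e.g. via transformers.TrainingArguments.
```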
Original language | English |
---|---|
Title of host publication | Proceedings of the Fourteenth Workshop on Semantic Evaluation |
Publisher | Association for Computational Linguistics |
Pages | 1049–1054 |
Number of pages | 6 |
ISBN (Print) | 978-1-952148-31-6 |
Publication status | Published - 12 Dec 2020 |
Event | International Workshop on Semantic Evaluation 2020, Barcelona, Spain, 12 Dec 2020 → 13 Dec 2020 (http://alt.qcri.org/semeval2020/#) |
Workshop
Workshop | International Workshop on Semantic Evaluation 2020 |
---|---|
Abbreviated title | SemEval 2020 |
Country/Territory | Spain |
City | Barcelona |
Period | 12/12/20 → 13/12/20 |
Internet address | http://alt.qcri.org/semeval2020/# |