Smash at SemEval-2020 Task 7: Optimizing the Hyperparameters of ERNIE 2.0 for Humor Ranking and Rating

Julie-Anne Meaney, Steve R. Wilson, Walid Magdy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The use of pre-trained language models such as BERT and ULMFiT has become increasingly popular in shared tasks, due to their powerful language modelling capabilities. Our entry to SemEval-2020 Task 7 uses ERNIE 2.0, a language model which is pre-trained on a large number of tasks to enrich the semantic and syntactic information learned. ERNIE's knowledge masking pre-training task is a unique method for learning about named entities, and we hypothesise that it may be useful on a dataset which is built on news headlines and which contains many named entities. We optimize the hyperparameters of a regression model and a classification model, and find that our selected hyperparameters yield larger gains for the classification model than for the regression model.
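To make the setup concrete, the sketch below fine-tunes an ERNIE 2.0 checkpoint for the rating (regression) subtask. It is a minimal illustration only: the Hugging Face Transformers API, the community checkpoint name `nghuyong/ernie-2.0-base-en`, and all hyperparameter values are assumptions for this sketch, not the implementation or tuned settings reported in the paper.

from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Community Hugging Face port of Baidu's ERNIE 2.0 (assumption: the paper
# does not specify this checkpoint or framework).
MODEL_NAME = "nghuyong/ernie-2.0-base-en"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# num_labels=1 with problem_type="regression" attaches a single-output
# head trained with MSE loss, suited to a real-valued humor rating.
# For the funnier-headline classification subtask one would instead use
# num_labels=2 with the default classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=1, problem_type="regression"
)

# Hyperparameters of the kind one would search over when tuning; these
# values are placeholders, not the settings selected in the paper.
args = TrainingArguments(
    output_dir="ernie2-humor-rating",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
)

# Supply tokenized train/dev datasets before calling trainer.train().
trainer = Trainer(model=model, args=args)

Varying learning rate, batch size, epoch count, and weight decay in a grid of this kind is the usual way such a search is run; the abstract's finding is that the tuned values helped the classification model more than the regression model.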
Original language: English
Title of host publication: Proceedings of the Fourteenth Workshop on Semantic Evaluation
Publisher: Association for Computational Linguistics
Pages: 1049–1054
Number of pages: 6
ISBN (Print): 978-1-952148-31-6
Publication status: Published - 12 Dec 2020
Event: International Workshop on Semantic Evaluation 2020 - Barcelona, Spain
Duration: 12 Dec 2020 – 13 Dec 2020
http://alt.qcri.org/semeval2020/#

Workshop

Workshop: International Workshop on Semantic Evaluation 2020
Abbreviated title: SemEval 2020
Country/Territory: Spain
City: Barcelona
Period: 12/12/20 – 13/12/20
Internet address: http://alt.qcri.org/semeval2020/#
