SemEval-2016 Task 3: Community Question Answering

Preslav Nakov, Lluís Màrquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, Abed Alhakim Freihat, Jim Glass, Bilal Randeree

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


This paper describes SemEval-2016 Task 3 on Community Question Answering, which we offered in English and Arabic. For English, we had three subtasks: Question-Comment Similarity (subtask A), Question-Question Similarity (subtask B), and Question-External Comment Similarity (subtask C). For Arabic, we had a single subtask: reranking the correct answers for a new question (subtask D). Eighteen teams participated in the task, submitting a total of 95 runs (38 primary and 57 contrastive) across the four subtasks. The participating systems used a variety of approaches and features to address the different subtasks, which we summarize in this paper. The best systems achieved an official score (MAP) of 79.19, 76.70, 55.41, and 45.83 in subtasks A, B, C, and D, respectively. These scores are significantly better than those of the baselines we provided. For subtask A, the best system improved over the 2015 winner by 3 points absolute in terms of Accuracy.
Original language: English
Title of host publication: Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 21
ISBN (Print): 978-1-941643-95-2
Publication status: Published - Jun 2016
Event: 10th International Workshop on Semantic Evaluation - San Diego, United States
Duration: 16 Jun 2016 - 17 Jun 2016


Conference: 10th International Workshop on Semantic Evaluation
Abbreviated title: SemEval 2016
Country/Territory: United States
City: San Diego
