TY - JOUR
T1 - Multi-label classification for biomedical literature: an overview of the BioCreative VII LitCovid Track for COVID-19 literature topic annotations
AU - Chen, Qingyu
AU - Allot, Alexis
AU - Leaman, Robert
AU - Islamaj, Rezarta
AU - Du, Jingcheng
AU - Fang, Li
AU - Wang, Kai
AU - Xu, Shuo
AU - Zhang, Yuefu
AU - Bagherzadeh, Parsa
AU - Bergler, Sabine
AU - Bhatnagar, Aakash
AU - Bhavsar, Nidhir
AU - Chang, Yung-Chun
AU - Lin, Sheng-Jie
AU - Tang, Wentai
AU - Zhang, Hongtong
AU - Tavchioski, Ilija
AU - Pollak, Senja
AU - Tian, Shubo
AU - Zhang, Jinfeng
AU - Otmakhova, Yulia
AU - Yepes, Antonio Jimeno
AU - Dong, Hang
AU - Wu, Honghan
AU - Dufour, Richard
AU - Labrak, Yanis
AU - Chatterjee, Niladri
AU - Tandon, Kushagri
AU - Laleye, Fréjus A A
AU - Rakotoson, Loïc
AU - Chersoni, Emmanuele
AU - Gu, Jinghang
AU - Friedrich, Annemarie
AU - Pujari, Subhash Chandra
AU - Chizhikova, Mariia
AU - Sivadasan, Naveen
AU - Vg, Saipradeep
AU - Lu, Zhiyong
N1 - Published by Oxford University Press 2022. This work is written by (a) US Government employee(s) and is in the public domain in the US.
PY - 2022/8/31
Y1 - 2022/8/31
N2 - The coronavirus disease 2019 (COVID-19) pandemic has severely impacted global society since December 2019. Related findings, such as vaccine and drug development, have been reported in the biomedical literature at a rate of about 10 000 articles on COVID-19 per month. Such rapid growth significantly challenges manual curation and interpretation. For instance, LitCovid is a literature database of COVID-19-related articles in PubMed that has accumulated more than 200 000 articles and receives millions of accesses each month from users worldwide. One primary curation task is to assign up to eight topics (e.g. Diagnosis and Treatment) to the articles in LitCovid. The annotated topics have been widely used for navigating the COVID-19 literature, rapidly locating articles of interest and supporting other downstream studies. However, annotating the topics has been the bottleneck of manual curation. Despite continuing advances in biomedical text-mining methods, few have been dedicated to topic annotation in COVID-19 literature. To close the gap, we organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature. The BioCreative LitCovid dataset, consisting of over 30 000 articles with manually reviewed topics, was created for training and testing. It is one of the largest multi-label classification datasets in biomedical scientific literature. Nineteen teams worldwide participated and made 80 submissions in total. Most teams used hybrid systems based on transformers. The highest-performing submissions achieved 0.8875, 0.9181 and 0.9394 for macro-F1-score, micro-F1-score and instance-based F1-score, respectively. Notably, these scores are substantially higher (e.g. 12% higher for macro-F1-score) than the corresponding scores of the state-of-the-art multi-label classification method. The level of participation and the results demonstrate a successful track and help close the gap between dataset curation and method development. The dataset is publicly available via https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/ for benchmarking and further development. Database URL https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/.
AB - The coronavirus disease 2019 (COVID-19) pandemic has severely impacted global society since December 2019. Related findings, such as vaccine and drug development, have been reported in the biomedical literature at a rate of about 10 000 articles on COVID-19 per month. Such rapid growth significantly challenges manual curation and interpretation. For instance, LitCovid is a literature database of COVID-19-related articles in PubMed that has accumulated more than 200 000 articles and receives millions of accesses each month from users worldwide. One primary curation task is to assign up to eight topics (e.g. Diagnosis and Treatment) to the articles in LitCovid. The annotated topics have been widely used for navigating the COVID-19 literature, rapidly locating articles of interest and supporting other downstream studies. However, annotating the topics has been the bottleneck of manual curation. Despite continuing advances in biomedical text-mining methods, few have been dedicated to topic annotation in COVID-19 literature. To close the gap, we organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature. The BioCreative LitCovid dataset, consisting of over 30 000 articles with manually reviewed topics, was created for training and testing. It is one of the largest multi-label classification datasets in biomedical scientific literature. Nineteen teams worldwide participated and made 80 submissions in total. Most teams used hybrid systems based on transformers. The highest-performing submissions achieved 0.8875, 0.9181 and 0.9394 for macro-F1-score, micro-F1-score and instance-based F1-score, respectively. Notably, these scores are substantially higher (e.g. 12% higher for macro-F1-score) than the corresponding scores of the state-of-the-art multi-label classification method. The level of participation and the results demonstrate a successful track and help close the gap between dataset curation and method development. The dataset is publicly available via https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/ for benchmarking and further development. Database URL https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/.
KW - COVID-19/epidemiology
KW - Data Mining/methods
KW - Databases, Factual
KW - Humans
KW - PubMed
KW - Publications
U2 - 10.1093/database/baac069
DO - 10.1093/database/baac069
M3 - Article
C2 - 36043400
SN - 1758-0463
VL - 2022
JO - Database: the journal of biological databases and curation
JF - Database: the journal of biological databases and curation
ER -