Abstract
Reading comprehension tests are receiving increased attention within the NLP community as a controlled test-bed for developing, evaluating and comparing robust natural language question answering (NLQA) methods. To support this, we have enriched the MITRE CBC4Kids corpus with multiple XML annotation layers recording the output of various tokenizers, lemmatizers, a stemmer, a semantic tagger, POS taggers and syntactic parsers. Using this resource, we have built a baseline NLQA system for word-overlap-based answer retrieval.
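The word-overlap baseline can be read as ranking each candidate sentence in the story by the number of content words it shares with the question and returning the top-ranked sentence as the answer. Below is a minimal sketch of that idea in Python, assuming a simple regex tokenizer, an illustrative stop-word list and a made-up story; the names `content_words` and `best_sentence` and the example data are hypothetical and are not the system or annotation layers described in the paper.

```python
# Minimal sketch of word-overlap answer retrieval: score each candidate
# sentence by how many content words it shares with the question, then
# return the highest-scoring sentence. Tokenization, stop-word list and
# the example story are illustrative assumptions, not the paper's
# actual pipeline or the CBC4Kids data.
import re

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "to", "is", "was", "what", "who", "did"}


def content_words(text: str) -> set[str]:
    """Lowercase, tokenize on alphabetic runs, and drop stop words."""
    return {t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOP_WORDS}


def best_sentence(question: str, sentences: list[str]) -> str:
    """Return the candidate sentence with the largest word overlap with the question."""
    q = content_words(question)
    return max(sentences, key=lambda s: len(q & content_words(s)))


if __name__ == "__main__":
    story = [
        "The fire started in the kitchen late at night.",
        "Firefighters arrived within ten minutes.",
        "Nobody was hurt in the blaze.",
    ]
    print(best_sentence("Where did the fire start?", story))
    # -> "The fire started in the kitchen late at night."
```

A fuller version of such a baseline would plausibly draw on the corpus annotation layers (lemmas, stems, POS tags) to count overlaps on normalized forms rather than raw surface tokens.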
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 3rd International Workshop on Linguistically Interpreted Corpora |
| Pages | 39-46 |
| Number of pages | 8 |
| DOIs | |
| Publication status | Published - 2003 |