PRES: A Score Metric for Evaluating Recall-oriented Information Retrieval Applications

Walid Magdy, Gareth J.F. Jones

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Information retrieval (IR) evaluation scores are generally designed to measure the effectiveness with which relevant documents are identified and retrieved. Many scores have been proposed for this purpose over the years. These have primarily focused on aspects of precision and recall, and while the two are often discussed as equally important, in practice most attention has been given to precision-focused metrics. Even for recall-oriented IR tasks of growing importance, such as patent retrieval, these precision-based scores remain the primary evaluation measures. Our study examines different evaluation measures for a recall-oriented patent retrieval task and demonstrates the limitations of the current scores in comparing different IR systems for this task. We introduce PRES, a novel evaluation metric for this type of application that takes account of recall and the user's search effort. The behaviour of PRES is demonstrated on 48 runs from the CLEF-IP 2009 patent retrieval track. A full analysis of the performance of PRES shows its suitability for measuring the retrieval effectiveness of systems from a recall-focused perspective, taking into account the user's expected search effort.
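The abstract does not reproduce the metric itself, but PRES is commonly stated as PRES = 1 - (mean rank of relevant documents - (n+1)/2) / N_max, where n is the number of relevant documents, N_max is the maximum number of results the user is willing to check, and relevant documents not retrieved within N_max are assigned worst-case ranks just beyond it. The sketch below is a reconstruction under that assumption, not code from the paper:

```python
def pres(relevant_ranks, n_relevant, n_max):
    """Sketch of PRES, assuming the formulation
    PRES = 1 - (mean(r_i) - (n+1)/2) / N_max.

    relevant_ranks: 1-based ranks of relevant documents found in the result list.
    n_relevant:     total number of relevant documents for the query.
    n_max:          maximum number of results the user will examine.
    """
    # Keep only relevant documents retrieved within the user's cut-off.
    found = sorted(r for r in relevant_ranks if r <= n_max)
    n_found = len(found)
    # Assumed worst-case treatment: the j-th missing relevant document is
    # placed at rank n_max + n_found + j, so a run retrieving nothing
    # within n_max scores exactly 0.
    missing = [n_max + n_found + j for j in range(1, n_relevant - n_found + 1)]
    avg_rank = sum(found + missing) / n_relevant
    return 1.0 - (avg_rank - (n_relevant + 1) / 2.0) / n_max

# Perfect retrieval (all relevant documents at the top ranks) gives 1.0;
# retrieving no relevant document within n_max gives 0.0.
print(pres([1, 2, 3], n_relevant=3, n_max=100))  # 1.0
print(pres([], n_relevant=3, n_max=100))         # 0.0
```

Under this formulation the score is bounded above by recall: with one of two relevant documents found at the best possible rank and N_max = 10, `pres([1], 2, 10)` evaluates to 0.5.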
Original language: English
Title of host publication: Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval
Place of publication: New York, NY, USA
Number of pages: 8
ISBN (Print): 978-1-4503-0153-4
Publication status: Published - 2010

Publication series
Name: SIGIR '10


