Fast query expansion using approximations of relevance models

Marc-Allen Cartright, James Allan, Victor Lavrenko, Andrew McGregor

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Pseudo-relevance feedback (PRF) improves search quality by expanding the query using terms from high-ranking documents from an initial retrieval. Although PRF can often result in large gains in effectiveness, running two queries is time-consuming, limiting its applicability. We describe a PRF method that uses corpus pre-processing to achieve query-time speeds that are near those of the original queries. Specifically, Relevance Modeling, a language-modeling-based PRF method, can be recast to benefit substantially from finding pairwise document relationships in advance. Using the resulting Fast Relevance Model (fastRM), we substantially reduce the online retrieval time and still benefit from expansion. We further explore methods for reducing the preprocessing time and storage requirements of the approach, allowing us to achieve up to a 10% increase in MAP over unexpanded retrieval, while only requiring 1% of the time of standard expansion.
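The abstract does not spell out the relevance-model computation, but the standard RM1 estimate it builds on weights each term by how well the documents containing it match the query: P(w|R) ∝ Σ_D P(w|D) P(Q|D). The sketch below is an illustrative toy implementation of that baseline two-pass expansion (the cost fastRM avoids), not the paper's fastRM method; the corpus, the smoothing constant, and all function names are assumptions for illustration.

```python
from collections import Counter

# Toy "top-ranked" documents; in practice these come from an initial retrieval.
docs = {
    "d1": "query expansion improves retrieval effectiveness".split(),
    "d2": "relevance models expand the query with feedback terms".split(),
    "d3": "fast approximate query expansion using precomputed models".split(),
}

def doc_lm(tokens):
    """Maximum-likelihood unigram language model P(w|D)."""
    counts = Counter(tokens)
    total = len(tokens)
    return {w: c / total for w, c in counts.items()}

def query_likelihood(query, lm, mix=0.1):
    """P(Q|D) under the document model, with a crude uniform
    smoothing mass (mix * 1e-3) so unseen query terms do not
    zero out the whole product."""
    score = 1.0
    for q in query:
        score *= lm.get(q, 0.0) * (1 - mix) + mix * 1e-3
    return score

def rm1(query, docs, k=10):
    """RM1 expansion: P(w|R) proportional to sum over documents D of
    P(w|D) * P(Q|D), normalized over the feedback set."""
    lms = {d: doc_lm(toks) for d, toks in docs.items()}
    weights = {d: query_likelihood(query, lm) for d, lm in lms.items()}
    z = sum(weights.values())
    expanded = Counter()
    for d, lm in lms.items():
        for w, p in lm.items():
            expanded[w] += p * weights[d] / z
    # Keep the k highest-probability expansion terms.
    return dict(expanded.most_common(k))

model = rm1(["query", "expansion"], docs)
```

In the standard pipeline this expanded distribution is interpolated with the original query and the retrieval is run a second time; fastRM's contribution is precomputing the pairwise document relationships so that this second pass is largely avoided at query time.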
Original language: English
Title of host publication: Proceedings of the 19th ACM international conference on Information and knowledge management (CIKM '10)
Place of publication: New York, NY, USA
Number of pages: 4
ISBN (Print): 978-1-4503-0099-5
Publication status: Published - 2010

Keywords / Materials (for Non-textual outputs)

  • distributed computing
  • pseudo-relevance feedback
  • relevance model


