Evaluating Automatic Summaries of Meeting Recordings

Gabriel Murray, Steve Renals, Jean Carletta, Johanna Moore

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The research below explores schemes for evaluating automatic summaries of business meetings, using the ICSI Meeting Corpus (Janin et al., 2003). Both automatic and subjective evaluations were carried out, with a central interest being whether the two types of evaluation correlate with each other. The evaluation metrics were used to compare and contrast differing approaches to automatic summarization, to measure the deterioration of summary quality on ASR output versus manual transcripts, and to determine whether manual extracts are rated significantly higher than automatic extracts.
Original language: English
Title of host publication: Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization
Subtitle of host publication: MTSE
Publisher: Association for Computational Linguistics
Pages: 33-40
Number of pages: 8
Publication status: Published - 2005
Event: Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (ACL-05 Workshop) - University of Michigan, Ann Arbor, United States
Duration: 29 Jun 2005 - 29 Jun 2005

Workshop

Workshop: Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (ACL-05 Workshop)
Country/Territory: United States
City: Ann Arbor
Period: 29/06/05 - 29/06/05
