Edinburgh Research Explorer

Evaluating Automatic Summaries of Meeting Recordings

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Original language: English
Title of host publication: Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization
Subtitle of host publication: MTSE
Publisher: Association for Computational Linguistics
Pages: 33-40
Number of pages: 8
Publication status: Published - 2005
Event: Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (ACL-05 Workshop) - University of Michigan, Ann Arbor, United States
Duration: 29 Jun 2005 - 29 Jun 2005

Workshop

Workshop: Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (ACL-05 Workshop)
Country: United States
City: Ann Arbor
Period: 29/06/05 - 29/06/05

Abstract

This research explores schemes for evaluating automatic summaries of business meetings, using the ICSI Meeting Corpus (Janin et al., 2003). Both automatic and subjective evaluations were carried out, with a central interest being whether the two types of evaluation correlate with each other. The evaluation metrics were used to compare and contrast differing approaches to automatic summarization, to measure the deterioration of summary quality on ASR output versus manual transcripts, and to determine whether manual extracts are rated significantly higher than automatic extracts.
