Abstract
This research explores schemes for evaluating automatic summaries of business meetings, using the ICSI Meeting Corpus (Janin et al., 2003). Both automatic and subjective evaluations were carried out, with a central interest being whether the two types of evaluation correlate with each other. The evaluation metrics were used to compare and contrast differing approaches to automatic summarization, to measure the deterioration of summary quality on ASR output versus manual transcripts, and to determine whether manual extracts are rated significantly higher than automatic extracts.
Original language | English |
---|---|
Title of host publication | Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization |
Subtitle of host publication | MTSE |
Publisher | Association for Computational Linguistics |
Pages | 33-40 |
Number of pages | 8 |
Publication status | Published - 2005 |
Event | Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (ACL-05 Workshop) - University of Michigan, Ann Arbor, United States |
Duration | 29 Jun 2005 → 29 Jun 2005 |