Title Generation with Quasi-Synchronous Grammar

Kristian Woodsend, Yansong Feng, Mirella Lapata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The task of selecting information and rendering it appropriately appears in multiple contexts in summarization. In this paper we present a model that simultaneously optimizes selection and rendering preferences. The model operates over a phrase-based representation of the source document which we obtain by merging PCFG parse trees and dependency graphs. Selection preferences for individual phrases are learned discriminatively, while a quasi-synchronous grammar (Smith and Eisner, 2006) captures rendering preferences such as paraphrases and compressions. Based on an integer linear programming formulation, the model learns to generate summaries that satisfy both types of preferences, while ensuring that length, topic coverage and grammar constraints are met. Experiments on headline and image caption generation show that our method obtains state-of-the-art performance using essentially the same model for both tasks without any major modifications.
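To make the integer linear programming formulation described above more concrete, the sketch below shows a minimal phrase-selection ILP with a length budget and a dependency-based grammaticality constraint. The phrase scores, lengths, and parent links are hypothetical illustrations, not data or the exact formulation from the paper, and the PuLP library is an assumed choice of solver interface.

```python
# Minimal sketch of an ILP for phrase selection under a length budget,
# in the spirit of the model described in the abstract.
# Hypothetical example data; requires: pip install pulp
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

# Hypothetical phrases: (id, salience score, length in words, parent id or None)
phrases = [
    ("p0", 0.9, 3, None),   # root phrase
    ("p1", 0.6, 4, "p0"),
    ("p2", 0.3, 5, "p0"),
    ("p3", 0.7, 2, "p1"),
]
LENGTH_BUDGET = 10  # maximum output length in words

prob = LpProblem("title_generation_sketch", LpMaximize)
x = {pid: LpVariable(pid, cat=LpBinary) for pid, _, _, _ in phrases}

# Objective: maximise the total salience of the selected phrases.
prob += lpSum(score * x[pid] for pid, score, _, _ in phrases)

# Length constraint: the selected phrases must fit the budget.
prob += lpSum(length * x[pid] for pid, _, length, _ in phrases) <= LENGTH_BUDGET

# Grammaticality constraint: a phrase may only be selected if its parent
# in the dependency structure is also selected.
for pid, _, _, parent in phrases:
    if parent is not None:
        prob += x[pid] <= x[parent]

prob.solve()
print([pid for pid in x if x[pid].value() == 1])
```

In the paper's full model, the objective additionally reflects discriminatively learned selection preferences and quasi-synchronous rendering preferences, with further constraints for topic coverage; the sketch only illustrates the general shape of such a formulation.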
Original language: English
Title of host publication: Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP 2010, 9-11 October 2010, MIT Stata Center, Massachusetts, USA, A meeting of SIGDAT, a Special Interest Group of the ACL
Publisher: Association for Computational Linguistics
Pages: 513-523
Number of pages: 11
Publication status: Published - 2010
