Visualizing timelines: evolutionary summarization via iterative reinforcement between text and image streams

Rui Yan, Xiaojun Wan, Mirella Lapata, Wayne Xin Zhao, Pu-Jen Cheng, Xiaoming Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a novel graph-based framework for timeline summarization, the task of creating different summaries for different timestamps but for the same topic. Our work extends timeline summarization to a multimodal setting and creates timelines that are both textual and visual. Our approach exploits the fact that news documents are often accompanied by pictures and the two share some common content. Our model optimizes local summary creation and global timeline generation jointly following an iterative approach based on mutual reinforcement and co-ranking. In our algorithm, individual summaries are generated by taking into account the mutual dependencies between sentences and images, and are iteratively refined by considering how they contribute to the global timeline and its coherence. Experiments on real-world datasets show that the timelines produced by our model outperform several competitive baselines both in terms of ROUGE and when assessed by human evaluators.
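The mutual-reinforcement idea in the abstract — sentence and image salience propagating through within-modality graphs and cross-modal links until the rankings stabilize — can be illustrated with a minimal co-ranking sketch. This is only an assumed, simplified formulation (the function name `co_rank`, the damping parameter `alpha`, and the specific update rule are hypothetical and not taken from the paper, which additionally couples local summaries with global timeline coherence):

```python
import numpy as np

def co_rank(W_ss, W_ii, W_si, alpha=0.85, n_iter=100, tol=1e-8):
    """Hypothetical mutual-reinforcement co-ranking sketch.

    W_ss: sentence-sentence similarity graph (n_s x n_s)
    W_ii: image-image similarity graph (n_i x n_i)
    W_si: cross-modal sentence-image links (n_s x n_i)
    Returns salience scores for sentences and images.
    """
    def row_normalize(M):
        # Turn each row into a probability distribution (avoid /0).
        s = M.sum(axis=1, keepdims=True)
        s[s == 0] = 1.0
        return M / s

    T_ss = row_normalize(W_ss)        # sentence -> sentence transitions
    T_ii = row_normalize(W_ii)        # image -> image transitions
    T_si = row_normalize(W_si)        # sentence -> image transitions
    T_is = row_normalize(W_si.T)      # image -> sentence transitions

    n_s, n_i = W_si.shape
    u = np.full(n_s, 1.0 / n_s)       # sentence salience, uniform start
    v = np.full(n_i, 1.0 / n_i)       # image salience, uniform start
    for _ in range(n_iter):
        # Each modality's scores are refined by its own graph plus
        # reinforcement flowing in from the other modality.
        u_new = alpha * T_ss.T @ u + (1 - alpha) * T_is.T @ v
        v_new = alpha * T_ii.T @ v + (1 - alpha) * T_si.T @ u
        u_new /= u_new.sum()
        v_new /= v_new.sum()
        if np.abs(u_new - u).sum() + np.abs(v_new - v).sum() < tol:
            u, v = u_new, v_new
            break
        u, v = u_new, v_new
    return u, v
```

Under this sketch, a highly connected sentence that is also linked to salient images rises in the ranking, and vice versa, which is the qualitative behavior the abstract describes.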
Original language: English
Title of host publication: CIKM '12: Proceedings of the 21st ACM International Conference on Information and Knowledge Management
Publisher: ACM
Pages: 275-284
Number of pages: 10
ISBN (Print): 978-1-4503-1156-4
DOIs
Publication status: Published - 2012

