Automatic Caption Generation for News Images

Yansong Feng, Mirella Lapata

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

This paper is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Examples include video and image retrieval as well as the development of tools that aid visually impaired individuals to access pictorial information. Our approach leverages the vast resource of pictures available on the web and the fact that many of them are captioned and colocated with thematically related documents. Our model learns to create captions from a database of news articles, the pictures embedded in them, and their captions, and consists of two stages. Content selection identifies what the image and accompanying article are about, whereas surface realization determines how to verbalize the chosen content. We approximate content selection with a probabilistic image annotation model that suggests keywords for an image. The model postulates that images and their textual descriptions are generated by a shared set of latent variables (topics) and is trained on a weakly labeled dataset (which treats the captions and associated news articles as image labels). Inspired by recent work in summarization, we propose extractive and abstractive surface realization models. Experimental results show that it is viable to generate captions that are pertinent to the specific content of an image and its associated article, while permitting creativity in the description. Indeed, the output of our abstractive model compares favorably to handwritten captions and is often superior to extractive methods.
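To make the two stages concrete, here is a minimal sketch of the pipeline described above, under simplifying assumptions: it presumes a topic-word distribution and a per-image topic mixture have already been estimated (the paper learns these from a weakly labeled corpus of news articles, embedded images, and captions), scores candidate keywords by marginalising over the shared topics, and then realises an extractive caption by picking the article sentence that best covers the selected keywords. All names, arrays, and the toy vocabulary are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

# Hypothetical, pre-estimated quantities (in the paper these come from a
# probabilistic image annotation model trained on weakly labeled data):
#   topic_word[t, w]   = p(word w | topic t)
#   image_topics[t]    = p(topic t | image), inferred for one test image
vocab = ["minister", "election", "goal", "match", "protest"]
rng = np.random.default_rng(0)
topic_word = rng.dirichlet(np.ones(len(vocab)), size=3)   # 3 topics x |V|
image_topics = np.array([0.7, 0.2, 0.1])                  # topic mixture for one image

# Content selection: score each word by marginalising over the shared topics,
#   p(w | image) = sum_t p(w | t) * p(t | image),
# and keep the top-k words as caption keywords.
word_scores = image_topics @ topic_word
keywords = [vocab[i] for i in np.argsort(word_scores)[::-1][:3]]

# A simple extractive surface realiser: choose the article sentence that
# covers the largest number of the selected keywords.
article_sentences = [
    "The minister faced a protest outside parliament on Tuesday.",
    "Analysts expect a close election next spring.",
    "The match ended without a goal.",
]

def keyword_coverage(sentence: str) -> int:
    tokens = set(sentence.lower().split())
    return sum(1 for k in keywords if k in tokens)

caption = max(article_sentences, key=keyword_coverage)
print(keywords, "->", caption)
```

The abstractive realiser in the paper goes further by generating new wording from the selected content rather than copying a sentence verbatim; the sketch above only illustrates the extractive variant.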
Original language: English
Pages (from-to): 797-812
Number of pages: 16
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 35
Issue number: 4
DOI
Publication status: Published - Apr 2013
