Long Short-Term Memory-Networks for Machine Reading

Jianpeng Cheng, Li Dong, Mirella Lapata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

In this paper we address the question of how to render sequence-level networks better at handling structured input. We propose a machine reading simulator which processes text incrementally from left to right and performs shallow reasoning with memory and attention. The reader extends the Long Short-Term Memory architecture with a memory network in place of a single memory cell. This enables adaptive memory usage during recurrence with neural attention, offering a way to weakly induce relations among tokens. The system is initially designed to process a single sequence but we also demonstrate how to integrate it with an encoder-decoder architecture. Experiments on language modeling, sentiment analysis, and natural language inference show that our model matches or outperforms the state of the art.
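The core idea the abstract describes — replacing the LSTM's single memory cell with a memory tape that is read adaptively via attention at each step — can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: the function name `lstmn_step`, the additive attention scoring, and all parameter shapes are assumptions for the sketch.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstmn_step(x_t, h_tape, c_tape, params):
    """One step of an LSTMN-style recurrence (illustrative sketch).

    Rather than reading a single previous cell state, the step attends
    over the tapes of all previous hidden states (h_tape) and cell
    states (c_tape), weakly inducing relations between the current
    token and earlier ones.
    """
    Wh, Wx, v = params["Wh"], params["Wx"], params["v"]
    # Additive attention score for each previous position.
    scores = np.array([v @ np.tanh(Wh @ h_k + Wx @ x_t) for h_k in h_tape])
    alpha = softmax(scores)
    # Attention-weighted summaries replace the single previous
    # hidden/cell state of a standard LSTM.
    h_tilde = alpha @ np.stack(h_tape)
    c_tilde = alpha @ np.stack(c_tape)
    # Standard LSTM gating applied to the attended summaries.
    z = params["W"] @ np.concatenate([h_tilde, x_t]) + params["b"]
    d = len(h_tilde)
    i, f, o = sigmoid(z[:d]), sigmoid(z[d:2*d]), sigmoid(z[2*d:3*d])
    c_hat = np.tanh(z[3*d:])
    c_t = f * c_tilde + i * c_hat
    h_t = o * np.tanh(c_t)
    return h_t, c_t, alpha
```

To process a sequence, the new `h_t` and `c_t` are appended to the tapes after each step, so later tokens can attend over all earlier positions; the attention weights `alpha` are what offer the weakly induced token-to-token relations mentioned in the abstract.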
Original language: English
Title of host publication: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics
Number of pages: 11
ISBN (Print): 978-1-945626-25-8
Publication status: Published - 5 Nov 2016
Event: 2016 Conference on Empirical Methods in Natural Language Processing - Austin, United States
Duration: 1 Nov 2016 – 5 Nov 2016


Conference: 2016 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2016
Country/Territory: United States


