Hierarchical Sketch Induction for Paraphrase Generation

Tom Hosking, Hao Tang, Mirella Lapata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose a generative model of paraphrase generation that encourages syntactic diversity by conditioning on an explicit syntactic sketch. We introduce Hierarchical Refinement Quantized Variational Autoencoders (HRQ-VAE), a method for learning decompositions of dense encodings as a sequence of discrete latent variables that make iterative refinements of increasing granularity. This hierarchy of codes is learned through end-to-end training, and represents fine-to-coarse grained information about the input. We use HRQ-VAE to encode the syntactic form of an input sentence as a path through the hierarchy, allowing us to more easily predict syntactic sketches at test time. Extensive experiments, including a human evaluation, confirm that HRQ-VAE learns a hierarchical representation of the input space, and generates paraphrases of higher quality than previous systems.
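The core idea in the abstract — decomposing a dense encoding into a sequence of discrete codes, where each level refines the residual left by the levels above it — can be illustrated with a minimal sketch. This is not the paper's implementation: the codebooks here are random rather than learned end-to-end, and the level count and dimensions are arbitrary assumptions chosen only to show the quantization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 quantization levels, each with its own codebook.
# In HRQ-VAE the codebooks are trained end-to-end; random ones suffice
# to illustrate the decomposition itself.
num_levels, codebook_size, dim = 3, 8, 4
codebooks = [rng.normal(size=(codebook_size, dim)) for _ in range(num_levels)]

def quantize(z):
    """Decompose a dense encoding z into a path of discrete codes.

    Each level picks the codebook entry nearest to the residual error
    left by the coarser levels, so deeper levels make finer refinements.
    """
    path = []
    recon = np.zeros_like(z)
    for codebook in codebooks:
        residual = z - recon
        # Nearest codebook entry to the current residual.
        idx = int(np.argmin(np.linalg.norm(codebook - residual, axis=1)))
        path.append(idx)
        recon = recon + codebook[idx]
    return path, recon

z = rng.normal(size=dim)
path, recon = quantize(z)
# `path` is the sequence of discrete latent codes (one index per level);
# `recon` is the sum of the selected codebook entries.
```

Under this scheme a sentence's syntactic form corresponds to the discrete path (e.g. `[3, 0, 5]`), which is what makes sketches predictable at test time: a decoder only has to choose one code per level rather than a dense vector.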
Original language: English
Title of host publication: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Place of Publication: Dublin, Ireland
Publisher: Association for Computational Linguistics
Pages: 2489-2501
Number of pages: 13
DOIs
Publication status: Published - 1 May 2022
Event: 60th Annual Meeting of the Association for Computational Linguistics - The Convention Centre Dublin, Dublin, Ireland
Duration: 22 May 2022 – 27 May 2022
https://www.2022.aclweb.org

Conference

Conference: 60th Annual Meeting of the Association for Computational Linguistics
Abbreviated title: ACL 2022
Country/Territory: Ireland
City: Dublin
Period: 22/05/22 – 27/05/22
Internet address
