Bootstrapping Semantic Analyzers from Non-Contradictory Texts

Ivan Titov, Mikhail Kozhevnikov

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We argue that groups of unannotated texts with overlapping and non-contradictory semantics represent a valuable source of information for learning semantic representations. A simple and efficient inference method recursively induces joint semantic representations for each group and discovers correspondences between lexical entries and latent semantic concepts. We consider the generative semantics-text correspondence model (Liang et al., 2009) and demonstrate that exploiting the non-contradiction relation between texts leads to substantial improvements over natural baselines on the task of analyzing human-written weather forecasts.
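The core intuition can be illustrated with a toy sketch: if several texts are assumed to describe the same underlying record without contradicting each other, then material shared across all texts in a group is a natural candidate for the group's joint semantics. The snippet below is only a hypothetical illustration of that intuition (the example texts, group labels, and `shared_words` helper are invented here), not the authors' generative model or inference procedure.

```python
# Toy groups of texts assumed to describe the same underlying record
# (hypothetical examples, not data or code from the paper).
groups = {
    "rain": ["light rain expected tonight", "rain likely in the evening"],
    "wind": ["strong wind from the north", "northerly wind strong at times"],
}

def shared_words(texts):
    """Words appearing in every text of a group: crude candidates for
    the group's joint (non-contradictory) semantics."""
    word_sets = [set(t.split()) for t in texts]
    return set.intersection(*word_sets)

# Map each latent concept to the lexical entries shared by its group.
concept_lexicon = {concept: shared_words(texts)
                   for concept, texts in groups.items()}
```

In the paper this idea is realized probabilistically, by jointly inferring a shared semantic representation for each group under a generative semantics-text correspondence model rather than by simple set intersection.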
Original language: English
Title of host publication: ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden
Publisher: Association for Computational Linguistics
Number of pages: 10
Publication status: Published - 2010
