Bootstrapping Semantic Role Labelers from Parallel Data

Mikhail Kozhevnikov, Ivan Titov

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We present an approach that uses the similarity in semantic structure of bilingual parallel sentences to bootstrap a pair of semantic role labeling (SRL) models. The setting is similar to co-training, except that an intermediate model is required to convert SRL structures between the two annotation schemes used for the different languages. Our approach can facilitate the construction of SRL models for resource-poor languages while preserving the annotation schemes designed for the target language and making use of the limited resources available for it. We evaluate the model on four language pairs, English paired with German, Spanish, Czech, and Chinese, and observe consistent improvements over the self-training baseline.
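The co-training-style loop sketched in the abstract can be illustrated in a few lines. The following is a minimal, hypothetical sketch, not the authors' implementation: `SRLModel`, `convert`, and `bootstrap` are illustrative names, the toy model simply memorizes labeled sentences, and the "intermediate model" is reduced to mapping role indices through word alignments.

```python
class SRLModel:
    """Toy stand-in for an SRL model: memorizes (sentence -> roles) pairs."""
    def __init__(self, seed_data):
        self.data = dict(seed_data)          # labeled training set

    def label(self, sentence):
        # Return predicted roles and a confidence score (1.0 if memorized).
        if sentence in self.data:
            return self.data[sentence], 1.0
        return [], 0.0

    def retrain(self, extra):
        self.data.update(extra)              # extend training set, "retrain"


def convert(roles, alignment):
    """Stand-in for the intermediate model: map role-bearing token indices
    through word alignments into the other language's annotation scheme."""
    return [(label, alignment.get(i, i)) for label, i in roles]


def bootstrap(model_a, model_b, parallel, rounds=2, threshold=0.9):
    """Co-training-style loop: each round, one model labels its side of the
    parallel data; confident predictions are converted and used to retrain
    the other model, then the two models swap roles."""
    for _ in range(rounds):
        projected = []
        for sent_a, sent_b, alignment in parallel:
            roles, score = model_a.label(sent_a)
            if score >= threshold:           # keep only confident predictions
                projected.append((sent_b, convert(roles, alignment)))
        model_b.retrain(projected)
        model_a, model_b = model_b, model_a  # swap roles for the next round
    return model_a, model_b
```

For example, seeding the English model with one labeled sentence and running one projection round transfers its (converted) labels to the German model:

```python
en = SRLModel({"she opened the door": [("A0", 0), ("A1", 2)]})
de = SRLModel({})
bootstrap(en, de, [("she opened the door", "sie öffnete die tür", {0: 0, 2: 2})])
# de.data now contains projected roles for "sie öffnete die tür"
```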
Original language: English
Title of host publication: Proceedings of the Second Joint Conference on Lexical and Computational Semantics, *SEM 2013, June 13-14, 2013, Atlanta, Georgia, USA
Publisher: Association for Computational Linguistics
Number of pages: 11
ISBN (Print): 978-1-937284-48-0
Publication status: Published - Jun 2013

