VeeAlign: Multifaceted Context Representation Using Dual Attention for Ontology Alignment

Vivek Iyer, Arvind Agarwal, Harshit Kumar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Ontology Alignment is an important research problem with applications in fields such as data integration, data transfer, and data preparation. State-of-the-art (SOTA) Ontology Alignment systems typically rely on naive domain-dependent approaches with handcrafted rules or domain-specific architectures, making them unscalable and inefficient. In this work, we propose VeeAlign, a Deep Learning based model that uses a novel dual-attention mechanism to compute the contextualized representation of a concept, which in turn is used to discover alignments. By doing this, our approach is not only able to exploit both the syntactic and semantic information encoded in ontologies, but is also, by design, flexible and scalable to different domains with minimal effort. We evaluate our model on four datasets from different domains and languages, and establish its superiority through these results as well as detailed ablation studies. The code and datasets used are available at https://github.com/Remorax/VeeAlign.
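The general idea behind a dual-attention contextualized representation can be illustrated with a small sketch. This is not the authors' actual architecture (see the paper and repository for that); it is a hypothetical numpy example, assuming a concept embedding is combined with two attended context views, e.g. one over ancestor-path embeddings and one over one-hop neighbour embeddings:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(query, context):
    # dot-product attention: weight each context vector by its
    # similarity to the query, then return the weighted sum
    scores = softmax(context @ query)   # shape: (n_context,)
    return scores @ context             # shape: (d,)

def dual_attention_representation(concept, path_context, neighbor_context):
    # two separate attention heads over two kinds of context,
    # concatenated with the concept's own embedding
    path_summary = attend(concept, path_context)          # structural view
    neighbor_summary = attend(concept, neighbor_context)  # semantic view
    return np.concatenate([concept, path_summary, neighbor_summary])

# toy embeddings (dimension d=4); names and sizes are illustrative only
d = 4
rng = np.random.default_rng(0)
concept = rng.normal(size=d)
paths = rng.normal(size=(3, d))      # e.g. ancestor-path embeddings
neighbors = rng.normal(size=(5, d))  # e.g. one-hop neighbour embeddings

rep = dual_attention_representation(concept, paths, neighbors)
print(rep.shape)  # (12,) — concept + two attended summaries
```

Aligned concept pairs could then be scored by comparing such representations (e.g. with cosine similarity), with a threshold deciding whether to emit a correspondence.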
Original language: English
Title of host publication: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Place of publication: Online and Punta Cana, Dominican Republic
Publisher: Association for Computational Linguistics
Pages: 10780-10792
Number of pages: 13
ISBN (Electronic): 978-1-955917-09-4
Publication status: Published - 7 Nov 2021
Event: 2021 Conference on Empirical Methods in Natural Language Processing - Punta Cana, Dominican Republic
Duration: 7 Nov 2021 - 11 Nov 2021
https://2021.emnlp.org/

Conference

Conference: 2021 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2021
Country/Territory: Dominican Republic
City: Punta Cana
Period: 7/11/21 - 11/11/21
Internet address: https://2021.emnlp.org/
