Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment

Zhuo Chen, Lingbing Guo, Yin Fang, Yichi Zhang, Jiaoyan Chen, Jeff Z. Pan, Yangning Li, Huajun Chen, Wen Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

As a crucial extension of entity alignment (EA), multi-modal entity alignment (MMEA) aims to identify identical entities across disparate knowledge graphs (KGs) by exploiting associated visual information. However, existing MMEA approaches concentrate primarily on the fusion paradigm of multi-modal entity features, while neglecting the challenges posed by the pervasive phenomenon of missing visual images and their intrinsic ambiguity. In this paper, we present a further analysis of visual modality incompleteness, benchmarking the latest MMEA models on our proposed dataset MMEA-UMVM, which covers bilingual and monolingual alignment KGs under both standard (non-iterative) and iterative training paradigms to evaluate model performance. Our research indicates that, in the face of modality incompleteness, models overfit to modality noise and exhibit performance oscillations or declines at high rates of missing modality. This demonstrates that including additional multi-modal data can sometimes adversely affect EA. To address these challenges, we introduce UMAEA, a robust multi-modal entity alignment approach designed to tackle uncertainly missing and ambiguous visual modalities. It consistently achieves SOTA performance across all 97 benchmark splits, significantly surpassing existing baselines with limited parameters and time consumption, while effectively alleviating the identified limitations of other models. Our code and benchmark data are available at https://github.com/zjukg/UMAEA.
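The alignment task described in the abstract can be illustrated with a minimal toy sketch: fuse structural and visual embeddings per entity, zero out the visual slice when an image is missing, and match entities across two KGs by cosine similarity. This is purely illustrative — the `fuse` weighting, the greedy matcher, and the synthetic data are assumptions for demonstration, not the UMAEA method itself:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Scale vectors to unit length along the given axis (zero vectors stay zero)."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def fuse(struct_emb, visual_emb, visual_mask, w_struct=0.6, w_visual=0.4):
    """Weighted concatenation of structural and visual embeddings.

    Entities with a missing image (visual_mask == 0) have their visual
    slice zeroed out, so only the structural part contributes.
    (Illustrative fusion scheme, not the paper's.)
    """
    v = visual_emb * visual_mask[:, None]                 # drop missing visuals
    fused = np.concatenate([w_struct * l2_normalize(struct_emb),
                            w_visual * l2_normalize(v)], axis=1)
    return l2_normalize(fused)

def align(kg1_fused, kg2_fused):
    """Greedy alignment: for each KG1 entity, pick the most similar KG2 entity."""
    sim = kg1_fused @ kg2_fused.T                         # cosine similarity matrix
    return sim.argmax(axis=1)

# Synthetic KGs whose entities are noisy copies of each other, so the
# ground-truth alignment is the identity mapping.
rng = np.random.default_rng(0)
n, d = 5, 32
kg1_struct = rng.normal(size=(n, d))
kg2_struct = kg1_struct + 0.05 * rng.normal(size=(n, d))
kg1_vis = rng.normal(size=(n, d))
kg2_vis = kg1_vis + 0.05 * rng.normal(size=(n, d))
mask1 = np.array([1, 1, 0, 1, 0], dtype=float)            # some KG1 images missing
mask2 = np.ones(n)

pred = align(fuse(kg1_struct, kg1_vis, mask1),
             fuse(kg2_struct, kg2_vis, mask2))
print(pred)  # entities should map to their structural counterparts
```

Even in this toy setting, entities with a missing image fall back on structural similarity alone, which hints at why heavy reliance on a noisy or absent visual modality can hurt alignment.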
Original language: English
Title of host publication: The Semantic Web – ISWC 2023
Editors: Terry R. Payne, Valentina Presutti, Guilin Qi, María Poveda-Villalón, Giorgos Stoilos, Laura Hollink, Zoi Kaoudi, Gong Cheng, Juanzi Li
Place of publication: Cham
Publisher: Springer Nature Switzerland AG
Pages: 121-139
Number of pages: 19
Volume: 14265
ISBN (Electronic): 9783031472404
ISBN (Print): 9783031472398
Publication status: Published - 27 Oct 2023
Event: 22nd International Semantic Web Conference - Athens, Greece
Duration: 6 Nov 2023 - 10 Nov 2023
Conference number: 22
https://iswc2023.semanticweb.org/

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 22nd International Semantic Web Conference
Abbreviated title: ISWC 2023
Country/Territory: Greece
City: Athens
Period: 6/11/23 - 10/11/23

Keywords / Materials (for Non-textual outputs)

  • entity alignment
  • knowledge graph
  • multi-modal learning
  • uncertainly missing modality
