Interpreting Knowledge Graph Relation Representation From Word Embeddings

Carl Allen, Ivana Balazevic, Timothy Hospedales

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract / Description of output

Many models learn representations of knowledge graph data by exploiting its low-rank latent structure, encoding known relations between entities and enabling unknown facts to be inferred. To predict whether a relation holds between entities, embeddings are typically compared in the latent space following a relation-specific mapping. Whilst their predictive performance has steadily improved, how such models capture the underlying latent structure of semantic information remains unexplained. Building on recent theoretical understanding of word embeddings, we categorise knowledge graph relations into three types and for each derive explicit requirements of their representations. We show that empirical properties of relation representations and the relative performance of leading knowledge graph representation methods are justified by our analysis.
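To make the idea of "comparing embeddings in the latent space following a relation-specific mapping" concrete, the sketch below implements one common choice of such a mapping, a DistMult-style diagonal bilinear score. This is a hedged illustration only: the dimensions, random data, and the choice of DistMult are assumptions for exposition, not the specific models analysed in the paper.

```python
import numpy as np

# Hypothetical toy setup: small embedding dimension, random embeddings.
rng = np.random.default_rng(0)
dim = 8

e_head = rng.standard_normal(dim)  # subject-entity embedding
e_tail = rng.standard_normal(dim)  # object-entity embedding
r_vec = rng.standard_normal(dim)   # relation-specific parameter vector

def score(head, relation, tail):
    """DistMult-style score: apply the relation-specific (diagonal)
    mapping to the head embedding, then compare to the tail embedding
    via a dot product. Higher scores suggest the relation holds."""
    return float(np.dot(head * relation, tail))

s = score(e_head, r_vec, e_tail)
```

Note that this particular mapping is symmetric in head and tail, i.e. `score(h, r, t) == score(t, r, h)`, which is exactly the kind of structural property that constrains which relation types a given model can represent.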
Original language: English
Title of host publication: International Conference on Learning Representations (ICLR 2021)
Number of pages: 16
Publication status: Published - 4 May 2021
Event: Ninth International Conference on Learning Representations 2021 - Virtual Conference
Duration: 4 May 2021 to 7 May 2021
Internet address: https://iclr.cc/Conferences/2021/Dates

Conference

Conference: Ninth International Conference on Learning Representations 2021
Abbreviated title: ICLR 2021
City: Virtual Conference
Period: 4/05/21 to 7/05/21
Internet address: https://iclr.cc/Conferences/2021/Dates
