Multi-source knowledge graph completion (MKGC) seeks to predict missing triples in a target KG by leveraging triples from multi-source KGs (e.g., KGs in different languages or domains). Existing studies typically learn and fuse multi-source KG representations solely through alignments or fusion modules, which can be affected by redundant information within the KGs. This redundancy can conceal task-relevant information in the representations, impeding further improvement when scaling to numerous KGs. To this end, we propose IMKGC, an information-theoretic MKGC framework that learns minimal sufficient representations. In particular, IMKGC learns entity representations that explicitly preserve endogenous contextual information within each KG, exogenous complementary information from other KGs, and consistent information across equivalent entities, while suppressing redundant information through variational constraints. Furthermore, we obtain compressed relation representations with a dedicated relation reasoning decoder that captures relatedness among relations, which also improves triple prediction. Extensive experiments on 14 KGs across three multilingual and multi-domain benchmarks demonstrate that IMKGC significantly outperforms previous state-of-the-art methods, especially in redundant scenarios. Our code will be released at \url{https://xxx} for the research community; it is currently provided in the supplementary material.
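The abstract does not give the exact form of the variational constraints, but objectives of this kind are commonly realized as a KL penalty that compresses a stochastic representation toward a simple prior, as in the variational information bottleneck. The sketch below is purely illustrative of that mechanism (the function, shapes, and prior are assumptions, not the paper's actual loss): entity representations are modeled as diagonal Gaussians, and the KL term penalizes any information the encoding carries beyond the standard-normal prior.

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ) per representation.

    This is the standard closed-form compression penalty used in
    variational-information-bottleneck-style objectives; minimizing it
    suppresses redundant information in the learned representation.
    """
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

# Hypothetical encoder outputs: 4 entities, 8-dimensional representations.
mu = np.zeros((4, 8))      # means at the prior mean
logvar = np.zeros((4, 8))  # unit variance, i.e. log(1) = 0
print(kl_diag_gaussian(mu, logvar))   # -> zeros: matching the prior costs nothing

# An encoding that deviates from the prior pays a positive penalty.
print(kl_diag_gaussian(np.ones(8), np.zeros(8)))  # -> 4.0
```

In a full MKGC objective, a term like this would be traded off against prediction and alignment losses, so the model keeps only the contextual, complementary, and consistent information that the task actually needs.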
