Probability Calibration for Knowledge Graph Embedding Models

Pedro Tabacof, Luca Costabello

Keywords: calibration, graph embedding, graph networks, knowledge graph embeddings, knowledge graphs, regression, representation learning

Tues Session 3 (12:00-14:00 GMT)
Tues Session 4 (17:00-19:00 GMT)

Abstract: Knowledge graph embedding research has overlooked the problem of probability calibration. We show that popular embedding models are indeed uncalibrated, meaning the probability estimates associated with predicted triples are unreliable. We present a novel method to calibrate a model when ground-truth negatives are not available, which is the usual case in knowledge graphs. We propose to use Platt scaling and isotonic regression alongside our method. Experiments on three datasets with ground-truth negatives show that our contribution leads to well-calibrated models when compared to the gold standard of using ground-truth negatives. All calibration methods yield significantly better results than the uncalibrated models. Isotonic regression offers the best performance overall, though not without trade-offs. We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.
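The abstract mentions Platt scaling and isotonic regression as calibration techniques, applied when ground-truth negatives are unavailable. Below is a minimal, illustrative sketch of those two techniques using scikit-learn; it is not the authors' implementation. The score distributions, the synthetic negatives, and the assumed positive base rate are all stand-ins introduced here for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Hypothetical raw (uncalibrated) scores from a KGE model for held-out
# triples: positives tend to score higher than synthetic negatives.
pos_scores = rng.normal(loc=2.0, scale=1.0, size=1000)
neg_scores = rng.normal(loc=-1.0, scale=1.0, size=5000)

scores = np.concatenate([pos_scores, neg_scores])
labels = np.concatenate([np.ones_like(pos_scores), np.zeros_like(neg_scores)])

# When ground-truth negatives are unavailable, synthetic negatives can be
# re-weighted so the effective positive base rate matches an assumed prior
# (0.5 here). This weighting scheme is an assumption of this sketch.
base_rate = 0.5
n_pos, n_neg = len(pos_scores), len(neg_scores)
w_neg = (n_pos / n_neg) * (1 - base_rate) / base_rate
weights = np.concatenate([np.ones(n_pos), np.full(n_neg, w_neg)])

# Platt scaling: a logistic regression fit on the raw score.
platt = LogisticRegression()
platt.fit(scores.reshape(-1, 1), labels, sample_weight=weights)
platt_probs = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

# Isotonic regression: a non-parametric, monotonic score-to-probability map.
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(scores, labels, sample_weight=weights)
iso_probs = iso.predict(scores)

# With calibrated probabilities, a single 0.5 threshold can replace
# relation-specific decision thresholds for triple classification.
preds = (iso_probs >= 0.5).astype(int)
print("Accuracy with a single 0.5 threshold:", (preds == labels).mean())

In practice the calibrator would be fit on a validation split and applied to test-time scores; the sketch fits and evaluates on the same synthetic data only to stay self-contained.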
