Learn to Explain Efficiently via Neural Logic Inductive Learning

Yuan Yang, Le Song

Keywords: attention, interpretability

Thurs Session 4 (17:00-19:00 GMT)
Thurs Session 5 (20:00-22:00 GMT)

Abstract: The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems. In this work, we study the learning-to-explain problem within the scope of inductive logic programming (ILP). We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data. In experiments, compared with state-of-the-art models, we find that NLIL can search for rules that are 10 times longer while remaining 3 times faster. We also show that NLIL can scale to large image datasets such as Visual Genome, with 1M entities.
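
To give a feel for the setting, below is a minimal sketch (not the authors' implementation, and all predicate names and facts are made up) of the general idea behind differentiable ILP: binary predicates over a toy knowledge base are stored as adjacency matrices, a candidate rule body such as grandparent(X, Z) <- parent(X, Y), parent(Y, Z) is evaluated by chaining those matrices, and a soft attention weighting over candidate predicates replaces a hard choice so the rule search becomes differentiable.

```python
import numpy as np

# Toy knowledge base over 4 entities; a binary predicate is an N x N matrix
# where entry (i, j) = 1 means the relation holds between entity i and j.
N = 4

parent = np.zeros((N, N))
parent[0, 1] = 1   # 0 is a parent of 1 (illustrative fact)
parent[1, 2] = 1   # 1 is a parent of 2
parent[2, 3] = 1   # 2 is a parent of 3

friend = np.zeros((N, N))
friend[0, 3] = 1   # an unrelated distractor predicate

# Candidate rule body: grandparent(X, Z) <- parent(X, Y), parent(Y, Z).
# Chaining the two predicate matrices gives the entity pairs the body covers.
grandparent_body = (parent @ parent) > 0

# A soft ("attention") version selects among candidate predicates with
# weights instead of a hard choice; the weights here are fixed only to show
# the mechanics, whereas a learner would optimize them end to end.
candidates = np.stack([parent, friend])     # candidate predicate matrices
attn = np.array([0.9, 0.1])                 # softmax-style attention weights
soft_step = np.einsum('k,kij->ij', attn, candidates)
soft_body = soft_step @ soft_step           # soft two-hop chain

print(np.argwhere(grandparent_body))        # pairs covered: (0, 2) and (1, 3)
print(soft_body.round(2))                   # differentiable body scores
```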

Similar Papers

Abductive Commonsense Reasoning
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, Yejin Choi
DiffTaichi: Differentiable Programming for Physical Simulation
Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, Fredo Durand
Differentiable learning of numerical rules in knowledge graphs
Po-Wei Wang, Daria Stepanova, Csaba Domokos, J. Zico Kolter
In Search for a SAT-friendly Binarized Neural Network Architecture
Nina Narodytska, Hongce Zhang, Aarti Gupta, Toby Walsh