Causal Discovery with Reinforcement Learning

Shengyu Zhu, Ignavier Ng, Zhitang Chen

Keywords: optimization, reinforcement learning, structure learning

Wed Session 1 (05:00-07:00 GMT)
Wed Session 2 (08:00-10:00 GMT)
Wednesday: Actions and Counterfactuals

Abstract: Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best score. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy, and our final output is the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint.
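
The reward described in the abstract can be illustrated with a minimal sketch. This assumes the smooth acyclicity measure h(A) = tr(e^{A∘A}) - d commonly used in continuous DAG learning, a generic score function passed in by the caller, and placeholder penalty weights lambda1 and lambda2; the exact score and weighting schedule used in the paper may differ.

    import numpy as np
    from scipy.linalg import expm

    def acyclicity(adj):
        # Smooth acyclicity measure: zero if and only if adj encodes a DAG.
        d = adj.shape[0]
        return np.trace(expm(adj * adj)) - d

    def reward(adj, score_fn, lambda1=1.0, lambda2=1.0):
        # Negated score plus two penalty terms that vanish for acyclic graphs:
        # an indicator penalty for the presence of any cycle and the smooth
        # h(A) penalty. lambda1 and lambda2 are hypothetical placeholder weights.
        h = acyclicity(adj)
        indicator = float(h > 1e-8)
        return -(score_fn(adj) + lambda1 * indicator + lambda2 * h)

Since RL is used here as a search strategy rather than to learn a deployable policy, the adjacency matrix with the highest reward seen over the course of training would be kept as the final output.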

Similar Papers

Gradient-Based Neural DAG Learning
Sébastien Lachapelle, Philippe Brouillard, Tristan Deleu, Simon Lacoste-Julien
Observational Overfitting in Reinforcement Learning
Xingyou Song, Yiding Jiang, Stephen Tu, Yilun Du, Behnam Neyshabur
Projection-Based Constrained Policy Optimization
Tsung-Yen Yang, Justinian Rosca, Karthik Narasimhan, Peter J. Ramadge