Making Sense of Reinforcement Learning and Probabilistic Inference

Brendan O'Donoghue, Ian Osband, Catalin Ionescu

Keywords: Bayesian inference, reinforcement learning

Mon Session 3 (12:00-14:00 GMT)
Mon Session 5 (20:00-22:00 GMT)
Monday: Reliable RL

Abstract: Reinforcement learning (RL) combines a control problem with statistical estimation: The system dynamics are not known to the agent, but can be learned through experience. A recent line of research casts ‘RL as inference’ and suggests a particular framework to generalize the RL problem as probabilistic inference. Our paper surfaces a key shortcoming in that approach and clarifies the sense in which RL can be coherently cast as an inference problem. In particular, an RL agent must consider the effects of its actions upon future rewards and observations: The exploration-exploitation tradeoff. In all but the simplest settings, the resulting inference is computationally intractable, so practical RL algorithms must resort to approximation. We demonstrate that the popular ‘RL as inference’ approximation can perform poorly even in very basic problems. However, we show that with a small modification the framework does yield algorithms that can provably perform well, and we show that the resulting algorithm is equivalent to the recently proposed K-learning, which we further connect with Thompson sampling.
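To make the contrast concrete, the sketch below compares, on a toy two-armed Bernoulli bandit, a Thompson-sampling agent (which samples from a Beta posterior over each arm's mean reward and acts greedily under the sample) with a 'soft' Boltzmann policy over point-estimate values, in the spirit of the RL-as-inference approximation. This is an illustrative sketch only, not the paper's K-learning algorithm; the arm probabilities, temperature, and horizon are arbitrary choices made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed Bernoulli bandit; the agent does not know these probabilities.
true_probs = np.array([0.6, 0.4])
T = 1000  # number of interaction steps (arbitrary for this illustration)

def thompson_sampling():
    """Posterior (Beta-Bernoulli) sampling: act greedily with respect to a sampled model."""
    alpha, beta = np.ones(2), np.ones(2)  # Beta(1, 1) priors over each arm's mean reward
    total_reward = 0.0
    for _ in range(T):
        theta = rng.beta(alpha, beta)           # sample a plausible mean reward per arm
        a = int(np.argmax(theta))               # greedy action under the sampled model
        r = float(rng.random() < true_probs[a])
        alpha[a] += r                           # Bayesian posterior update
        beta[a] += 1.0 - r
        total_reward += r
    return total_reward

def soft_policy(temperature=1.0):
    """'Soft' Boltzmann policy over point-estimate values, ignoring posterior uncertainty."""
    counts = np.zeros(2)
    means = np.zeros(2)                         # running mean reward per arm
    total_reward = 0.0
    for _ in range(T):
        logits = means / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                    # softmax over estimated values
        a = int(rng.choice(2, p=probs))
        r = float(rng.random() < true_probs[a])
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental average of observed rewards
        total_reward += r
    return total_reward

print("Thompson sampling reward:", thompson_sampling())
print("Soft (Boltzmann) reward: ", soft_policy())

The soft policy's exploration is governed only by the temperature and the estimated values, not by how uncertain those estimates are, whereas the posterior-sampling agent explores in proportion to its remaining uncertainty; this distinction between value-based dithering and uncertainty-driven exploration is the one the paper examines.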

Similar Papers

On the Weaknesses of Reinforcement Learning for Neural Machine Translation
Leshem Choshen, Lior Fox, Zohar Aizenbud, Omri Abend
AMRL: Aggregated Memory For Reinforcement Learning
Jacob Beck, Kamil Ciosek, Sam Devlin, Sebastian Tschiatschek, Cheng Zhang, Katja Hofmann
Dynamics-Aware Embeddings
William Whitney, Rajat Agarwal, Kyunghyun Cho, Abhinav Gupta