Disagreement-Regularized Imitation Learning

Kianté Brantley, Wen Sun, Mikael Henaff

Keywords: adversarial, continuous control, imitation learning, reinforcement learning, uncertainty

Abstract: We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning. It operates by training an ensemble of policies on the expert demonstration data and using the variance of their predictions as a cost, which is minimized with RL alongside a supervised behavioral cloning cost. Unlike adversarial imitation methods, it uses a fixed reward function that is easy to optimize. We prove a regret bound for the algorithm that is linear in the time horizon multiplied by a coefficient, which we show to be low for certain problems in which behavioral cloning fails. We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning.
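
As a rough illustration of the ensemble-disagreement cost described above, the sketch below shows one way such a cost might be computed, assuming continuous actions, a small MLP ensemble, and a squared-error behavioral cloning loss; the dimensions, architectures, and trade-off weight `lam` are placeholders rather than the paper's actual settings.

```python
import torch
import torch.nn as nn

# Placeholder dimensions and architecture; the paper uses different
# networks for pixel-based Atari and continuous control domains.
OBS_DIM, ACT_DIM, ENSEMBLE_SIZE = 8, 2, 5

def make_policy() -> nn.Module:
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))

# Step 1: behavioral-clone an ensemble of policies on the expert data
# (e.g., each member trained on a different subset of the demonstrations).
ensemble = [make_policy() for _ in range(ENSEMBLE_SIZE)]

def bc_loss(policy: nn.Module, expert_obs: torch.Tensor,
            expert_act: torch.Tensor) -> torch.Tensor:
    """Supervised behavioral cloning term on the demonstration data."""
    return ((policy(expert_obs) - expert_act) ** 2).mean()

def disagreement_cost(obs: torch.Tensor) -> torch.Tensor:
    """Variance of the ensemble's predicted actions at the given states.

    The variance tends to be high on states far from the expert's
    distribution, so minimizing it with RL pushes the learner back toward
    the support of the demonstrations. Once the ensemble is trained the
    cost is fixed, unlike the moving discriminator reward in adversarial
    imitation methods.
    """
    with torch.no_grad():
        acts = torch.stack([pi(obs) for pi in ensemble])  # (E, B, ACT_DIM)
    return acts.var(dim=0).sum(dim=-1)                    # (B,)

def total_cost(learner: nn.Module, obs: torch.Tensor,
               expert_obs: torch.Tensor, expert_act: torch.Tensor,
               lam: float = 1.0) -> torch.Tensor:
    # Combined objective: disagreement cost (minimized with RL) plus the
    # supervised BC cost; `lam` is a hypothetical trade-off weight.
    return disagreement_cost(obs).mean() + lam * bc_loss(learner, expert_obs, expert_act)
```

In this reading, the ensemble serves as an uncertainty estimate over expert behavior: the RL term penalizes visiting states where the clones disagree, while the BC term keeps the learner matching expert actions on states where they agree.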
