Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning

Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee

Keywords: generalization, regularization, reinforcement learning, robotics

Mon Session 4 (17:00-19:00 GMT)
Mon Session 5 (20:00-22:00 GMT)

Abstract: Deep reinforcement learning (RL) agents often fail to generalize to unseen environments that are nevertheless semantically similar to those seen during training, particularly when they are trained on high-dimensional state spaces such as images. In this paper, we propose a simple technique to improve the generalization ability of deep RL agents by introducing a randomized (convolutional) neural network that randomly perturbs input observations. This enables trained agents to adapt to new domains by learning robust features that are invariant across varied and randomized environments. Furthermore, we consider an inference method based on Monte Carlo approximation to reduce the variance induced by this randomization. We demonstrate the effectiveness of our method on 2D CoinRun, 3D DeepMind Lab exploration, and 3D robotics control tasks: it significantly outperforms various regularization and data augmentation methods aimed at the same goal.
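To make the idea in the abstract concrete, here is a minimal PyTorch-style sketch of a randomized input layer and Monte Carlo inference; it is an illustration of the technique as described, not the authors' code. The module and function names (RandomizedInput, mc_action_probs), the Xavier initialization, and the number of Monte Carlo samples are assumptions for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomizedInput(nn.Module):
    """Perturbs observations with a single untrained conv layer
    whose weights are randomly re-drawn during training."""
    def __init__(self, channels=3, kernel_size=3):
        super().__init__()
        # Padding keeps the spatial size of the observation unchanged;
        # the layer is frozen, so only the agent's own network is trained.
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, bias=False)
        for p in self.conv.parameters():
            p.requires_grad = False

    def resample(self):
        # Re-draw the random weights (e.g., once per training iteration),
        # so the agent sees a stream of randomized environments.
        nn.init.xavier_normal_(self.conv.weight)  # assumed initializer

    def forward(self, obs):
        return self.conv(obs)

def mc_action_probs(policy, randomizer, obs, num_samples=10):
    """Monte Carlo inference: average the policy's action distribution
    over several random perturbations of the same observation to
    reduce the variance induced by the randomization."""
    probs = []
    with torch.no_grad():
        for _ in range(num_samples):
            randomizer.resample()
            probs.append(F.softmax(policy(randomizer(obs)), dim=-1))
    return torch.stack(probs).mean(dim=0)
```

Under these assumptions, the randomized layer acts as a learned-feature regularizer: because its weights change from iteration to iteration, the agent cannot rely on surface statistics of the observations and is pushed toward features that survive the perturbation.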

Similar Papers

Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control
Tsui-Wei Weng, Krishnamurthy (Dj) Dvijotham, Jonathan Uesato, Kai Xiao, Sven Gowal, Robert Stanforth, Pushmeet Kohli
Learning Efficient Parameter Server Synchronization Policies for Distributed SGD
Rong Zhu, Sheng Yang, Andreas Pfadler, Zhengping Qian, Jingren Zhou
Composing Task-Agnostic Policies with Deep Reinforcement Learning
Ahmed H. Qureshi, Jacob J. Johnson, Yuzhe Qin, Taylor Henderson, Byron Boots, Michael C. Yip