Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies

Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Honglak Lee

Keywords: meta reinforcement learning, reinforcement learning

Mon Session 1 (05:00-07:00 GMT)
Mon Session 3 (12:00-14:00 GMT)

Abstract: We propose and address a novel few-shot RL problem in which a task is characterized by a subtask graph that describes a set of subtasks and their dependencies, both unknown to the agent. The agent must quickly adapt to the task over a few episodes during the adaptation phase to maximize the return in the test phase. Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference (MSGI), which infers the latent parameter of the task by interacting with the environment and maximizes the return given the inferred latent parameter. To facilitate learning, we adopt an intrinsic reward inspired by the upper confidence bound (UCB) that encourages efficient exploration. Our experimental results on two grid-world domains and StarCraft II environments show that the proposed method accurately infers the latent task parameter and adapts more efficiently than existing meta-RL and hierarchical RL methods.
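To give a rough sense of the UCB-inspired intrinsic reward mentioned in the abstract, the sketch below assigns a larger exploration bonus to subtasks that have been executed less often, nudging the agent toward subtasks whose dependencies are still uncertain. This is a minimal count-based sketch under our own assumptions; the names (ucb_bonus, visit_counts, the constant c) are hypothetical and not taken from the paper.

    # Hypothetical sketch of a UCB-style exploration bonus, not the
    # authors' implementation.
    import math

    def ucb_bonus(visit_counts, total_steps, c=1.0):
        """Return a UCB-style exploration bonus per subtask.

        Subtasks executed fewer times get a larger bonus, encouraging
        the agent to try them and reveal more of the latent subtask
        graph during the adaptation phase.
        """
        return [
            c * math.sqrt(math.log(total_steps + 1) / (n + 1))
            for n in visit_counts
        ]

    # Example: after 12 steps, with three subtasks tried 0, 2, and 10
    # times, the untried subtask receives the largest bonus.
    print(ucb_bonus([0, 2, 10], total_steps=12))

In this formulation the bonus decays as a subtask's visit count grows, so exploration naturally shifts toward under-sampled subtasks early in adaptation and fades as the inferred graph stabilizes.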

Similar Papers

Meta-Q-Learning
Rasool Fakoor, Pratik Chaudhari, Stefano Soatto, Alexander J. Smola
Meta-Learning without Memorization
Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, Chelsea Finn
Improving Generalization in Meta Reinforcement Learning using Learned Objectives
Louis Kirsch, Sjoerd van Steenkiste, Juergen Schmidhuber
Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks
Hae Beom Lee, Hayeon Lee, Donghyun Na, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang