Learning to Coordinate Manipulation Skills via Skill Behavior Diversification

Youngwoon Lee, Jingyun Yang, Joseph J. Lim

Keywords: hierarchical reinforcement learning, reinforcement learning

Wed Session 1 (05:00-07:00 GMT)
Wed Session 5 (20:00-22:00 GMT)

Abstract: When mastering a complex manipulation task, humans often decompose the task into sub-skills of their body parts, practice the sub-skills independently, and then execute the sub-skills together. Similarly, a robot with multiple end-effectors can perform complex tasks by coordinating sub-skills of each end-effector. To realize temporal and behavioral coordination of skills, we propose a modular framework that first individually trains sub-skills of each end-effector with skill behavior diversification, and then learns to coordinate end-effectors using diverse behaviors of the skills. We demonstrate that our proposed framework is able to efficiently coordinate skills to solve challenging collaborative control tasks such as picking up a long bar, placing a block inside a container while pushing the container with two robot arms, and pushing a box with two ant agents. Videos and code are available at https://clvrai.com/coordination
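The two-stage framework in the abstract can be sketched in code: low-level skills for each end-effector are conditioned on a behavior latent z (the "skill behavior diversification" stage), and a meta-level policy then coordinates end-effectors by choosing, at each step, which skill each end-effector runs and with which behavior. All class and variable names below (`Skill`, `MetaPolicy`, the effector names, the placeholder control law) are illustrative assumptions, not the authors' implementation.

```python
import random

class Skill:
    """A low-level skill for one end-effector; the behavior latent z
    selects one of the skill's diverse behaviors (hypothetical sketch)."""
    def __init__(self, name):
        self.name = name

    def act(self, obs, z):
        # Placeholder control law: the behavior latent z biases the action.
        return [o * 0.1 + z for o in obs]

class MetaPolicy:
    """Coordinates end-effectors by picking, per effector, which skill
    to execute and which behavior latent z to use."""
    def __init__(self, skills_per_effector, num_z=4):
        self.skills = skills_per_effector  # {effector: [Skill, ...]}
        self.num_z = num_z                 # size of the behavior latent space

    def act(self, obs):
        actions = {}
        for effector, skills in self.skills.items():
            skill = random.choice(skills)     # stand-in for a learned skill choice
            z = random.randrange(self.num_z)  # stand-in for a learned behavior choice
            actions[effector] = skill.act(obs, z)
        return actions

# Example: two robot arms, each with its own independently trained skills,
# as in the bar-pickup and pick-and-push tasks mentioned in the abstract.
skills = {
    "right_arm": [Skill("pick"), Skill("push")],
    "left_arm": [Skill("pick"), Skill("place")],
}
meta = MetaPolicy(skills)
actions = meta.act([0.5, -0.2])
print(sorted(actions))  # → ['left_arm', 'right_arm']
```

In the paper's setup the meta policy is trained with reinforcement learning after the skills are frozen; the random choices above merely stand in for that learned selection.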

Similar Papers

Sub-policy Adaptation for Hierarchical Reinforcement Learning
Alexander Li, Carlos Florensa, Ignasi Clavera, Pieter Abbeel
Option Discovery using Deep Skill Chaining
Akhil Bagaria, George Konidaris
Mask Based Unsupervised Content Transfer
Ron Mokady, Sagie Benaim, Lior Wolf, Amit Bermano
Once for All: Train One Network and Specialize it for Efficient Deployment
Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, Song Han