MetaPix: Few-Shot Video Retargeting

Jessica Lee, Deva Ramanan, Rohit Girdhar

Keywords: adversarial, few-shot learning, GAN, generative models, meta-learning, unsupervised

Tues Session 1 (05:00-07:00 GMT)
Tues Session 5 (20:00-22:00 GMT)

Abstract: We address the task of unsupervised retargeting of human actions from one video to another. We consider the challenging setting where only a few frames of the target are available. The core of our approach is a conditional generative model that can transcode input skeletal poses (automatically extracted with an off-the-shelf pose estimator) into output target frames. However, it is challenging to build a universal transcoder because humans can appear wildly different due to clothing and background scene geometry. Instead, we learn to adapt, or personalize, a universal generator to the particular human and background in the target. To do so, we use meta-learning to discover effective strategies for on-the-fly personalization. One significant benefit of meta-learning is that the personalized transcoder naturally enforces temporal coherence across its generated frames: all frames share the consistent clothing and background geometry of the target. We experiment on in-the-wild internet videos and images, and show that our approach improves over widely used baselines for the task.
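To make the meta-learned personalization concrete, below is a minimal sketch of the general recipe the abstract describes: meta-train a pose-to-frame generator so that a few gradient steps on a handful of (pose, frame) pairs from a new target adapt it to that target. The sketch uses a Reptile-style first-order update, a common meta-learning strategy; the Generator architecture, the L1 reconstruction loss (the real model is adversarial), and all hyperparameters here are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch: meta-learning an initialization for few-shot personalization
# of a conditional pose-to-frame generator. All names and numbers are
# hypothetical placeholders, not the paper's exact setup.
import copy
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy stand-in for a conditional transcoder from pose maps to frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, pose):
        # `pose` is a skeletal pose rendered as a 3-channel image.
        return self.net(pose)

def personalize(meta_model, poses, frames, steps=5, lr=1e-3):
    """Adapt a copy of the meta-initialization to one target's few frames."""
    model = copy.deepcopy(meta_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # stand-in for the full adversarial objective
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(poses), frames).backward()
        opt.step()
    return model

meta_model = Generator()
meta_lr = 0.1
for it in range(100):  # each iteration samples one "task" (one target video)
    poses = torch.randn(4, 3, 64, 64)   # few pose maps from that video
    frames = torch.randn(4, 3, 64, 64)  # their corresponding target frames
    adapted = personalize(meta_model, poses, frames)
    # Reptile outer update: move meta-weights toward the adapted weights.
    with torch.no_grad():
        for p_meta, p_ad in zip(meta_model.parameters(), adapted.parameters()):
            p_meta += meta_lr * (p_ad - p_meta)
```

At test time, the same `personalize` routine is all that is needed: a few frames of a new target yield a generator specialized to that person's clothing and background, which is what keeps the generated frames temporally coherent.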
