Vid2Game: Controllable Characters Extracted from Real-World Videos

Oran Gafni, Lior Wolf, Yaniv Taigman

Abstract: We extract a controllable model from a video of a person performing a certain activity. The model generates novel image sequences of that person, according to user-defined control signals that typically mark the displacement of the moving body. The generated video can have an arbitrary background, and it effectively captures both the dynamics and the appearance of the person. The method is based on two networks. The first maps the current pose and a single-instance control signal to the next pose. The second maps the current pose, the new pose, and a given background to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes.
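
The abstract describes a two-stage pipeline: a pose-to-pose network driven by a user control signal, followed by a pose-to-frame network that composites the character onto a given background. The sketch below is a minimal, illustrative PyTorch rendering of that pipeline under stated assumptions; the class names, layer choices, pose representation, and tensor sizes are all assumptions for illustration and not the paper's actual architecture.

# Minimal sketch of the two-network pipeline from the abstract. Assumptions throughout:
# module names, layer sizes, and the keypoint/pose-map representation are illustrative only.
import torch
import torch.nn as nn


class Pose2PoseNet(nn.Module):
    """Maps the current pose and a single-instance control signal to the next pose."""

    def __init__(self, pose_dim=34, ctrl_dim=2, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + ctrl_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, pose, control):
        # Condition the next-pose prediction on the user-defined displacement signal.
        return self.net(torch.cat([pose, control], dim=-1))


class Pose2FrameNet(nn.Module):
    """Maps the current pose, the new pose, and a background image to an output frame."""

    def __init__(self, pose_channels=2, bg_channels=3):
        super().__init__()
        in_ch = 2 * pose_channels + bg_channels  # two pose maps + RGB background
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, pose_map, next_pose_map, background):
        return self.net(torch.cat([pose_map, next_pose_map, background], dim=1))


if __name__ == "__main__":
    pose2pose, pose2frame = Pose2PoseNet(), Pose2FrameNet()

    pose = torch.zeros(1, 34)                 # 17 keypoints x (x, y), illustrative
    control = torch.tensor([[1.0, 0.0]])      # desired displacement of the moving body
    next_pose = pose2pose(pose, control)

    # Rendering keypoints into spatial pose maps is omitted here; random maps stand in.
    pose_map = torch.rand(1, 2, 128, 128)
    next_pose_map = torch.rand(1, 2, 128, 128)
    background = torch.rand(1, 3, 128, 128)   # arbitrary user-chosen background

    frame = pose2frame(pose_map, next_pose_map, background)
    print(frame.shape)                        # torch.Size([1, 3, 128, 128])

In an autoregressive rollout, the predicted pose at each step would become the current pose for the next step, so the character keeps moving under the user's control signal while the frame network renders each step onto the chosen background.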

Similar Papers

MetaPix: Few-Shot Video Retargeting
Jessica Lee, Deva Ramanan, Rohit Girdhar
Deep Orientation Uncertainty Learning based on a Bingham Loss
Igor Gilitschenski, Roshni Sahoo, Wilko Schwarting, Alexander Amini, Sertac Karaman, Daniela Rus
Scaling Autoregressive Video Models
Dirk Weissenborn, Oscar Täckström, Jakob Uszkoreit