CoPhy: Counterfactual Learning of Physical Dynamics

Fabien Baradel, Natalia Neverova, Julien Mille, Greg Mori, Christian Wolf

Keywords: capacity, reasoning, video prediction

Wed Session 2 (08:00-10:00 GMT)
Wed Session 3 (12:00-14:00 GMT)
Wednesday: Actions and Counterfactuals

Abstract: Understanding causes and effects in mechanical systems is an essential component of reasoning in the physical world. This work poses a new problem of counterfactual learning of object mechanics from visual input. We develop the CoPhy benchmark to assess the capacity of state-of-the-art models for causal physical reasoning in a synthetic 3D environment and propose a model for learning the physical dynamics in a counterfactual setting. Having observed a mechanical experiment that involves, for example, a falling tower of blocks, a set of bouncing balls or colliding objects, we learn to predict how its outcome is affected by an arbitrary intervention on its initial conditions, such as displacing one of the objects in the scene. The alternative future is predicted given the altered past and a latent representation of the confounders, learned by the model end-to-end without supervision. We compare against feedforward video prediction baselines and show how observing alternative experiences allows the network to capture latent physical properties of the environment, which results in significantly more accurate predictions at the level of super-human performance.
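
The abstract describes a two-part setup: a latent confounder representation is inferred from the observed experiment, and a dynamics model then rolls out the alternative future from the intervened initial state conditioned on that latent. Below is a minimal sketch of this structure, not the authors' implementation; all module names, tensor shapes, and the flattened object-position state layout are illustrative assumptions.

```python
# Sketch of a counterfactual dynamics setup: encode the observed experiment
# into a latent confounder estimate, then predict the alternative future
# from the altered ("do-intervention") initial state conditioned on it.
# Shapes and module choices are assumptions for illustration only.

import torch
import torch.nn as nn


class ConfounderEncoder(nn.Module):
    """Summarises the observed trajectory into a latent confounder vector."""
    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.rnn = nn.GRU(state_dim, latent_dim, batch_first=True)

    def forward(self, observed_traj):            # (B, T, state_dim)
        _, h = self.rnn(observed_traj)
        return h[-1]                              # (B, latent_dim)


class CounterfactualDynamics(nn.Module):
    """Rolls out the alternative future given the altered initial state."""
    def __init__(self, state_dim: int, latent_dim: int, hidden: int = 128):
        super().__init__()
        self.step = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, altered_state, z, horizon: int):
        preds, state = [], altered_state          # (B, state_dim)
        for _ in range(horizon):
            delta = self.step(torch.cat([state, z], dim=-1))
            state = state + delta                 # residual state update
            preds.append(state)
        return torch.stack(preds, dim=1)          # (B, horizon, state_dim)


if __name__ == "__main__":
    B, T, N, D, Z = 4, 10, 3, 3, 16               # batch, steps, objects, dims, latent
    state_dim = N * D
    enc = ConfounderEncoder(state_dim, Z)
    dyn = CounterfactualDynamics(state_dim, Z)

    observed = torch.randn(B, T, state_dim)       # original experiment (A -> B)
    altered_init = torch.randn(B, state_dim)      # intervened initial state (C)

    z = enc(observed)                             # latent confounders, no supervision
    future = dyn(altered_init, z, horizon=T)      # predicted counterfactual outcome (D)
    print(future.shape)                           # torch.Size([4, 10, 9])
```

In this sketch the confounder latent is trained end-to-end through the prediction loss on the counterfactual outcome, mirroring the unsupervised learning of confounders described in the abstract.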

Similar Papers

Structured Object-Aware Physics Prediction for Video Modeling and Planning
Jannik Kossen, Karl Stelzner, Marcel Hussing, Claas Voelcker, Kristian Kersting
VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation
Manoj Kumar, Mohammad Babaeizadeh, Dumitru Erhan, Chelsea Finn, Sergey Levine, Laurent Dinh, Durk Kingma
Learning to Control PDEs with Differentiable Physics
Philipp Holl, Nils Thuerey, Vladlen Koltun