DeepV2D: Video to Depth with Differentiable Structure from Motion

Zachary Teed, Jia Deng


Mon Session 1 (05:00-07:00 GMT)
Mon Session 2 (08:00-10:00 GMT)

Abstract: We propose DeepV2D, an end-to-end deep learning architecture for predicting depth from video. DeepV2D combines the representational power of neural networks with the geometric principles governing image formation. It composes a collection of classical geometric algorithms, converted into trainable modules and combined into a single end-to-end differentiable architecture. DeepV2D interleaves two stages, motion estimation and depth estimation: during inference the two are alternated, with each improving the other until the process converges to an accurate depth estimate.
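The alternation described in the abstract can be pictured as a simple fixed-point loop. Below is a minimal sketch of that inference loop; the MotionModule and DepthModule classes, their interfaces, and the fixed iteration count are illustrative assumptions, not the paper's actual API (in DeepV2D proper, both stages are trainable networks built from differentiable geometric algorithms).

# Minimal sketch of DeepV2D-style alternating inference.
# All names and interfaces here are hypothetical placeholders.

import numpy as np

class MotionModule:
    """Placeholder: refines camera poses given the current depth
    (in the real system, a differentiable geometric update)."""
    def __call__(self, frames, depth, poses):
        return poses  # stub: would return updated poses

class DepthModule:
    """Placeholder: predicts depth from frames and current pose
    estimates (in the real system, a trainable network)."""
    def __call__(self, frames, poses):
        return np.ones(frames.shape[1:3], dtype=np.float32)  # stub

def deepv2d_inference(frames, num_iters=5):
    """Alternate motion and depth estimation for a fixed number of
    iterations; frames has shape (num_frames, H, W, 3)."""
    motion_module, depth_module = MotionModule(), DepthModule()
    n = frames.shape[0]
    poses = np.tile(np.eye(4, dtype=np.float32), (n, 1, 1))  # identity init
    depth = np.ones(frames.shape[1:3], dtype=np.float32)     # flat-depth init
    for _ in range(num_iters):
        poses = motion_module(frames, depth, poses)  # better depth -> better motion
        depth = depth_module(frames, poses)          # better motion -> better depth
    return depth, poses

# Usage: depth, poses = deepv2d_inference(np.zeros((5, 240, 320, 3), np.float32))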

Similar Papers

Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving
Yurong You, Yan Wang, Wei-Lun Chao, Divyansh Garg, Geoff Pleiss, Bharath Hariharan, Mark Campbell, Kilian Q. Weinberger

Semantically-Guided Representation Learning for Self-Supervised Monocular Depth
Vitor Guizilini, Rui Hou, Jie Li, Rares Ambrus, Adrien Gaidon

RNA Secondary Structure Prediction By Learning Unrolled Algorithms
Xinshi Chen, Yu Li, Ramzan Umarov, Xin Gao, Le Song