Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN)

Peter Sorrenson, Carsten Rother, Ullrich Köthe

Keywords: disentanglement, nonlinear ica, representation learning

Tuesday Session 2 (08:00-10:00 GMT)
Tuesday Session 3 (12:00-14:00 GMT)
Tuesday: Representation Learning

Abstract: A central question of representation learning asks under which conditions it is possible to reconstruct the true latent variables of an arbitrarily complex generative process. Recent breakthrough work by Khemakhem et al. (2019) on nonlinear ICA has answered this question for a broad class of conditional generative processes. We extend this important result in a direction relevant for application to real-world data. First, we generalize the theory to the case of unknown intrinsic problem dimension and prove that in some special (but not very restrictive) cases, informative latent variables will be automatically separated from noise by an estimating model. Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation. Second, we introduce a modification of the RealNVP invertible neural network architecture (Dinh et al., 2016) which is particularly suitable for this type of problem: the General Incompressible-flow Network (GIN). Experiments on artificial data and EMNIST demonstrate that the theoretical predictions are verified in practice. In particular, we provide a detailed set of exactly 22 informative latent variables extracted from EMNIST.
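
To illustrate the architectural idea named in the abstract, here is a minimal sketch of a volume-preserving ("incompressible") affine coupling block in PyTorch. It follows the RealNVP coupling structure but constrains the predicted log-scales to sum to zero so the Jacobian determinant is exactly 1. The class name GINCouplingBlock, the hidden size, and the mean-subtraction trick used to enforce the constraint are illustrative assumptions for this sketch and may differ from the authors' released implementation.

```python
import torch
import torch.nn as nn

class GINCouplingBlock(nn.Module):
    """Sketch of a volume-preserving affine coupling block.

    Like a RealNVP coupling layer, but the predicted log-scales are
    constrained to sum to zero, so the Jacobian determinant is 1.
    Details are assumptions and may differ from the authors' code.
    """

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.d = dim // 2                    # size of the conditioning half
        out = dim - self.d                   # size of the transformed half
        self.net = nn.Sequential(            # predicts log-scales s and shifts t
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * out),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = s - s.mean(dim=1, keepdim=True)  # enforce sum(s) = 0 -> det J = 1
        y2 = x2 * torch.exp(s) + t
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = s - s.mean(dim=1, keepdim=True)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=1)
```

Because every such block has unit Jacobian determinant, a stack of them preserves volume overall, which is the property the "incompressible-flow" name refers to.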

Similar Papers

Controlling generative models with continuous factors of variations
Antoine Plumerault, Hervé Le Borgne, Céline Hudelot

Mixed-curvature Variational Autoencoders
Ondrej Skopek, Octavian-Eugen Ganea, Gary Bécigneul

Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control
Nir Levine, Yinlam Chow, Rui Shu, Ang Li, Mohammad Ghavamzadeh, Hung Bui