Smoothness and Stability in GANs

Casey Chu, Kentaro Minami, Kenji Fukumizu

Keywords: adversarial, gan, gradient descent, stability

Thursday Session 1 (05:00-07:00 GMT)
Thursday Session 2 (08:00-10:00 GMT)

Abstract: Generative adversarial networks, or GANs, commonly display unstable behavior during training. In this work, we develop a principled theoretical framework for understanding the stability of various types of GANs. In particular, we derive conditions that guarantee eventual stationarity of the generator when it is trained with gradient descent, conditions that must be satisfied by the divergence that is minimized by the GAN as well as by the generator's architecture. We find that existing GAN variants satisfy some, but not all, of these conditions. Using tools from convex analysis, optimal transport, and reproducing kernels, we construct a GAN that fulfills these conditions simultaneously. In the process, we explain and clarify the need for various existing GAN stabilization techniques, including Lipschitz constraints, gradient penalties, and smooth activation functions.
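
The abstract names Lipschitz constraints, gradient penalties, and smooth activation functions as stabilization techniques. As a rough illustration only, and not the paper's specific construction, the PyTorch-style sketch below shows a discriminator built with a smooth softplus activation and a WGAN-GP-style gradient penalty that softly enforces a Lipschitz constraint; the class and function names are hypothetical.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        # Softplus is smooth everywhere, unlike ReLU, which is only
        # piecewise smooth; smoothness of the critic is one of the
        # ingredients the abstract points to.
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def gradient_penalty(disc, real, fake):
    # Penalize deviations of the discriminator's gradient norm from 1,
    # evaluated at random interpolates between real and fake samples
    # (WGAN-GP style); a common way to impose a Lipschitz constraint.
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = disc(interp)
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=interp, create_graph=True)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

In practice such a penalty term is added, with some weight, to the discriminator loss at each training step; the paper analyzes when techniques of this kind yield eventual stationarity of the generator under gradient descent.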

Similar Papers

Real or Not Real, that is the Question
Yuanbo Xiangli, Yubin Deng, Bo Dai, Chen Change Loy, Dahua Lin
Consistency Regularization for Generative Adversarial Networks
Han Zhang, Zizhao Zhang, Augustus Odena, Honglak Lee
A Closer Look at the Optimization Landscapes of Generative Adversarial Networks
Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, Simon Lacoste-Julien