On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach

Yuanhao Wang, Guodong Zhang, Jimmy Ba

Keywords: adversarial, gan, gradient descent, momentum, optimization

Mon Session 1 (05:00-07:00 GMT)
Mon Session 3 (12:00-14:00 GMT)

Abstract: Many tasks in modern machine learning can be formulated as finding equilibria in sequential games. In particular, two-player zero-sum sequential games, also known as minimax optimization, have received growing interest. It is tempting to apply gradient descent to solve minimax optimization given its popularity and success in supervised learning. However, it has been noted that naive application of gradient descent fails to find some local minimax points and can converge to points that are not local minimax. In this paper, we propose Follow-the-Ridge (FR), a novel algorithm that provably converges to, and only converges to, local minimax points. We show theoretically that the algorithm addresses the notorious rotational behaviour of gradient dynamics, and that it is compatible with preconditioning and positive momentum. Empirically, FR solves toy minimax problems and improves the convergence of GAN training compared to recent minimax optimization algorithms.
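As the name suggests, the idea is to keep the follower on the "ridge" of its local best response: after the leader x takes a gradient step, the follower y takes its usual ascent step plus a Hessian-based correction that compensates for the leader's move. The sketch below is a minimal JAX illustration of that update on a toy quadratic; the objective, step sizes, and dimensions are illustrative assumptions, and the update rule is our reading of the method rather than a verbatim reproduction of the paper's algorithm.

```python
import jax
import jax.numpy as jnp

def f(x, y):
    # Toy quadratic minimax objective min_x max_y f(x, y)
    # (an illustrative choice, not from the paper). The strong
    # coupling term makes plain gradient descent-ascent spiral.
    return x[0] ** 2 - y[0] ** 2 + 3.0 * x[0] * y[0]

grad_x = jax.grad(f, argnums=0)
grad_y = jax.grad(f, argnums=1)
hess_yy = jax.hessian(f, argnums=1)                        # Hessian block w.r.t. y, y
hess_yx = jax.jacobian(jax.grad(f, argnums=1), argnums=0)  # mixed block w.r.t. y, x

def fr_step(x, y, eta_x=0.05, eta_y=0.05):
    gx, gy = grad_x(x, y), grad_y(x, y)
    Hyy, Hyx = hess_yy(x, y), hess_yx(x, y)
    # Ridge-following correction: when y sits on the local best response
    # r(x) with dr/dx = -Hyy^{-1} Hyx, moving x by -eta_x * gx should move
    # y by eta_x * Hyy^{-1} Hyx gx to stay on the ridge.
    correction = eta_x * jnp.linalg.solve(Hyy, Hyx @ gx)
    x_new = x - eta_x * gx
    y_new = y + eta_y * gy + correction
    return x_new, y_new

x, y = jnp.array([1.0]), jnp.array([1.0])
for t in range(200):
    x, y = fr_step(x, y)
print(x, y)  # both coordinates approach 0, the local minimax of this toy problem
```

On this toy objective, plain gradient descent-ascent rotates around the equilibrium because of the coupling term, whereas the correction term cancels that rotational component; this is the behaviour the abstract refers to. In the quadratic case the Hessian blocks are constant, so a single linear solve per step suffices; for high-dimensional problems such as GANs one would approximate the solve, e.g. with Hessian-vector products and conjugate gradient.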
