Gradient Descent GAN Optimization is Locally Stable
Vaishnavh Nagarajan, J. Zico Kolter
NIPS 2017

Abstract

Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the "gradient descent" form of GAN optimization, i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still locally asymptotically stable for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN (WGAN) can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which is able to guarantee local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse.
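To make the gradient-descent form of the optimization concrete, the simultaneous updates can be written as follows. This is a minimal sketch consistent with the paper's setting; the notation V(\theta_D, \theta_G) for the minimax objective, the step size \alpha, and the regularization weight \eta are notational choices not fixed by the abstract itself.

    % Simultaneous gradient steps: the discriminator ascends and the
    % generator descends on the same iterates (no inner loop).
    \theta_D^{(t+1)} = \theta_D^{(t)} + \alpha \, \nabla_{\theta_D} V\bigl(\theta_D^{(t)}, \theta_G^{(t)}\bigr),
    \qquad
    \theta_G^{(t+1)} = \theta_G^{(t)} - \alpha \, \nabla_{\theta_G} V\bigl(\theta_D^{(t)}, \theta_G^{(t)}\bigr).

    % The proposed regularizer augments the generator's objective with a
    % penalty on the norm of the discriminator's gradient (weight \eta > 0):
    \theta_G^{(t+1)} = \theta_G^{(t)} - \alpha \, \nabla_{\theta_G}
        \Bigl( V\bigl(\theta_D^{(t)}, \theta_G^{(t)}\bigr)
             + \eta \, \bigl\| \nabla_{\theta_D} V\bigl(\theta_D^{(t)}, \theta_G^{(t)}\bigr) \bigr\|^2 \Bigr).

Intuitively, the added penalty also drives the generator toward points where the discriminator's gradient vanishes, which is what yields the local stability guarantee near equilibrium.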