Extreme Classification via Adversarial Softmax Approximation

Robert Bamler, Stephan Mandt

Keywords: adversarial, regression

Thu Session 1 (05:00-07:00 GMT)
Thu Session 4 (17:00-19:00 GMT)

Abstract: Training a classifier over a large number of classes, known as 'extreme classification', has become a topic of major interest with applications in technology, science, and e-commerce. Traditional softmax regression induces a gradient cost proportional to the number of classes C, which is often prohibitively expensive. A popular scalable softmax approximation relies on uniform negative sampling, which suffers from slow convergence due to a poor signal-to-noise ratio. In this paper, we propose a simple training method for drastically enhancing the gradient signal by drawing negative samples from an adversarial model that mimics the data distribution. Our contributions are three-fold: (i) an adversarial sampling mechanism that produces negative samples at a cost only logarithmic in C, thus still resulting in cheap gradient updates; (ii) a mathematical proof that this adversarial sampling minimizes the gradient variance while any bias due to non-uniform sampling can be removed; (iii) experimental results on large-scale datasets that show a reduction of the training time by an order of magnitude relative to several competitive baselines.
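To make the abstract concrete, below is a minimal sketch (not the authors' code) of softmax training with negative samples drawn from a non-uniform proposal distribution, illustrating why per-sample cost can be logarithmic in C and where a bias correction enters. All names (e.g. `sample_negatives`, the unigram-style proposal `q`) are illustrative assumptions; the paper's adversarial proposal model and exact debiasing scheme are described in the full text.

```python
# Sketch: sampled softmax with a non-uniform (data-mimicking) proposal.
import numpy as np

rng = np.random.default_rng(0)
C, D = 10_000, 32                        # number of classes, embedding dimension
W = 0.01 * rng.standard_normal((C, D))   # class embeddings (classifier weights)

# A fixed, data-dependent proposal over classes (here just random weights as a
# stand-in). Sampling from it via binary search over the CDF costs O(log C)
# per negative sample, matching the logarithmic cost noted in the abstract.
q = rng.random(C); q /= q.sum()
cdf = np.cumsum(q)

def sample_negatives(k):
    """Draw k class indices from the proposal q in O(k log C)."""
    return np.searchsorted(cdf, rng.random(k))

def sampled_logits(x, pos, k=5):
    """Logits for one positive class and k sampled negatives.

    Subtracting log q(c) from each sampled logit is the standard
    sampled-softmax correction for a non-uniform proposal; the paper's
    own bias-removal argument is more specific than this sketch.
    """
    neg = sample_negatives(k)
    classes = np.concatenate(([pos], neg))
    logits = W[classes] @ x - np.log(q[classes])
    return classes, logits

x = rng.standard_normal(D)               # a feature vector
classes, logits = sampled_logits(x, pos=42)
# Cross-entropy over the (1 + k) sampled classes; the true class is index 0.
loss = -logits[0] + np.log(np.sum(np.exp(logits)))
print(loss)
```

The key design point this sketch illustrates: the gradient update touches only 1 + k rows of W instead of all C, and a proposal that mimics the data distribution yields harder negatives (better gradient signal-to-noise) than uniform sampling, provided the log q(c) correction keeps the objective unbiased.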

Similar Papers

Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation
Xinjie Fan, Yizhe Zhang, Zhendong Wang, Mingyuan Zhou
Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter
BayesOpt Adversarial Attack
Binxin Ru, Adam Cobb, Arno Blaas, Yarin Gal
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft