Reducing Transformer Depth on Demand with Structured Dropout

Angela Fan, Edouard Grave, Armand Joulin

Keywords: dropout, language modeling, machine translation, nlp, overfitting, pruning, question answering, regularization, transformer

Thursday Session 3 (12:00-14:00 GMT)
Thursday Session 4 (17:00-19:00 GMT)

Abstract: Overparameterized transformer networks have obtained state-of-the-art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality than models trained from scratch or obtained through distillation.
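For concreteness, the sketch below illustrates the core idea described in the abstract: during training, whole transformer layers are dropped at random (structured dropout), and at inference a shallower sub-network is selected by keeping only a subset of the trained layers, with no finetuning. This is a minimal PyTorch illustration under stated assumptions; the class name, the hyperparameters drop_prob and keep_every, and the every-other-layer pruning pattern are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class LayerDropEncoder(nn.Module):
    """Minimal sketch of layer-level structured dropout (LayerDrop-style).

    Training: each layer is skipped with probability `drop_prob`.
    Inference: keeping only every `keep_every`-th layer yields a
    shallower sub-network of the same trained model.
    """

    def __init__(self, num_layers=12, d_model=512, nhead=8, drop_prob=0.2):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_layers)]
        )
        self.drop_prob = drop_prob

    def forward(self, x, keep_every=1):
        for i, layer in enumerate(self.layers):
            if self.training:
                # Structured dropout: skip the entire layer with prob. drop_prob.
                if torch.rand(1).item() < self.drop_prob:
                    continue
            else:
                # Pruning at inference: keep only every `keep_every`-th layer.
                if i % keep_every != 0:
                    continue
            x = layer(x)
        return x


if __name__ == "__main__":
    model = LayerDropEncoder()
    tokens = torch.randn(2, 16, 512)          # (batch, sequence, d_model)
    model.train()
    out_train = model(tokens)                  # stochastic depth during training
    model.eval()
    out_pruned = model(tokens, keep_every=2)   # 6-layer sub-network at inference
    print(out_train.shape, out_pruned.shape)
```

Because dropping layers during training exposes the network to many shallower configurations, the pruned sub-network at inference behaves reasonably without any additional finetuning, which is the property the abstract highlights.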

Similar Papers

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
Junjie LIU, Zhe XU, Runbin SHI, Ray C. C. Cheung, Hayden K. H. So
Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, Cho-Jui Hsieh
A Signal Propagation Perspective for Pruning Neural Networks at Initialization
Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, Philip H. S. Torr