Understanding and Robustifying Differentiable Architecture Search

Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, Frank Hutter

Keywords: attention, automl, computer vision, generalization, neural architecture search, nlp, optimization, regression, regularization

Thursday Session 1 (05:00-07:00 GMT)
Thursday Session 5 (20:00-22:00 GMT)
Thursday: Network Architectures

Abstract: Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs, achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem. However, DARTS does not work robustly for new problems: we identify a wide range of search spaces for which DARTS yields degenerate architectures with very poor test performance. We study this failure mode and show that, while DARTS successfully minimizes validation loss, the found solutions generalize poorly when they coincide with high validation loss curvature in the architecture space. We show that by adding one of various types of regularization, we can robustify DARTS to find solutions with less curvature and better generalization properties. Based on these observations, we propose several simple variations of DARTS that perform substantially more robustly in practice. Our observations are robust across five search spaces on three image classification tasks and also hold for the very different domains of disparity estimation (a dense regression task) and language modelling.
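
For context, the bi-level optimization problem that DARTS approximates (following the original DARTS formulation, which this paper builds on) is

```latex
\min_{\alpha} \; \mathcal{L}_{valid}\bigl(w^{*}(\alpha), \alpha\bigr)
\quad \text{s.t.} \quad
w^{*}(\alpha) = \arg\min_{w} \; \mathcal{L}_{train}(w, \alpha),
```

where the continuous relaxation replaces the categorical choice of an operation on each edge $(i,j)$ of the cell with a softmax-weighted mixture over the candidate operation set $\mathcal{O}$:

```latex
\bar{o}^{(i,j)}(x) \;=\; \sum_{o \in \mathcal{O}}
\frac{\exp\bigl(\alpha_o^{(i,j)}\bigr)}{\sum_{o' \in \mathcal{O}} \exp\bigl(\alpha_{o'}^{(i,j)}\bigr)} \; o(x).
```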
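The curvature diagnostic mentioned in the abstract can be made concrete: the paper tracks the dominant eigenvalue of the Hessian of the validation loss with respect to the architecture parameters $\alpha$. Below is a minimal PyTorch sketch of how such an eigenvalue can be estimated with power iteration over Hessian-vector products. This is an illustrative sketch, not the authors' code; `arch_parameters()` in the usage note is an assumed accessor for the $\alpha$ tensors of a DARTS-style search model.

```python
import torch

def dominant_eigenvalue(val_loss, alphas, n_iters=20):
    """Estimate the largest eigenvalue of the Hessian of the validation
    loss w.r.t. the architecture parameters, via power iteration on
    Hessian-vector products (no explicit Hessian is ever formed)."""
    # Gradients w.r.t. the architecture parameters, kept differentiable
    # so we can backpropagate through them a second time.
    grads = torch.autograd.grad(val_loss, alphas, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])

    # Random unit start vector for power iteration.
    v = torch.randn_like(flat_grad)
    v = v / v.norm()

    eig = 0.0
    for _ in range(n_iters):
        # Hessian-vector product: gradient of (grad . v) w.r.t. alpha.
        hv = torch.autograd.grad(flat_grad @ v, alphas, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        eig = torch.dot(v, hv).item()   # Rayleigh quotient v^T H v
        v = hv / (hv.norm() + 1e-12)
    return eig

# Hypothetical usage (arch_parameters() is an assumed accessor):
# val_loss = criterion(model(x_valid), y_valid)
# lam_max = dominant_eigenvalue(val_loss, list(model.arch_parameters()))
```

In the paper, a sharp rise of this eigenvalue during search signals the degenerate regime, which motivates the proposed early-stopping and regularization variants of DARTS.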
