Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning

Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, Dmitry Vetrov

Keywords: ensemble learning, ensembles, uncertainty, uncertainty estimation

Wed Session 3 (12:00-14:00 GMT)
Wed Session 4 (17:00-19:00 GMT)

Abstract: Uncertainty estimation and ensembling methods go hand in hand. Uncertainty estimation is one of the main benchmarks for assessing ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight into this study, we introduce the deep ensemble equivalent score (DEE) and show that, in terms of test performance, many sophisticated ensembling techniques are equivalent to an ensemble of only a few independently trained networks.
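To make the DEE score concrete, below is a minimal sketch of how such a score could be computed from evaluation results: given the test score of an ensembling technique and the score of a deep-ensemble baseline as a function of the number of independently trained networks, DEE is the (interpolated) baseline ensemble size that matches the technique's score. The function name, the choice of calibrated log-likelihood as the performance measure, and the clipping and piecewise-linear interpolation details are illustrative assumptions, not the paper's reference implementation.

import numpy as np

def deep_ensemble_equivalent(cll_method, de_sizes, de_cll):
    """Sketch of a deep ensemble equivalent (DEE) score (hypothetical helper).

    cll_method : calibrated log-likelihood (CLL) of the technique under evaluation.
    de_sizes   : ensemble sizes of the deep-ensemble baseline, e.g. [1, 2, ..., N].
    de_cll     : mean baseline CLL at each size (assumed non-decreasing in size).
    """
    de_sizes = np.asarray(de_sizes, dtype=float)
    de_cll = np.asarray(de_cll, dtype=float)
    # Clip to the evaluated range: a technique no better than a single
    # network gets DEE = 1; one exceeding the largest baseline gets DEE = N.
    if cll_method <= de_cll[0]:
        return 1.0
    if cll_method >= de_cll[-1]:
        return float(de_sizes[-1])
    # Piecewise-linear interpolation of the inverse of the baseline curve.
    return float(np.interp(cll_method, de_cll, de_sizes))

For example, with a hypothetical baseline de_cll = [-0.95, -0.88, -0.85, -0.83, -0.82] at sizes 1 through 5, a technique scoring cll_method = -0.86 would receive a DEE between 2 and 3, i.e., it is worth roughly two to three independently trained networks. Note that np.interp requires de_cll to be increasing, which holds whenever adding baseline members monotonically improves the score.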

Similar Papers

Conservative Uncertainty Estimation By Fitting Prior Networks
Kamil Ciosek, Vincent Fortuin, Ryota Tomioka, Katja Hofmann, Richard Turner
Ensemble Distribution Distillation
Andrey Malinin, Bruno Mlodozeniec, Mark Gales
Deep Orientation Uncertainty Learning based on a Bingham Loss
Igor Gilitschenski, Roshni Sahoo, Wilko Schwarting, Alexander Amini, Sertac Karaman, Daniela Rus