Fair Resource Allocation in Federated Learning

Tian Li, Maziar Sanjabi, Ahmad Beirami, Virginia Smith

Keywords: distributed, distributed optimization, fairness, federated learning, optimization

Thurs Session 4 (17:00-19:00 GMT)
Thurs Session 5 (20:00-22:00 GMT)

Abstract: Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair (specifically, a more uniform) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a communication-efficient method, q-FedAvg, that is suited to federated networks. We validate both the effectiveness of q-FFL and the efficiency of q-FedAvg on a suite of federated datasets with both convex and non-convex models, and show that q-FFL (along with q-FedAvg) outperforms existing baselines in terms of the resulting fairness, flexibility, and efficiency.
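As a concrete illustration, below is a minimal sketch of the q-FFL objective, which per the published paper takes the form f_q(w) = sum_k p_k * F_k(w)^(q+1) / (q+1), where F_k is device k's local loss, p_k its aggregation weight, and q >= 0 tunes the fairness/performance trade-off. The function and variable names (q_ffl_objective, local_losses, weights) are illustrative and not taken from the authors' code.

import numpy as np

def q_ffl_objective(local_losses, weights, q):
    """q-FFL objective: sum_k p_k * F_k(w)^(q+1) / (q+1).

    With q = 0 this reduces to the usual sample-weighted average loss
    (the FedAvg objective); larger q inflates the contribution of
    devices with higher loss, encouraging a more uniform accuracy
    distribution across devices.
    """
    F = np.asarray(local_losses, dtype=float)
    p = np.asarray(weights, dtype=float)
    return float(np.sum(p * F ** (q + 1) / (q + 1)))

# Two devices with unequal local losses: raising q increases the
# relative weight of the worse-off device in the objective.
print(q_ffl_objective([0.2, 1.0], [0.5, 0.5], q=0))  # 0.6, plain weighted average
print(q_ffl_objective([0.2, 1.0], [0.5, 0.5], q=2))  # ~0.168, dominated by the 1.0 loss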
