Federated Learning for All: A Reinforcement Learning-Based Approach for Ensuring Fairness in Client Selection

Bibliographic Details
Main Authors: Saeedeh Ghazi, Saeed Farzi, Amirhossein Nikoofard
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11072670/
Description
Summary: In federated learning, selecting participating devices (clients) is critical due to their inherent diversity. Clients typically hold non-IID data and possess varying computational and communication capabilities, introducing heterogeneity that can impact overall system performance. Ignoring this heterogeneity may leave valuable resources underutilized. This paper introduces FairFedDRL, a fair client selection approach based on Double Deep Q-learning, designed to give all clients equitable opportunities to participate and thereby improve both fairness and performance. The method integrates data heterogeneity, system heterogeneity, and selection history into the reward function, and leverages Jain's fairness index to balance selection frequency with client diversity, so that every client has a fair chance to participate and learning is more balanced and effective. Experiments on three benchmark datasets (MNIST, CIFAR-10, and CINIC-10) use variance, the Gini coefficient, Jain's fairness index, and entropy as fairness metrics, with accuracy measuring model performance. Results show that FairFedDRL improves the Gini coefficient and Jain's fairness index by 44.28% and 21.91% on MNIST, by 23.44% and 7.42% on CIFAR-10, and by 67.4% and 53.47% on CINIC-10, respectively, alongside noticeable gains in overall model performance.
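The fairness metrics named in the abstract are standard quantities evaluated over how often each client is selected across federated rounds. The sketch below shows their usual definitions applied to per-client selection counts; it is an illustrative example only, and the function names and toy counts are assumptions, not taken from the paper or its implementation.

import numpy as np

def jain_fairness_index(counts):
    # Jain's fairness index: (sum x)^2 / (n * sum x^2).
    # Equals 1 when all clients are selected equally often,
    # and approaches 1/n when one client dominates.
    counts = np.asarray(counts, dtype=float)
    return counts.sum() ** 2 / (len(counts) * np.sum(counts ** 2))

def gini_coefficient(counts):
    # Gini coefficient of the selection-count distribution
    # (0 = perfectly even participation, 1 = maximally unequal).
    counts = np.sort(np.asarray(counts, dtype=float))
    n = len(counts)
    cum = np.cumsum(counts)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def selection_entropy(counts):
    # Shannon entropy of the normalized selection frequencies;
    # maximal (log n) when selection is uniform across clients.
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Toy example: selection counts for 5 clients after some rounds.
counts = [12, 10, 11, 9, 8]
print(jain_fairness_index(counts))   # ~0.98, close to 1 -> even participation
print(gini_coefficient(counts))      # ~0.08, close to 0 -> low inequality
print(np.var(counts))                # variance of the counts
print(selection_entropy(counts))     # near log(5) for near-uniform selection

Higher Jain's index and entropy, and lower Gini coefficient and variance, all indicate a more even spread of participation across clients, which is the sense in which the abstract reports the fairness improvements.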
ISSN:2169-3536