FWFA: Fairness-Weighted Federated Aggregation for Privacy-Aware Decision Intelligence
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11075742/
Summary: Ensuring fairness in automated decision-making is a critical challenge, especially in organizational contexts like recruitment, performance evaluation, and promotion. As machine learning (ML) and artificial intelligence (AI) increasingly influence such decisions, promoting responsible AI that minimizes bias while preserving data privacy has become essential. However, existing fairness-aware models are often centralized or ill-equipped to handle non-IID data, limiting their real-world applicability. This study introduces a novel federated learning framework, Fairness-Weighted Federated Aggregation (FWFA), which integrates fairness-aware weighting into the model aggregation process. Each client’s contribution is scaled using a fairness score computed from three key metrics: Demographic Parity (DP), Statistical Parity Difference (SPD), and Disparate Impact Ratio (DIR). A synthetically generated dataset simulating diverse employee profiles across five professional domains was used to replicate real-world heterogeneity and imbalance. Across 20 communication rounds, FWFA achieved a DP of 0.91, an SPD of 0.06, and a DIR of 0.91, outperforming the baseline methods WA+FL and SMOTE+FL while maintaining an accuracy of 0.84. Additionally, a dynamic weighting mechanism was simulated by varying fairness thresholds to explore adaptive aggregation behavior, revealing a controllable trade-off between fairness and model performance. To further strengthen privacy guarantees, differential privacy was integrated into the FWFA framework, resulting in minimal performance degradation while retaining key fairness properties. These findings reinforce FWFA’s role as a robust, privacy-preserving solution for fair collaborative decision-making in federated environments, supporting the broader vision of ethical and trustworthy AI in real-world systems.
ISSN: 2169-3536
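
The summary above describes FWFA's mechanism only at a high level: per-client fairness scores derived from SPD and DIR (with DP commonly reported as the same ratio as DIR), fairness-scaled aggregation weights, a dynamic fairness threshold, and a differential-privacy step. The Python sketch below shows one plausible reading of that pipeline; the scoring formula, the threshold behavior, the noise mechanism, and all function names (`fairness_metrics`, `fwfa_aggregate`, `add_dp_noise`) are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch of fairness-weighted aggregation, reconstructed from
# the record's summary. The scoring rule, threshold behavior, and the DP
# noise step are assumptions; the paper's exact formulas are not given here.
import numpy as np

def fairness_metrics(y_pred, sensitive):
    """SPD and DIR for binary predictions and a binary sensitive
    attribute (0 = unprivileged, 1 = privileged). DP is commonly
    reported as the same ratio as DIR."""
    p_unpriv = y_pred[sensitive == 0].mean()
    p_priv = y_pred[sensitive == 1].mean()
    spd = p_unpriv - p_priv                          # ideal value: 0
    dir_ = p_unpriv / p_priv if p_priv > 0 else 0.0  # ideal value: 1
    return spd, dir_

def fairness_score(spd, dir_):
    """Assumed combination rule: map each metric onto [0, 1] with 1 =
    perfectly fair, then average. The paper's actual rule may differ."""
    return 0.5 * (max(0.0, 1.0 - abs(spd)) + max(0.0, 1.0 - abs(1.0 - dir_)))

def add_dp_noise(update, sigma, rng):
    """Stand-in for the differential-privacy step: Gaussian noise on a
    client update (gradient clipping and sigma calibration omitted)."""
    return update + rng.normal(scale=sigma, size=update.shape)

def fwfa_aggregate(client_updates, scores, threshold=0.0):
    """Average client parameter vectors, scaled by fairness scores;
    clients scoring below `threshold` get zero weight, mimicking the
    dynamic fairness-threshold simulation described in the summary."""
    s = np.array([v if v >= threshold else 0.0 for v in scores])
    if s.sum() == 0:                  # no client passes: fall back to FedAvg
        s = np.ones_like(s)
    alphas = s / s.sum()
    return sum(a * u for a, u in zip(alphas, client_updates))

# Toy communication round with three clients.
rng = np.random.default_rng(0)
updates, scores = [], []
for _ in range(3):
    y_pred = rng.integers(0, 2, size=200)   # a client's local predictions
    sens = rng.integers(0, 2, size=200)     # sensitive attribute values
    scores.append(fairness_score(*fairness_metrics(y_pred, sens)))
    updates.append(add_dp_noise(rng.normal(size=4), sigma=0.05, rng=rng))
global_update = fwfa_aggregate(updates, scores, threshold=0.5)
print(np.round(global_update, 3), np.round(scores, 3))
```

The `threshold` parameter here is a stand-in for the controllable fairness-performance trade-off the summary reports: raising it concentrates weight on fairer clients at some cost to accuracy, consistent with the paper's account of trading a modest amount of accuracy (0.84) for the quoted DP/SPD/DIR values.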