Federated learning with heterogeneous data and models based on global decision boundary distillation

Bibliographic Details
Main Authors: Kejun Zhang, Jun Wang, Wenbin Wang, Taiheng Zeng, Pengcheng Li, Xunxi Wang, Tingrui Zhang
Format: Article
Language: English
Published: Springer 2025-06-01
Series: Journal of King Saud University: Computer and Information Sciences
Online Access: https://doi.org/10.1007/s44443-025-00097-0
Description
Summary: Data heterogeneity and performance disparities among heterogeneous models are critical challenges in federated learning with heterogeneous data and models: they limit its practical applicability and degrade local model performance. To address these challenges, we propose Federated Learning with Heterogeneous Data and Models Based on Global Decision Boundary Distillation (Fed-GDBD). For data heterogeneity, Fed-GDBD employs local prototype clustering to capture and condense private data distribution information, and incorporates irrelevant-class knowledge distillation during local supervised learning to explicitly model the posterior relationships among classes, thereby mitigating knowledge forgetting in local domains. To address model performance disparities, Fed-GDBD introduces global decision boundary distillation: by maintaining an updated global decision boundary learner on the server, the approach optimizes local models from a global decision boundary perspective and reduces the impact of conflicting information. We provide a theoretical analysis of the convergence rate of Fed-GDBD under non-convex objectives. Extensive experiments on four datasets demonstrate that Fed-GDBD outperforms state-of-the-art federated learning methods across various statistical heterogeneity settings and shows better adaptability to challenging scenarios.
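
This record carries only the abstract, so the mechanisms above are not fully specified. As a rough illustration of the two local-side ideas it names, the hypothetical PyTorch sketch below computes per-class mean-embedding prototypes (one common reading of "local prototype clustering") and a distillation loss restricted to the non-target, i.e. irrelevant, classes. The function names, the temperature T, and the mean-prototype choice are assumptions for illustration, not Fed-GDBD's actual implementation.

import torch
import torch.nn.functional as F

def class_prototypes(embeddings, labels, num_classes):
    # Mean embedding per class: one simple way to condense a client's
    # private data distribution into prototypes (a class absent from
    # the local data gets a zero vector).
    protos = []
    for c in range(num_classes):
        mask = labels == c
        protos.append(embeddings[mask].mean(dim=0) if mask.any()
                      else embeddings.new_zeros(embeddings.size(1)))
    return torch.stack(protos)  # [num_classes, embed_dim]

def irrelevant_class_kd(student_logits, teacher_logits, targets, T=2.0):
    # KL divergence over the non-target ("irrelevant") classes only:
    # the ground-truth column is dropped before the softmax, so the loss
    # transfers the teacher's posterior relations among the wrong classes.
    B, C = student_logits.shape
    keep = ~F.one_hot(targets, C).bool()        # [B, C], False at the target class
    s = student_logits[keep].view(B, C - 1)     # non-target logits only
    t = teacher_logits[keep].view(B, C - 1)
    return F.kl_div(F.log_softmax(s / T, dim=1),
                    F.softmax(t / T, dim=1),
                    reduction="batchmean") * T * T

Under these assumptions, a client would add such a loss, suitably weighted, to its ordinary cross-entropy term during local supervised learning; how the server-side global decision boundary learner is built is not derivable from the abstract alone.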
ISSN: 1319-1578, 2213-1248