Joint multi-dimensional resource optimization for model compression in wireless federated learning


Bibliographic Details
Main Authors: ZHU Guangzhao, ZHU Xiaorong, XU Ding
Format: Article
Language: Chinese
Published: China InfoCom Media Group 2025-06-01
Series: 物联网学报 (Chinese Journal on Internet of Things)
Subjects:
Online Access:http://www.wlwxb.com.cn/zh/article/doi/10.11959/j.issn.2096-3750.2025.00391/
Description
Summary: In edge computing scenarios, resource constraints and the dynamic participation of terminal devices in the network cause high latency and high energy consumption in federated learning. An efficient and environmentally friendly federated learning algorithm based on a three-tier cloud-edge-terminal architecture was proposed. Firstly, model compression techniques were introduced into the three-tier federated learning structure, and a theoretical analysis of the model convergence rate, training latency, and energy consumption was conducted. Subsequently, based on this analysis, a problem was formulated to minimize the global model training latency and energy consumption under a given model convergence rate by jointly optimizing the terminal devices' transmission power, computing power, and model compression rate. Finally, the problem was decomposed into three sub-optimization problems that are solved alternately, yielding a joint alternating optimization algorithm that obtains the optimal solution to the original problem. Experimental results demonstrate that the proposed algorithm scales to large edge computing scenarios: while maintaining the model convergence rate, it reduces latency by 71.54% and energy consumption by 48.76% compared with traditional three-tier federated learning algorithms, effectively lowering the latency and energy consumption of global model training.
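The abstract's core idea, decomposing a joint latency-and-energy minimization over transmission power, computing power, and model compression rate into three subproblems solved alternately, can be sketched as block coordinate descent. The cost model below (the `latency`, `energy`, and `cost` functions, and the round count `1/r`) is an illustrative stand-in, not the paper's actual formulation; the paper derives these terms from its convergence analysis and solves each subproblem analytically rather than by grid search.

```python
import math

def latency(p, f, r):
    """Toy per-round latency: upload time plus local computation time."""
    comm = r / math.log2(1 + p)   # upload time ~ compressed bits / channel rate
    comp = 1.0 / f                # local computation time falls with CPU frequency
    return comm + comp

def energy(p, f, r):
    """Toy per-round energy: transmit energy plus CPU energy."""
    comm = p * r / math.log2(1 + p)  # transmit power * upload time
    comp = 1e-2 * f ** 2             # CPU energy grows with frequency squared
    return comm + comp

def cost(p, f, r, w=0.5):
    """Weighted latency/energy objective; stronger compression (small r)
    cheapens each round but, in this toy model, requires more rounds."""
    rounds = 1.0 / r
    return rounds * (w * latency(p, f, r) + (1 - w) * energy(p, f, r))

def line_search(obj, lo, hi, steps=200):
    """Solve a 1-D subproblem by grid search (stand-in for the paper's
    closed-form / convex subproblem solvers)."""
    best_x, best_v = lo, obj(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = obj(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

def alternating_optimize(p=1.0, f=1.0, r=0.5, iters=20):
    """Cycle over the three variable blocks until the joint cost settles."""
    for _ in range(iters):
        p = line_search(lambda x: cost(x, f, r), 0.1, 10.0)   # subproblem 1: transmit power
        f = line_search(lambda x: cost(p, x, r), 0.1, 10.0)   # subproblem 2: CPU frequency
        r = line_search(lambda x: cost(p, f, x), 0.05, 1.0)   # subproblem 3: compression rate
    return p, f, r, cost(p, f, r)

p, f, r, c = alternating_optimize()
```

Each pass can only lower (or keep) the objective, since every block update minimizes the same joint cost with the other blocks fixed, which is why alternating schemes of this kind converge to a stationary point of the original problem.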
ISSN:2096-3750