Edge caching strategy based on multi-agent deep reinforcement learning in cloud-edge-end scenarios

Bibliographic Details
Main Authors: WANG Haiyan, CHANG Bo, LUO Jian
Format: Article
Language: Chinese
Published: Editorial Department of Journal on Communications 2025-06-01
Series: Tongxin xuebao
Online Access: http://www.joconline.com.cn/zh/article/doi/10.11959/j.issn.1000-436x.2025108/
Description
Summary: In cloud-edge-end scenarios, edge caching technology aims to promote collaborative content distribution among edge nodes, thereby alleviating the traffic load on backhaul links and enhancing service quality. Considering the dynamic changes in content popularity, a temporal convolutional network based content request state prediction (TCNCRSP) model was proposed to predict content popularity. On this basis, aiming to maximize the cumulative reward, an edge caching strategy based on multi-agent deep reinforcement learning was proposed. This strategy was implemented using a long short-term memory (LSTM) network in the cloud to perform dimensionality reduction on the state data of each edge node, generating a low-dimensional global state and thereby reducing the communication cost required for state sharing. Experimental results show that the proposed methods significantly improve the cache hit rate and service quality while reducing system overhead.
ISSN: 1000-436X
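
The summary above outlines three components: a TCN-based model (TCNCRSP) that predicts content popularity from request histories, an edge caching strategy learned with multi-agent deep reinforcement learning, and a cloud-side LSTM that compresses each edge node's state into a low-dimensional global state to cut state-sharing traffic. The PyTorch code below is a minimal, hypothetical sketch of the first and third ideas only; every class name, layer size, and dimension (NUM_CONTENTS, HISTORY_LEN, NUM_EDGES, STATE_DIM, GLOBAL_DIM) is an assumption made for demonstration and is not taken from the paper, which should be consulted for the actual models and algorithm.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CONTENTS = 50   # number of cacheable contents (assumed)
HISTORY_LEN = 24    # length of the request-history window (assumed)
NUM_EDGES = 4       # number of edge nodes (assumed)
STATE_DIM = 32      # per-edge-node state dimension (assumed)
GLOBAL_DIM = 16     # compressed global-state dimension (assumed)


class CausalConv1d(nn.Module):
    """1-D convolution padded on the left only, so each output step depends
    solely on current and past inputs (the building block of a TCN)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):
        return self.conv(F.pad(x, (self.left_pad, 0)))


class TCNPopularityPredictor(nn.Module):
    """Maps a window of past per-content request counts to a predicted
    popularity distribution for the next time slot."""
    def __init__(self, n_contents, channels=64):
        super().__init__()
        self.tcn = nn.Sequential(
            CausalConv1d(n_contents, channels, kernel_size=3, dilation=1), nn.ReLU(),
            CausalConv1d(channels, channels, kernel_size=3, dilation=2), nn.ReLU(),
            CausalConv1d(channels, n_contents, kernel_size=1),
        )

    def forward(self, requests):
        # requests: (batch, n_contents, history_len) past request counts
        out = self.tcn(requests)                     # causal conv keeps the temporal length
        return torch.softmax(out[:, :, -1], dim=-1)  # popularity for the next slot


class GlobalStateEncoder(nn.Module):
    """Compresses the per-edge-node state vectors into one low-dimensional
    global state, so less data has to be shared among the caching agents."""
    def __init__(self, state_dim, global_dim):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, global_dim, batch_first=True)

    def forward(self, node_states):
        # node_states: (batch, num_edges, state_dim), one vector per edge node
        _, (h_n, _) = self.lstm(node_states)
        return h_n.squeeze(0)                        # (batch, global_dim)


if __name__ == "__main__":
    requests = torch.rand(1, NUM_CONTENTS, HISTORY_LEN)        # recent request counts
    popularity = TCNPopularityPredictor(NUM_CONTENTS)(requests)
    node_states = torch.rand(1, NUM_EDGES, STATE_DIM)          # states of each edge node
    global_state = GlobalStateEncoder(STATE_DIM, GLOBAL_DIM)(node_states)
    print(popularity.shape, global_state.shape)                # torch.Size([1, 50]) torch.Size([1, 16])

In the setting described by the summary, the compressed global state produced in the cloud would be shared with the edge caching agents in place of each node's full state, which is how the communication cost of state sharing is reduced.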