Optimization of real-time transmission and coding algorithm for high quality film and television content based on 6G wireless communication technology

Bibliographic Details
Main Authors: Jiabin Fu, Shu Zhang
Format: Article
Language: English
Published: Elsevier, 2025-09-01
Series: Egyptian Informatics Journal
Online Access: http://www.sciencedirect.com/science/article/pii/S1110866525001380
Description
Summary: Although existing video coding standards, such as H.264/AVC, H.265/HEVC, and H.266/VVC, have made progress in compression efficiency, they still suffer from limitations such as high computational complexity and poor adaptability to different content types. Among AI-based coding methods, there remains a lack of systematic research on fully leveraging the potential of 6G networks to achieve real-time transmission of high-quality film and television content. In this paper, we propose an innovative video coding framework that achieves efficient, adaptive video transmission by combining traditional video coding techniques with deep learning models. The core of the framework lies in the use of a convolutional neural network (CNN) to enhance motion estimation accuracy and an adaptive loop filter to optimize the residual information. In the motion estimation stage, the CNN-based model generates a high-precision motion vector field and is trained by minimizing the mean square error between the predicted and true motion vectors. Meanwhile, a multi-layer coding technique is introduced to adapt to varying network conditions: each layer represents a different bit rate and quality level, enabling the end device to select the appropriate decoding layer according to the current network state. In addition, the adaptive loop filter dynamically adjusts its parameters according to the video content to reduce compression artifacts while preserving image detail. To evaluate the performance of the framework, we conduct experiments on multiple publicly available datasets, including Vid4, UHD-TEST, and the HEVC standard test sequences. The experimental results show that our framework significantly outperforms the conventional H.264/AVC, H.265/HEVC, VP9, and next-generation VVC coding standards, as well as the deep learning-based DVC and DLVC methods, in objective metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM).
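The multi-layer selection rule summarized above (each coded layer carries a bit rate/quality level, and the end device decodes the highest layer its current bandwidth can sustain) can be sketched as follows. This is a minimal illustration; the layer names and bit-rate values are hypothetical, not figures from the paper.

```python
# Sketch of scalable layer selection: decode the highest-quality layer
# whose bit rate fits the currently measured available bandwidth.
# Layer names and bitrates below are illustrative assumptions.

LAYERS = [
    {"name": "base",  "bitrate_kbps": 250},   # always decodable fallback
    {"name": "enh-1", "bitrate_kbps": 500},
    {"name": "enh-2", "bitrate_kbps": 1000},
    {"name": "enh-3", "bitrate_kbps": 2000},
]

def select_layer(available_kbps: float) -> dict:
    """Return the highest-quality layer whose bit rate fits the bandwidth."""
    chosen = LAYERS[0]  # the base layer is the minimum guarantee
    for layer in LAYERS:  # layers are ordered from lowest to highest quality
        if layer["bitrate_kbps"] <= available_kbps:
            chosen = layer
    return chosen

print(select_layer(800)["name"])  # at 800 kbps only base and enh-1 fit
```

In a real deployment the bandwidth estimate would come from the 6G link's feedback channel; here it is simply a function argument.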
In particular, our framework achieves a PSNR of 42.5 dB and an SSIM value of 0.98 at a bit rate of 1000 kbps. Our framework also performs well in 6G wireless communication environments in terms of transmission delay and packet loss, with an average transmission delay of 100 ms and a packet loss rate of 1.5%.
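For reference, the PSNR metric reported above, together with the mean-square-error loss the abstract says is used to train the motion model, can be computed as in this minimal pure-Python sketch; the sample pixel values are illustrative, not data from the paper.

```python
import math

def mse(a, b):
    """Mean squared error -- the training loss cited for the CNN motion model."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit samples by default."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical signals: distortion-free
    return 10.0 * math.log10(max_val ** 2 / err)

# Hypothetical reference vs. reconstructed pixel values.
ref = [52, 55, 61, 59]
rec = [50, 55, 60, 59]
print(round(psnr(ref, rec), 2))
```

In practice PSNR is computed per frame over full luma/chroma planes and averaged over a sequence; the scalar form above is enough to show the relationship between the MSE training objective and the reported quality metric.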
ISSN: 1110-8665