Deep Learning-Based Layout Analysis Method for Complex Layout Image Elements

Bibliographic Details
Main Authors: Yunfei Zhong, Yumei Pu, Xiaoxuan Li, Wenxuan Zhou, Hongjian He, Yuyang Chen, Lang Zhong, Danfei Liu
Format: Article
Language: English
Published: MDPI AG 2025-07-01
Series: Applied Sciences
Online Access:https://www.mdpi.com/2076-3417/15/14/7797
Description
Summary: The layout analysis of elements is indispensable in graphic design: an effective layout not only facilitates the delivery of visual information but also enhances the overall aesthetic appeal to the audience. Combining deep learning with graphic design has become a popular research direction in recent years. However, even in an era of rapid progress in artificial intelligence, layout analysis still requires manual participation. To address this problem, this paper proposes a method for analyzing the layout of complex layout image elements based on an improved DeepLabv3+ model. The method reduces the number of model parameters and the training time by replacing the backbone network. To improve multi-scale semantic feature extraction, the dilation (atrous) rates of the ASPP module are fine-tuned, and the model is trained on a self-constructed movie poster dataset. The experimental results show that the improved DeepLabv3+ model achieves a better segmentation effect on the self-constructed poster dataset, with MIoU reaching 75.60%. Compared with classical models such as FCN, PSPNet, and DeepLabv3, the improved model effectively reduces the number of model parameters and the training time while maintaining accuracy.
ISSN:2076-3417
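
The abstract reports mean Intersection-over-Union (MIoU, 75.60%) as its headline segmentation metric. As a rough illustration of how that metric is computed (a generic sketch, not code from the paper; the layout class ids and label arrays below are hypothetical):

```python
# Sketch of mean Intersection-over-Union (MIoU) over flat label arrays.
# Class ids are illustrative (e.g. 0 = background, 1 = text, 2 = image),
# not taken from the paper's dataset.

def mean_iou(pred, gt, num_classes):
    """Average per-class IoU; classes absent from both pred and gt are skipped."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # only score classes that actually appear
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 8-pixel example with 3 layout classes
pred = [0, 0, 1, 1, 2, 2, 0, 1]
gt   = [0, 0, 1, 2, 2, 2, 0, 0]
print(round(mean_iou(pred, gt, 3), 4))  # per-class IoUs 3/4, 1/3, 2/3 -> 0.5833
```

In practice the same computation is run over every pixel of every test image, usually via a confusion matrix rather than per-pixel loops.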