MFF: A Deep Learning Model for Multi-Modal Image Fusion Based on Multiple Filters
Multi-modal image fusion refers to fusing the features of two or more different images captured over the same field of view to increase the amount of information contained in a single image. This study proposes a multi-modal image fusion deep network called the MFF network. Compared with traditiona...
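Since the abstract is truncated in this record, the sketch below is offered only as general orientation: it illustrates plain filter-based fusion of two co-registered grayscale images (Gaussian base/detail decomposition, averaged base layers, max-absolute detail selection). It is not the MFF network described in the article; the function name `fuse_two_scale`, the two-scale scheme, and the synthetic inputs are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def fuse_two_scale(img_a: np.ndarray, img_b: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Fuse two co-registered grayscale images of the same shape.

    Each image is split by a Gaussian low-pass filter into a smooth base
    layer and a detail layer. The base layers are averaged, and for the
    detail layers the pixel with the larger absolute response is kept.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)

    # Low-pass (base) and residual (detail) layers for each modality.
    base_a, base_b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
    detail_a, detail_b = a - base_a, b - base_b

    # Merge: average the coarse structure, keep the stronger fine detail.
    fused_base = 0.5 * (base_a + base_b)
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)
    return fused_base + fused_detail


if __name__ == "__main__":
    # Synthetic arrays standing in for, e.g., an infrared/visible pair.
    rng = np.random.default_rng(0)
    ir = rng.random((128, 128))
    vis = rng.random((128, 128))
    fused = fuse_two_scale(ir, vis)
    print(fused.shape)  # (128, 128)
```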
Main Authors: Yuequn Wang, Zhengwei Li, Jianli Wang, Leqiang Yang, Bo Dong, Hanfu Zhang, Jie Liu
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10877823/
Similar Items
- LGFusion: Frequency-Aware Dual-Branch Integration Network for Infrared and Visible Image Fusion
  by: Ruizhe Shang, et al.
  Published: (2025-01-01)
- Infrared and Visible Image Fusion via Residual Interactive Transformer and Cross-Attention Fusion
  by: Liquan Zhao, et al.
  Published: (2025-07-01)
- Infrared and visible image fusion based on multi-scale transform and sparse low-rank representation
  by: Yangkun Zou, et al.
  Published: (2025-07-01)
- Local Information-Driven Hierarchical Fusion of SAR and Visible Images via Refined Modal Salient Features
  by: Yunzhong Yan, et al.
  Published: (2025-07-01)
- LightMFF: A Simple and Efficient Ultra-Lightweight Multi-Focus Image Fusion Network
  by: Xinzhe Xie, et al.
  Published: (2025-07-01)