Multimodal Rumor Detection by Online Balance Multimodal Representation Learning
Multimodal approaches have been theoretically and empirically shown to outperform unimodal methods. Paradoxically, leading unimodal architectures sometimes surpass multimodal systems trained in a joint framework. Previous studies have shown that this counterintuitive outcome stems from the disparate...
| Main Authors: | Jianing Ren, Tingting Zhong |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11072677/ |
Similar Items
- Multi-level fusion with fine-grained alignment for multimodal sentiment analysis
  by: Xiaoge Li, et al.
  Published: (2025-06-01)
- Leveraging Bird Eye View Video and Multimodal Large Language Models for Real-Time Intersection Control and Reasoning
  by: Sari Masri, et al.
  Published: (2025-05-01)
- Fine-tuning or prompting on LLMs: evaluating knowledge graph construction task
  by: Hussam Ghanem, et al.
  Published: (2025-06-01)
- Low-Rank Adaptation of Pre-Trained Large Vision Models for Improved Lung Nodule Malignancy Classification
  by: Benjamin P. Veasey, et al.
  Published: (2025-01-01)
- MPVT: An Efficient Multi-Modal Prompt Vision Tracker for Visual Target Tracking
  by: Jianyu Xie, et al.
  Published: (2025-07-01)