When Pixels Speak Louder: Unravelling the Synergy of Text–Image Integration in Multimodal Review Helpfulness

Bibliographic Details
Main Authors: Chao Ma, Chen Yang, Ying Yu
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Journal of Theoretical and Applied Electronic Commerce Research
Online Access: https://www.mdpi.com/0718-1876/20/2/144
Description
Summary: Images carry rich visual semantic information, and consumers reading multimodal online reviews tend to attend to images first. Research on review helpfulness on e-commerce platforms has focused mainly on text, offering little insight into the product attributes conveyed by review images or into the relationship between images and text. Studying this image–text relationship in online reviews can better explain consumer behavior and help consumers make purchasing decisions. Taking multimodal online review data from shopping platforms as the research object, this study proposes a research framework based on the Cognitive Theory of Multimedia Learning (CTML). It uses pre-trained models such as BLIP2, together with machine learning methods, to construct metrics, and conducts a fuzzy-set qualitative comparative analysis (fsQCA) to explore the configurational effects of the antecedent variables of multimodal online reviews on review helpfulness. The study identifies five configurational paths that lead to high review helpfulness, and specific review cases are used to examine how these configurations contribute to perceived helpfulness, providing a new perspective for future research on multimodal online reviews. Based on the findings, targeted recommendations are made for platform operators and merchants, offering theoretical support for platforms to fully leverage the potential value of user-generated content.
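The abstract names fsQCA as the analysis method but does not give the paper's calibration anchors or measures. As a minimal sketch of two standard fsQCA building blocks, direct calibration of a raw metric into fuzzy-set membership and the set-theoretic consistency of a sufficiency relation, with purely hypothetical threshold values:

```python
import math

def calibrate(x, full_non, crossover, full_mem):
    """Ragin-style direct calibration: map a raw score x to a fuzzy-set
    membership in (0, 1) using a logistic transform of log-odds anchored
    at three qualitative thresholds (full non-membership, crossover,
    full membership)."""
    # Anchor the full-membership threshold at log-odds +3 (membership ~0.95)
    # and the full-non-membership threshold at log-odds -3 (~0.05).
    if x >= crossover:
        scalar = 3.0 / (full_mem - crossover)
    else:
        scalar = 3.0 / (crossover - full_non)
    log_odds = (x - crossover) * scalar
    return 1.0 / (1.0 + math.exp(-log_odds))

def consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome' over cases:
    sum(min(x_i, y_i)) / sum(x_i). Values near 1 indicate that condition
    memberships are (nearly) subsets of outcome memberships."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Hypothetical example: calibrate a raw "image-text relevance" score with
# anchors 1 (fully out), 5 (crossover), 9 (fully in).
membership = calibrate(7.0, 1.0, 5.0, 9.0)
```

Configurational paths such as those reported in the study are typically retained only when their consistency exceeds a threshold (commonly 0.8); the paper's actual antecedent conditions and cutoffs would replace the placeholder values above.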
ISSN:0718-1876