Understanding dimensions of trust in AI through quantitative cognition: Implications for human-AI collaboration.

Bibliographic Details
Main Authors: Weizheng Jiang, Dongqin Li, Chun Liu
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2025-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0326558
Description
Summary: Human-AI collaborative innovation relies on effective and clearly defined role allocation, yet empirical research in this area remains limited. To address this gap, we construct a cognitive-taxonomy framework of trust in AI to describe and explain its interactive mechanisms in human-AI collaboration, specifically its complementary and inhibitive effects. We examine the alignment between trust in AI and different cognitive levels, identifying key drivers that facilitate both lower-order and higher-order cognition through AI. Furthermore, by analyzing the interactive effects of multidimensional trust in AI, we explore its complementary and inhibitive influences. We collected data from finance and business administration interns using surveys and the After-Action Review method, and analyzed the data with a gradient descent algorithm. The findings reveal a dual effect of trust in AI on cognition: while functional and emotional trust enhance higher-order cognition, the transparency dimension of cognitive trust inhibits cognitive processes. These insights provide a theoretical foundation for understanding trust in AI in human-AI collaboration and offer practical guidance for university-industry partnerships and knowledge innovation.
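To make the analysis method concrete, the sketch below shows gradient descent fitting a linear model with an interaction term between two trust dimensions, which is one plausible reading of "interactive effects" estimated via gradient descent. This is a minimal illustration on synthetic data; the variable names (functional/emotional trust, cognition score), the model form, and all numbers are assumptions, not the authors' specification.

```python
import numpy as np

# Hypothetical sketch: gradient descent on a regression with an interaction
# term between two trust dimensions. Not the paper's actual model or data.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))                      # columns: functional trust, emotional trust
inter = (X[:, 0] * X[:, 1])[:, None]             # interaction between the two dimensions
design = np.hstack([np.ones((n, 1)), X, inter])  # intercept + main effects + interaction
true_w = np.array([0.5, 0.8, 0.6, -0.3])         # arbitrary coefficients for the simulation
y = design @ true_w + rng.normal(scale=0.1, size=n)  # synthetic cognition scores

w = np.zeros(design.shape[1])
lr = 0.05
for _ in range(2000):
    grad = design.T @ (design @ w - y) / n       # gradient of mean squared error
    w -= lr * grad                               # gradient descent update

print(np.round(w, 3))  # estimates close to true_w, including the interaction weight
```

A negative estimated interaction weight in such a model would correspond to an inhibitive joint effect of the two dimensions, while a positive weight would correspond to a complementary one.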
ISSN:1932-6203