Understanding dimensions of trust in AI through quantitative cognition: Implications for human-AI collaboration.
Human-AI collaborative innovation relies on effective and clearly defined role allocation, yet empirical research in this area remains limited. To address this gap, we construct a cognitive taxonomy of trust in AI framework to describe and explain its interactive mechanisms in human-AI collaboration, s...
Main Authors: Weizheng Jiang, Dongqin Li, Chun Liu
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2025-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0326558
Similar Items
- Understanding Human-AI Interaction in Healthcare: The Mediating Role of Trust and Moderating Influence of Cognitive Load
  by: Isparan Shanthi, et al.
  Published: (2024-11-01)
- How the Human–Artificial Intelligence (AI) Collaboration Affects Cyberloafing: An AI Identity Perspective
  by: Jin-Qian Xu, et al.
  Published: (2025-06-01)
- AI in healthcare: Weighing innovation with trust, ethics, and human touch
  by: Talat Waseem, et al.
  Published: (2025-06-01)
- Who Wants to Try AI? Profiling AI Adopters and AI‐Trusting Publics in South Korea
  by: Hyelim Lee, et al.
  Published: (2025-05-01)
- AI Ethics: Should you trust AI with your medical diagnosis?
  by: Weerawut Rainmanee
  Published: (2025-05-01)