Accuracy of ChatGPT-3.5, ChatGPT-4o, Copilot, Gemini, Claude, and Perplexity in advising on lumbosacral radicular pain against clinical practice guidelines: cross-sectional study
Introduction: Artificial Intelligence (AI) chatbots, which generate human-like responses based on extensive data, are becoming important tools in healthcare, providing information on health conditions, treatments, and preventive measures and acting as virtual assistants. However, their performance in a...
Main Authors: Giacomo Rossettini, Silvia Bargeri, Chad Cook, Stefania Guida, Alvisa Palese, Lia Rodeghiero, Paolo Pillastrini, Andrea Turolla, Greta Castellini, Silvia Gianola
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-06-01
Series: Frontiers in Digital Health
Subjects:
Online Access: https://www.frontiersin.org/articles/10.3389/fdgth.2025.1574287/full
Similar Items
- Politicians vs ChatGPT
  by: Davide Garassino, et al.
  Published: (2024-07-01)
- Recherchieren mit ChatGPT?
  by: Friedrich Quaasdorf
  Published: (2024-12-01)
- Learners’ Acceptance of ChatGPT in School
  by: Matthias Conrad, et al.
  Published: (2025-07-01)
- The Role of ChatGPT in Dermatology Diagnostics
  by: Ziad Khamaysi, et al.
  Published: (2025-06-01)
- Large language models’ performances regarding common patient questions about osteoarthritis: A comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Perplexity
  by: Mingde Cao, et al.
  Published: (2025-12-01)