Comparative Performance of Chatbots in Endodontic Clinical Decision Support: A 4-Day Accuracy and Consistency Study
Main Authors:
Format: Article
Language: English
Published: Elsevier, 2025-10-01
Series: International Dental Journal
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S0020653925002072
Summary: Introduction and Aims: Although artificial intelligence is increasingly prevalent in healthcare settings, concerns remain regarding its reliability and accuracy. This study assessed the overall, difficulty level-specific, and day-to-day accuracy and consistency of 5 AI chatbots (ChatGPT-3.5, ChatGPT-4o, Gemini 2.0 Flash, Copilot, and Copilot Pro) in answering clinically relevant endodontic questions. Methods: Seventy-six correct/incorrect questions were developed by 2 endodontists and categorized by an expert into 3 difficulty levels: Basic (B), Intermediate (I), and Advanced (A). Twenty questions from each difficulty level were then selected from the set of 74 validated questions (B, n = 26; I, n = 24; A, n = 24), resulting in a total of 60 questions. The questions were posed to the chatbots over a period of 4 days, at 3 different times each day (morning, afternoon, and evening). Results: ChatGPT-4o achieved the highest overall accuracy (82.5%) and the best performance in the B-level category (95.0%), whereas Copilot Pro had the lowest overall accuracy (74.03%). Gemini and ChatGPT-3.5 showed similar overall accuracy. Gemini's accuracy improved significantly over time, Copilot Pro's accuracy decreased significantly across days, and no significant change was detected in either ChatGPT model or in Copilot. Across the days, Copilot Pro showed a significant decrease in accuracy in the B-level category, Copilot showed a significant increase in the B- and I-level categories, and Gemini showed a significant increase in the A-level category. Conclusions: ChatGPT-4o demonstrated superior performance, whereas Copilot and Copilot Pro showed insufficient accuracy. ChatGPT-3.5 and Gemini may be acceptable for general queries but require caution in more advanced cases. Clinical Relevance: ChatGPT-4o demonstrated the highest overall accuracy and consistency across all question categories over the 4 days, suggesting its potential as a reliable tool for clinical decision-making.
ISSN: 0020-6539