(NewsNation) — The artificial intelligence (AI) chatbot ChatGPT incorrectly diagnosed more than eight in 10 pediatric cases, according to new research reviewed by The Hill.
The study published this week in JAMA Pediatrics found 83% of diagnoses given by ChatGPT version 3.5 were in error: 72% were incorrect, and 11% were “clinically related but too broad to be considered a diagnosis.”
Researchers entered 100 pediatric case challenges, published in JAMA and the New England Journal of Medicine over the past 10 years, into ChatGPT version 3.5 with the prompt: “List a differential diagnosis and a final diagnosis.” Two physician researchers then scored ChatGPT’s diagnoses as correct, incorrect, or “did not fully capture diagnosis.”
“The chatbot evaluated in this study—unlike physicians—was not able to identify some relationships, such as that between autism and vitamin deficiencies. To improve the generative AI chatbot’s diagnostic accuracy, more selective training is likely required,” the study said.
Despite the high rate of diagnostic errors, the researchers recommended further study of how medical providers might use AI tools.
The Hill contributed to this report.