Objective: To determine whether a Tree-of-Thought prompt and reconsideration of Isabel Pro's differential improve ChatGPT-4's diagnostic accuracy; whether increasing the simulated expert panel size improves accuracy; whether ChatGPT-4 produces consistent outputs across sequential requests; and how frequently it fabricates references.
Materials and methods: The study compared Isabel Pro, a computerized diagnostic decision support system, with ChatGPT-4, a large language model. Using 201 cases from the New England Journal of Medicine, each system produced a differential diagnosis ranked by likelihood. Performance statistics were Mean Reciprocal Rank (MRR), Recall at Rank, Average Rank, Number of Correct Diagnoses, and Rank Improvement. To assess reproducibility, the study compared the initial expert panel run with each subsequent run, using the r-squared value from a scatter plot of the two runs' rankings.
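To make the two headline metrics concrete, the following is an illustrative sketch (not the study's actual code) of how Mean Reciprocal Rank and Recall at rank k are typically computed from the per-case rank of the correct diagnosis; the example ranks are hypothetical.

```python
def mean_reciprocal_rank(ranks):
    """MRR: mean of 1/rank of the correct diagnosis per case.

    ranks: list with the rank of the correct diagnosis for each case,
    or None when the correct diagnosis did not appear in the differential
    (contributing 0 to the mean).
    """
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)

def recall_at_k(ranks, k):
    """Fraction of cases whose correct diagnosis appears within the top k."""
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

# Hypothetical example: four cases with the correct diagnosis at
# ranks 1, 3, and 12, and one case where it was missed entirely.
ranks = [1, 3, 12, None]
print(mean_reciprocal_rank(ranks))  # (1 + 1/3 + 1/12) / 4 ≈ 0.354
print(recall_at_k(ranks, 10))       # 2 of 4 cases in the top 10 -> 0.5
```

Under this convention, a missed diagnosis penalizes both metrics, which is why a system can raise Recall at 10 while still returning fewer correct diagnoses overall.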
Results: ChatGPT-4 improved MRR and Recall at 10 to 0.72 but produced fewer correct diagnoses and a lower average rank than Isabel Pro. Reconsideration of the Isabel Pro differential improved Recall at 10 by 11%. An expert panel size of two produced the best result. The reproducibility runs were within 4% on average for Recall at 10, but the scatter plots showed r-squared values ranging from 0.44 to 0.34, suggesting poor reproducibility. Reference accuracy was 34.8% for citations and 37.8% for DOIs.
Discussion: ChatGPT-4 performs well with images, electrocardiography, and administrative practice management, but its diagnostic performance has not proven as promising.
Conclusions: The results raise concerns about diagnostic accuracy, reproducibility, and reference citation accuracy. Until these issues are resolved, clinical use for diagnosis will be minimal, if adopted at all.