Purpose
Large language models have been assessed for their ability to answer medical questions. DeepSeek, a recently released large language model, has not yet been evaluated for medical accuracy. This is the first study to assess DeepSeek's accuracy in responding to medical questions.
Methods and Materials
We prompted DeepSeek-R1 and several ChatGPT models with 600 multiple-choice questions from national radiation oncology in-service examinations. These questions are used by medical residents in preparation for their certifying board examination and assess knowledge of anatomy, treatment planning, cancer epidemiology, and landmark trials. We recorded each model's accuracy, total prompt and completion tokens used, and total run time. Accuracy was compared across question categories and between models. Type I error was set at 0.05.
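The abstract does not report implementation details, but the workflow described here (prompting each model, then recording accuracy, token usage, and run time) could be approximated with a short script such as the sketch below. It assumes an OpenAI-compatible chat-completions API; the base URL, model identifier, and example question are illustrative placeholders, not details taken from the study.

```python
import time
from openai import OpenAI

# Minimal sketch of the evaluation loop described in Methods.
# The base_url and model name are assumptions for illustration;
# the actual examination questions are not reproduced here.
client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

questions = [
    {"stem": "Example multiple-choice question text ...", "answer": "B"},  # placeholder item
]

correct = 0
prompt_tokens = completion_tokens = 0
start = time.time()

for q in questions:
    resp = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed identifier for DeepSeek-R1
        messages=[{"role": "user",
                   "content": q["stem"] + "\nAnswer with a single letter."}],
    )
    reply = resp.choices[0].message.content.strip()
    correct += reply.startswith(q["answer"])           # crude letter match for scoring
    prompt_tokens += resp.usage.prompt_tokens           # accumulate token usage
    completion_tokens += resp.usage.completion_tokens

elapsed = time.time() - start
print(f"accuracy={correct / len(questions):.1%}, "
      f"tokens={prompt_tokens + completion_tokens}, "
      f"seconds/question={elapsed / len(questions):.1f}")
```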
Results
DeepSeek-R1 answered 84.0% of questions correctly, requiring 59 seconds per question. DeepSeek-R1 demonstrated a significant difference in accuracy across question categories (P = .012) and was least accurate for questions about landmark studies (74.2% accuracy). ChatGPT o1 answered 89.0% of questions correctly, requiring 10 seconds per question. ChatGPT o1's accuracy did not differ significantly across question categories (93.5% accuracy on questions about landmark studies). DeepSeek-R1 used 7.2% more tokens than ChatGPT o1. At February 2025 prices, DeepSeek-R1 cost up to $1.56, compared with $37.96 for ChatGPT o1.
Conclusion
DeepSeek-R1 is less accurate and slower than ChatGPT o1, but was less costly at the time of manuscript preparation. Careful analysis of the current landscape and performance of each model is needed before implementing DeepSeek-R1 or ChatGPT o1, to determine whether the added financial cost of ChatGPT o1 is justified by the intended gains in accuracy and efficiency.
