Objectives
We evaluated the diagnostic performance and resource efficiency of three multimodal reasoning models for radiological image interpretation.
Methods
Using three multimodal reasoning models, we analyzed 73 cases under two input conditions (Imaging-Only and Combined Descriptive Text) with three system prompt types (basic [no system prompt], original [specialized role], and chain-of-thought [CoT]). Quiz cases, along with corresponding human benchmark data, were extracted from the Korean Society of Ultrasound in Medicine website. Diagnostic performance was assessed using multiple-choice question (MCQ) and differential diagnosis (DDx) outputs. Resource utilization was measured as token consumption per case across all scenarios. Pearson correlation coefficients were calculated to evaluate associations between token usage and diagnostic accuracy.
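As a rough illustration of the correlation analysis described above, the following Python sketch shows how a Pearson correlation between per-case token usage and diagnostic correctness could be computed; the variable names and data values are hypothetical and not taken from the study.

```python
# Illustrative sketch only: toy values standing in for per-case study records.
from scipy.stats import pearsonr

# Output tokens consumed per case (hypothetical values)
output_tokens = [512, 798, 430, 655, 910, 377]
# Whether the model's diagnosis was correct for that case (1 = correct, 0 = incorrect)
correct = [1, 0, 1, 1, 0, 1]

# Pearson correlation between token consumption and accuracy for one model
r, p_value = pearsonr(output_tokens, correct)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```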
Results
For imaging-only input under the CoT prompt, o1 achieved the highest MCQ accuracy (56.2%), surpassing the human benchmark (55.9%) and outperforming Claude-3.7-Sonnet (49.3%) and Gemini-2.0-Flash-Thinking-Experimental (37.0%). Adding descriptive-text inputs substantially increased performance across all models, with o1 again achieving the highest accuracy (71.2% with both basic and original prompts). This performance advantage was most pronounced for DDx. Original prompts used fewer output tokens while maintaining comparable accuracy for o1 (Imaging-Only with DDx: original vs. basic and CoT prompts, all p < 0.01). Intra-model analysis revealed a negative correlation between accuracy and output token count for o1 (r = -0.41), whereas inter-model analysis showed strong positive correlations between total token usage and accuracy (r = 0.93 for Imaging-Only with MCQ).
Conclusion
The paradoxical relationship between resource utilization and diagnostic accuracy suggests that model architecture fundamentally determines baseline performance, while prompt optimization influences efficiency within those architectural constraints for multimodal reasoning models.
