Background: Temporomandibular disorders (TMDs) and orofacial pain (OFP) demand advanced diagnostic and clinical reasoning skills in dental education. Traditional simulations with real patients face limitations in availability and standardization. Generative artificial intelligence (GAI), such as ChatGPT-3.5, has emerged as a potential tool for clinical training.
Methods: This blinded, cross-sectional, crossover study involved 30 undergraduate dental students, each completing two simulated cases: one with ChatGPT-3.5 and one with a standardized real patient. Cases were developed and validated by TMD/OFP specialists via the Delphi method and delivered to the AI through structured prompts. Quantitative parameters (number of responses, word count, follow-up questions, reformulations, and diagnostic accuracy) and qualitative aspects (empathy, clarity, and relevance of the information elicited) were analyzed.
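For illustration only, the sketch below shows one way a Delphi-validated case vignette might be delivered to ChatGPT-3.5 as a structured prompt acting as a standardized patient. The case details, prompt wording, and helper function are hypothetical assumptions, not the study's actual protocol; only the model name (gpt-3.5-turbo) and the standard OpenAI chat-completions call are real.

```python
# Hypothetical sketch: presenting a validated TMD/OFP case vignette to
# ChatGPT-3.5 so it role-plays a standardized patient. The vignette and
# instructions below are illustrative, not the study's published prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a standardized patient in a dental clinic.
Stay strictly in character and reveal only what the student asks about.

Case (illustrative vignette):
- Chief complaint: dull pain in front of the right ear for 3 months
- Pain worsens with chewing; morning jaw stiffness
- Clicking on mouth opening; no history of trauma
Never name a diagnosis; answer only as the patient would."""

def ask_patient(history: list[dict], student_question: str) -> str:
    """Send the student's question and return the simulated patient's reply."""
    history.append({"role": "user", "content": student_question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

if __name__ == "__main__":
    dialogue: list[dict] = []
    print(ask_patient(dialogue, "Can you describe where your pain is?"))
```

Keeping the full dialogue history in each request lets the simulated patient answer follow-up questions consistently, which is what makes a single validated vignette reusable across students.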
Results: GAI simulations provided higher information density (231 vs. 167 relevant units; p < 0.001) and clearer reasoning flow. Students interacting with real patients asked more follow-up questions (p = 0.004) and required more reformulations (p = 0.011), indicating more adaptive communication. Diagnostic accuracy did not differ significantly (p > 0.05). Relevant information correlated positively with diagnostic accuracy (r = 0.484; p = 0.007), whereas total word count correlated negatively (r = -0.386; p = 0.035).
Conclusions: ChatGPT-3.5 matched real patient simulations in diagnostic reasoning for TMD/OFP. Combining GAI's scalability and standardization with real patient variability may optimize clinical competency training.