
Latest publications in Dento maxillo facial radiology

Leveraging multimodal large language model chatbots in oral radiology: a comprehensive evaluation using questions from a Korean dental university.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-12-12 · DOI: 10.1093/dmfr/twaf083
Hui Jeong, Kug Jin Jeon, Chena Lee, Yoon Joo Choi, Gyu-Dong Jo, Sang-Sun Han

Objectives: This study aimed to conduct a comprehensive evaluation of general-purpose multimodal large language model (LLM) chatbots in oral radiology.

Methods: Ninety text- and image-based oral radiology questions from a Korean dental university were extracted and categorized into six educational content areas and two question types. ChatGPT-4o and Gemini 2.0 Flash were evaluated on the following items: accuracy, with group differences across the six content areas (using Fisher's exact test with Bonferroni correction, p < 0.0167); answer consistency across ten repeated outputs (evaluated as the mean agreement and Fleiss' kappa coefficient); and hallucination (evaluated as the mean of the 5-point Global Quality Score assigned by two oral radiologists).
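To make the statistical workflow concrete, here is a minimal Python sketch (not the authors' code) of the two analyses named above: Fisher's exact test on a 2 × 2 correct/incorrect table with the Bonferroni-corrected threshold, and Fleiss' kappa across repeated outputs. All counts and answer codes are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical 2 x 2 table: correct vs incorrect answers in two content areas.
table = np.array([[12, 3],    # content area A: 12 correct, 3 incorrect
                  [6, 9]])    # content area B:  6 correct, 9 incorrect
_, p = fisher_exact(table)
alpha = 0.0167                # Bonferroni-corrected threshold reported in the study
print(f"Fisher's exact p = {p:.4f}, significant: {p < alpha}")

# Consistency: answer choices (coded 0-3) from 10 repeated runs on 5 hypothetical questions.
# Rows are questions ("subjects"); columns are repeated outputs ("raters").
answers = np.array([
    [0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
    [2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
    [1, 3, 1, 1, 2, 1, 1, 1, 3, 1],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [3, 3, 3, 3, 1, 3, 3, 3, 3, 3],
])
counts, _ = aggregate_raters(answers)   # subjects x categories count matrix
print(f"Fleiss' kappa = {fleiss_kappa(counts):.3f}")
```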

Results: Multimodal AI chatbots (ChatGPT-4o and Gemini 2.0 Flash) achieved excellent performance on text-based questions, with over 80% accuracy, but showed limited performance on image-based tasks, with accuracy under 30%. In addition, image-based tasks exhibited high response variability, and hallucinations that provided incorrect information were frequently observed. These findings suggest that AI chatbots are not yet suitable for reliable use in oral radiology.

Conclusions: This study provides timely insights into the capabilities and limitations of general-purpose multimodal LLM chatbots in oral radiology and will serve as a foundation for safer and more effective applications of AI chatbots in the oral radiology field in the future.

Advances in knowledge: This is the first study to comprehensively assess multimodal LLM chatbots in oral radiology. It provides key insights into the performance benchmarks for AI chatbots in oral radiology, promoting the responsible and transparent integration of AI into dental education.

Citations: 0
Assessing the accuracy of multiple-choice questions using different Artificial Intelligence-driven tools - An observational study.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-12-05 · DOI: 10.1093/dmfr/twaf085
Mohammed Najmuddin

Objectives: The objective was to assess the accuracy of different AI tools in providing correct responses to multiple-choice questions (MCQs) and the time taken to complete the responses.

Methods: The study included 80 oral radiology MCQs, each with four options and one correct answer, used to assess knowledge and skills across five different domains. The accuracy of ChatGPT, ChatGPT-4o (4o), Microsoft Co-pilot, DeepSeek, Gemini, and Meta AI was assessed and compared using the chi-square test. In addition, one-way ANOVA was used to compare response times between the AI chatbots.
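As an illustration of the analysis described above, the following Python sketch runs a chi-square test on a hypothetical tool-by-accuracy contingency table and a one-way ANOVA on hypothetical response times; the numbers are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency, f_oneway

# Hypothetical correct/incorrect counts out of 80 MCQs for three of the tools.
#                        correct  incorrect
contingency = np.array([[74, 6],    # Microsoft Co-pilot
                        [72, 8],    # ChatGPT-4o
                        [65, 15]])  # Gemini
chi2, p, dof, _ = chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Hypothetical per-question response times (seconds) for each tool.
rng = np.random.default_rng(0)
t_copilot = rng.normal(9.0, 2.0, 80)
t_gpt4o = rng.normal(7.5, 1.8, 80)
t_gemini = rng.normal(8.2, 2.1, 80)
f_stat, p_anova = f_oneway(t_copilot, t_gpt4o, t_gemini)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```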

Results: Overall, Microsoft Co-pilot and ChatGPT-4o had the highest accuracy, and ChatGPT had the fastest response time. Microsoft Co-pilot and DeepSeek had the highest accuracy for knowledge-based and skill-based queries, although the differences were not statistically significant. Across the five domains, Microsoft Co-pilot and ChatGPT-4o were 100% accurate for radiographic safety, and DeepSeek was more accurate for radiographic diagnosis. Students took longer to respond than the combined time taken by the AI chatbots.

Conclusion: Microsoft Co-pilot had the highest overall accuracy, responded more accurately to knowledge-based questions, and was 100% accurate for queries related to radiographic safety. ChatGPT-4o had the second-highest accuracy, and DeepSeek performed better for radiographic diagnosis.

Advances in knowledge: This study is the first to systematically compare the accuracy and response time of multiple AI-driven tools in answering domain-specific MCQs in oral radiology. It highlights significant variability in performance across platforms, offering novel insights into the suitability of AI chatbots for educational use in dentistry.

Citations: 0
Performance and Clinical Applicability of AI Models for Jawbone Lesion Classification: A Systematic Review with Meta-analysis and Introduction of a Clinical Interpretation Score.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-12-04 · DOI: 10.1093/dmfr/twaf086
Jonas Ver Berne, Minh Ton That, Reinhilde Jacobs

Objectives: To evaluate the diagnostic accuracy and generalizability of artificial-intelligence (AI) models for radiographic classification of jawbone cysts and tumours, and to propose a Clinical Interpretation Score (CIS) that rates the transparency and real-world readiness of published AI tools.

Methods: Eligible studies reporting sensitivity and specificity of AI classifiers on panoramic radiographs or cone-beam CT were retrieved. Two reviewers applied JBI risk-of-bias criteria and extracted 2 × 2 tables and relevant metrics. Pooled estimates were calculated with random-effects meta-analysis; heterogeneity was quantified with I².
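For readers unfamiliar with the pooling step, the sketch below shows one common way such estimates are computed: DerSimonian-Laird random-effects pooling of logit-transformed sensitivities, with Cochran's Q and I² for heterogeneity. The study-level values are illustrative placeholders, and the review's own software and exact model may differ.

```python
import numpy as np

sens = np.array([0.62, 0.85, 0.78, 0.95, 0.70])   # hypothetical study sensitivities
n_pos = np.array([40, 55, 30, 80, 25])            # hypothetical diseased cases per study

y = np.log(sens / (1 - sens))                     # logit transform
var = 1.0 / (n_pos * sens * (1 - sens))           # approximate within-study variance
w = 1.0 / var                                     # fixed-effect (inverse-variance) weights

# Cochran's Q and I-squared (heterogeneity)
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird between-study variance and random-effects pooled estimate
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)
w_re = 1.0 / (var + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
pooled_sens = 1.0 / (1.0 + np.exp(-y_re))         # back-transform to a proportion

print(f"I^2 = {I2:.1f}%, tau^2 = {tau2:.3f}, pooled sensitivity ~ {pooled_sens:.2f}")
```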

Results: Nineteen studies were included, predominantly reporting convolutional neural networks. Pooled specificity was consistently high (≥0.90) across lesions, whereas sensitivity ranged widely (0.50-1.00). Stafne bone cavities achieved near-perfect metrics; ameloblastoma and odontogenic keratocyst showed moderate sensitivity (0.62-0.85) but retained high specificity. Cone-beam CT improved sensitivity relative to panoramic imaging. Substantial heterogeneity (I² > 50% in most comparisons) reflected variable prevalence, imaging protocols, and validation strategies.

Conclusions: AI models demonstrate promising diagnostic performance in classifying several jawbone lesions, though their accuracy is influenced by imaging modality, lesion type, and prevalence. Despite encouraging technical results, many studies lack transparent reporting and external validation, limiting their clinical interpretability. The Clinical Interpretation Score (CIS) provides a structured framework to evaluate the methodological transparency and clinical readiness of AI tools, helping to distinguish between technically sound models and those suitable for integration into diagnostic workflows.

Citations: 0
CBCT-Based Grading System for Oropharyngeal Airway Narrowing: A Novel Diagnostic Framework for Multidisciplinary Clinical Use.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-12-04 · DOI: 10.1093/dmfr/twaf084
Ajay G Nayak, Sunanda Bhatnagar

This technical report presents a novel CBCT-based grading system for oropharyngeal airway narrowing, designed to give clinicians a standardised, objective method for assessing the oropharyngeal airway using cone-beam computed tomography (CBCT). The grading system is based on the minimal cross-sectional area (MCA) of the airway, measured as the smallest surface area on axial CBCT sections. It classifies oropharyngeal narrowing into five distinct grades (Grades 0 to 4). Each grade also has subcategories corresponding to specific anatomical regions: distal to the soft palate (P), distal to the base of the tongue (T), or distal to both (B). Precise surface area ranges are defined for each grade to aid interpretation. Traditional methods have commonly relied on lateral cephalometry or supine CT; CBCT, by contrast, offers 3D mapping in a natural upright position, ensuring the functional relevance of the airway assessment. Its high spatial resolution, adequate contrast between soft tissue and air space, relatively low radiation dose compared with multidetector-row CT, and visibility of the upper airway with a large field of view (FOV) protocol make CBCT a useful diagnostic tool for airway evaluation. CBCT is also acquired in a sitting or standing position, in which the head is in equilibrium and the orofacial and neck musculature is under voluntary control; in the supine position, this control is taken over by the autonomic nervous system and the distal part of the soft palate compresses the already narrowed airway, which further adds to the usefulness of upright imaging. With its three-dimensional mapping capabilities, CBCT allows precise visualisation of the airway from the level of the posterior nasal spine, where the hard palate ends, to the epiglottis, thus measuring the oropharyngeal airway. The system is particularly useful for the early detection and evaluation of conditions such as obstructive sleep apnoea and hypertrophy of the nasopharyngeal tonsils (adenoids), for predicting difficult airways to ease intubation, and for assessing craniofacial anomalies and planning complex orthognathic surgical interventions. It holds promise for integration into AI-enabled diagnostic platforms and digital imaging software, offering consistency in research and practice. This report details the rationale, grading criteria, anatomical references, and potential applications of this classification. The system offers a streamlined approach for identifying airway compromise, ultimately aiding multidisciplinary use in optimising patient outcomes.
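The report itself defines the grade boundaries; since those ranges are not reproduced in this abstract, the sketch below only illustrates how such an MCA-based grading scheme could be encoded, using hypothetical cut-offs and the P/T/B region codes described above.

```python
from dataclasses import dataclass

@dataclass
class AirwayGrade:
    grade: int      # 0 (no narrowing) to 4 (most severe)
    region: str     # "P" = distal to soft palate, "T" = distal to tongue base, "B" = both

def grade_oropharyngeal_narrowing(mca_mm2: float, region: str,
                                  cutoffs=(200.0, 150.0, 100.0, 50.0)) -> AirwayGrade:
    """Map a minimal cross-sectional area (MCA, mm^2) on CBCT to a grade.

    The `cutoffs` are illustrative only, not the published ranges:
    MCA >= cutoffs[0] gives Grade 0, and each successive band drops one grade,
    down to Grade 4 for the smallest areas.
    """
    if region not in {"P", "T", "B"}:
        raise ValueError("region must be 'P', 'T', or 'B'")
    for grade, threshold in enumerate(cutoffs):
        if mca_mm2 >= threshold:
            return AirwayGrade(grade, region)
    return AirwayGrade(4, region)

# Example: a hypothetical MCA of 85 mm^2 measured distal to the tongue base.
print(grade_oropharyngeal_narrowing(85.0, "T"))   # AirwayGrade(grade=3, region='T')
```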

Citations: 0
A Mouse-Tracking Study on Visualization of Panoramic Radiographs Among Oral & Maxillofacial Radiology Faculty.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-12-04 · DOI: 10.1093/dmfr/twaf087
Li Zhen Lim, Chun Ning Fong, Joey Jia Yi Lee, Pei Hua Cher, Yu Fan Sim, Peggy P Lee

Objectives: Panoramic radiographs (PANs) are a commonly used dental imaging modality and are complex to evaluate. This study aimed to use mouse-tracking technology to identify the visualization characteristics of experienced Oral & Maxillofacial Radiology (OMR) faculty when reviewing PANs and to assess relationships with detection accuracy.

Methods: Seventeen OMR faculty in U.S. dental schools with 5 to 38 years of experience were recruited. Participants were shown 17 PANs (5 "Normal" and 12 "Pathology") over Zoom and instructed to move their mouse cursor in sync with their eye movements. Data collection was based on video recordings with X, Y coordinates auto-detected using mouse-tracking algorithms. Parameters collected included detection accuracy, time taken, completeness of search coverage, revisits and search patterns used.
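As an illustration of how such mouse-trace parameters can be derived, the following sketch computes grid-based search coverage and revisit counts from an (x, y) coordinate stream. It is a simplified stand-in, not the study's algorithm; the grid size and the synthetic trace are assumptions.

```python
import numpy as np

def coverage_and_revisits(xy, img_w, img_h, grid=(4, 8)):
    """xy: (N, 2) array of mouse coordinates in image pixels, sampled over time."""
    rows, cols = grid
    cells = (np.clip(xy[:, 1] * rows // img_h, 0, rows - 1) * cols +
             np.clip(xy[:, 0] * cols // img_w, 0, cols - 1)).astype(int)
    # Coverage: fraction of grid cells touched at least once during the search.
    coverage = len(np.unique(cells)) / (rows * cols)
    # Revisits: a cell entered again after the cursor has already left it.
    entries = cells[np.insert(np.diff(cells) != 0, 0, True)]
    revisits = len(entries) - len(np.unique(entries))
    return coverage, revisits

rng = np.random.default_rng(1)                          # hypothetical cursor trace
trace = rng.uniform([0, 0], [2900, 1500], size=(600, 2))
print(coverage_and_revisits(trace, img_w=2900, img_h=1500))
```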

Results: The most common search patterns were Dental to Periphery (41.5%) and Periphery to Dental (37.7%). The mean accuracy score for all cases with pathology was 84.6%. Except for two cases with subtle findings (condylar fracture and fibrous dysplasia), mean accuracy scores ranged from 76.5% to 100%. There were no associations between search patterns and detection accuracy. Participants took longer and performed more complete searches with more revisits on "Normal" cases. They were more likely to use shorter search times with smaller coverage and to perform single searches when correctly detecting lesions.

Conclusions: Experienced OMRs were able to detect lesions without scanning the entire PAN and within a single search, regardless of the search pattern used.

Advances in knowledge: This is the first study using mouse-tracking algorithms to evaluate visualization patterns on PANs. Experienced OMRs can conduct efficient and accurate reviews of PANs regardless of search strategy.

Citations: 0
A novel cross-attentive network for classifying cervical metastatic lymph nodes on B- and D-mode ultrasound images in oral squamous cell carcinoma.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-11-09 · DOI: 10.1093/dmfr/twaf082
Yu-Ri Kim, Ji Yong Han, Su Yang, Jong Woo Kim, Kyung-Hoe Huh, Min-Suk Heo, Sam-Sun Lee, Won-Jin Yi, Jo-Eun Kim

Objectives: This study proposes a deep convolutional neural network model that integrates B-mode and D-mode ultrasound images to classify metastatic lymph nodes in patients with oral squamous cell carcinoma.

Methods: A shared backbone network incorporating a cross-attention mechanism was employed to enhance feature-level interactions between dual-input ultrasound images. A total of six convolutional neural network architectures (VGG16, SqueezeNet, ResNet50, EfficientNet B3, ConvNext, DenseNet121) were implemented within a shared backbone framework to investigate optimal performance. For each network, diagnostic performance was compared between dual-input and single-input ultrasound. In addition, model performance was evaluated against human observers with different levels of experience.
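The sketch below illustrates the general dual-input idea in PyTorch: a single shared backbone applied to both ultrasound modes, with a cross-attention layer fusing the two feature streams. It is a minimal illustration, not the authors' LNM-Net; the input size, pooling strategy, and attention configuration are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class DualInputCrossAttention(nn.Module):
    def __init__(self, num_classes=2, embed_dim=1024, num_heads=8):
        super().__init__()
        self.backbone = densenet121(weights=None).features   # shared weights for both modes
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, b_mode, d_mode):
        # One pooled feature vector per modality from the shared backbone.
        fb = self.pool(self.backbone(b_mode)).flatten(1).unsqueeze(1)  # (B, 1, 1024)
        fd = self.pool(self.backbone(d_mode)).flatten(1).unsqueeze(1)  # (B, 1, 1024)
        # B-mode features attend to D-mode features (queries = B, keys/values = D).
        fused, _ = self.cross_attn(query=fb, key=fd, value=fd)
        return self.classifier(fused.squeeze(1))

model = DualInputCrossAttention()
b = torch.randn(2, 3, 224, 224)   # hypothetical B-mode batch
d = torch.randn(2, 3, 224, 224)   # hypothetical D-mode batch
print(model(b, d).shape)          # torch.Size([2, 2])
```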

Results: The model using DenseNet121 as a shared backbone with an integrated cross-attention layer (LNM-Net) achieved the highest classification accuracy (85.3%) when utilizing dual-input images, surpassing the diagnostic performance of residents. The cross-attention module improved feature fusion, reducing false positives by suppressing modality-specific noise.

Conclusion: LNM-Net demonstrates strong potential as a clinical decision-support tool for preoperative lymph node metastasis assessment in oral squamous cell carcinoma. Despite current limitations such as dataset size and cross-institutional variability, the model offers a promising supplementary aid, particularly in settings with limited radiological expertise.

Advances in knowledge: This study develops a novel cross-attentive network using dual-input B- and D-mode ultrasound images to classify metastatic lymph nodes in oral squamous cell carcinoma.

Citations: 0
Cephalometric tracing: comparing artificial intelligence and augmented intelligence on online platforms.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-11-01 · DOI: 10.1093/dmfr/twaf045
Edna Alejandra Gallardo-Lopez, Lucila Massu Yoshizaki Akinaga Moreira, Murilo Henrique Cruz, Núbia Rafaelle Oliveira Meneses, Suelen Cavalcante Ferreira Schumiski, Daniela Miranda Richarte de Andrade Salgado, Edgard Michel-Crosato, Claudio Costa

Objective: This research aimed to evaluate the results of cephalometric analyses obtained by artificial intelligence (AI) from the RadioCef, EasyCeph, and WebCeph platforms, and their variability due to modifications made by the user.

Methods: In this cross-sectional observational study, seventy cephalometric radiographs were analysed using the AI of the platforms. Subsequently, four examiners with different areas of expertise and levels of experience examined each landmark, correcting its location if necessary.

Results: The Pog, L1 tip, B, and Go landmarks on the RadioCef; Pn, Me, Pog, U1 tip, and UL on the EasyCeph; and Pog, Me, and B on the WebCeph showed a modification rate of 90% or greater. More experienced examiners modified a greater number of landmarks. The repeated-measures ANOVA showed statistically significant differences in the SNA, SNB, ANB, SN-GoGn, FMIA, FMA, and IMPA angles (P < .05) between fully automated and semi-automated analyses. ICC values indicated intra-observer agreement ranging from poor (ICC = 0.27) to perfect (ICC = 1), while inter-observer agreement showed good to excellent reliability (ICC = 0.88-0.99).
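To show what these agreement statistics look like in practice, the sketch below runs a repeated-measures ANOVA and an intraclass correlation on synthetic SNA angles using the pingouin library, which is simply one convenient choice; the data, the effect sizes, and the library are assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
radiographs = np.repeat(np.arange(20), 3)
mode = np.tile(["AI_only", "examiner_A", "examiner_B"], 20)
sna_true = rng.normal(82, 3, 20)                  # hypothetical "true" SNA angles
offset = {"AI_only": 0.8, "examiner_A": 0.1, "examiner_B": -0.2}
angle = (np.repeat(sna_true, 3) +
         np.array([offset[m] for m in mode]) +
         rng.normal(0, 0.5, 60))
df = pd.DataFrame({"radiograph": radiographs, "mode": mode, "SNA": angle})

# Repeated-measures ANOVA: does the SNA angle differ between analysis modes?
print(pg.rm_anova(data=df, dv="SNA", within="mode", subject="radiograph"))

# ICC: agreement of the SNA measurement across the three "raters" (modes).
icc = pg.intraclass_corr(data=df, targets="radiograph", raters="mode", ratings="SNA")
print(icc[["Type", "ICC"]])
```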

Conclusions: Fully automated cephalometric analysis shows variability once examiners review and modify the landmark positions. This represents a challenge to the knowledge of the orthodontist, influencing diagnosis and treatment planning. Therefore, based on the results obtained for each platform, the use of augmented intelligence in cephalometric analysis is still recommended.

Advances in knowledge: Cephalometric AI platforms show variability in landmark location accuracy. User modifications significantly impact automated analysis results.

Citations: 0
Oriented tooth detection: a CBCT image processing method integrated with RoI transformer.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-11-01 · DOI: 10.1093/dmfr/twaf049
Ziyi Zhao, Bo Wu, Sha Su, Dongdong Liu, Zejie Wu, Runtao Gao, Nan Zhang

Objectives: Cone beam computed tomography (CBCT) has revolutionized dental imaging due to its high spatial resolution and ability to provide detailed 3-dimensional reconstructions of dental structures. This study introduces an innovative CBCT image processing method using an oriented object detection approach integrated with a Region of Interest (RoI) transformer.

Methods: This study addresses the challenge of accurate tooth detection and classification in panoramic images (PANs) derived from CBCT, introducing an oriented object detection approach that has not previously been applied in dental imaging. This approach better aligns with the natural growth patterns of teeth, allowing for more accurate detection and classification of molars, premolars, canines, and incisors. By integrating the RoI transformer, the model achieves acceptable performance metrics relative to conventional horizontal detection methods while also offering enhanced visualization. Furthermore, post-processing techniques, including distance and greyscale value constraints, are employed to correct classification errors and reduce false positives, especially in areas with missing teeth.
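The post-processing step can be illustrated with a short sketch: oriented detections represented as (centre, size, angle, score) tuples are filtered by a grey-value check at the box centre and a minimum-distance rule against higher-scoring detections. The thresholds and data are hypothetical, not the paper's values or implementation.

```python
import numpy as np

def filter_detections(boxes, image, min_dist=20.0, min_grey=60.0):
    """boxes: list of (cx, cy, w, h, angle_deg, score) oriented detections.

    Keeps a detection only if (a) the grey value at its centre is above `min_grey`
    (teeth are radiopaque, so a very dark centre suggests a false positive in an
    edentulous area) and (b) it is not closer than `min_dist` pixels to an
    already-accepted, higher-scoring detection (duplicate suppression).
    """
    kept = []
    for cx, cy, w, h, angle, score in sorted(boxes, key=lambda b: -b[5]):
        if image[int(cy), int(cx)] < min_grey:
            continue
        if any(np.hypot(cx - kx, cy - ky) < min_dist for kx, ky, *_ in kept):
            continue
        kept.append((cx, cy, w, h, angle, score))
    return kept

img = np.full((600, 1200), 128, dtype=np.uint8)       # hypothetical panoramic image
img[:, 500:520] = 20                                  # dark gap simulating a missing tooth
dets = [(510, 300, 40, 80, 12.0, 0.9), (300, 310, 42, 78, -5.0, 0.8),
        (305, 312, 40, 80, -4.0, 0.6)]
print(filter_detections(dets, img))                   # keeps only the detection at x = 300
```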

Results: The experimental results indicate that the proposed method achieves an accuracy of 98.48%, a recall of 97.21%, an F1 score of 97.21%, and a mean average precision (mAP) of 98.12% in tooth detection.
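For reference, the reported detection metrics can be computed from raw per-class counts; the sketch below derives precision, recall, and F1 for two hypothetical tooth classes and notes where mAP would require an extra ranking step. The counts are placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical per-class counts: true positives, false positives, false negatives.
tp = np.array([310, 295])
fp = np.array([5, 8])
fn = np.array([9, 7])

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print("precision per class =", np.round(precision, 4))
print("recall per class    =", np.round(recall, 4))
print("F1 per class        =", np.round(f1, 4))
# mAP additionally requires ranking detections by confidence and averaging
# precision over recall levels for each class; that step is omitted here.
```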

Conclusions: The proposed method enhances the accuracy of tooth detection in CBCT-derived PANs by reducing background interference and improving the visualization of tooth orientation.

Citations: 0
Dental imaging in Singapore: a survey of 2D radiographic techniques and CBCT practices.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-11-01 · DOI: 10.1093/dmfr/twaf033
Almond Y Q Ng, Clement W M Lai, Clarissa X H Ho, Li Zhen Lim

Objectives: This survey was conducted to identify dental radiography practices and knowledge gaps among dentists in Singapore with respect to both 2D and 3D imaging.

Methods: This was a cross-sectional survey conducted via an electronic platform from May to June 2023. All Singapore Dental Council-registered dentists were eligible to participate. We gathered data on demographics, intraoral (IO) radiography usage, cone-beam CT (CBCT) training, current CBCT usage, and interpretation.

Results: Three hundred and five of the 2605 registered dentists completed the online survey (11.7% response rate). Radiograph positioning holders, digital imaging, and round collimation were used by most respondents for IO radiography (85.6%, 88.9%, and 89.2%, respectively). One hundred and forty-two participants (46.6%) had undergone CBCT training, and most lacked training in image acquisition. Dentists expressed interest in CBCT interpretation and the use of viewing software; 219 dentists (71.8%) are CBCT users. Most (74.9%) took 0-5 CBCT scans monthly. Implant planning was the most common indication (24.8%). Among CBCT users, 85.9% report some or all of their own scans, while 70.7% would engage a reporting service if available.

Conclusions: This survey provides insights into dental imaging practices in Singapore that call for increased educational efforts. For IO radiography, there should be greater emphasis on rectangular collimation. With CBCT, there is potential for training in image acquisition, interpretation, and the use of viewing software.

Advances in knowledge: Identifying knowledge and training gaps in dental imaging is critical to ensure the safe use of ionizing radiation on patients.

Citations: 0
The performance of large language models in dentomaxillofacial radiology: a systematic review.
IF 2.9 · CAS Tier 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE · Pub Date: 2025-11-01 · DOI: 10.1093/dmfr/twaf060
Zekai Liu, Andrew Nalley, Jing Hao, Qi Yong H Ai, Andy Wai Kan Yeung, Ray Tanaka, Kuo Feng Hung

Objectives: This study aimed to systematically review the current performance of large language models (LLMs) in dentomaxillofacial radiology (DMFR).

Methods: Five electronic databases were used to identify studies that developed, fine-tuned, or evaluated LLMs for DMFR-related tasks. Data extracted included study purpose, LLM type, images/text source, applied language, dataset characteristics, input and output, performance outcomes, evaluation methods, and reference standards. Customized assessment criteria adapted from the TRIPOD-LLM reporting guideline were used to evaluate the risk-of-bias in the included studies specifically regarding the clarity of dataset origin, the robustness of performance evaluation methods, and the validity of the reference standards.

Results: The initial search yielded 1621 titles, and 19 studies were included. These studies investigated the use of LLMs for tasks including the production and answering of DMFR-related qualification exams and educational questions (n = 8), diagnosis and treatment recommendations (n = 7), and radiology report generation and patient communication (n = 4). LLMs demonstrated varied performance in diagnosing dental conditions, with accuracy ranging from 37% to 92.5% and expert ratings for differential diagnosis and treatment planning between 3.6 and 4.7 on a 5-point scale. For DMFR-related qualification exams and board-style questions, LLMs achieved correctness rates between 33.3% and 86.1%. Automated radiology report generation showed moderate performance with accuracy ranging from 70.4% to 81.3%.

Conclusions: LLMs demonstrate promising potential in DMFR, particularly for diagnostic, educational, and report generation tasks. However, their current accuracy, completeness, and consistency remain variable. Further development, validation, and standardization are needed before LLMs can be reliably integrated as supportive tools in clinical workflows and educational settings.

Citations: 0