Objectives: The purpose of this study is to prospectively investigate the reference values of masseter and temporal muscle thicknesses by ultrasonography and muscle hardness values by shear wave elastography in healthy adults.
Methods: The study sample consisted of 160 healthy individuals (80 women and 80 men) aged 18-59 years. Both the right and left sides of each participant were examined, yielding thickness and hardness values for 320 masseter and 320 temporal muscles in total.
Results: The mean masseter muscle thickness was 1.09 cm at rest and 1.40 cm in contraction, and the mean temporal muscle thickness was 0.88 cm at rest and 0.98 cm in contraction. Thickness values of both muscles were significantly greater in male than in female participants (P < .001). There were significant differences between right and left masseter muscle thickness at rest and in contraction, whereas temporal muscle thickness did not differ significantly between sides. The resting hardness (rSWE) of the masseter muscle was 6.91 kPa in the transverse plane and 8.49 kPa in the longitudinal plane; the corresponding values in contraction (cSWE) were 31.40 and 35.65 kPa, respectively. The median temporal muscle hardness values were 8.84 kPa at rest and 20.43 kPa in contraction. Masseter and temporal muscle hardness values at rest and in contraction were significantly higher in male than in female participants (P < .001).
Conclusion: This study reports reference values for the thickness and hardness of the masseter and temporal muscles. These reference ranges allow morphological and functional assessment of the masticatory muscles and should facilitate the evaluation of muscle pain and the diagnosis and prognosis of masticatory muscle pathologies.
{"title":"Determination of masseter and temporal muscle thickness by ultrasound and muscle hardness by shear wave elastography in healthy adults as reference values.","authors":"Ayşe Nur Koruyucu, Firdevs Aşantoğrol","doi":"10.1093/dmfr/twad013","DOIUrl":"10.1093/dmfr/twad013","url":null,"abstract":"<p><strong>Objectives: </strong>The purpose of this study is to prospectively investigate the reference values of masseter and temporal muscle thicknesses by ultrasonography and muscle hardness values by shear wave elastography in healthy adults.</p><p><strong>Methods: </strong>The sample of the study consisted of a total of 160 healthy individuals aged between 18 and 59, including 80 women and 80 men. By examining the right and left sides of each participant, thickness and hardness values were obtained for 320 masseter muscles and 320 temporal muscles in total.</p><p><strong>Results: </strong>The mean masseter muscle thickness was found to be 1.09 cm at rest and 1.40 cm in contraction. The mean temporal muscle thickness was found to be 0.88 cm at rest and 0.98 cm in contraction. The thickness values of the masseter and temporal muscles were significantly greater in the male participants than in the female participants (P < .001). While there were significant differences between the right and left masseter muscle thickness values at rest and in contraction, the values of the temporal muscles did not show a significant difference between the sides. While the resting hardness (rSWE) of the masseter muscle was transversally 6.91 kPa and longitudinally 8.49 kPa, these values in contraction (cSWE) were found, respectively, 31.40 and 35.65 kPa. The median temporal muscle hardness values were 8.84 kPa at rest and 20.43 kPa in contraction. Masseter and temporal muscle hardness values at rest and in contraction were significantly higher among the male participants compared to the female participants (P < .001).</p><p><strong>Conclusion: </strong>In this study, reference values for the thickness and hardness of the masseter and temporal muscles are reported. Knowing these values will make it easier to assess pain in the masseter and temporal muscles and determine the diagnosis and prognosis of masticatory muscle pathologies by allowing the morphological and functional assessments of these muscles, and it will identify ranges for reference parameters.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":" ","pages":"137-152"},"PeriodicalIF":3.3,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rashmi S, Srinath S, Prashanth S Murthy, Seema Deshmukh
Objectives: To explore and evaluate the automation of anatomical landmark localization in cephalometric images using machine learning techniques, with a focus on feature extraction and combination, contextual analysis, and model interpretability through Shapley Additive exPlanations (SHAP) values.
Methods: We conducted extensive experimentation on a private dataset of 300 lateral cephalograms to thoroughly study the annotation results obtained using pixel feature descriptors, including raw pixel, gradient magnitude, gradient direction, and histogram of oriented gradients (HOG) values. The study evaluates and compares these feature descriptors calculated at different contexts, namely local, pyramid, and global. The feature descriptors obtained from the individual combinations are used to discriminate between landmark and non-landmark pixels with a classification method. Additionally, this study addresses the opacity of LightGBM (LGBM) ensemble tree models across landmarks, introducing SHAP values to enhance interpretability.
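As a rough illustration of this kind of pipeline (not the authors' code), the sketch below extracts a local HOG descriptor around candidate pixels, trains a LightGBM classifier to separate landmark from non-landmark pixels, and computes SHAP values for interpretability; the patches and labels are synthetic placeholders.

```python
# Illustrative sketch: local HOG descriptor -> LGBM classifier -> SHAP values.
import numpy as np
from skimage.feature import hog
from lightgbm import LGBMClassifier
import shap

rng = np.random.default_rng(0)

def local_hog(image, y, x, half=16):
    """HOG descriptor of the (2*half)x(2*half) patch centred on a candidate pixel."""
    patch = image[y - half:y + half, x - half:x + half]
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Synthetic stand-ins for cephalogram patches: 200 candidate pixels, half labelled
# as the landmark class (1) and half as background (0).
images = rng.random((200, 64, 64))
labels = np.repeat([1, 0], 100)
X = np.stack([local_hog(img, 32, 32) for img in images])

clf = LGBMClassifier(n_estimators=100).fit(X, labels)

# SHAP values expose which HOG bins drive each landmark/non-landmark decision.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
```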
Results: The performance of the feature combinations was assessed using metrics such as mean radial error, standard deviation, success detection rate (SDR) within 2 mm, and test time. Among all the combinations explored, both the HOG and gradient direction descriptors performed strongly across all contexts. At the contextual level, the global context outperformed the others, albeit at the cost of increased test time. HOG in the local context emerged as the top performer, with an SDR of 75.84%.
Conclusions: The presented analysis not only enhances the understanding of the significance of different features and their combinations in landmark annotation but also paves the way for further exploration of landmark-specific feature combination methods, facilitated by explainability.
{"title":"Landmark annotation through feature combinations: a comparative study on cephalometric images with in-depth analysis of model's explainability.","authors":"Rashmi S, Srinath S, Prashanth S Murthy, Seema Deshmukh","doi":"10.1093/dmfr/twad011","DOIUrl":"10.1093/dmfr/twad011","url":null,"abstract":"<p><strong>Objectives: </strong>The objectives of this study are to explore and evaluate the automation of anatomical landmark localization in cephalometric images using machine learning techniques, with a focus on feature extraction and combinations, contextual analysis, and model interpretability through Shapley Additive exPlanations (SHAP) values.</p><p><strong>Methods: </strong>We conducted extensive experimentation on a private dataset of 300 lateral cephalograms to thoroughly study the annotation results obtained using pixel feature descriptors including raw pixel, gradient magnitude, gradient direction, and histogram-oriented gradient (HOG) values. The study includes evaluation and comparison of these feature descriptions calculated at different contexts namely local, pyramid, and global. The feature descriptor obtained using individual combinations is used to discern between landmark and nonlandmark pixels using classification method. Additionally, this study addresses the opacity of LGBM ensemble tree models across landmarks, introducing SHAP values to enhance interpretability.</p><p><strong>Results: </strong>The performance of feature combinations was assessed using metrics like mean radial error, standard deviation, success detection rate (SDR) (2 mm), and test time. Remarkably, among all the combinations explored, both the HOG and gradient direction operations demonstrated significant performance across all context combinations. At the contextual level, the global texture outperformed the others, although it came with the trade-off of increased test time. The HOG in the local context emerged as the top performer with an SDR of 75.84% compared to others.</p><p><strong>Conclusions: </strong>The presented analysis enhances the understanding of the significance of different features and their combinations in the realm of landmark annotation but also paves the way for further exploration of landmark-specific feature combination methods, facilitated by explainability.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":" ","pages":"115-126"},"PeriodicalIF":3.3,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139080441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hanns Leonhard Kaatsch, Florian Fulisch, Daniel Dillinger, Laura Kubitscheck, Benjamin V Becker, Joel Piechotka, Marc A Brockmann, Matthias F Froelich, Stefan O Schoenberg, Daniel Overhoff, Stephan Waldeck
Purpose: This study investigated the differences in subjective and objective image parameters as well as dose exposure of photon-counting CT (PCCT) compared to cone-beam CT (CBCT) in paranasal sinus imaging for the assessment of rhinosinusitis and sinonasal anatomy.
Methods: This single-centre retrospective study included 100 patients who underwent either clinically indicated PCCT or CBCT of the paranasal sinuses. Two blinded, experienced ENT radiologists graded image quality and delineation of specific anatomical structures on a 5-point Likert scale. In addition, contrast-to-noise ratio (CNR) and applied radiation doses were compared between the two techniques.
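For context, a minimal sketch of one common CNR definition is given below, assuming the mean attenuation in a structure ROI, the mean attenuation in an adjacent background ROI, and the background standard deviation as the noise term; the study's exact formula is not stated in the abstract, and the HU samples are hypothetical.

```python
# Illustrative sketch: a common contrast-to-noise ratio (CNR) definition.
import numpy as np

def cnr(roi_structure: np.ndarray, roi_background: np.ndarray) -> float:
    """|mean(structure) - mean(background)| divided by the background noise (SD)."""
    return abs(roi_structure.mean() - roi_background.mean()) / roi_background.std()

bone = np.array([820.0, 790.0, 805.0])      # hypothetical HU samples in bone
air = np.array([-990.0, -1000.0, -995.0])   # hypothetical HU samples in sinus air
print(f"CNR = {cnr(bone, air):.1f}")
```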
Results: Image quality and delineation of bone structures in paranasal sinus PCCT were subjectively rated superior by both readers compared to CBCT (P < .001). CNR was significantly higher for photon-counting CT (P < .001). The mean effective dose for PCCT examinations was significantly lower than for CBCT (0.038 ± 0.009 mSv vs 0.14 ± 0.011 mSv; P < .001).
Conclusion: In a performance comparison of PCCT and a modern CBCT scanner in paranasal sinus imaging, we demonstrated that PCCT, in its first routine clinical use, provides higher subjective image quality and higher CNR at roughly a quarter of the radiation dose of CBCT.
{"title":"Ultra-low-dose photon-counting CT of paranasal sinus: an in vivo comparison of radiation dose and image quality to cone-beam CT.","authors":"Hanns Leonhard Kaatsch, Florian Fulisch, Daniel Dillinger, Laura Kubitscheck, Benjamin V Becker, Joel Piechotka, Marc A Brockmann, Matthias F Froelich, Stefan O Schoenberg, Daniel Overhoff, Stephan Waldeck","doi":"10.1093/dmfr/twad010","DOIUrl":"10.1093/dmfr/twad010","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the differences in subjective and objective image parameters as well as dose exposure of photon-counting CT (PCCT) compared to cone-beam CT (CBCT) in paranasal sinus imaging for the assessment of rhinosinusitis and sinonasal anatomy.</p><p><strong>Methods: </strong>This single-centre retrospective study included 100 patients, who underwent either clinically indicated PCCT or CBCT of the paranasal sinus. Two blinded experienced ENT radiologists graded image quality and delineation of specific anatomical structures on a 5-point Likert scale. In addition, contrast-to-noise ratio (CNR) and applied radiation doses were compared among both techniques.</p><p><strong>Results: </strong>Image quality and delineation of bone structures in paranasal sinus PCCT was subjectively rated superior by both readers compared to CBCT (P < .001). CNR was significantly higher for photon-counting CT (P < .001). Mean effective dose for PCCT examinations was significantly lower than for CBCT (0.038 mSv ± 0.009 vs. 0.14 mSv ± 0.011; P < .001).</p><p><strong>Conclusion: </strong>In a performance comparison of PCCT and a modern CBCT scanner in paranasal sinus imaging, we demonstrated that first-use PCCT in clinical routine provides higher subjective image quality accompanied by higher CNR at close to a quarter of the dose exposure compared to CBCT.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 2","pages":"103-108"},"PeriodicalIF":3.3,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139706368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fernanda B Martins, Millena B Oliveira, Leandro M Oliveira, Alan Grupioni Lourenço, Luiz Renato Paranhos, Ana Carolina F Motta
Objectives: To evaluate the accuracy of major salivary gland ultrasonography (SGUS) in relation to minor salivary gland biopsy (mSGB) in the diagnosis of Sjögren's syndrome (SS).
Methods: A systematic review and meta-analysis were performed. Ten databases were searched to identify studies that compared the accuracy of SGUS and mSGB. The risk of bias was assessed, data were extracted, and univariate and bivariate random-effects meta-analyses were done.
Results: A total of 5000 records were identified; 13 studies were included in the qualitative synthesis and 10 in the quantitative synthesis. The first meta-analysis found a sensitivity of 0.86 (95% CI: 0.74-0.92) and specificity of 0.87 (95% CI: 0.81-0.92) for the predictive value of SGUS scoring in relation to the result of mSGB. In the second meta-analysis, mSGB showed higher sensitivity and specificity than SGUS. Sensitivity was 0.80 (95% CI: 0.74-0.85) for mSGB and 0.71 (95% CI: 0.58-0.81) for SGUS, and specificity was 0.94 (95% CI: 0.87-0.97) for mSGB and 0.89 (95% CI: 0.82-0.94) for SGUS.
Conclusions: The diagnostic accuracy of SGUS was similar to that of mSGB. SGUS is an effective diagnostic test that shows good sensitivity and high specificity, in addition to being a good tool for prognosis and for avoiding unnecessary biopsies. More studies using similar methodologies are needed to assess the accuracy of SGUS in predicting the result of mSGB. Our results will contribute to decision-making for the implementation of SGUS as a diagnostic tool for SS, considering the advantages of this method.
{"title":"Diagnostic accuracy of ultrasonography in relation to salivary gland biopsy in Sjögren's syndrome: a systematic review with meta-analysis.","authors":"Fernanda B Martins, Millena B Oliveira, Leandro M Oliveira, Alan Grupioni Lourenço, Luiz Renato Paranhos, Ana Carolina F Motta","doi":"10.1093/dmfr/twad007","DOIUrl":"10.1093/dmfr/twad007","url":null,"abstract":"<p><strong>Objectives: </strong>To evaluate the accuracy of major salivary gland ultrasonography (SGUS) in relation to minor salivary gland biopsy (mSGB) in the diagnosis of Sjögren's syndrome (SS).</p><p><strong>Methods: </strong>A systematic review and meta-analysis were performed. Ten databases were searched to identify studies that compared the accuracy of SGUS and mSGB. The risk of bias was assessed, data were extracted, and univariate and bivariate random-effects meta-analyses were done.</p><p><strong>Results: </strong>A total of 5000 records were identified; 13 studies were included in the qualitative synthesis and 10 in the quantitative synthesis. The first meta-analysis found a sensitivity of 0.86 (95% CI: 0.74-0.92) and specificity of 0.87 (95% CI: 0.81-0.92) for the predictive value of SGUS scoring in relation to the result of mSGB. In the second meta-analysis, mSGB showed higher sensitivity and specificity than SGUS. Sensitivity was 0.80 (95% CI: 0.74-0.85) for mSGB and 0.71 (95% CI: 0.58-0.81) for SGUS, and specificity was 0.94 (95% CI: 0.87-0.97) for mSGB and 0.89 (95% CI: 0.82-0.94) for SGUS.</p><p><strong>Conclusions: </strong>The diagnostic accuracy of SGUS was similar to that of mSGB. SGUS is an effective diagnostic test that shows good sensitivity and high specificity, in addition to being a good tool for prognosis and for avoiding unnecessary biopsies. More studies using similar methodologies are needed to assess the accuracy of SGUS in predicting the result of mSGB. Our results will contribute to decision-making for the implementation of SGUS as a diagnostic tool for SS, considering the advantages of this method.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":" ","pages":"91-102"},"PeriodicalIF":2.9,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139097507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objectives: The objective of this study is to assess the accuracy of computer-assisted staging of periodontal bone loss using deep learning (DL) methods on panoramic radiographs and to compare the performance of various models and layers.
Methods: Panoramic radiographs were diagnosed and classified into 3 groups, namely "healthy," "Stage1/2," and "Stage3/4," and stored in separate folders. The feature extraction stage involved transferring and retraining the feature extraction layers and weights of 3 models proposed for classifying the ImageNet dataset, namely ResNet50, DenseNet121, and InceptionV3, to 3 DL models designed for classifying periodontal bone loss. The features obtained from the global average pooling (GAP), global max pooling (GMP), or flatten layer (FL) of the convolutional neural network (CNN) models were used as input to 8 different machine learning (ML) models. In addition, the features obtained from the GAP, GMP, or FL of the DL models were reduced using the minimum redundancy maximum relevance (mRMR) method and then classified again with the 8 ML models.
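A compact sketch of this idea (not the authors' pipeline) is shown below: global-average-pooled features are extracted from an ImageNet-pretrained DenseNet121 and classified with an SVM. The mRMR step is approximated here by a mutual-information filter, and the radiograph arrays and labels are hypothetical placeholders.

```python
# Illustrative sketch: DenseNet121 GAP features -> feature selection -> SVM.
import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

backbone = DenseNet121(weights="imagenet", include_top=False, pooling="avg")

def gap_features(images: np.ndarray) -> np.ndarray:
    """images: (n, 224, 224, 3) radiograph crops -> (n, 1024) GAP feature vectors."""
    return backbone.predict(preprocess_input(images.astype("float32")), verbose=0)

images = np.random.rand(12, 224, 224, 3) * 255       # placeholder radiographs
labels = np.random.randint(0, 3, size=12)             # 0=healthy, 1=Stage1/2, 2=Stage3/4

X = gap_features(images)
# Mutual-information filter used here as a stand-in for mRMR feature reduction.
clf = make_pipeline(SelectKBest(mutual_info_classif, k=64), SVC(kernel="rbf"))
clf.fit(X, labels)
```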
Results: A total of 2533 panoramic radiographs, including 721 in the healthy group, 842 in the Stage1/2 group, and 970 in the Stage3/4 group, were included in the dataset. Averaged over 10 subdatasets, the ML models based on DenseNet121 + GAP features and on DenseNet121 + GAP + mRMR features (ie, the models developed using the 2 feature selection techniques) outperformed the CNN models.
Conclusions: The new DenseNet121 + GAP + mRMR-based support vector machine model developed in this study achieved higher performance in periodontal bone loss classification compared to other models in the literature by detecting effective features from raw images without the need for manual selection.
{"title":"Comparison of deep learning methods for the radiographic detection of patients with different periodontitis stages.","authors":"Berceste Guler Ayyildiz, Rukiye Karakis, Busra Terzioglu, Durmus Ozdemir","doi":"10.1093/dmfr/twad003","DOIUrl":"10.1093/dmfr/twad003","url":null,"abstract":"<p><strong>Objectives: </strong>The objective of this study is to assess the accuracy of computer-assisted periodontal classification bone loss staging using deep learning (DL) methods on panoramic radiographs and to compare the performance of various models and layers.</p><p><strong>Methods: </strong>Panoramic radiographs were diagnosed and classified into 3 groups, namely \"healthy,\" \"Stage1/2,\" and \"Stage3/4,\" and stored in separate folders. The feature extraction stage involved transferring and retraining the feature extraction layers and weights from 3 models, namely ResNet50, DenseNet121, and InceptionV3, which were proposed for classifying the ImageNet dataset, to 3 DL models designed for classifying periodontal bone loss. The features obtained from global average pooling (GAP), global max pooling (GMP), or flatten layers (FL) of convolutional neural network (CNN) models were used as input to the 8 different machine learning (ML) models. In addition, the features obtained from the GAP, GMP, or FL of the DL models were reduced using the minimum redundancy maximum relevance (mRMR) method and then classified again with 8 ML models.</p><p><strong>Results: </strong>A total of 2533 panoramic radiographs, including 721 in the healthy group, 842 in the Stage1/2 group, and 970 in the Stage3/4 group, were included in the dataset. The average performance values of DenseNet121 + GAP-based and DenseNet121 + GAP + mRMR-based ML techniques on 10 subdatasets and ML models developed using 2 feature selection techniques outperformed CNN models.</p><p><strong>Conclusions: </strong>The new DenseNet121 + GAP + mRMR-based support vector machine model developed in this study achieved higher performance in periodontal bone loss classification compared to other models in the literature by detecting effective features from raw images without the need for manual selection.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"32-42"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003609/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Natalia Kazimierczak, Wojciech Kazimierczak, Zbigniew Serafin, Paweł Nowicki, Tomasz Jankowski, Agnieszka Jankowska, Joanna Janiszewska-Olszowska
Objectives: To compare an artificial intelligence (AI)-driven web-based platform with manual measurements for analysing facial asymmetry in craniofacial CT examinations.
Methods: The study included 95 craniofacial CT scans from patients aged 18-30 years. The degree of asymmetry was measured using anatomical landmarks predefined by the AI platform: sella (S), condylion (Co), anterior nasal spine (ANS), and menton (Me). The concordance between the results of the automatic asymmetry reports and manual linear 3D measurements was calculated. The asymmetry rate (AR) indicator was determined for both automatic and manual measurements, and the concordance between them was calculated. The repeatability of manual measurements in 20 randomly selected subjects was assessed. The concordance of measurements of quantitative variables was assessed with the intraclass correlation coefficient (ICC) according to the Shrout and Fleiss classification.
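As a small worked sketch, the two Shrout and Fleiss ICC forms referenced in the results (type 2, absolute agreement; type 3, consistency) can be computed from the two-way ANOVA mean squares as below; the toy rating matrix is hypothetical and is not the study data.

```python
# Illustrative sketch: ICC(2,1) and ICC(3,1) from two-way ANOVA mean squares
# (Shrout & Fleiss framework). `ratings` is a (subjects x raters/methods) matrix.
import numpy as np

def icc_shrout_fleiss(ratings: np.ndarray):
    n, k = ratings.shape
    grand = ratings.mean()
    rows = ratings.mean(axis=1, keepdims=True)     # subject means
    cols = ratings.mean(axis=0, keepdims=True)     # rater means

    msr = k * ((rows - grand) ** 2).sum() / (n - 1)                       # between subjects
    msc = n * ((cols - grand) ** 2).sum() / (k - 1)                       # between raters
    mse = ((ratings - rows - cols + grand) ** 2).sum() / ((n - 1) * (k - 1))

    icc2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)    # absolute agreement
    icc3_1 = (msr - mse) / (msr + (k - 1) * mse)                          # consistency
    return icc2_1, icc3_1

ratings = np.array([[4.1, 4.3], [2.0, 5.9], [3.2, 3.1], [6.4, 1.2]])      # toy values
print(icc_shrout_fleiss(ratings))
```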
Results: Erroneous AI tracings were found in 16.8% of cases, reducing the number of analysed cases to 79. The agreement between automatic and manual asymmetry measurements was very low (ICC < 0.3). A lack of agreement between AI and manual AR analysis (ICC type 3 = 0) was found. The repeatability of manual measurements and AR calculations showed excellent correlation (ICC type 2 > 0.947).
Conclusions: The results indicate that the rate of tracing errors and lack of agreement with manual AR analysis make it impossible to use the tested AI platform to assess the degree of facial asymmetry.
{"title":"Skeletal facial asymmetry: reliability of manual and artificial intelligence-driven analysis.","authors":"Natalia Kazimierczak, Wojciech Kazimierczak, Zbigniew Serafin, Paweł Nowicki, Tomasz Jankowski, Agnieszka Jankowska, Joanna Janiszewska-Olszowska","doi":"10.1093/dmfr/twad006","DOIUrl":"10.1093/dmfr/twad006","url":null,"abstract":"<p><strong>Objectives: </strong>To compare artificial intelligence (AI)-driven web-based platform and manual measurements for analysing facial asymmetry in craniofacial CT examinations.</p><p><strong>Methods: </strong>The study included 95 craniofacial CT scans from patients aged 18-30 years. The degree of asymmetry was measured based on AI platform-predefined anatomical landmarks: sella (S), condylion (Co), anterior nasal spine (ANS), and menton (Me). The concordance between the results of automatic asymmetry reports and manual linear 3D measurements was calculated. The asymmetry rate (AR) indicator was determined for both automatic and manual measurements, and the concordance between them was calculated. The repeatability of manual measurements in 20 randomly selected subjects was assessed. The concordance of measurements of quantitative variables was assessed with interclass correlation coefficient (ICC) according to the Shrout and Fleiss classification.</p><p><strong>Results: </strong>Erroneous AI tracings were found in 16.8% of cases, reducing the analysed cases to 79. The agreement between automatic and manual asymmetry measurements was very low (ICC < 0.3). A lack of agreement between AI and manual AR analysis (ICC type 3 = 0) was found. The repeatability of manual measurements and AR calculations showed excellent correlation (ICC type 2 > 0.947).</p><p><strong>Conclusions: </strong>The results indicate that the rate of tracing errors and lack of agreement with manual AR analysis make it impossible to use the tested AI platform to assess the degree of facial asymmetry.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"52-59"},"PeriodicalIF":3.3,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003660/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pei Liu, Renpeng Li, Yong Cheng, Bo Li, Lili Wei, Wei Li, Xiaolong Guo, Hang Li, Fang Wang
Objectives: This study aims to evaluate the morphological features of the gubernacular tract (GT) of erupting permanent mandibular canines at ages 5 to 9 years using a three-dimensional (3D) measurement method.
Methods: The cone-beam CT images of 50 patients were divided into five age groups. The 3D models of the GT for mandibular canines were reconstructed and analysed. The characteristics of the GT, including length, diameter, ellipticity, tortuosity, superficial area, volume, and the angle between the canine and GT, were evaluated using a centreline fitting algorithm.
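Two of the shape descriptors named above can be illustrated with the short sketch below; it assumes an already-extracted, ordered 3D centreline (the centreline-fitting step itself is not reproduced), and the ellipticity formula shown is a common flattening-style definition that may differ from the paper's exact one.

```python
# Illustrative sketch: tortuosity and ellipticity of a tract from its fitted centreline.
import numpy as np

def tortuosity(centreline: np.ndarray) -> float:
    """Arc length of the centreline divided by the straight endpoint-to-endpoint distance."""
    arc = np.linalg.norm(np.diff(centreline, axis=0), axis=1).sum()
    chord = np.linalg.norm(centreline[-1] - centreline[0])
    return arc / chord

def ellipticity(major_axis: float, minor_axis: float) -> float:
    """Flattening of the cross-section: (a - b) / a."""
    return (major_axis - minor_axis) / major_axis

path = np.array([[0, 0, 0], [0.5, 0.2, 1.0], [1.0, 0.1, 2.1], [1.2, 0.0, 3.0]])  # toy centreline (mm)
print(tortuosity(path), ellipticity(2.4, 1.8))
```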
Results: Among the 100 GTs that were examined, the length of the GT for mandibular canines decreased between the ages of 5 and 9 years, while the diameter increased until the age of 7 years. Additionally, the ellipticity and tortuosity of the GT decreased as age advanced. The superficial area and volume exhibited a trend of initially increasing and then decreasing. The morphological variations of the GT displayed heterogeneous changes during different periods.
Conclusions: The 3D measurement method effectively portrayed the morphological attributes of the GT for mandibular canines. The morphological characteristics of the GT during the eruption process exhibited significant variations. The variations in morphological changes may indicate different stages of mandibular canine eruption.
{"title":"Morphological variation of gubernacular tracts for permanent mandibular canines in eruption: a three-dimensional analysis.","authors":"Pei Liu, Renpeng Li, Yong Cheng, Bo Li, Lili Wei, Wei Li, Xiaolong Guo, Hang Li, Fang Wang","doi":"10.1093/dmfr/twad008","DOIUrl":"10.1093/dmfr/twad008","url":null,"abstract":"<p><strong>Objectives: </strong>This study aims to evaluate the morphological features of gubernacular tract (GT) for erupting permanent mandibular canines at different ages from 5 to 9 years old with a three-dimensional (3D) measurement method.</p><p><strong>Methods: </strong>The cone-beam CT images of 50 patients were divided into five age groups. The 3D models of the GT for mandibular canines were reconstructed and analysed. The characteristics of the GT, including length, diameter, ellipticity, tortuosity, superficial area, volume, and the angle between the canine and GT, were evaluated using a centreline fitting algorithm.</p><p><strong>Results: </strong>Among the 100 GTs that were examined, the length of the GT for mandibular canines decreased between the ages of 5 and 9 years, while the diameter increased until the age of 7 years. Additionally, the ellipticity and tortuosity of the GT decreased as age advanced. The superficial area and volume exhibited a trend of initially increasing and then decreasing. The morphological variations of the GT displayed heterogeneous changes during different periods.</p><p><strong>Conclusions: </strong>The 3D measurement method effectively portrayed the morphological attributes of the GT for mandibular canines. The morphological characteristics of the GT during the eruption process exhibited significant variations. The variations in morphological changes may indicate different stages of mandibular canine eruption.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"60-66"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003659/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Huan-Zhong Su, Long-Cheng Hong, Mei Huang, Feng Zhang, Yu-Hui Wu, Zuo-Bing Zhang, Xiao-Dong Zhang
Objectives: Accurate differentiation between immunoglobulin G4-related sialadenitis (IgG4-RS) and primary Sjögren syndrome (pSS) is crucial due to their different treatment approaches. This study aimed to construct and validate a nomogram based on an ultrasound (US) scoring system for the differentiation of IgG4-RS and pSS.
Methods: A total of 193 patients with a clinical diagnosis of IgG4-RS or pSS treated at our institution were enrolled and divided into a training cohort (n = 135; IgG4-RS = 28, pSS = 107) and a validation cohort (n = 58; IgG4-RS = 15, pSS = 43). The least absolute shrinkage and selection operator (LASSO) regression algorithm was used to select the optimal clinical features and US scoring parameters. A model for the differential diagnosis of IgG4-RS and pSS was built using logistic regression and visualized as a nomogram. The performance of the nomogram model was evaluated and validated in both the training and validation cohorts.
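A minimal sketch of this workflow (not the authors' model) is given below: L1-penalized logistic regression screens the candidate variables, and a plain logistic model on the retained variables is then evaluated on the validation cohort. The variables, labels, and data here are synthetic placeholders; only the cohort sizes follow the abstract.

```python
# Illustrative sketch: LASSO screening followed by logistic regression (nomogram basis).
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

def make_cohort(n):
    X = rng.normal(size=(n, 8))                               # 8 hypothetical clinical/US variables
    y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_train, y_train = make_cohort(135)                           # training cohort size from the study
X_valid, y_valid = make_cohort(58)                            # validation cohort size from the study

scaler = StandardScaler().fit(X_train)
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
lasso.fit(scaler.transform(X_train), y_train)
selected = np.flatnonzero(lasso.coef_.ravel() != 0)           # variables retained by LASSO

model = LogisticRegression().fit(X_train[:, selected], y_train)
auc = roc_auc_score(y_valid, model.predict_proba(X_valid[:, selected])[:, 1])
print(f"selected variables: {selected}, validation AUC = {auc:.3f}")
```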
Results: The nomogram incorporating clinical features and US scoring parameters showed better predictive value in differentiating IgG4-RS from pSS, with areas under the curve of 0.947 and 0.958 in the training and validation cohorts, respectively. Decision curve analysis demonstrated that the nomogram was clinically useful.
Conclusions: A nomogram based on the US scoring system showed favourable predictive efficacy in differentiating IgG4-RS from pSS. It has the potential to aid in clinical decision-making.
{"title":"A nomogram based on ultrasound scoring system for differentiating between immunoglobulin G4-related sialadenitis and primary Sjögren syndrome.","authors":"Huan-Zhong Su, Long-Cheng Hong, Mei Huang, Feng Zhang, Yu-Hui Wu, Zuo-Bing Zhang, Xiao-Dong Zhang","doi":"10.1093/dmfr/twad005","DOIUrl":"10.1093/dmfr/twad005","url":null,"abstract":"<p><strong>Objectives: </strong>Accurate distinguishing between immunoglobulin G4-related sialadenitis (IgG4-RS) and primary Sjögren syndrome (pSS) is crucial due to their different treatment approaches. This study aimed to construct and validate a nomogram based on the ultrasound (US) scoring system for the differentiation of IgG4-RS and pSS.</p><p><strong>Methods: </strong>A total of 193 patients with a clinical diagnosis of IgG4-RS or pSS treated at our institution were enrolled in the training cohort (n = 135; IgG4-RS = 28, pSS = 107) and the validation cohort (n = 58; IgG4-RS = 15, pSS = 43). The least absolute shrinkage and selection operator regression algorithm was utilized to screen the most optimal clinical features and US scoring parameters. A model for the differential diagnosis of IgG4-RS or pSS was built using logistic regression and visualized as a nomogram. The performance levels of the nomogram model were evaluated and validated in both the training and validation cohorts.</p><p><strong>Results: </strong>The nomogram incorporating clinical features and US scoring parameters showed better predictive value in differentiating IgG4-RS from pSS, with the area under the curves of 0.947 and 0.958 for the training cohort and the validation cohort, respectively. Decision curve analysis demonstrated that the nomogram was clinically useful.</p><p><strong>Conclusions: </strong>A nomogram based on the US scoring system showed favourable predictive efficacy in differentiating IgG4-RS from pSS. It has the potential to aid in clinical decision-making.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"43-51"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003662/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objectives: Improved tools based on deep learning can be used to accurately number and identify teeth. This study aims to review the use of deep learning in tooth numbering and identification.
Methods: An electronic search was performed through October 2023 on PubMed, Scopus, Cochrane, Google Scholar, IEEE, arXiv, and medRxiv. Studies that used deep learning models with segmentation, object detection, or classification tasks for tooth identification and numbering on human dental radiographs were included. For risk of bias assessment, included studies were critically analysed using the quality assessment of diagnostic accuracy studies (QUADAS-2) tool. To generate plots for the meta-analysis, MetaDiSc and STATA 17 (StataCorp LP, College Station, TX, USA) were used. Pooled diagnostic odds ratios (DORs) were calculated.
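As a small, self-contained illustration (not the review's actual pooling, which was performed in MetaDiSc and STATA), the sketch below computes a diagnostic odds ratio for a single hypothetical 2x2 study table and, equivalently, from sensitivity and specificity.

```python
# Illustrative sketch: diagnostic odds ratio (DOR) for one study.
def dor_from_counts(tp: int, fp: int, fn: int, tn: int) -> float:
    """DOR = (TP * TN) / (FP * FN)."""
    return (tp * tn) / (fp * fn)

def dor_from_rates(sensitivity: float, specificity: float) -> float:
    """Equivalent form: odds of a positive test in diseased vs non-diseased."""
    return (sensitivity / (1 - sensitivity)) * (specificity / (1 - specificity))

print(dor_from_counts(tp=89, fp=4, fn=11, tn=96))   # hypothetical study counts
print(dor_from_rates(0.89, 0.99))                    # hypothetical sensitivity/specificity
```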
Results: The initial search yielded 1618 studies, of which 29 were eligible based on the inclusion criteria. Five studies were found to have low bias across all domains of the QUADAS-2 tool. Deep learning has been reported to have an accuracy range of 81.8%-99% in tooth identification and numbering and a precision range of 84.5%-99.94%. Furthermore, sensitivity was reported as 82.7%-98% and F1-scores ranged from 87% to 98%. Sensitivity was 75.5%-98% and specificity was 79.9%-99%. Only 6 studies found the deep learning model to be less than 90% accurate. The average DOR of the pooled data set was 1612, the sensitivity was 89%, the specificity was 99%, and the area under the curve was 96%.
Conclusion: Deep learning models can successfully detect, identify, and number teeth on dental radiographs. Deep learning-powered tooth numbering systems can enhance complex automated processes, such as accurately reporting which teeth have caries, thus aiding clinicians in making informed decisions during clinical practice.
{"title":"Deep learning for tooth identification and numbering on dental radiography: a systematic review and meta-analysis.","authors":"Soroush Sadr, Rata Rokhshad, Yasaman Daghighi, Mohsen Golkar, Fateme Tolooie Kheybari, Fatemeh Gorjinejad, Atousa Mataji Kojori, Parisa Rahimirad, Parnian Shobeiri, Mina Mahdian, Hossein Mohammad-Rahimi","doi":"10.1093/dmfr/twad001","DOIUrl":"10.1093/dmfr/twad001","url":null,"abstract":"<p><strong>Objectives: </strong>Improved tools based on deep learning can be used to accurately number and identify teeth. This study aims to review the use of deep learning in tooth numbering and identification.</p><p><strong>Methods: </strong>An electronic search was performed through October 2023 on PubMed, Scopus, Cochrane, Google Scholar, IEEE, arXiv, and medRxiv. Studies that used deep learning models with segmentation, object detection, or classification tasks for teeth identification and numbering of human dental radiographs were included. For risk of bias assessment, included studies were critically analysed using quality assessment of diagnostic accuracy studies (QUADAS-2). To generate plots for meta-analysis, MetaDiSc and STATA 17 (StataCorp LP, College Station, TX, USA) were used. Pooled outcome diagnostic odds ratios (DORs) were determined through calculation.</p><p><strong>Results: </strong>The initial search yielded 1618 studies, of which 29 were eligible based on the inclusion criteria. Five studies were found to have low bias across all domains of the QUADAS-2 tool. Deep learning has been reported to have an accuracy range of 81.8%-99% in tooth identification and numbering and a precision range of 84.5%-99.94%. Furthermore, sensitivity was reported as 82.7%-98% and F1-scores ranged from 87% to 98%. Sensitivity was 75.5%-98% and specificity was 79.9%-99%. Only 6 studies found the deep learning model to be less than 90% accurate. The average DOR of the pooled data set was 1612, the sensitivity was 89%, the specificity was 99%, and the area under the curve was 96%.</p><p><strong>Conclusion: </strong>Deep learning models successfully can detect, identify, and number teeth on dental radiographs. Deep learning-powered tooth numbering systems can enhance complex automated processes, such as accurately reporting which teeth have caries, thus aiding clinicians in making informed decisions during clinical practice.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":" ","pages":"5-21"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003608/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139106005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objectives: Machine learning (ML) algorithms are a branch of artificial intelligence that may be used to create more accurate procedures for estimating an individual's dental age or assigning an age classification. This study aims to use ML algorithms to evaluate the efficacy of the pulp/tooth area ratio (PTR) in cone-beam CT (CBCT) images for predicting dental age classification in adults.
Methods: CBCT images of 236 Turkish individuals (121 males and 115 females) aged 18 to 70 years were included. PTRs were calculated for six teeth in each individual, and the study dataset comprised a total of 1416 PTRs. Support vector machine, classification and regression tree, and random forest (RF) models were employed for dental age classification, and their accuracy was compared. For all ML algorithms, the data were partitioned into training (70%) and test (30%) datasets, and the correct classification performance of the trained models was evaluated.
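A brief sketch of this kind of evaluation (not the authors' code) is shown below: a 70/30 train/test split and a random-forest classifier on PTR features. The PTR matrix and the age-group labels are synthetic placeholders; only the sample size and the number of teeth per individual follow the abstract.

```python
# Illustrative sketch: 70/30 split and random-forest age-class prediction from PTRs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
ptr = rng.uniform(0.02, 0.25, size=(236, 6))      # 6 PTR values per individual (placeholder)
age_class = rng.integers(0, 4, size=236)           # hypothetical age-group labels

X_train, X_test, y_train, y_test = train_test_split(
    ptr, age_class, test_size=0.30, stratify=age_class, random_state=0
)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, rf.predict(X_test)))
```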
Results: Overall model performance was low. The highest accuracy and confidence intervals were obtained with the RF algorithm.
Conclusions: Although model performance was low, the approach represents an alternative to conventional methods. We suggest examining parameters derived from different measurement techniques in CBCT images in order to develop ML algorithms for age classification in forensic settings.
{"title":"Machine learning assessment of dental age classification based on cone-beam CT images: a different approach.","authors":"Ozlem B Dogan, Hatice Boyacioglu, Dincer Goksuluk","doi":"10.1093/dmfr/twad009","DOIUrl":"10.1093/dmfr/twad009","url":null,"abstract":"<p><strong>Objectives: </strong>Machine learning (ML) algorithms are a portion of artificial intelligence that may be used to create more accurate algorithmic procedures for estimating an individual's dental age or defining an age classification. This study aims to use ML algorithms to evaluate the efficacy of pulp/tooth area ratio (PTR) in cone-beam CT (CBCT) images to predict dental age classification in adults.</p><p><strong>Methods: </strong>CBCT images of 236 Turkish individuals (121 males and 115 females) from 18 to 70 years of age were included. PTRs were calculated for six teeth in each individual, and a total of 1416 PTRs encompassed the study dataset. Support vector machine, classification and regression tree, and random forest (RF) models for dental age classification were employed. The accuracy of these techniques was compared. To facilitate this evaluation process, the available data were partitioned into training and test datasets, maintaining a proportion of 70% for training and 30% for testing across the spectrum of ML algorithms employed. The correct classification performances of the trained models were evaluated.</p><p><strong>Results: </strong>The models' performances were found to be low. The models' highest accuracy and confidence intervals were found to belong to the RF algorithm.</p><p><strong>Conclusions: </strong>According to our results, models were found to be low in performance but were considered as a different approach. We suggest examining the different parameters derived from different measuring techniques in the data obtained from CBCT images in order to develop ML algorithms for age classification in forensic situations.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"67-73"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003658/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}