Deep learning-based apical lesion segmentation from panoramic radiographs.

Imaging Science in Dentistry (IF 1.7, Q3, Dentistry, Oral Surgery & Medicine) | Vol. 52, No. 4, pp. 351-357 | Published: 2022-12-01 | DOI: 10.5624/isd.20220078
Il-Seok Song, Hak-Kyun Shin, Ju-Hee Kang, Jo-Eun Kim, Kyung-Hoe Huh, Won-Jin Yi, Sam-Sun Lee, Min-Suk Heo
Citations: 9 | Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/88/49/isd-52-351.PMC9807797.pdf

Abstract

Purpose: Convolutional neural networks (CNNs) have rapidly emerged as one of the most promising artificial intelligence methods in medical and dental research. CNNs can provide an effective diagnostic methodology that allows early-stage diseases to be detected. Therefore, this study aimed to evaluate the performance of a deep CNN algorithm for apical lesion segmentation from panoramic radiographs.

Materials and methods: A total of 1000 panoramic images showing apical lesions were split into training (n=800, 80%), validation (n=100, 10%), and test (n=100, 10%) datasets. Performance in identifying apical lesions was evaluated by calculating the precision, recall, and F1-score.
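A minimal sketch (not the authors' code) of the 80%/10%/10% split described above and of lesion-level precision, recall, and F1 computed from true-positive (TP), false-positive (FP), and false-negative (FN) counts; the helper names and the fixed random seed are assumptions:

import random

def split_dataset(image_ids, seed=42):
    """Shuffle and split image IDs into 80% train, 10% validation, 10% test."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

def precision_recall_f1(tp, fp, fn):
    """Lesion-level metrics from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1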

Results: Of the 180 apical lesions in the test group, 147 were segmented from panoramic radiographs at an intersection over union (IoU) threshold of 0.3. The F1-scores, as a measure of performance, were 0.828, 0.815, and 0.742 at IoU thresholds of 0.3, 0.4, and 0.5, respectively.
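A hedged sketch of how such lesion-level counts can be derived: compute the pixel-wise IoU between each ground-truth lesion mask and the predicted masks, and count a lesion as detected when the best IoU reaches the chosen threshold (an illustration only; the exact matching rule used in the study is not specified here):

import numpy as np

def mask_iou(pred, gt):
    """IoU of two boolean masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

def count_detected(pred_masks, gt_masks, iou_threshold=0.3):
    """Count ground-truth lesions matched by at least one predicted mask."""
    detected = 0
    for gt in gt_masks:
        if any(mask_iou(p, gt) >= iou_threshold for p in pred_masks):
            detected += 1
    return detected  # e.g., 147 of 180 lesions at the 0.3 threshold in this study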

Conclusion: This study showed the potential utility of a deep learning-guided approach for segmenting apical lesions. The U-Net-based deep CNN algorithm demonstrated considerably high performance in detecting apical lesions.
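For orientation, a minimal U-Net-style encoder-decoder in PyTorch, illustrating the architecture family named above; this is not the authors' implementation, and the network depth and channel counts are assumptions:

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net sketch: contracting path, bottleneck, expanding path with skip connections."""

    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)  # per-pixel lesion logits

    def forward(self, x):
        e1 = self.enc1(x)                                    # full resolution
        e2 = self.enc2(self.pool(e1))                        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection from e2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1)) # skip connection from e1
        return self.head(d1)  # apply sigmoid and threshold to obtain a lesion mask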
