A Deep Learning Model to Predict Breast Implant Texture Types Using Ultrasonography Images: Feasibility Development Study.

JMIR Formative Research (IF 2.0, Q3, Health Care Sciences & Services) · Pub Date: 2024-11-05 · DOI: 10.2196/58776 · Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11576615/pdf/
Ho Heon Kim, Won Chan Jeong, Kyungran Pi, Angela Soeun Lee, Min Soo Kim, Hye Jin Kim, Jae Hong Kim
{"title":"A Deep Learning Model to Predict Breast Implant Texture Types Using Ultrasonography Images: Feasibility Development Study.","authors":"Ho Heon Kim, Won Chan Jeong, Kyungran Pi, Angela Soeun Lee, Min Soo Kim, Hye Jin Kim, Jae Hong Kim","doi":"10.2196/58776","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Breast implants, including textured variants, have been widely used in aesthetic and reconstructive mammoplasty. However, the textured type, which is one of the shell texture types of breast implants, has been identified as a possible etiologic factor for lymphoma, specifically breast implant-associated anaplastic large cell lymphoma (BIA-ALCL). Identifying the shell texture type of the implant is critical to diagnosing BIA-ALCL. However, distinguishing the shell texture type can be difficult due to the loss of human memory and medical history. An alternative approach is to use ultrasonography, but this method also has limitations in quantitative assessment.</p><p><strong>Objective: </strong>This study aims to determine the feasibility of using a deep learning model to classify the shell texture type of breast implants and make robust predictions from ultrasonography images from heterogeneous sources.</p><p><strong>Methods: </strong>A total of 19,502 breast implant images were retrospectively collected from heterogeneous sources, including images captured from both Canon and GE devices, images of ruptured implants, and images without implants, as well as publicly available images. The Canon images were trained using ResNet-50. The model's performance on the Canon dataset was evaluated using stratified 5-fold cross-validation. Additionally, external validation was conducted using the GE and publicly available datasets. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (PRAUC) were calculated based on the contribution of the pixels with Gradient-weighted Class Activation Mapping (Grad-CAM). To identify the significant pixels for classification, we masked the pixels that contributed less than 10%, up to a maximum of 100%. To assess the model's robustness to uncertainty, Shannon entropy was calculated for 4 image groups: Canon, GE, ruptured implants, and without implants.</p><p><strong>Results: </strong>The deep learning model achieved an average AUROC of 0.98 and a PRAUC of 0.88 in the Canon dataset. The model achieved an AUROC of 0.985 and a PRAUC of 0.748 for images captured with GE devices. Additionally, the model predicted an AUROC of 0.909 and a PRAUC of 0.958 for the publicly available dataset. This model maintained the PRAUC values for quantitative validation when masking up to 90% of the least-contributing pixels and the remnant pixels in breast shell layers. Furthermore, the prediction uncertainty increased in the following order: Canon (0.066), GE (0072), ruptured implants (0.371), and no implants (0.777).</p><p><strong>Conclusions: </strong>We have demonstrated the feasibility of using deep learning to predict the shell texture type of breast implants. 
This approach quantifies the shell texture types of breast implants, supporting the first step in the diagnosis of BIA-ALCL.</p>","PeriodicalId":14841,"journal":{"name":"JMIR Formative Research","volume":"8 ","pages":"e58776"},"PeriodicalIF":2.0000,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11576615/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Formative Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/58776","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Breast implants, including textured variants, have been widely used in aesthetic and reconstructive mammoplasty. However, the textured type, one of the shell texture types of breast implants, has been identified as a possible etiologic factor for lymphoma, specifically breast implant-associated anaplastic large cell lymphoma (BIA-ALCL). Identifying the shell texture type of the implant is therefore critical to diagnosing BIA-ALCL. In practice, however, the shell texture type can be difficult to establish because patient recall and medical records are often incomplete. Ultrasonography offers an alternative, but it too is limited for quantitative assessment.

Objective: This study aims to determine the feasibility of using a deep learning model to classify the shell texture type of breast implants and to make robust predictions on ultrasonography images from heterogeneous sources.

Methods: A total of 19,502 breast implant images were retrospectively collected from heterogeneous sources, including images captured with both Canon and GE devices, images of ruptured implants, and images without implants, as well as publicly available images. A ResNet-50 model was trained on the Canon images, and its performance on the Canon dataset was evaluated using stratified 5-fold cross-validation. External validation was then conducted on the GE and publicly available datasets. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (PRAUC) were calculated based on per-pixel contributions obtained with Gradient-weighted Class Activation Mapping (Grad-CAM). To identify the pixels that matter for classification, the least-contributing pixels were masked in 10% increments, up to 100%. To assess the model's robustness to uncertainty, Shannon entropy was calculated for 4 image groups: Canon, GE, ruptured implants, and images without implants.
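As a rough illustration of the training and evaluation pipeline described above, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 as a binary shell-texture classifier (smooth vs. textured) under stratified 5-fold cross-validation and reports per-fold AUROC and PRAUC. It assumes a PyTorch/torchvision stack, a hypothetical list of (image path, label) pairs, and illustrative hyperparameters; it is not the authors' implementation.

```python
# Rough sketch (not the authors' code): ResNet-50 binary shell-texture classifier
# with stratified 5-fold cross-validation and per-fold AUROC / PRAUC.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset, Subset
from torchvision import models, transforms
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, average_precision_score
from PIL import Image


class UltrasoundDataset(Dataset):
    """Hypothetical dataset of (image_path, label) pairs; label 1 = textured shell."""

    def __init__(self, samples, transform):
        self.samples = samples
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = self.transform(Image.open(path).convert("RGB"))
        return image, torch.tensor(label, dtype=torch.float32)


def build_model():
    # ImageNet-pretrained ResNet-50 with a single-logit head for binary classification.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)
    return model


def run_stratified_cv(samples, n_splits=5, epochs=10):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    dataset = UltrasoundDataset(samples, transform)
    labels = np.array([label for _, label in samples])
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)

    for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
        train_loader = DataLoader(Subset(dataset, train_idx), batch_size=32, shuffle=True)
        val_loader = DataLoader(Subset(dataset, val_idx), batch_size=32)
        model = build_model().to(device)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        criterion = nn.BCEWithLogitsLoss()

        for _ in range(epochs):
            model.train()
            for images, targets in train_loader:
                images, targets = images.to(device), targets.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images).squeeze(1), targets)
                loss.backward()
                optimizer.step()

        # Evaluate the held-out fold: AUROC and PRAUC (average precision).
        model.eval()
        probs, truths = [], []
        with torch.no_grad():
            for images, targets in val_loader:
                probs.extend(torch.sigmoid(model(images.to(device)).squeeze(1)).cpu().numpy())
                truths.extend(targets.numpy())
        print(f"fold {fold}: AUROC={roc_auc_score(truths, probs):.3f}, "
              f"PRAUC={average_precision_score(truths, probs):.3f}")
```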

Results: The deep learning model achieved an average AUROC of 0.98 and a PRAUC of 0.88 on the Canon dataset. For images captured with GE devices, the model achieved an AUROC of 0.985 and a PRAUC of 0.748, and on the publicly available dataset it achieved an AUROC of 0.909 and a PRAUC of 0.958. In the quantitative validation, the model maintained its PRAUC values when up to 90% of the least-contributing pixels were masked, with the remaining pixels located in the breast implant shell layers. Furthermore, prediction uncertainty increased in the following order: Canon (0.066), GE (0.072), ruptured implants (0.371), and no implants (0.777).
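The uncertainty ordering above can be reproduced conceptually with the Shannon entropy of each prediction: probabilities near 0 or 1 give low entropy, while ambiguous probabilities near 0.5 (as for images without implants) give high entropy. The following is a minimal sketch that assumes a binary sigmoid output and uses illustrative, not reported, probabilities.

```python
# Minimal sketch of the uncertainty measure referenced above: Shannon entropy of
# the model's binary predictive distribution, averaged per image group.
import numpy as np


def shannon_entropy(p_textured: np.ndarray) -> np.ndarray:
    """Per-image entropy (in nats) of a binary predictive distribution."""
    p = np.clip(p_textured, 1e-12, 1.0 - 1e-12)  # guard against log(0)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))


# Illustrative probabilities only, mirroring the Canon / no-implant contrast.
groups = {
    "canon": np.array([0.98, 0.03, 0.96]),       # confident predictions -> low entropy
    "no_implant": np.array([0.55, 0.47, 0.60]),  # probabilities near 0.5 -> high entropy
}
for name, probs in groups.items():
    print(f"{name}: mean entropy = {shannon_entropy(probs).mean():.3f}")
```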

Conclusions: We have demonstrated the feasibility of using deep learning to predict the shell texture type of breast implants. This approach provides a quantitative assessment of the implant shell texture type, supporting the first step in the diagnosis of BIA-ALCL.

Source journal: JMIR Formative Research — Medicine (miscellaneous). CiteScore: 2.70; self-citation rate: 9.10%; articles published: 579; review time: 12 weeks.