Exploratory study on the enhancement of O-RADS application effectiveness for novice ultrasonographers via deep learning.

Archives of Gynecology and Obstetrics · IF 2.1 · JCR Q2 (Obstetrics & Gynecology) · CAS Medicine Tier 3 · Pub Date: 2024-11-23 · DOI: 10.1007/s00404-024-07837-z
Tao Liu, Kuo Miao, Gaoqiang Tan, Hanqi Bu, Mingda Xu, Qiming Zhang, Qin Liu, Xiaoqiu Dong
{"title":"Exploratory study on the enhancement of O-RADS application effectiveness for novice ultrasonographers via deep learning.","authors":"Tao Liu, Kuo Miao, Gaoqiang Tan, Hanqi Bu, Mingda Xu, Qiming Zhang, Qin Liu, Xiaoqiu Dong","doi":"10.1007/s00404-024-07837-z","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>The study aimed to create a deep convolutional neural network (DCNN) model based on ConvNeXt-Tiny to identify classic benign lesions (CBL) from other lesions (OL) within the Ovarian-Adnexal Reporting and Data System (O-RADS), enhancing the system's utility for novice ultrasonographers.</p><p><strong>Methods: </strong>Two sets of sonographic images of pathologically confirmed adnexal lesions were retrospectively collected [development dataset (DD) and independent test dataset (ITD)]. The ConvNeXt-Tiny model, optimized through transfer learning, was trained on the DD using the original images directly and after automatic lesion segmentation by a U-Net model. Models derived from both training paradigms were validated on the ITD for sensitivity, specificity, accuracy, and area under the curve (AUC). Two novice ultrasonographers were assessed in O-RADS with and without assistance from the model for Application Effectiveness.</p><p><strong>Results: </strong>The ConvNeXt-Tiny model trained on original images scored AUCs of 0.978 for DD and 0.955 for ITD, while the U-Net segmented image model achieved 0.967 for DD and 0.923 for ITD; neither showed significant differences. When assessing the malignancy of lesions using O-RADS 4 and 5, the diagnostic performances of two novice ultrasonographers and senior ultrasonographer, as well as model-assisted classifications, showed no significant differences, except for one novice's low accuracy. This approach reduced classification time by 62 and 64 min. The kappa values with senior doctors' classifications rose from 0.776 and 0.761 to 0.914 and 0.903, respectively.</p><p><strong>Conclusion: </strong>The ConvNeXt-Tiny model demonstrated excellent and stable performance in distinguishing CBL from OL within O-RADS. The diagnostic performance of novice ultrasonographers using O-RADS is essentially equivalent to that of senior ultrasonographer, and the assistance of the model can enhance their classification efficiency and consistency with the results of senior ultrasonographer.</p>","PeriodicalId":8330,"journal":{"name":"Archives of Gynecology and Obstetrics","volume":" ","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Archives of Gynecology and Obstetrics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s00404-024-07837-z","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OBSTETRICS & GYNECOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Purpose: The study aimed to create a deep convolutional neural network (DCNN) model based on ConvNeXt-Tiny to distinguish classic benign lesions (CBL) from other lesions (OL) within the Ovarian-Adnexal Reporting and Data System (O-RADS), enhancing the system's utility for novice ultrasonographers.

Methods: Two sets of sonographic images of pathologically confirmed adnexal lesions were retrospectively collected: a development dataset (DD) and an independent test dataset (ITD). The ConvNeXt-Tiny model, optimized through transfer learning, was trained on the DD in two ways: on the original images directly and on images after automatic lesion segmentation by a U-Net model. Models from both training paradigms were validated on the ITD in terms of sensitivity, specificity, accuracy, and area under the curve (AUC). The application effectiveness of O-RADS for two novice ultrasonographers was then assessed with and without assistance from the model.
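(The paper does not include code; the following is a minimal transfer-learning sketch in PyTorch/torchvision of the kind of setup described above. The dataset path, hyperparameters, augmentations, and training schedule are illustrative assumptions, not the authors' published configuration.)

```python
# Minimal ConvNeXt-Tiny transfer-learning sketch for a 2-class task (CBL vs. OL).
# Paths and hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Start from ImageNet-pretrained weights and replace the final linear layer
# with a 2-class head.
model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.DEFAULT)
in_features = model.classifier[2].in_features
model.classifier[2] = nn.Linear(in_features, 2)
model = model.to(device)

# Simple preprocessing; in the study, one training arm used original images and
# the other used U-Net-segmented lesion images prepared upstream of this step.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("dd/train", transform=train_tf)  # hypothetical path
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for epoch in range(20):  # illustrative epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```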

Results: The ConvNeXt-Tiny model trained on the original images achieved AUCs of 0.978 on the DD and 0.955 on the ITD, while the model trained on U-Net-segmented images achieved 0.967 on the DD and 0.923 on the ITD; the differences were not significant. When lesion malignancy was assessed using O-RADS categories 4 and 5, the diagnostic performance of the two novice ultrasonographers, the senior ultrasonographer, and the model-assisted classifications showed no significant differences, except for one novice's lower accuracy. Model assistance reduced the two novices' classification times by 62 and 64 min, and their kappa values of agreement with the senior ultrasonographer's classifications rose from 0.776 and 0.761 to 0.914 and 0.903, respectively.
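(For context, the reported metrics correspond to standard binary-classification and agreement measures; the scikit-learn sketch below shows how AUC, sensitivity, specificity, accuracy, and Cohen's kappa are typically computed. All arrays are placeholder values, not the study's data.)

```python
# Illustrative metric computation; y_true, y_score, and the reader labels are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                     # 1 = other lesion (OL), 0 = CBL
y_score = np.array([0.1, 0.3, 0.8, 0.9, 0.4, 0.2, 0.7, 0.4])    # model probability of OL
y_pred = (y_score >= 0.5).astype(int)

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)

# Agreement between a novice's O-RADS categories and the senior reader's,
# summarized as Cohen's kappa (placeholder category labels).
novice = [2, 3, 4, 4, 5, 2, 3, 5]
senior = [2, 3, 4, 5, 5, 2, 3, 5]
kappa = cohen_kappa_score(novice, senior)

print(f"AUC={auc:.3f} Se={sensitivity:.3f} Sp={specificity:.3f} "
      f"Acc={accuracy:.3f} kappa={kappa:.3f}")
```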

Conclusion: The ConvNeXt-Tiny model demonstrated excellent and stable performance in distinguishing CBL from OL within O-RADS. The diagnostic performance of novice ultrasonographers using O-RADS is essentially equivalent to that of a senior ultrasonographer, and assistance from the model can improve their classification efficiency and their consistency with the senior ultrasonographer's results.

Source journal: Archives of Gynecology and Obstetrics
CiteScore: 4.70 · Self-citation rate: 15.40% · Articles published: 493 · Review time: 1 month
Journal description: Founded in 1870 as "Archiv für Gynaekologie", Archives of Gynecology and Obstetrics has a long and outstanding tradition. Since 1922 the journal has been the organ of the Deutsche Gesellschaft für Gynäkologie und Geburtshilfe. "The Archives of Gynecology and Obstetrics" is circulated in over 40 countries worldwide and is indexed in "PubMed/Medline" and "Science Citation Index Expanded/Journal Citation Report". The journal publishes invited and submitted reviews, peer-reviewed original articles on clinical topics and basic research, as well as news and views, guidelines, and position statements from all sub-specialties in gynecology and obstetrics.
Latest articles from this journal
The effects of the intrapartum care model given in line with the recommendations of the World Health Organization (WHO) on the mother's maternal behavior towards her baby, breastfeeding self-efficacy, breastfeeding success, and hospital discharge readiness: a randomized controlled trial.
Association between placental site and successful induction of labor among postdate primiparous women.
Clinical comparison of laparoscopic and open surgical approaches for uterus-preserving myomectomy: a retrospective analysis on patient-reported outcome, postoperative morbidity and pregnancy outcomes.
Effect of mother's active pushing at cesarean delivery: a randomized controlled trial.
Relationship of female pelvic floor muscle function and body composition: cross-sectional study.