Deep learning approaches for automated classification of neonatal lung ultrasound with assessment of human-to-AI interrater agreement

IF 7.0 | CAS Zone 2 (Medicine) | Q1 (Biology) | Computers in Biology and Medicine | Pub Date: 2024-12-01 | Epub Date: 2024-11-05 | DOI: 10.1016/j.compbiomed.2024.109315
Noreen Fatima, Umair Khan, Xi Han, Emanuela Zannin, Camilla Rigotti, Federico Cattaneo, Giulia Dognini, Maria Luisa Ventura, Libertario Demi
{"title":"用于新生儿肺部超声波自动分类的深度学习方法,并评估人与人工智能的交互一致性。","authors":"Noreen Fatima, Umair Khan, Xi Han, Emanuela Zannin, Camilla Rigotti, Federico Cattaneo, Giulia Dognini, Maria Luisa Ventura, Libertario Demi","doi":"10.1016/j.compbiomed.2024.109315","DOIUrl":null,"url":null,"abstract":"<p><p>Neonatal respiratory disorders pose significant challenges in clinical settings, often requiring rapid and accurate diagnostic solutions for effective management. Lung ultrasound (LUS) has emerged as a promising tool to evaluate respiratory conditions in neonates. This evaluation is mainly based on the interpretation of visual patterns (horizontal artifacts, vertical artifacts, and consolidations). Automated interpretation of these patterns can assist clinicians in their evaluations. However, developing AI-based solutions for this purpose is challenging, primarily due to the lack of annotated data and inherent subjectivity in expert interpretations. This study aims to propose an automated solution for the reliable interpretation of patterns in LUS videos of newborns. We employed two distinct strategies. The first strategy is a frame-to-video-level approach that computes frame-level predictions from deep learning (DL) models trained from scratch (F2V-TS) along with fine-tuning pre-trained models (F2V-FT) followed by aggregation of those predictions for video-level evaluation. The second strategy is a direct video classification approach (DV) for evaluating LUS data. To evaluate our methods, we used LUS data from 34 neonatal patients comprising of 70 exams with annotations provided by three expert human operators (3HOs). Results show that within the frame-to-video-level approach, F2V-FT achieved the best performance with an accuracy of 77% showing moderate agreement with the 3HOs. while the direct video classification approach resulted in an accuracy of 72%, showing substantial agreement with the 3HOs, our proposed study lays down the foundation for reliable AI-based solutions for newborn LUS data evaluation.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"183 ","pages":"109315"},"PeriodicalIF":7.0000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep learning approaches for automated classification of neonatal lung ultrasound with assessment of human-to-AI interrater agreement.\",\"authors\":\"Noreen Fatima, Umair Khan, Xi Han, Emanuela Zannin, Camilla Rigotti, Federico Cattaneo, Giulia Dognini, Maria Luisa Ventura, Libertario Demi\",\"doi\":\"10.1016/j.compbiomed.2024.109315\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Neonatal respiratory disorders pose significant challenges in clinical settings, often requiring rapid and accurate diagnostic solutions for effective management. Lung ultrasound (LUS) has emerged as a promising tool to evaluate respiratory conditions in neonates. This evaluation is mainly based on the interpretation of visual patterns (horizontal artifacts, vertical artifacts, and consolidations). Automated interpretation of these patterns can assist clinicians in their evaluations. However, developing AI-based solutions for this purpose is challenging, primarily due to the lack of annotated data and inherent subjectivity in expert interpretations. This study aims to propose an automated solution for the reliable interpretation of patterns in LUS videos of newborns. We employed two distinct strategies. 
The first strategy is a frame-to-video-level approach that computes frame-level predictions from deep learning (DL) models trained from scratch (F2V-TS) along with fine-tuning pre-trained models (F2V-FT) followed by aggregation of those predictions for video-level evaluation. The second strategy is a direct video classification approach (DV) for evaluating LUS data. To evaluate our methods, we used LUS data from 34 neonatal patients comprising of 70 exams with annotations provided by three expert human operators (3HOs). Results show that within the frame-to-video-level approach, F2V-FT achieved the best performance with an accuracy of 77% showing moderate agreement with the 3HOs. while the direct video classification approach resulted in an accuracy of 72%, showing substantial agreement with the 3HOs, our proposed study lays down the foundation for reliable AI-based solutions for newborn LUS data evaluation.</p>\",\"PeriodicalId\":10578,\"journal\":{\"name\":\"Computers in biology and medicine\",\"volume\":\"183 \",\"pages\":\"109315\"},\"PeriodicalIF\":7.0000,\"publicationDate\":\"2024-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in biology and medicine\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1016/j.compbiomed.2024.109315\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/11/5 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in biology and medicine","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1016/j.compbiomed.2024.109315","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/11/5 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Neonatal respiratory disorders pose significant challenges in clinical settings, often requiring rapid and accurate diagnostic solutions for effective management. Lung ultrasound (LUS) has emerged as a promising tool to evaluate respiratory conditions in neonates. This evaluation is mainly based on the interpretation of visual patterns (horizontal artifacts, vertical artifacts, and consolidations). Automated interpretation of these patterns can assist clinicians in their evaluations. However, developing AI-based solutions for this purpose is challenging, primarily due to the lack of annotated data and the inherent subjectivity of expert interpretations. This study aims to propose an automated solution for the reliable interpretation of patterns in LUS videos of newborns. We employed two distinct strategies. The first is a frame-to-video-level approach that computes frame-level predictions from deep learning (DL) models trained from scratch (F2V-TS) as well as from fine-tuned pre-trained models (F2V-FT), and then aggregates those predictions for video-level evaluation. The second is a direct video classification approach (DV) for evaluating LUS data. To evaluate our methods, we used LUS data from 34 neonatal patients comprising 70 exams, with annotations provided by three expert human operators (3HOs). Results show that, within the frame-to-video-level approach, F2V-FT achieved the best performance with an accuracy of 77%, showing moderate agreement with the 3HOs, while the direct video classification approach reached an accuracy of 72%, showing substantial agreement with the 3HOs. This study lays the foundation for reliable AI-based solutions for the evaluation of newborn LUS data.
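The abstract does not state which backbone, framework, or training setup was used for the F2V-FT strategy. As a minimal sketch only, assuming a PyTorch workflow with an ImageNet-pretrained ResNet-18 fine-tuned to classify single LUS frames into the three pattern classes named above (horizontal artifacts, vertical artifacts, consolidations), frame-level fine-tuning could look like this:

```python
# Minimal sketch of frame-level fine-tuning (F2V-FT style).
# The backbone, class set, and training setup are assumptions; the abstract
# does not specify what the authors actually used.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed: horizontal artifacts, vertical artifacts, consolidations

def build_frame_classifier(num_classes: int = NUM_CLASSES) -> nn.Module:
    # Start from an ImageNet-pretrained ResNet-18 and replace the final layer.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def fine_tune_step(model, frames, labels, optimizer, criterion):
    # One optimization step on a batch of LUS frames shaped (N, 3, H, W).
    model.train()
    optimizer.zero_grad()
    logits = model(frames)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = build_frame_classifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    dummy_frames = torch.randn(4, 3, 224, 224)              # placeholder batch
    dummy_labels = torch.randint(0, NUM_CLASSES, (4,))      # placeholder labels
    print(fine_tune_step(model, dummy_frames, dummy_labels, optimizer, criterion))
```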
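Likewise, the abstract says that frame-level predictions are aggregated for video-level evaluation but does not specify the aggregation rule. A simple sketch, assuming either probability averaging or majority voting over per-frame softmax outputs:

```python
# Sketch of frame-to-video aggregation; the actual rule used by the authors
# (majority vote, probability averaging, etc.) is not given in the abstract.
import numpy as np

def aggregate_video_prediction(frame_probs: np.ndarray, method: str = "mean") -> int:
    """frame_probs: (num_frames, num_classes) softmax outputs for one video."""
    if method == "mean":
        # Average class probabilities across frames, then take the argmax.
        return int(frame_probs.mean(axis=0).argmax())
    if method == "vote":
        # Majority vote over per-frame hard predictions.
        frame_labels = frame_probs.argmax(axis=1)
        return int(np.bincount(frame_labels).argmax())
    raise ValueError(f"unknown aggregation method: {method}")

# Example: 5 frames, 3 classes
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.8, 0.1, 0.1],
                  [0.5, 0.4, 0.1]])
print(aggregate_video_prediction(probs, "mean"), aggregate_video_prediction(probs, "vote"))
```

Averaging probabilities is generally more robust to a few noisy frames, while majority voting is easier to interpret; which rule the authors actually used is not stated in the abstract.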

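The terms "moderate" and "substantial" agreement conventionally refer to the Landis–Koch interpretation bands for Cohen's kappa; the exact agreement statistic used in the paper is not given in the abstract. A hypothetical example of how a human-to-AI agreement check could be computed:

```python
# Sketch of a human-to-AI agreement check using Cohen's kappa.
# The labels below are invented for illustration; the paper's actual
# agreement statistic and data are not stated in the abstract.
from sklearn.metrics import cohen_kappa_score

def landis_koch_band(kappa: float) -> str:
    # Conventional interpretation bands for kappa (Landis & Koch, 1977).
    if kappa < 0.21: return "slight"
    if kappa < 0.41: return "fair"
    if kappa < 0.61: return "moderate"
    if kappa < 0.81: return "substantial"
    return "almost perfect"

# Hypothetical video-level labels (0/1/2 = three LUS pattern classes)
ai_labels       = [0, 1, 2, 1, 0, 2, 1, 0]
operator_labels = [0, 1, 2, 2, 0, 2, 1, 1]

kappa = cohen_kappa_score(ai_labels, operator_labels)
print(f"kappa = {kappa:.2f} ({landis_koch_band(kappa)} agreement)")
```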
Source journal
Computers in Biology and Medicine (Engineering & Technology – Biomedical Engineering)
CiteScore: 11.70
Self-citation rate: 10.40%
Articles published: 1086
Review time: 74 days
Journal description: Computers in Biology and Medicine is an international forum for sharing groundbreaking advancements in the use of computers in bioscience and medicine. This journal serves as a medium for communicating essential research, instruction, ideas, and information regarding the rapidly evolving field of computer applications in these domains. By encouraging the exchange of knowledge, we aim to facilitate progress and innovation in the utilization of computers in biology and medicine.