Artificial intelligence-based classification of echocardiographic views

J. Naser, E. Lee, S. Pislaru, Gal Tsaban, Jeffrey G Malins, John I Jackson, D. Anisuzzaman, Behrouz Rostami, Francisco Lopez-Jimenez, Paul A. Friedman, Garvan C. Kane, Patricia A. Pellikka, Z. Attia
DOI: 10.1093/ehjdh/ztae015
Journal: European Heart Journal - Digital Health
Published: 2024-02-26 (Journal Article)
Citations: 0

Abstract

Augmenting echocardiography with artificial intelligence would allow for automated assessment of routine parameters and identification of disease patterns not easily recognized otherwise. View classification is an essential first step before deep learning can be applied to the echocardiogram.

We trained 2- and 3-dimensional convolutional neural networks (CNNs) using transthoracic echocardiographic (TTE) studies obtained from 909 patients to classify 9 view categories [10,269 videos]. TTE studies from 229 patients were used in internal validation [2,582 videos]. CNNs were tested on 100 patients with comprehensive TTE studies [where the 2 examples chosen by CNNs as most likely to represent a view were evaluated] and 408 patients with five view categories obtained via point-of-care ultrasound (POCUS).

The overall accuracy of the 2-dimensional CNN was 96.8% and the averaged area under the curve (AUC) was 0.997 on the comprehensive TTE testing set; these numbers were 98.4% and 0.998, respectively, on the POCUS set. For the 3-dimensional CNN, the accuracy and AUC were 96.3% and 0.998 for full TTE studies and 95.0% and 0.996 on POCUS videos, respectively. The positive predictive value, defined as the proportion of predicted views that were correctly identified, was higher with the 2- than the 3-dimensional network, exceeding 93% in apical, short axis aortic valve, and parasternal long axis left ventricle views.

An automated view classifier utilizing CNNs was able to classify cardiac views obtained using TTE and POCUS with high accuracy. The view classifier will facilitate the application of deep learning to echocardiography.
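On the comprehensive TTE test set, the two clips the network ranked as most likely to represent each view were the ones evaluated. A minimal sketch of that top-2 selection from per-clip softmax outputs; the probability values, view count, and function name are illustrative, not taken from the study:

```python
import numpy as np

# Hypothetical softmax outputs for 5 clips over 3 view categories.
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.1, 0.8],
])

def top_k_clips(probs: np.ndarray, view_idx: int, k: int = 2) -> np.ndarray:
    """Indices of the k clips ranked most likely to show the given view."""
    order = np.argsort(probs[:, view_idx])[::-1]  # descending confidence
    return order[:k]

print(top_k_clips(probs, view_idx=0))  # the two clips most confident for view 0
```

The same ranking generalizes to any number of views and clips; in a comprehensive study the selection would run once per view category.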
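The abstract reports three multiclass metrics: overall accuracy, averaged AUC, and per-view positive predictive value (precision). A minimal sketch of how such metrics could be computed with scikit-learn; the label encoding, probability values, and variable names are hypothetical, not the study's data:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, roc_auc_score

# Hypothetical ground-truth view labels (0, 1, 2) and softmax outputs
# for 6 clips over 3 of the 9 view categories.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_prob = np.array([
    [0.9, 0.05, 0.05],
    [0.8, 0.1, 0.1],
    [0.1, 0.85, 0.05],
    [0.2, 0.7, 0.1],
    [0.05, 0.15, 0.8],
    [0.5, 0.3, 0.2],   # misclassified clip: true view 2, predicted view 0
])
y_pred = y_prob.argmax(axis=1)

# Overall accuracy, as reported on the TTE and POCUS test sets.
acc = accuracy_score(y_true, y_pred)

# Macro-averaged one-vs-rest AUC across the view categories.
auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")

# Per-view positive predictive value: of the clips assigned to a view,
# the fraction that truly show that view.
ppv = precision_score(y_true, y_pred, average=None)

print(f"accuracy={acc:.3f}  macro AUC={auc:.3f}  per-view PPV={ppv.round(3)}")
```

With one misclassified clip, the per-view PPV drops only for the view the clip was wrongly assigned to, which is why the paper reports PPV separately for the apical, short axis aortic valve, and parasternal long axis views.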