Improving ultrasound video classification: an evaluation of novel deep learning methods in echocardiography.

James P Howard, Jeremy Tan, Matthew J Shun-Shin, Dina Mahdi, Alexandra N Nowbar, Ahran D Arnold, Yousif Ahmad, Peter McCartney, Massoud Zolgharni, Nick W F Linton, Nilesh Sutaria, Bushra Rana, Jamil Mayet, Daniel Rueckert, Graham D Cole, Darrel P Francis
Journal: Journal of medical artificial intelligence, vol. 3
DOI: 10.21037/jmai.2019.10.03
Published: 25 March 2020
Citations: 28

Abstract

Echocardiography is the commonest medical ultrasound examination, but automated interpretation is challenging and hinges on correct recognition of the 'view' (imaging plane and orientation). Current state-of-the-art methods for identifying the view computationally involve 2-dimensional convolutional neural networks (CNNs), but these merely classify individual frames of a video in isolation, and ignore information describing the movement of structures throughout the cardiac cycle. Here we explore the efficacy of novel CNN architectures, including time-distributed networks and two-stream networks, which are inspired by advances in human action recognition. We demonstrate that these new architectures more than halve the error rate of traditional CNNs from 8.1% to 3.9%. These advances in accuracy may be due to these networks' ability to track the movement of specific structures such as heart valves throughout the cardiac cycle. Finally, we show the accuracies of these new state-of-the-art networks are approaching expert agreement (3.6% discordance), with a similar pattern of discordance between views.
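The two architecture families the abstract contrasts with frame-by-frame 2D CNNs can be sketched in terms of how they prepare and pool information over time. The snippet below is an illustrative NumPy sketch, not the paper's actual models: the pooled-intensity "feature extractor" stands in for a shared 2D CNN, and frame differencing stands in for the optical-flow input of a two-stream network; all function names, clip sizes, and pooling choices are assumptions.

```python
import numpy as np

def frame_features(frame: np.ndarray) -> np.ndarray:
    # Stand-in for a shared 2D CNN backbone: coarse 4x4 average-pooled
    # intensities, flattened into a per-frame feature vector.
    h, w = frame.shape
    return frame.reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3)).ravel()

def time_distributed(clip: np.ndarray) -> np.ndarray:
    # Time-distributed idea: apply the SAME extractor to every frame,
    # yielding a (time, features) sequence a temporal model can consume.
    return np.stack([frame_features(f) for f in clip])

def two_stream_inputs(clip: np.ndarray):
    # Two-stream idea: one stream sees appearance (a single frame),
    # the other sees motion (here, frame-to-frame differences as a
    # crude proxy for stacked optical flow).
    spatial = clip[len(clip) // 2]     # mid-cycle frame
    temporal = np.diff(clip, axis=0)   # (time - 1, H, W) motion stack
    return spatial, temporal

# A synthetic grayscale "echo clip": 16 frames of 64x64 pixels.
clip = np.random.rand(16, 64, 64).astype(np.float32)
print(time_distributed(clip).shape)            # (16, 16)
spatial, temporal = two_stream_inputs(clip)
print(spatial.shape, temporal.shape)           # (64, 64) (15, 64, 64)
```

The point of the sketch is the data flow: per-frame features plus a temporal stage (or a dedicated motion stream) let the network exploit valve and wall motion across the cardiac cycle, which single-frame classifiers discard.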
