Outcome Prediction and Murmur Detection in Sets of Phonocardiograms by a Deep Learning-Based Ensemble Approach

Sven Festag, Gideon Stein, Tim Büchner, M. Shadaydeh, J. Denzler, C. Spreckelsen
{"title":"Outcome Prediction and Murmur Detection in Sets of Phonocardiograms by a Deep Learning-Based Ensemble Approach","authors":"Sven Festag, Gideon Stein, Tim Büchner, M. Shadaydeh, J. Denzler, C. Spreckelsen","doi":"10.22489/CinC.2022.137","DOIUrl":null,"url":null,"abstract":"We, the team UKJ_FSU, propose a deep learning system for the prediction of congenital heart diseases. Our method is able to predict the clinical outcomes (normal, abnormal) of patients as well as to identify heart murmur (present, absent, unclear) based on phonocardiograms recorded at different auscultation locations. The system we propose is an ensemble of four temporal convolutional networks with identical topologies, each specialized in identifying murmurs and predicting patient outcome from a phonocardiogram taken at one specific auscultation location. Their intermediate outputs are augmented by the manually ascertained patient features such as age group, sex, height, and weight. The outputs of the four networks are combined to form a single final decision as demanded by the rules of the George B. Moody PhysioNet Challenge 2022. On the first task of this challenge, the murmur detection, our model reached a weighted accuracy of 0.567 with respect to the validation set. On the outcome prediction task (second task) the ensemble led to a mean outcome cost of 10679 on the same set. By focusing on the clinical outcome prediction and tuning some of the hyper-parameters only for this task, our model reached a cost score of 12373 on the official test set (rank 13 of 39). The same model scored a weighted accuracy of 0.458 regarding the murmur detection on the test set (rank 37 of 40).","PeriodicalId":117840,"journal":{"name":"2022 Computing in Cardiology (CinC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Computing in Cardiology (CinC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.22489/CinC.2022.137","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

We, the team UKJ_FSU, propose a deep learning system for the prediction of congenital heart diseases. Our method is able to predict the clinical outcome (normal, abnormal) of patients as well as to identify heart murmurs (present, absent, unclear) based on phonocardiograms recorded at different auscultation locations. The system we propose is an ensemble of four temporal convolutional networks with identical topologies, each specialized in identifying murmurs and predicting the patient outcome from a phonocardiogram taken at one specific auscultation location. Their intermediate outputs are augmented with manually ascertained patient features such as age group, sex, height, and weight. The outputs of the four networks are combined into a single final decision, as required by the rules of the George B. Moody PhysioNet Challenge 2022. On the first task of this challenge, murmur detection, our model reached a weighted accuracy of 0.567 on the validation set. On the outcome prediction task (second task), the ensemble achieved a mean outcome cost of 10679 on the same set. By focusing on clinical outcome prediction and tuning some of the hyper-parameters only for this task, our model reached a cost score of 12373 on the official test set (rank 13 of 39). The same model achieved a weighted accuracy of 0.458 for murmur detection on the test set (rank 37 of 40).
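The described architecture lends itself to a compact sketch. The PyTorch code below is an illustrative reconstruction, not the authors' released implementation: the names TCNBranch and PCGEnsemble, the layer widths, the dilation pattern, the four location labels (AV, PV, TV, MV), and the logit-averaging fusion rule are assumptions made for this example. Only the overall structure follows the abstract: four identical temporal convolutional branches, one per auscultation location, whose pooled features are concatenated with demographic features before classification heads, and whose outputs are combined into one decision per task.

```python
# Minimal sketch (assumed layer sizes and fusion rule, not the authors' code) of an
# ensemble of four identical temporal convolutional networks, one per auscultation
# location, with demographic-feature augmentation and averaged decisions.

import torch
import torch.nn as nn


class TCNBranch(nn.Module):
    """One location-specific branch: dilated 1-D convolutions over a PCG signal."""

    def __init__(self, n_demographics: int = 4, hidden: int = 32):
        super().__init__()
        layers = []
        in_ch = 1
        for dilation in (1, 2, 4, 8):            # exponentially dilated conv stack
            layers += [
                nn.Conv1d(in_ch, hidden, kernel_size=3,
                          padding=dilation, dilation=dilation),
                nn.ReLU(),
            ]
            in_ch = hidden
        self.tcn = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool1d(1)       # collapse the time axis
        # heads receive TCN features concatenated with demographic features
        self.murmur_head = nn.Linear(hidden + n_demographics, 3)   # present/absent/unclear
        self.outcome_head = nn.Linear(hidden + n_demographics, 2)  # normal/abnormal

    def forward(self, pcg: torch.Tensor, demo: torch.Tensor):
        # pcg: (batch, 1, time), demo: (batch, n_demographics)
        feats = self.pool(self.tcn(pcg)).squeeze(-1)
        feats = torch.cat([feats, demo], dim=-1)
        return self.murmur_head(feats), self.outcome_head(feats)


class PCGEnsemble(nn.Module):
    """Four identical branches (one per auscultation location); logits are averaged."""

    LOCATIONS = ("AV", "PV", "TV", "MV")  # aortic, pulmonic, tricuspid, mitral

    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleDict({loc: TCNBranch() for loc in self.LOCATIONS})

    def forward(self, recordings: dict, demo: torch.Tensor):
        # recordings maps location -> (batch, 1, time); assumes at least one location present
        murmur_logits, outcome_logits = [], []
        for loc, branch in self.branches.items():
            if loc in recordings:
                m, o = branch(recordings[loc], demo)
                murmur_logits.append(m)
                outcome_logits.append(o)
        murmur = torch.stack(murmur_logits).mean(dim=0)
        outcome = torch.stack(outcome_logits).mean(dim=0)
        return murmur.argmax(-1), outcome.argmax(-1)


if __name__ == "__main__":
    model = PCGEnsemble()
    demo = torch.randn(2, 4)  # age group, sex, height, weight (already encoded/normalised)
    recs = {loc: torch.randn(2, 1, 4000) for loc in PCGEnsemble.LOCATIONS}
    print(model(recs, demo))
```

Averaging the per-branch logits is only one plausible fusion rule; the combination step described in the abstract could equally be a learned layer or a voting scheme over the per-location predictions.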