First Trimester Video Saliency Prediction using CLSTMU-NET with Stochastic Augmentation.

Elizaveta Savochkina, Lok Hin Lee, He Zhao, Lior Drukker, Aris T Papageorghiou, J Alison Noble
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2022
DOI: 10.1109/ISBI52829.2022.9761585
Published: 2022-04-26
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7614063/pdf/EMS159391.pdf
Citations: 0

Abstract

In this paper we develop a multi-modal video analysis algorithm to predict where a sonographer should look next. Our approach uses video and expert knowledge, defined by gaze-tracking data acquired during routine first-trimester fetal ultrasound scanning. Specifically, we propose a spatio-temporal convolutional LSTM U-Net neural network (cLSTMU-Net) for video saliency prediction with stochastic augmentation. The architecture consists of a U-Net based encoder-decoder network and a cLSTM to take temporal information into account. We compare the performance of the cLSTMU-Net against spatial-only architectures for the task of predicting gaze in first-trimester ultrasound videos. Our study dataset consists of 115 clinically acquired first-trimester ultrasound videos with a total of 45,666 video frames. We adopt a Random Augmentation (RA) strategy, obtained from a stochastic augmentation policy search, to improve model performance and reduce over-fitting. Using video clips of 6 frames, the proposed cLSTMU-Net outperforms the baseline approach on all saliency metrics: KLD, SIM, NSS and CC (2.08, 0.28, 4.53 and 0.42 versus 2.16, 0.27, 4.34 and 0.39).
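The Random Augmentation (RA) strategy mentioned in the abstract follows the RandAugment idea: sample a small number of transforms from a fixed pool and apply them at a shared magnitude. For video clips, each sampled transform must be applied identically to every frame so the temporal structure is preserved. A minimal sketch of this idea (the operation pool, parameter scalings, and function names below are illustrative assumptions, not the paper's actual searched policy):

```python
import numpy as np

# Illustrative clip-level ops on an array of shape (T, H, W), values in [0, 1].
def hflip(clip, m):      return clip[:, :, ::-1]
def gamma(clip, m):      return np.clip(clip, 0.0, 1.0) ** (1.0 + 0.5 * m)
def brightness(clip, m): return np.clip(clip * (1.0 + 0.3 * m), 0.0, 1.0)
def shift(clip, m):      return np.roll(clip, int(4 * m), axis=2)

OPS = [hflip, gamma, brightness, shift]

def rand_augment_clip(clip, n_ops=2, magnitude=0.5, rng=None):
    """RandAugment-style policy: sample n_ops transforms and apply each, at a
    shared magnitude, to EVERY frame of the clip at once."""
    rng = rng if rng is not None else np.random.default_rng()
    for idx in rng.choice(len(OPS), size=n_ops, replace=False):
        clip = OPS[idx](clip, magnitude)
    return clip
```

Because each op acts on the whole `(T, H, W)` array, a flip or shift moves every frame the same way, which matters when the model (here, a cLSTM) consumes the frames as a temporal sequence.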

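The four saliency metrics reported in the abstract (KLD, SIM, NSS, CC) have standard definitions from the gaze-prediction literature: KLD and SIM compare the predicted map to the ground-truth map as probability distributions, NSS evaluates the normalized prediction at fixated pixels, and CC is the Pearson correlation of the two maps. A minimal NumPy sketch, assuming these standard forms (the function names and the `eps` constant are our own):

```python
import numpy as np

def _as_dist(x, eps=1e-8):
    """Normalize a non-negative saliency map to a probability distribution."""
    x = x.astype(np.float64)
    return x / (x.sum() + eps)

def kld(pred, gt, eps=1e-8):
    """Kullback-Leibler divergence of prediction from ground truth (lower is better)."""
    p, q = _as_dist(gt), _as_dist(pred)
    return float(np.sum(p * np.log(eps + p / (q + eps))))

def sim(pred, gt):
    """Similarity / histogram intersection of the two distributions (higher is better)."""
    return float(np.minimum(_as_dist(pred), _as_dist(gt)).sum())

def nss(pred, fixations):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels."""
    s = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(s[fixations.astype(bool)].mean())

def cc(pred, gt):
    """Pearson linear correlation coefficient between the two maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())
```

Note the directions: a lower KLD but higher SIM, NSS and CC indicate a better prediction, which matches the reported improvement (KLD 2.08 vs 2.16; SIM, NSS, CC all higher for the cLSTMU-Net).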
