First Trimester Gaze Pattern Estimation Using Stochastic Augmentation Policy Search for Single Frame Saliency Prediction.

Elizaveta Savochkina, Lok Hin Lee, Lior Drukker, Aris T Papageorghiou, J Alison Noble
{"title":"基于随机增强策略搜索的单帧显著性预测的孕早期凝视模式估计。","authors":"Elizaveta Savochkina,&nbsp;Lok Hin Lee,&nbsp;Lior Drukker,&nbsp;Aris T Papageorghiou,&nbsp;J Alison Noble","doi":"10.1007/978-3-030-80432-9_28","DOIUrl":null,"url":null,"abstract":"<p><p>While performing an ultrasound (US) scan, sonographers direct their gaze at regions of interest to verify that the correct plane is acquired and to interpret the acquisition frame. Predicting sonographer gaze on US videos is useful for identification of spatio-temporal patterns that are important for US scanning. This paper investigates utilizing sonographer gaze, in the form of gaze-tracking data, in a multimodal imaging deep learning framework to assist the analysis of the first trimester fetal ultrasound scan. Specifically, we propose an encoderdecoder convolutional neural network with skip connections to predict the visual gaze for each frame using 115 first trimester ultrasound videos; 29,250 video frames for training, 7,290 for validation and 9,126 for testing. We find that the dataset of our size benefits from automated data augmentation, which in turn, alleviates model overfitting and reduces structural variation imbalance of US anatomical views between the training and test datasets. Specifically, we employ a stochastic augmentation policy search method to improve segmentation performance. Using the learnt policies, our models outperform the baseline: KLD, SIM, NSS and CC (2.16, 0.27, 4.34 and 0.39 versus 3.17, 0.21, 2.92 and 0.28).</p>","PeriodicalId":93336,"journal":{"name":"Medical image understanding and analysis : 25th Annual Conference, MIUA 2021, Oxford, United Kingdom, July 12-14, 2021, Proceedings. Medical Image Understanding and Analysis (Conference) (25th : 2021 : Online)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7611594/pdf/EMS132092.pdf","citationCount":"0","resultStr":"{\"title\":\"First Trimester Gaze Pattern Estimation Using Stochastic Augmentation Policy Search for Single Frame Saliency Prediction.\",\"authors\":\"Elizaveta Savochkina,&nbsp;Lok Hin Lee,&nbsp;Lior Drukker,&nbsp;Aris T Papageorghiou,&nbsp;J Alison Noble\",\"doi\":\"10.1007/978-3-030-80432-9_28\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>While performing an ultrasound (US) scan, sonographers direct their gaze at regions of interest to verify that the correct plane is acquired and to interpret the acquisition frame. Predicting sonographer gaze on US videos is useful for identification of spatio-temporal patterns that are important for US scanning. This paper investigates utilizing sonographer gaze, in the form of gaze-tracking data, in a multimodal imaging deep learning framework to assist the analysis of the first trimester fetal ultrasound scan. Specifically, we propose an encoderdecoder convolutional neural network with skip connections to predict the visual gaze for each frame using 115 first trimester ultrasound videos; 29,250 video frames for training, 7,290 for validation and 9,126 for testing. We find that the dataset of our size benefits from automated data augmentation, which in turn, alleviates model overfitting and reduces structural variation imbalance of US anatomical views between the training and test datasets. Specifically, we employ a stochastic augmentation policy search method to improve segmentation performance. 
Using the learnt policies, our models outperform the baseline: KLD, SIM, NSS and CC (2.16, 0.27, 4.34 and 0.39 versus 3.17, 0.21, 2.92 and 0.28).</p>\",\"PeriodicalId\":93336,\"journal\":{\"name\":\"Medical image understanding and analysis : 25th Annual Conference, MIUA 2021, Oxford, United Kingdom, July 12-14, 2021, Proceedings. Medical Image Understanding and Analysis (Conference) (25th : 2021 : Online)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7611594/pdf/EMS132092.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical image understanding and analysis : 25th Annual Conference, MIUA 2021, Oxford, United Kingdom, July 12-14, 2021, Proceedings. Medical Image Understanding and Analysis (Conference) (25th : 2021 : Online)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/978-3-030-80432-9_28\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2021/7/6 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical image understanding and analysis : 25th Annual Conference, MIUA 2021, Oxford, United Kingdom, July 12-14, 2021, Proceedings. Medical Image Understanding and Analysis (Conference) (25th : 2021 : Online)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-80432-9_28","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/7/6 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

While performing an ultrasound (US) scan, sonographers direct their gaze at regions of interest to verify that the correct plane is acquired and to interpret the acquisition frame. Predicting sonographer gaze on US videos is useful for identifying spatio-temporal patterns that are important for US scanning. This paper investigates utilizing sonographer gaze, in the form of gaze-tracking data, in a multimodal imaging deep learning framework to assist the analysis of the first trimester fetal ultrasound scan. Specifically, we propose an encoder-decoder convolutional neural network with skip connections to predict the visual gaze for each frame using 115 first trimester ultrasound videos: 29,250 video frames for training, 7,290 for validation and 9,126 for testing. We find that a dataset of our size benefits from automated data augmentation, which, in turn, alleviates model overfitting and reduces the structural variation imbalance of US anatomical views between the training and test datasets. Specifically, we employ a stochastic augmentation policy search method to improve segmentation performance. Using the learnt policies, our models outperform the baseline on KLD, SIM, NSS and CC (2.16, 0.27, 4.34 and 0.39 versus 3.17, 0.21, 2.92 and 0.28).
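The abstract reports four standard saliency metrics (KLD, SIM, NSS and CC). For reference, a minimal NumPy sketch of these metrics, following their common MIT-saliency-benchmark-style definitions rather than any code released with the paper, might look like the following; the map shapes and the binary fixation-map format are illustrative assumptions.

```python
import numpy as np

EPS = np.finfo(np.float32).eps

def _normalize_map(s):
    """Scale a saliency map so its values sum to 1 (a probability distribution)."""
    s = s.astype(np.float64)
    return s / (s.sum() + EPS)

def kld(pred, gt):
    """KL divergence between the ground-truth density gt and prediction pred (lower is better)."""
    p, q = _normalize_map(pred), _normalize_map(gt)
    return float(np.sum(q * np.log(EPS + q / (p + EPS))))

def sim(pred, gt):
    """Histogram intersection (similarity) of the two normalized maps (higher is better)."""
    p, q = _normalize_map(pred), _normalize_map(gt)
    return float(np.sum(np.minimum(p, q)))

def cc(pred, gt):
    """Pearson linear correlation coefficient between the two maps (higher is better)."""
    p = (pred - pred.mean()) / (pred.std() + EPS)
    q = (gt - gt.mean()) / (gt.std() + EPS)
    return float(np.mean(p * q))

def nss(pred, fixations):
    """Normalized Scanpath Saliency: mean z-scored prediction at fixation pixels (higher is better).

    `fixations` is a binary map with 1s at gaze/fixation locations (an assumed format).
    """
    p = (pred - pred.mean()) / (pred.std() + EPS)
    return float(p[fixations > 0].mean())

if __name__ == "__main__":
    # Random maps, purely to show the call pattern; the 224x288 shape is arbitrary.
    rng = np.random.default_rng(0)
    pred = rng.random((224, 288))
    gt = rng.random((224, 288))
    fix = (rng.random((224, 288)) > 0.999).astype(np.uint8)
    print(kld(pred, gt), sim(pred, gt), cc(pred, gt), nss(pred, fix))
```

Note that KLD is a lower-is-better metric while SIM, NSS and CC are higher-is-better, which matches the direction of the reported improvement over the baseline (KLD drops from 3.17 to 2.16 while the other three scores increase).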
