
Latest publications: Medical image understanding and analysis : 25th Annual Conference, MIUA 2021, Oxford, United Kingdom, July 12-14, 2021, Proceedings. Medical Image Understanding and Analysis (Conference) (25th : 2021 : Online)

First Trimester Gaze Pattern Estimation Using Stochastic Augmentation Policy Search for Single Frame Saliency Prediction.
Elizaveta Savochkina, Lok Hin Lee, Lior Drukker, Aris T Papageorghiou, J Alison Noble

While performing an ultrasound (US) scan, sonographers direct their gaze at regions of interest to verify that the correct plane is acquired and to interpret the acquisition frame. Predicting sonographer gaze on US videos is useful for identifying spatio-temporal patterns that are important for US scanning. This paper investigates utilizing sonographer gaze, in the form of gaze-tracking data, in a multimodal imaging deep learning framework to assist the analysis of the first trimester fetal ultrasound scan. Specifically, we propose an encoder-decoder convolutional neural network with skip connections to predict the visual gaze for each frame, using 115 first trimester ultrasound videos: 29,250 video frames for training, 7,290 for validation and 9,126 for testing. We find that a dataset of our size benefits from automated data augmentation, which, in turn, alleviates model overfitting and reduces the structural variation imbalance of US anatomical views between the training and test datasets. Specifically, we employ a stochastic augmentation policy search method to improve segmentation performance. Using the learnt policies, our models outperform the baseline on KLD, SIM, NSS and CC (2.16, 0.27, 4.34 and 0.39 versus 3.17, 0.21, 2.92 and 0.28).
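The abstract specifies only "encoder-decoder with skip connections" and the four evaluation metrics, so the following is a minimal PyTorch sketch of that kind of architecture together with KLD, SIM, NSS and CC in their standard saliency-benchmark formulations. The names (`SaliencyUNet`, `conv_block`), the layer widths, and the single-channel input are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )


class SaliencyUNet(nn.Module):
    """Encoder-decoder with skip connections; one saliency map per frame."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)   # US frames assumed single-channel
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.dec2 = conv_block(128 + 64, 64)  # skip connection from enc2
        self.dec1 = conv_block(64 + 32, 32)   # skip connection from enc1
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2.0), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2.0), e1], dim=1))
        return torch.sigmoid(self.head(d1))   # per-pixel gaze saliency


def _as_dist(x, eps=1e-8):
    """Flatten a saliency map and normalise it to a probability distribution."""
    x = x.flatten()
    return x / (x.sum() + eps)


def kld(pred, gt, eps=1e-8):
    p, q = _as_dist(gt), _as_dist(pred)
    return (p * torch.log(eps + p / (q + eps))).sum()  # lower is better


def sim(pred, gt):
    # histogram intersection of the two normalised maps; higher is better
    return torch.minimum(_as_dist(pred), _as_dist(gt)).sum()


def cc(pred, gt):
    # Pearson correlation coefficient between the two maps
    p = pred.flatten() - pred.mean()
    g = gt.flatten() - gt.mean()
    return (p * g).sum() / (p.norm() * g.norm() + 1e-8)


def nss(pred, fixations):
    # z-score the prediction, then average at binary fixation locations
    z = (pred - pred.mean()) / (pred.std() + 1e-8)
    return z.flatten()[fixations.flatten() > 0].mean()


# Usage sketch:
# model = SaliencyUNet()
# pred = model(frames)              # frames: (B,1,H,W), H and W divisible by 4
# score = cc(pred[0, 0], gaze_map)  # gaze_map: ground-truth saliency for frame 0
```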

{"title":"First Trimester Gaze Pattern Estimation Using Stochastic Augmentation Policy Search for Single Frame Saliency Prediction.","authors":"Elizaveta Savochkina,&nbsp;Lok Hin Lee,&nbsp;Lior Drukker,&nbsp;Aris T Papageorghiou,&nbsp;J Alison Noble","doi":"10.1007/978-3-030-80432-9_28","DOIUrl":"https://doi.org/10.1007/978-3-030-80432-9_28","url":null,"abstract":"<p><p>While performing an ultrasound (US) scan, sonographers direct their gaze at regions of interest to verify that the correct plane is acquired and to interpret the acquisition frame. Predicting sonographer gaze on US videos is useful for identification of spatio-temporal patterns that are important for US scanning. This paper investigates utilizing sonographer gaze, in the form of gaze-tracking data, in a multimodal imaging deep learning framework to assist the analysis of the first trimester fetal ultrasound scan. Specifically, we propose an encoderdecoder convolutional neural network with skip connections to predict the visual gaze for each frame using 115 first trimester ultrasound videos; 29,250 video frames for training, 7,290 for validation and 9,126 for testing. We find that the dataset of our size benefits from automated data augmentation, which in turn, alleviates model overfitting and reduces structural variation imbalance of US anatomical views between the training and test datasets. Specifically, we employ a stochastic augmentation policy search method to improve segmentation performance. Using the learnt policies, our models outperform the baseline: KLD, SIM, NSS and CC (2.16, 0.27, 4.34 and 0.39 versus 3.17, 0.21, 2.92 and 0.28).</p>","PeriodicalId":93336,"journal":{"name":"Medical image understanding and analysis : 25th Annual Conference, MIUA 2021, Oxford, United Kingdom, July 12-14, 2021, Proceedings. Medical Image Understanding and Analysis (Conference) (25th : 2021 : Online)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7611594/pdf/EMS132092.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39379950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Dense Depth Estimation from Stereo Endoscopy Videos Using Unsupervised Optical Flow Methods.
Zixin Yang, Richard Simon, Yangming Li, Cristian A Linte

In the context of Minimally Invasive Surgery, estimating depth from stereo endoscopy plays a crucial role in three-dimensional (3D) reconstruction, surgical navigation, and augmented reality (AR) visualization. However, the challenges associated with this task are three-fold: 1) feature-less surface representations, often polluted by artifacts, pose difficulty in identifying correspondences; 2) ground truth depth is difficult to estimate; and 3) endoscopy image acquisitions accompanied by accurately calibrated camera parameters are rare, as the camera is often adjusted during an intervention. To address these difficulties, we propose an unsupervised depth estimation framework (END-flow) based on an unsupervised optical flow network trained on un-rectified binocular videos without calibrated camera parameters. The proposed END-flow architecture is compared with traditional stereo matching, self-supervised depth estimation, unsupervised optical flow, and supervised methods implemented on the Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED) Challenge dataset. Experimental results show that our method outperforms several state-of-the-art techniques and achieves performance close to that of supervised methods.
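As a sketch of the core unsupervised idea (not the END-flow implementation, whose details the abstract does not give): predict flow between the two views, backward-warp one view into the other, and penalise the photometric difference, so neither ground-truth depth nor calibrated cameras are required. For rectified stereo the horizontal flow reduces to disparity, from which depth follows. `warp_with_flow`, `photometric_loss` and `depth_from_disparity` are hypothetical helper names; assumes PyTorch >= 1.10 for `torch.meshgrid(..., indexing="ij")`.

```python
import torch
import torch.nn.functional as F


def warp_with_flow(img, flow):
    """Backward-warp img (B,C,H,W) by flow (B,2,H,W) using grid_sample."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device),
        torch.arange(w, device=img.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).float()  # (2,H,W) pixel grid
    coords = base.unsqueeze(0) + flow            # sampling coordinates per pixel
    # normalise coordinates to [-1, 1], the range grid_sample expects
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)         # (B,H,W,2)
    return F.grid_sample(img, grid, align_corners=True)


def photometric_loss(left, right, flow):
    """L1 photometric error between left and the right view warped to left."""
    return (left - warp_with_flow(right, flow)).abs().mean()


def depth_from_disparity(disparity, focal, baseline, eps=1e-6):
    """For rectified stereo, depth = focal length * baseline / disparity."""
    return focal * baseline / (disparity.abs() + eps)
```

In practice an unsupervised flow objective also adds smoothness and occlusion-handling terms; the plain L1 term above is only the skeleton of the training signal.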

{"title":"Dense Depth Estimation from Stereo Endoscopy Videos Using Unsupervised Optical Flow Methods.","authors":"Zixin Yang, Richard Simon, Yangming Li, Cristian A Linte","doi":"10.1007/978-3-030-80432-9_26","DOIUrl":"10.1007/978-3-030-80432-9_26","url":null,"abstract":"<p><p>In the context of Minimally Invasive Surgery, estimating depth from stereo endoscopy plays a crucial role in three-dimensional (3D) reconstruction, surgical navigation, and augmentation reality (AR) visualization. However, the challenges associated with this task are three-fold: 1) feature-less surface representations, often polluted by artifacts, pose difficulty in identifying correspondence; 2) ground truth depth is difficult to estimate; and 3) an endoscopy image acquisition accompanied by accurately calibrated camera parameters is rare, as the camera is often adjusted during an intervention. To address these difficulties, we propose an unsupervised depth estimation framework (END-flow) based on an unsupervised optical flow network trained on un-rectified binocular videos without calibrated camera parameters. The proposed END-flow architecture is compared with traditional stereo matching, self-supervised depth estimation, unsupervised optical flow, and supervised methods implemented on the Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED) Challenge dataset. Experimental results show that our method outperforms several state-of-the-art techniques and achieves a close performance to that of supervised methods.</p>","PeriodicalId":93336,"journal":{"name":"Medical image understanding and analysis : 25th Annual Conference, MIUA 2021, Oxford, United Kingdom, July 12-14, 2021, Proceedings. Medical Image Understanding and Analysis (Conference) (25th : 2021 : Online)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9125693/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85941062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0