Robust photometric stereo endoscopy via deep learning trained on synthetic data (Conference Presentation)

Faisal Mahmood, Daniel Borders, Richard J. Chen, Jordan A. Sweer, S. Tilley, N. Nishioka, J. Stayman, N. Durr
{"title":"Robust photometric stereo endoscopy via deep learning trained on synthetic data (Conference Presentation)","authors":"Faisal Mahmood, Daniel Borders, Richard J. Chen, Jordan A. Sweer, S. Tilley, N. Nishioka, J. Stayman, N. Durr","doi":"10.1117/12.2509878","DOIUrl":null,"url":null,"abstract":"Colorectal cancer is the second leading cause of cancer deaths in the United States and causes over 50,000 deaths annually. The standard of care for colorectal cancer detection and prevention is an optical colonoscopy and polypectomy. However, over 20% of the polyps are typically missed during a standard colonoscopy procedure and 60% of colorectal cancer cases are attributed to these missed polyps. Surface topography plays a vital role in identification and characterization of lesions, but topographic features often appear subtle to a conventional endoscope. Chromoendoscopy can highlight topographic features of the mucosa and has shown to improve lesion detection rate, but requires dedicated training and increases procedure time. Photometric stereo endoscopy captures this topography but is qualitative due to unknown working distances from each point of mucosa to the endoscope. In this work, we use deep learning to estimate a depth map from an endoscope camera with four alternating light sources. Since endoscopy videos with ground truth depth maps are challenging to attain, we generated synthetic data using graphical rendering from an anatomically realistic 3D colon model and a forward model of a virtual endoscope with alternating light sources. We propose an encoder-decoder style deep network, where the encoder is split into four branches of sub-encoder networks that simultaneously extract features from each of the four sources and fuse these feature maps as the network goes deeper. This is complemented by skip connections, which maintain spatial consistency when the features are decoded. We demonstrate that, when compared to monocular depth estimation, this setup can reduce the average NRMS error for depth estimation in a silicone colon phantom by 38% and in a pig colon by 31%.","PeriodicalId":309073,"journal":{"name":"Multimodal Biomedical Imaging XIV","volume":"94 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multimodal Biomedical Imaging XIV","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2509878","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Colorectal cancer is the second leading cause of cancer deaths in the United States and causes over 50,000 deaths annually. The standard of care for colorectal cancer detection and prevention is optical colonoscopy with polypectomy. However, over 20% of polyps are typically missed during a standard colonoscopy procedure, and 60% of colorectal cancer cases are attributed to these missed polyps. Surface topography plays a vital role in the identification and characterization of lesions, but topographic features often appear subtle through a conventional endoscope. Chromoendoscopy can highlight topographic features of the mucosa and has been shown to improve lesion detection rates, but it requires dedicated training and increases procedure time. Photometric stereo endoscopy captures this topography, but only qualitatively, because the working distance from each point of the mucosa to the endoscope is unknown. In this work, we use deep learning to estimate a depth map from an endoscope camera with four alternating light sources. Since endoscopy videos with ground-truth depth maps are challenging to obtain, we generated synthetic data by graphically rendering an anatomically realistic 3D colon model with a forward model of a virtual endoscope with alternating light sources. We propose an encoder-decoder-style deep network in which the encoder is split into four sub-encoder branches that simultaneously extract features from each of the four sources and fuse these feature maps as the network goes deeper. This is complemented by skip connections, which maintain spatial consistency when the features are decoded. We demonstrate that, compared to monocular depth estimation, this setup reduces the average NRMS error of depth estimation by 38% in a silicone colon phantom and by 31% in a pig colon.
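
As a concrete illustration of the architecture described above, the following PyTorch sketch shows one plausible way to arrange four sub-encoder branches whose feature maps are fused by concatenation and decoded with skip connections into a single depth map. This is a minimal sketch under stated assumptions, not the authors' implementation: the class name FourSourceDepthNet, the channel widths, concatenation-based fusion, and the NRMS normalization by the ground-truth depth range are illustrative choices only.

```python
# Illustrative sketch of a four-branch encoder-decoder for depth estimation
# from four alternating-illumination frames. NOT the authors' code: branch
# widths, fusion by concatenation, and the NRMS normalization are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # Two 3x3 convolutions with ReLU, a common U-Net-style building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )


class FourSourceDepthNet(nn.Module):
    def __init__(self, base: int = 16):
        super().__init__()
        # One sub-encoder branch per illumination source (weights not shared).
        self.enc1 = nn.ModuleList([conv_block(3, base) for _ in range(4)])
        self.enc2 = nn.ModuleList([conv_block(base, base * 2) for _ in range(4)])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Features from the four branches are fused by channel concatenation.
        self.bottleneck = conv_block(base * 8, base * 8)
        self.dec2 = conv_block(base * 16, base * 4)  # deep features + half-res skip
        self.dec1 = conv_block(base * 8, base * 2)   # decoded features + full-res skip
        self.head = nn.Conv2d(base * 2, 1, kernel_size=1)  # single-channel depth map

    def forward(self, frames):
        # frames: list of 4 tensors, each (N, 3, H, W), one per light source.
        f1 = [enc(x) for enc, x in zip(self.enc1, frames)]
        f2 = [enc(self.pool(a)) for enc, a in zip(self.enc2, f1)]
        skip1 = torch.cat(f1, dim=1)           # fused full-resolution features
        skip2 = torch.cat(f2, dim=1)           # fused half-resolution features
        b = self.bottleneck(self.pool(skip2))  # quarter-resolution bottleneck
        d2 = self.dec2(torch.cat([self.up(b), skip2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), skip1], dim=1))
        return self.head(d1)


def nrms_error(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Illustrative NRMS error: RMS error normalized by the ground-truth depth
    # range. The exact normalization used in the paper is not specified here.
    rmse = torch.sqrt(torch.mean((pred - target) ** 2))
    return rmse / (target.max() - target.min() + eps)


if __name__ == "__main__":
    net = FourSourceDepthNet()
    frames = [torch.rand(1, 3, 128, 128) for _ in range(4)]
    depth = net(frames)
    print(depth.shape)  # torch.Size([1, 1, 128, 128])
```

Fusing by concatenation at each scale keeps the per-source features separate until the decoder, which mirrors the abstract's description of extracting features from each source and merging them as the network goes deeper; other fusion schemes (e.g., summation or max-pooling across sources) would also fit that description.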