Gastric Location Classification During Esophagogastroduodenoscopy Using Deep Neural Networks

A. Ding, Ying Li, Qilei Chen, Yu Cao, Benyuan Liu, Shu Han Chen, Xiaowei Liu
DOI: 10.1109/BIBE52308.2021.9635273
Published in: 2021 IEEE 21st International Conference on Bioinformatics and Bioengineering (BIBE), October 25, 2021
Citations: 4

Abstract

Esophagogastroduodenoscopy (EGD) is a common procedure that visualizes the esophagus, stomach, and duodenum by inserting a camera, attached to a long flexible tube, through the patient's mouth and down into the stomach. A comprehensive EGD must examine all gastric locations, but because the camera is controlled manually, it is easy to miss some surface area and create diagnostic blind spots, which often result in life-threatening oversights of early gastric cancer and other serious illnesses. To address this problem, we train a convolutional neural network to classify gastric locations from the camera feed during an EGD, and, building on this classifier and a triggering algorithm we propose, we construct a video processing system that checks off each location as visited, allowing human operators to keep track of which locations they have examined and which they have not. Based on collected clinical patient reports, we consider six gastric locations, and we add a background class to our classifier to accommodate frames in EGD videos that do not resemble any of the six defined classes (including when the camera is outside the patient's body). Our best classifier achieves 98% accuracy within the six gastric locations and 88% accuracy including the background class, and our video processing system checks off gastric locations in the expected order when tested on recorded EGD videos. Lastly, we use class activation mapping to provide human-readable insight into how our trained classifier works.
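The abstract does not specify the triggering algorithm, but the check-off behavior it describes can be sketched as a consecutive-prediction debounce: a location is marked visited only after the classifier predicts it for several frames in a row, so single-frame misclassifications and background frames do not trigger a check-off. The location names below are hypothetical placeholders, not the paper's actual class labels, and the window size is an assumption.

```python
# Hypothetical class labels; the paper defines six gastric locations
# plus a background class, but does not name them in the abstract.
LOCATIONS = ["cardia", "fundus", "body", "angularis", "antrum", "pylorus"]
BACKGROUND = "background"

def check_off_locations(frame_predictions, window=5):
    """Mark a location as visited once it is predicted for `window`
    consecutive frames; a background frame or a label change resets
    the streak. `frame_predictions` is an iterable of per-frame labels."""
    visited = []              # locations in the order they were checked off
    streak_label, streak = None, 0
    for label in frame_predictions:
        if label != streak_label:
            streak_label, streak = label, 1
        else:
            streak += 1
        if label in LOCATIONS and streak >= window and label not in visited:
            visited.append(label)
    return visited
```

For example, a feed of three background frames, five "antrum" frames, two "body" frames, one "antrum" frame, and five "body" frames would check off "antrum" and then "body", ignoring the short interruptions.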
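Class activation mapping (CAM, Zhou et al., 2016) can be sketched independently of the paper's exact architecture: for a network ending in global average pooling followed by a linear classifier, each final-layer feature map is weighted by the classifier weight for the target class and the weighted maps are summed into a spatial heatmap. The shapes and function below are a minimal NumPy sketch under that assumption, not the authors' implementation.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a CAM heatmap.
    feature_maps: (C, H, W) activations from the last conv layer.
    fc_weights:   (num_classes, C) weights of the linear layer that
                  follows global average pooling.
    Returns an (H, W) map normalized to [0, 1]."""
    # Weighted sum over channels: cam[h, w] = sum_c w[class, c] * fm[c, h, w]
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam = np.maximum(cam, 0)          # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize for overlay on the frame
    return cam
```

The resulting map is typically upsampled to the input resolution and overlaid on the endoscopy frame to show which regions drove the location prediction.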