CoInNet: A Convolution-Involution Network With a Novel Statistical Attention for Automatic Polyp Segmentation

Samir Jain, Rohan Atale, Anubhav Gupta, Utkarsh Mishra, Ayan Seal, Aparajita Ojha, Joanna Jaworek-Korjakowska, Ondrej Krejcar
{"title":"CoInNet: A Convolution-Involution Network With a Novel Statistical Attention for Automatic Polyp Segmentation","authors":"Samir Jain;Rohan Atale;Anubhav Gupta;Utkarsh Mishra;Ayan Seal;Aparajita Ojha;Joanna Jaworek-Korjakowska;Ondrej Krejcar","doi":"10.1109/TMI.2023.3320151","DOIUrl":null,"url":null,"abstract":"Polyps are very common abnormalities in human gastrointestinal regions. Their early diagnosis may help in reducing the risk of colorectal cancer. Vision-based computer-aided diagnostic systems automatically identify polyp regions to assist surgeons in their removal. Due to their varying shape, color, size, texture, and unclear boundaries, polyp segmentation in images is a challenging problem. Existing deep learning segmentation models mostly rely on convolutional neural networks that have certain limitations in learning the diversity in visual patterns at different spatial locations. Further, they fail to capture inter-feature dependencies. Vision transformer models have also been deployed for polyp segmentation due to their powerful global feature extraction capabilities. But they too are supplemented by convolution layers for learning contextual local information. In the present paper, a polyp segmentation model CoInNet is proposed with a novel feature extraction mechanism that leverages the strengths of convolution and involution operations and learns to highlight polyp regions in images by considering the relationship between different feature maps through a statistical feature attention unit. To further aid the network in learning polyp boundaries, an anomaly boundary approximation module is introduced that uses recursively fed feature fusion to refine segmentation results. It is indeed remarkable that even tiny-sized polyps with only 0.01% of an image area can be precisely segmented by CoInNet. It is crucial for clinical applications, as small polyps can be easily overlooked even in the manual examination due to the voluminous size of wireless capsule endoscopy videos. CoInNet outperforms thirteen state-of-the-art methods on five benchmark polyp segmentation datasets.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on medical imaging","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10266385/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Polyps are common abnormalities of the human gastrointestinal tract, and their early diagnosis may help reduce the risk of colorectal cancer. Vision-based computer-aided diagnostic systems automatically identify polyp regions to assist surgeons in their removal. Because polyps vary in shape, color, size, and texture and often have unclear boundaries, segmenting them in images is a challenging problem. Existing deep learning segmentation models mostly rely on convolutional neural networks, which are limited in learning the diversity of visual patterns at different spatial locations and fail to capture inter-feature dependencies. Vision transformer models have also been deployed for polyp segmentation owing to their powerful global feature extraction capabilities, but they too must be supplemented by convolution layers to learn local contextual information. In this paper, a polyp segmentation model, CoInNet, is proposed with a novel feature extraction mechanism that leverages the strengths of convolution and involution operations and learns to highlight polyp regions in images by considering the relationships between different feature maps through a statistical feature attention unit. To further aid the network in learning polyp boundaries, an anomaly boundary approximation module is introduced that uses recursively fed feature fusion to refine segmentation results. Remarkably, even tiny polyps covering only 0.01% of the image area can be precisely segmented by CoInNet. This is crucial for clinical applications, as small polyps are easily overlooked even during manual examination owing to the sheer volume of wireless capsule endoscopy videos. CoInNet outperforms thirteen state-of-the-art methods on five benchmark polyp segmentation datasets.
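
For readers unfamiliar with the involution operation mentioned above, the following is a minimal PyTorch sketch of a standard involution layer in the spirit of Li et al. (CVPR 2021): a small branch predicts a kernel for every spatial position, and that kernel is shared across the channels of each group, the inverse of convolution's location-shared, channel-specific filters. This is an illustrative sketch only; the class name Involution2d and its hyperparameters are assumptions, and it does not reproduce CoInNet's actual convolution-involution block or its statistical feature attention unit.

```python
import torch
import torch.nn as nn


class Involution2d(nn.Module):
    """Minimal involution layer (after Li et al., CVPR 2021).

    A bottleneck of 1x1 convolutions predicts a k*k kernel at every pixel;
    each kernel is shared across the channels of its group, so the filter is
    location-specific but channel-agnostic (the inverse of convolution).
    """

    def __init__(self, channels: int, kernel_size: int = 3,
                 groups: int = 4, reduction: int = 2):
        super().__init__()
        assert channels % groups == 0
        self.k, self.groups = kernel_size, groups
        # Kernel-generation branch: channels -> channels/reduction -> groups * k*k.
        self.reduce = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.span = nn.Conv2d(channels // reduction,
                              groups * kernel_size ** 2, kernel_size=1)
        # Extract the k*k neighbourhood around every pixel (stride 1, same size).
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Per-pixel kernels: (B, groups, 1, k*k, H, W)
        kernels = self.span(self.reduce(x)).view(
            b, self.groups, 1, self.k ** 2, h, w)
        # Neighbourhood patches: (B, groups, C/groups, k*k, H, W)
        patches = self.unfold(x).view(
            b, self.groups, c // self.groups, self.k ** 2, h, w)
        # Location-specific weighted sum over each k*k neighbourhood.
        return (kernels * patches).sum(dim=3).reshape(b, c, h, w)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)      # dummy feature map
    out = Involution2d(channels=64)(feat)
    print(out.shape)                        # torch.Size([1, 64, 32, 32])
```

In a hybrid design such as the one described in the abstract, an involution branch of this kind would typically run alongside standard convolutions so that location-adaptive filtering and channel-mixing filters complement each other; how CoInNet combines the two is detailed in the full paper.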