Image Segmentation for Colorectal cancer histopathological images analysis

Meng-Ling Wu, Jui-Hung Chang, P. Chung
{"title":"Image Segmentation for Colorectal cancer histopathological images analysis","authors":"Meng-Ling Wu, Jui-Hung Chang, P. Chung","doi":"10.1109/RASSE54974.2022.9989848","DOIUrl":null,"url":null,"abstract":"Colorectal cancer (CRC) is the third most common malignancy and the second most deadly cancer. The most efficient way to determine CRC staging is to analyze whole slide digital pathology images; therefore, it is certainly important to ensure the accuracy of pathology slide analysis.We can obtain medical quantized data of pathological images by implementing deep learning methods. These methods not only can light pathologists’ load but also can provide accurate computing results.In this paper, we use U-2-NET as our backbone to perform Binary Image Segmentation on CRC pathology slides. CRC pathology slides have a variety of non-conforming shapes and colors which is an enormous challenge for detecting cancer areas. U-2-NET was originally used in the Salient Object Detection (SOD) task to find the most unique regions of human attention, which can be used to identify abnormal regions in pathological slices. Moreover, the RSU block of U-2-NET can handle long-term and short-term dependencies, which we believe helps maintain contextual information. With the large computational costs, U-2-NET is hard to implement for application. Our purposed method can use preprocessing, image-selecting mechanisms and transfer learning concepts to solve this problem.Our results show that the model trained with a small part of the data set and a modified small object function has the best results for Binary Image Segmentation of colorectal cancer pathology sections by U-2-NET, with the best IOU (0.77) and Dice Loss (0.83) compared with other models (MSRFCNN, FCN, SegNet, and Unet). 
Furthermore, after transferring learning using pre-trained weights from the SOD dataset, the results are improved compared to those of learning the network from scratch.","PeriodicalId":382440,"journal":{"name":"2022 IEEE International Conference on Recent Advances in Systems Science and Engineering (RASSE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Recent Advances in Systems Science and Engineering (RASSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RASSE54974.2022.9989848","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Colorectal cancer (CRC) is the third most common malignancy and the second deadliest cancer. The most efficient way to determine CRC staging is to analyze whole-slide digital pathology images; it is therefore essential to ensure the accuracy of pathology slide analysis. Deep learning methods can extract quantitative medical data from pathological images; they not only lighten pathologists' workload but also provide accurate computational results. In this paper, we use U-2-NET as our backbone to perform binary image segmentation on CRC pathology slides. CRC pathology slides exhibit a wide variety of irregular shapes and colors, which makes detecting cancerous areas an enormous challenge. U-2-NET was originally designed for the Salient Object Detection (SOD) task, which finds the regions that most attract human attention; this ability can be used to identify abnormal regions in pathological slices. Moreover, the RSU block of U-2-NET can handle both long-range and short-range dependencies, which we believe helps maintain contextual information. However, its large computational cost makes U-2-NET hard to deploy in applications. Our proposed method addresses this problem with preprocessing, image-selection mechanisms, and transfer learning. Our results show that the U-2-NET model trained on a small portion of the dataset with a modified small-object loss function achieves the best binary image segmentation of colorectal cancer pathology sections, with the best IoU (0.77) and Dice (0.83) compared with other models (MSRFCNN, FCN, SegNet, and Unet). Furthermore, transfer learning with pre-trained weights from the SOD dataset improves the results compared to training the network from scratch.
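The IoU and Dice figures reported above are the standard overlap metrics for binary segmentation masks. As a minimal illustrative sketch (not the paper's evaluation code), they can be computed from two binary NumPy masks as follows; the function name and toy masks are our own:

```python
import numpy as np

def iou_and_dice(pred, target):
    """Compute IoU and Dice coefficient for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return iou, dice

# Toy example: two 4x4 masks that overlap on one row (4 pixels).
pred = np.zeros((4, 4), dtype=int)
pred[:2, :] = 1        # rows 0-1 predicted foreground (8 pixels)
target = np.zeros((4, 4), dtype=int)
target[1:3, :] = 1     # rows 1-2 ground-truth foreground (8 pixels)
iou, dice = iou_and_dice(pred, target)
# intersection = 4, union = 12 -> IoU = 1/3; Dice = 2*4/16 = 0.5
```

Note that Dice weights the intersection twice, so it is always at least as large as IoU for the same pair of masks, which is consistent with the 0.77/0.83 pairing reported above.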