Predicting pancreatic diseases from fundus images using deep learning

Yiting Wu, Pinqi Fang, Xiangning Wang, Jie Shen
{"title":"Predicting pancreatic diseases from fundus images using deep learning","authors":"Yiting Wu, Pinqi Fang, Xiangning Wang, Jie Shen","doi":"10.1007/s00371-024-03619-5","DOIUrl":null,"url":null,"abstract":"<p>Pancreatic cancer (PC) is an extremely deadly cancer, with mortality rates closely tied to its frequency of occurrence. By the time of diagnosis, pancreatic cancer often presents at an advanced stage, and has often spread to other parts of the body. Due to the poor survival outcomes, PDAC is the fifth leading cause of global cancer death. The 5-year relative survival rate of pancreatic cancer was about 6% and the lowest level in all cancers. Currently, there are no established guidance for screening individuals at high risk for pancreatic cancer, including those with a family history of the pancreatic disease or chronic pancreatitis (CP). With the development of medicine, fundus maps can now predict many systemic diseases. Subsequently, the association between ocular changes and a few pancreatic diseases was also discovered. Therefore, our objective is to construct a deep learning model aimed at identifying correlations between ocular features and significant pancreatic ailments. The utilization of AI and fundus images has extended beyond the investigation of ocular disorders. Hence, in order to solve the tasks of PC and CP classification, we propose a brand new deep learning model (PANet) that integrates pre-trained CNN network, multi-scale feature modules, attention mechanisms, and an FC classifier. PANet adopts a ResNet34 backbone and selectively integrates attention modules to construct its fundamental architecture. To enhance feature extraction capability, PANet combines multi-scale feature modules before the attention module. Our model is trained and evaluated using a dataset comprising 1300 fundus images. The experimental outcomes illustrate the successful realization of our objectives, with the model achieving an accuracy of 91.50% and an area under the receiver operating characteristic curve (AUC) of 96.00% in PC classification, and an accuracy of 95.60% and an AUC of 99.20% in CP classification. Our study establishes a characterizing link between ocular features and major pancreatic diseases, providing a non-invasive, convenient, and complementary method for screening and detection of pancreatic diseases.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03619-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Pancreatic cancer (PC) is an extremely deadly cancer whose mortality rate closely tracks its incidence. By the time of diagnosis, pancreatic cancer is often at an advanced stage and has frequently spread to other parts of the body. Owing to these poor survival outcomes, pancreatic ductal adenocarcinoma (PDAC), its most common form, is the fifth leading cause of cancer death worldwide. The 5-year relative survival rate of pancreatic cancer is about 6%, the lowest among all cancers. Currently, there is no established guidance for screening individuals at high risk of pancreatic cancer, including those with a family history of pancreatic disease or with chronic pancreatitis (CP). With advances in medicine, fundus images can now be used to predict many systemic diseases, and associations between ocular changes and several pancreatic diseases have also been reported. Our objective is therefore to construct a deep learning model that identifies correlations between ocular features and major pancreatic diseases. The use of AI on fundus images has extended beyond the investigation of ocular disorders. To address the PC and CP classification tasks, we propose a new deep learning model (PANet) that integrates a pre-trained CNN backbone, multi-scale feature modules, attention mechanisms, and a fully connected (FC) classifier. PANet adopts a ResNet34 backbone and selectively integrates attention modules to construct its core architecture; to enhance feature extraction, it places multi-scale feature modules before the attention module. The model is trained and evaluated on a dataset of 1300 fundus images. It achieves an accuracy of 91.50% and an area under the receiver operating characteristic curve (AUC) of 96.00% for PC classification, and an accuracy of 95.60% and an AUC of 99.20% for CP classification. Our study establishes a link between ocular features and major pancreatic diseases, providing a non-invasive, convenient, and complementary method for screening and detecting pancreatic diseases.
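The abstract does not include an implementation, so the following PyTorch sketch is only an illustrative reconstruction of a PANet-style classifier as described above: a pre-trained ResNet34 backbone, a multi-scale feature module placed before a channel-attention module, and an FC head. The module names, kernel sizes, and channel widths are assumptions for illustration, not the authors' actual code.

# Illustrative sketch only: the wiring (ResNet34 backbone -> multi-scale module ->
# attention -> FC) follows the abstract; all internal design choices are assumed.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class MultiScaleBlock(nn.Module):
    """Hypothetical multi-scale feature module: parallel convolutions with
    different receptive fields, concatenated and fused back to the input width."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels // 4, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5, 7)
        ])
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class ChannelAttention(nn.Module):
    """Simple squeeze-and-excitation style channel attention (assumed variant)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class PANetLike(nn.Module):
    """Binary fundus-image classifier: pre-trained ResNet34 features,
    multi-scale module before the attention module, then an FC head."""
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet34(weights="IMAGENET1K_V1")  # downloads ImageNet weights
        # Drop the backbone's global pooling and FC layers; keep the 512-channel map.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.multi_scale = MultiScaleBlock(512)
        self.attention = ChannelAttention(512)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        x = self.features(x)      # (B, 512, H/32, W/32)
        x = self.multi_scale(x)   # multi-scale features before attention
        x = self.attention(x)
        x = self.pool(x).flatten(1)
        return self.fc(x)

if __name__ == "__main__":
    model = PANetLike(num_classes=2)          # e.g. PC vs. non-PC
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)                        # torch.Size([2, 2])

The backbone's own pooling and classification layers are removed so that the multi-scale and attention modules can operate on the full 512-channel feature map before global pooling and the FC classifier.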
