DFC-Net: a dual-path frequency-domain cross-attention fusion network for retinal image quality assessment.

IF 2.9 | CAS Tier 2 (Medicine) | Q2 BIOCHEMICAL RESEARCH METHODS | Biomedical Optics Express | Pub Date: 2024-10-17 | eCollection Date: 2024-11-01 | DOI: 10.1364/BOE.531292
Xiaoyan Kui, Zeru Hai, Beiji Zou, Wei Liang, Liming Chen
{"title":"DFC-Net:用于视网膜图像质量评估的双路频域交叉注意力融合网络。","authors":"Xiaoyan Kui, Zeru Hai, Beiji Zou, Wei Liang, Liming Chen","doi":"10.1364/BOE.531292","DOIUrl":null,"url":null,"abstract":"<p><p>Retinal image quality assessment (RIQA) is crucial for diagnosing various eye diseases and ensuring the accuracy of diagnostic analyses based on retinal fundus images. Traditional deep convolutional neural networks (CNNs) for RIQA face challenges such as over-reliance on RGB image brightness and difficulty in differentiating closely ranked image quality categories. To address these issues, we introduced the Dual-Path Frequency-domain Cross-attention Network (DFC-Net), which integrates RGB images and contrast-enhanced images using contrast-limited adaptive histogram equalization (CLAHE) as dual inputs. This approach improves structure detail detection and feature extraction. We also incorporated a frequency-domain attention mechanism (FDAM) to focus selectively on frequency components indicative of quality degradations and a cross-attention mechanism (CAM) to optimize the integration of dual inputs. Our experiments on the EyeQ and RIQA-RFMiD datasets demonstrated significant improvements, achieving a precision of 0.8895, recall of 0.8923, F1-score of 0.8909, and a Kappa score of 0.9191 on the EyeQ dataset. On the RIQA-RFMiD dataset, the precision was 0.702, recall 0.6729, F1-score 0.6869, and Kappa score 0.7210, outperforming current state-of-the-art approaches.</p>","PeriodicalId":8969,"journal":{"name":"Biomedical optics express","volume":"15 11","pages":"6399-6415"},"PeriodicalIF":2.9000,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11563343/pdf/","citationCount":"0","resultStr":"{\"title\":\"DFC-Net: a dual-path frequency-domain cross-attention fusion network for retinal image quality assessment.\",\"authors\":\"Xiaoyan Kui, Zeru Hai, Beiji Zou, Wei Liang, Liming Chen\",\"doi\":\"10.1364/BOE.531292\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Retinal image quality assessment (RIQA) is crucial for diagnosing various eye diseases and ensuring the accuracy of diagnostic analyses based on retinal fundus images. Traditional deep convolutional neural networks (CNNs) for RIQA face challenges such as over-reliance on RGB image brightness and difficulty in differentiating closely ranked image quality categories. To address these issues, we introduced the Dual-Path Frequency-domain Cross-attention Network (DFC-Net), which integrates RGB images and contrast-enhanced images using contrast-limited adaptive histogram equalization (CLAHE) as dual inputs. This approach improves structure detail detection and feature extraction. We also incorporated a frequency-domain attention mechanism (FDAM) to focus selectively on frequency components indicative of quality degradations and a cross-attention mechanism (CAM) to optimize the integration of dual inputs. Our experiments on the EyeQ and RIQA-RFMiD datasets demonstrated significant improvements, achieving a precision of 0.8895, recall of 0.8923, F1-score of 0.8909, and a Kappa score of 0.9191 on the EyeQ dataset. 
On the RIQA-RFMiD dataset, the precision was 0.702, recall 0.6729, F1-score 0.6869, and Kappa score 0.7210, outperforming current state-of-the-art approaches.</p>\",\"PeriodicalId\":8969,\"journal\":{\"name\":\"Biomedical optics express\",\"volume\":\"15 11\",\"pages\":\"6399-6415\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-10-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11563343/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical optics express\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1364/BOE.531292\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/11/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"BIOCHEMICAL RESEARCH METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical optics express","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1364/BOE.531292","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/11/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"BIOCHEMICAL RESEARCH METHODS","Score":null,"Total":0}
Citations: 0

Abstract


Retinal image quality assessment (RIQA) is crucial for diagnosing various eye diseases and ensuring the accuracy of diagnostic analyses based on retinal fundus images. Traditional deep convolutional neural networks (CNNs) for RIQA face challenges such as over-reliance on RGB image brightness and difficulty in differentiating closely ranked image quality categories. To address these issues, we introduced the Dual-Path Frequency-domain Cross-attention Network (DFC-Net), which integrates RGB images and contrast-enhanced images using contrast-limited adaptive histogram equalization (CLAHE) as dual inputs. This approach improves structure detail detection and feature extraction. We also incorporated a frequency-domain attention mechanism (FDAM) to focus selectively on frequency components indicative of quality degradations and a cross-attention mechanism (CAM) to optimize the integration of dual inputs. Our experiments on the EyeQ and RIQA-RFMiD datasets demonstrated significant improvements, achieving a precision of 0.8895, recall of 0.8923, F1-score of 0.8909, and a Kappa score of 0.9191 on the EyeQ dataset. On the RIQA-RFMiD dataset, the precision was 0.702, recall 0.6729, F1-score 0.6869, and Kappa score 0.7210, outperforming current state-of-the-art approaches.
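The abstract gives no implementation details, but the dual-input idea can be illustrated with a short, hypothetical sketch: OpenCV's CLAHE (cv2.createCLAHE) produces the contrast-enhanced second stream, a channel gate computed from the 2-D FFT magnitude (torch.fft.fft2) stands in for a frequency-domain attention mechanism, and standard multi-head attention (nn.MultiheadAttention) stands in for cross-attention fusion. The module names, layer choices, and hyperparameters below are illustrative assumptions, not the authors' DFC-Net implementation.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn


def make_dual_inputs(bgr_image: np.ndarray):
    """Return (rgb, clahe_enhanced) tensors in NCHW float32 format.

    Assumes an 8-bit BGR fundus image as loaded by cv2.imread.
    """
    # Apply CLAHE to the luminance channel only (a common convention,
    # assumed here rather than taken from the paper).
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    def to_tensor(img):
        rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
        return torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)  # (1, 3, H, W)

    return to_tensor(bgr_image), to_tensor(enhanced)


class FreqGate(nn.Module):
    """Toy frequency-domain attention: gate channels by their FFT magnitude."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        magnitude = torch.abs(torch.fft.fft2(x, norm="ortho"))  # spectral content
        return x * self.gate(magnitude)  # channel-wise re-weighting


class CrossAttentionFusion(nn.Module):
    """Toy cross-attention: RGB features (queries) attend to CLAHE features.

    dim must be divisible by heads.
    """

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb_feat, clahe_feat):
        b, c, h, w = rgb_feat.shape
        q = rgb_feat.flatten(2).transpose(1, 2)     # (B, H*W, C)
        kv = clahe_feat.flatten(2).transpose(1, 2)  # (B, H*W, C)
        fused, _ = self.attn(q, kv, kv)
        return fused.transpose(1, 2).reshape(b, c, h, w)
```

In this sketch the RGB stream supplies the queries and the CLAHE stream the keys and values, so quality cues exposed by contrast enhancement can re-weight the RGB features; the actual fusion design in DFC-Net may differ.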

Source journal: Biomedical Optics Express (BIOCHEMICAL RESEARCH METHODS; OPTICS)
CiteScore: 6.80
Self-citation rate: 11.80%
Articles published: 633
Review time: 1 month
About the journal: The journal's scope encompasses fundamental research, technology development, biomedical studies, and clinical applications. BOEx focuses on leading-edge topics in the field, including tissue optics and spectroscopy, novel microscopies, optical coherence tomography, diffuse and fluorescence tomography, photoacoustic and multimodal imaging, molecular imaging and therapies, nanophotonic biosensing, optical biophysics/photobiology, microfluidic optical devices, and vision research.
Latest articles in this journal:
Super resolution reconstruction of fluorescence microscopy images by a convolutional network with physical priors.
Physics-guided deep learning-based real-time image reconstruction of Fourier-domain optical coherence tomography.
On bench evaluation of intraocular lenses: performance of a commercial interferometer.
Predictive coding compressive sensing optical coherence tomography hardware implementation.
Development of silicone-based phantoms for biomedical optics from 400 to 1550 nm.