
Latest Publications: International Journal of Imaging Systems and Technology

Self-Supervised Transfer Learning of Cross-Domains Histopathological Images for Cancer Diagnosis
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-03 | DOI: 10.1002/ima.70278
Jianbo Zhu, Zihan Wang, Jinjin Wu, Chenbei Li, Lan Li, Linwei Shang, Huijie Wang, Chao Tu, Jianhua Yin

Deep learning-assisted whole-slide imaging (WSI) has been employed to support digital pathology, but it is limited by the scarcity of paired labeled data. To address this issue, a novel self-supervised image modeling framework, PathMAE, is proposed to effectively enlarge the labeled dataset in a cross-domain way, where cross-dataset and even cross-disease histopathological images can be used for model training. PathMAE integrates masked image modeling and contrastive learning to effectively learn transferable visual representations from unlabeled WSIs. The framework comprises two key components: a Swin-Transformer-based encoder-decoder (SMED) with a window-masking strategy for local feature reconstruction, and a Dynamic Memory Contrastive Learning (DMCL) module for enhancing global semantic alignment via memory-guided feature comparison. Experimental results on three public histopathology datasets demonstrate the robustness and generalizability of the proposed method. In cross-disease transfer (BreakHis → Osteosarcoma), PathMAE achieved 97.15% accuracy and 99.03% AUC; in cross-dataset transfer (BreakHis → Camelyon16), it obtained 84.67% accuracy and 88.04% AUC. These findings validate the capability of PathMAE as a scalable and domain-adaptive image analysis framework, offering new potential for building reliable computational pathology systems under limited supervision.
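
PathMAE's code is not given in the abstract; purely as an illustration of the masked-image-modeling half of such a framework, the PyTorch sketch below (all names are stand-ins, and the toy CNN encoder-decoder is not the paper's Swin-based SMED) masks whole windows of the input and computes a reconstruction loss only on the masked region, which is the generic MAE recipe the framework builds on.

```python
import torch
import torch.nn as nn

def window_mask(x, window=4, mask_ratio=0.5):
    """Randomly mask whole (window x window) blocks of an image grid.
    x: (B, C, H, W) with H and W divisible by `window`."""
    B, C, H, W = x.shape
    gh, gw = H // window, W // window
    n = gh * gw
    keep = int(n * (1 - mask_ratio))
    ids = torch.rand(B, n, device=x.device).argsort(dim=1)  # random window order
    mask = torch.zeros(B, n, device=x.device)
    mask.scatter_(1, ids[:, keep:], 1.0)                    # 1 = masked window
    mask = mask.view(B, 1, gh, gw)
    mask = mask.repeat_interleave(window, dim=2).repeat_interleave(window, dim=3)
    return x * (1 - mask), mask

class TinyMIM(nn.Module):
    """Toy encoder-decoder stand-in; the paper's SMED is Swin-Transformer-based."""
    def __init__(self, ch=3, dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(ch, dim, 3, padding=1), nn.GELU(),
                                 nn.Conv2d(dim, dim, 3, padding=1), nn.GELU())
        self.dec = nn.Conv2d(dim, ch, 3, padding=1)

    def forward(self, x):
        x_masked, mask = window_mask(x)
        recon = self.dec(self.enc(x_masked))
        # MAE-style loss: mean squared error over masked pixels only.
        loss = ((recon - x) ** 2 * mask).sum() / (mask.sum() * x.shape[1])
        return loss, recon

loss, _ = TinyMIM()(torch.rand(2, 3, 64, 64))  # toy stand-in for WSI tiles
print(float(loss))
```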

Citations: 0
Real-Time Iris Recognition With Stand-Alone Embedded Processor Based on AI Model
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-03 | DOI: 10.1002/ima.70279
Shih-Chang Hsia, Jhong-Hao Luo

This study focuses on iris recognition using deep learning techniques. The EfficientDet network model was employed for both iris detection and recognition tasks. Four datasets were utilized to train and evaluate the deep learning network. The model was trained to extract iris features and classify individuals based on their unique iris patterns. The proposed method achieved a high recognition rate of over 98% across multiple dataset evaluations. For real-time implementation on an embedded system, the trained model was quantized to an 8-bit integer format to accommodate resource-constrained devices. Despite this quantization, the recognition accuracy remained high, reaching 97%. By incorporating an Edge TPU accelerator alongside a Raspberry Pi system, the processing speed reached up to 10 frames per second during real-time iris camera testing, demonstrating the feasibility of real-time iris recognition. An intruder test was conducted to assess the system's robustness in preventing unauthorized access. The False Acceptance Rate (FAR) was measured to assess the likelihood of incorrectly accepting an unauthorized individual. Experimental results show that the FAR can be reduced to zero by applying additional temporal constraints, effectively preventing unauthorized individuals from passing the iris recognition-based access control system.
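
The 8-bit quantization step described here corresponds to standard post-training full-integer quantization; a minimal sketch with the TensorFlow Lite converter (the model path and calibration data are placeholders, not the authors' artifacts) might look like this:

```python
import tensorflow as tf
import numpy as np

model = tf.keras.models.load_model("iris_efficientdet.h5")  # hypothetical path

def representative_dataset():
    # A few hundred unlabeled iris crops are enough to calibrate value ranges;
    # random tensors stand in for them here.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full int8 ops so the graph can be compiled for the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("iris_int8.tflite", "wb") as f:
    f.write(converter.convert())
# Afterwards: edgetpu_compiler iris_int8.tflite  (offloads supported ops to the TPU)
```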

Citations: 0
Ultrafast Single Pulse Imaging With Large Field of View for Dynamic Measurement Based on Virtually Imaged Phased Array
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-02 | DOI: 10.1002/ima.70284
Lei Chen, Ai Liu, Peng Cai, Hang Ming, Yu Long, Yujia Li, Yulong Cao, Ligang Huang, Daqiu Zhou, Lei Gao

An ultrafast two-dimensional (2D) imaging system with a large field of view is proposed, built on a virtually imaged phased array with Galilean beam expansion. By integrating the wavelength-division-multiplexing capability of a virtually imaged phased array with the spatial dispersion of a grating, the broadband spectrum of an ultrafast laser pulse is mapped onto a 2D plane. A Galilean telescope structure is then employed for beam expansion, achieving large field-of-view 2D imaging. A dispersive Fourier transform module is also incorporated into the system, mapping wavelength to time so that ultrafast imaging is realized by detecting the pulse signal with a high-speed photodetector. The experimental setup employs a home-built mode-locked laser with a spectral bandwidth of 12 nm, capable of large-field imaging over 6 × 20 mm², a 20-fold expansion compared to the unexpanded field of view, and achieves an imaging frame rate of up to 7.75 MHz. The dynamic imaging capability of the system is demonstrated by imaging a freely falling sphere. With its large field of view and high frame rate, the proposed imaging system is well suited to high-speed flow diagnostics.
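
For a sense of the dispersive Fourier transform numbers: wavelength maps to arrival time as Δt ≈ D·L·Δλ for a fiber of length L and dispersion parameter D. A toy NumPy check, where the fiber parameters are illustrative assumptions and only the 12 nm bandwidth comes from the abstract:

```python
import numpy as np

D = -120e-6        # dispersion parameter, s/m^2 (= -120 ps/(nm.km), illustrative)
L = 10e3           # fiber length in meters (assumed)
lam0 = 1550e-9     # center wavelength in meters (assumed)
bw = 12e-9         # 12 nm spectral bandwidth, as in the experiment

lam = np.linspace(lam0 - bw / 2, lam0 + bw / 2, 1000)
t = D * L * (lam - lam0)   # wavelength-to-time mapping, in seconds

print(f"temporal stretch across the band: {abs(t[-1] - t[0]) * 1e9:.2f} ns")
# A high-speed photodetector plus oscilloscope then records the spectrum as a
# time trace -- one complete 2D frame per laser pulse.
```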

Citations: 0
Automatic Segmentation of the Outer and Inner Foveal Avascular Zone by Convolutional Filters
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-02 | DOI: 10.1002/ima.70282
Carlos Ruiz-Tabuenca, Isabel Pinilla, Elvira Orduna-Hospital, Francisco Javier Salgado-Remacha

In this paper, a new algorithm for segmenting the foveal avascular zone in optical coherence tomography angiography images of the superficial capillary plexus is presented and evaluated. The algorithm is based on convolutional techniques and is evaluated against a collection of manual segmentations. Beyond its performance, its main novelty is the ability to distinguish the purely avascular zone from the surrounding transitional zone, whose importance has recently been pointed out. Its capability has been tested on images of patients with different types of diabetes mellitus, obtaining error rates between 1% and 1.5%. In addition, statistics are reported for the segmented areas (including the transition zone, which had not been studied before) as a function of diabetes type. Moreover, a linear trend in the outer and inner axis ratios is also observed. Overall, the algorithm represents a new approach to the analysis of optical coherence tomography angiography images, offering clinicians a new and reliable tool for objective foveal avascular zone segmentation of the superficial capillary plexus. Both the code and the dataset used are publicly available in the cited repositories.
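
The paper's exact filters are not reproduced here; as a rough sketch of a convolutional, non-learned approach under assumed thresholds, one can convolve a binarized angiogram with an averaging kernel to obtain a local vessel-density map, then read the inner avascular zone and a sparser transitional ring off two thresholds:

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def faz_candidates(vessel_mask, win=21, t_inner=0.05, t_outer=0.15):
    """vessel_mask: 2D binary array (1 = vessel) from an OCTA en-face image.
    Window size and thresholds are illustrative, not the paper's values.
    Returns inner (avascular) and outer (transitional) candidate masks."""
    density = uniform_filter(vessel_mask.astype(float), size=win)  # local density
    inner = density < t_inner                            # essentially no capillaries
    outer = (density >= t_inner) & (density < t_outer)   # sparse transitional ring
    # Keep only the inner component containing the image center (the fovea).
    lbl, _ = label(inner)
    c = lbl[lbl.shape[0] // 2, lbl.shape[1] // 2]
    inner = (lbl == c) if c > 0 else inner
    return inner, outer

vessels = (np.random.rand(256, 256) > 0.6).astype(np.uint8)  # toy stand-in
inner, outer = faz_candidates(vessels)
print(inner.sum(), outer.sum())
```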

Citations: 0
DenseUNet Architecture With Asymmetric Kernels for Automatic Segmentation of Medical Imaging
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-12-30 | DOI: 10.1002/ima.70277
F. Duque-Vazquez Edgar, Cruz-Aceves Ivan, E. Sanchez-Yanez Raul, Jonathan Cepeda-Negrete

Medical imaging is a core component of modern healthcare, essential for early disease diagnosis and effective treatment planning. Deep learning has emerged as a powerful tool for medical image analysis, particularly in tasks such as segmentation, which is essential for identifying and delineating anatomical structures. A notable segmentation challenge is accurately detecting narrow, elongated features. A novel DenseUNet architecture, enhanced with asymmetric convolutional kernels and a squeeze-and-excitation block and specifically designed to adapt to such shape characteristics, is proposed. An iterated local search metaheuristic is employed to optimize the kernel size within a 15² search space, and the squeeze-and-excitation block is integrated to enhance feature recalibration and network efficiency. The best-performing asymmetric kernel reduced processing time by 5423 s relative to conventional kernels. The proposed architecture is evaluated using the Dice coefficient and benchmarked against state-of-the-art architectures on three databases (TMJ: temporomandibular joints; DCA1 and ICA: coronary arteries), achieving Dice scores of 0.7800, 0.8231, and 0.8862, respectively. These enhancements demonstrate improved segmentation performance and contribute to the development of more accurate and robust medical imaging tools.
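
For illustration only (the kernel size and channel count are placeholders, not the configuration found by the iterated local search), an asymmetric convolution pairs a 1×k and a k×1 kernel, which suits narrow, elongated structures such as arteries, and a squeeze-and-excitation block reweights channels by global context:

```python
import torch
import torch.nn as nn

class AsymmetricConv(nn.Module):
    """1xk followed by kx1: a cheap surrogate for a kxk kernel that
    fits narrow, elongated structures such as coronary arteries."""
    def __init__(self, ch, k=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2)),
            nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0)),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using global context."""
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch), nn.Sigmoid(),
        )

    def forward(self, x):
        w = x.mean(dim=(2, 3))            # squeeze: global average pool
        w = self.fc(w)[:, :, None, None]  # excitation: per-channel gates
        return x * w

block = nn.Sequential(AsymmetricConv(32, k=7), SEBlock(32))
print(block(torch.rand(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```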

Citations: 0
Investigating the Correlation Between Ocular Diseases for Retinal Layer Fractal Dimensions Analysis Using Multiclass Segmentation With Attention U-Net
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-12-29 | DOI: 10.1002/ima.70274
M. Saranya, K. A. Sunitha, A. Asuntha, Pratyusha Ganne

This study proposes a novel diagnostic approach to retinal disease detection by combining deep learning-based segmentation with fractal dimension (FD) analysis on optical coherence tomography (OCT) images. Our primary goal is to enhance early detection of retinal diseases such as age-related macular degeneration (AMD), diabetic retinopathy (DR), central serous retinopathy (CSR), and macular hole (MH). We introduce the Attention U-shaped Network (AUNet), which builds on the U-Net architecture with attention gates (AGs) to sharpen the focus on pathological structures in complex cases, achieving a high segmentation accuracy of 98.5% and a mean Intersection over Union (mIoU) of 0.91, outperforming existing models such as UNet++ and DeepLabV3+. Coupled with Fourier and Higuchi FD analysis, our method quantitatively assesses the complexity of the retinal layers, identifying structural patterns that serve as early indicators of neural degeneration. Statistical tests reveal significant differences in FD values between diseased and healthy groups, underscoring the predictive power of retinal layers such as the retinal pigment epithelium (RPE), inner nuclear layer (INL), outer nuclear layer (ONL), and ellipsoid zone (EZ). This combined AUNet-FD approach represents an innovative tool for early diagnosis of retinal diseases, potentially enhancing clinical decision-making through precise, non-invasive analysis.
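
Of the two FD estimators named, the Higuchi method is easy to state: for a 1D profile (for example, a segmented layer boundary), compute average curve lengths L(k) at scales k = 1…k_max and fit the slope of log L(k) against log(1/k). A minimal NumPy sketch, not tied to the paper's implementation:

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1D signal x
    (e.g., a retinal layer boundary traced from a segmentation mask)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                      # each starting offset m
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # Curve length at scale k with Higuchi's normalization factor.
            length = np.abs(np.diff(x[idx])).sum()
            length *= (N - 1) / ((len(idx) - 1) * k)
            lengths.append(length / k)
        lk.append(np.mean(lengths))
    k = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k), np.log(lk), 1)  # FD = slope
    return slope

rng = np.random.default_rng(0)
# A Brownian-like path should give FD close to 1.5.
print(higuchi_fd(np.cumsum(rng.standard_normal(1000))))
```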

Citations: 0
A Comprehensive Deep-Learning Framework Integrating Lesion Segmentation and Stage Classification for Enhanced Diabetic Retinopathy Diagnosis
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-12-28 | DOI: 10.1002/ima.70272
Ramazan İncir, Ferhat Bozkurt

Diabetic retinopathy (DR), one of the most prevalent microvascular complications of diabetes, stands as a leading cause of vision loss globally. Due to its asymptomatic nature in early stages, delayed diagnosis and staging may result in irreversible visual impairment. Therefore, accurate and simultaneous lesion segmentation and stage classification of DR are of critical clinical importance. In this study, a two-stage, end-to-end, holistic framework is proposed for automated DR diagnosis. In the first stage, an Improved U-Net architecture enhanced with residual blocks and additional convolutional layers is employed to segment small and low-contrast lesions such as microaneurysms, hemorrhages, and hard/soft exudates with high precision. Model hyperparameters are optimized using the harmony search algorithm to enhance training efficiency. In the second stage, lesion-based weight maps obtained from the segmentation step are applied to fundus images from the APTOS dataset, generating enriched inputs for classification. A vision transformer (ViT)-based model, augmented with a Convolutional Block Attention Module (CBAM), is utilized to improve feature extraction. In addition, features derived from ViT are further refined using a graph convolutional network (GCN) and traditional machine-learning classifiers. The proposed approach achieves high performance in multi-class DR stage classification. Compared to existing studies, the framework demonstrates notable improvements in both segmentation and classification accuracy, offering a robust and generalizable solution for DR diagnosis.
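
The hand-off between the two stages, applying a lesion-based weight map from the segmentation output to the fundus image before classification, can be illustrated with a small sketch; the additive weighting scheme below is an assumption for illustration, not necessarily the paper's exact formulation:

```python
import torch

def lesion_weighted_input(image, lesion_probs, alpha=1.0):
    """image: (B, 3, H, W) fundus image scaled to [0, 1];
    lesion_probs: (B, 1, H, W) per-pixel lesion probability from stage 1.
    Up-weights lesion regions while keeping background context intact."""
    weight = 1.0 + alpha * lesion_probs      # background = 1, lesions boosted
    return (image * weight).clamp(0, 1)

img = torch.rand(2, 3, 224, 224)
probs = torch.rand(2, 1, 224, 224)
enriched = lesion_weighted_input(img, probs)
print(enriched.shape)  # the enriched input then feeds the ViT+CBAM classifier
```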

Citations: 0
EGCF-Net Edge-Aware Graph-Based Attention Network With Enhanced Contextual Features for Medical Image Segmentation
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-12-28 | DOI: 10.1002/ima.70276
Santoshi Gorli, Ratnakar Dash

Accurate segmentation of medical images is critical for disease diagnosis and treatment planning. However, challenges such as fuzzy boundaries, low contrast, and complex anatomical structures often hinder performance. We propose EGCF-Net, a novel U-Net-based architecture that integrates a hybrid encoder with an edge-aware graph-based attention network to address these limitations. The hybrid encoder integrates the Swin transformer and Kronecker convolution to capture global and local contextual dependencies. Additionally, skip connections are enhanced using an edge-aware graph-based attention module, which combines graph spatial attention and graph channel attention to dynamically model spatial correlations and edge-aware contextual affinities. This design leads to edge-enhanced boundary delineation and improved regional consistency. We evaluate EGCF-Net on four benchmark datasets (Synapse, Kidney Stone, ISIC 2016, and ISIC 2018), achieving Dice scores of 84.70%, 93.07%, 91.07%, and 88.62%, respectively, surpassing existing state-of-the-art methods. Quantitative and qualitative results further validate the efficacy and robustness of the proposed approach, highlighting its potential for advancing medical image segmentation.
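
The paper's graph modules are not specified in the abstract; a speculative minimal sketch of the graph channel attention idea, with channels as graph nodes, a similarity-derived adjacency, and one propagation step before gating, might look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphChannelAttention(nn.Module):
    """Treat each channel as a graph node (described by a coarse 4x4 pooled
    map), propagate once over a similarity-derived adjacency, then gate.
    An illustrative stand-in, not the paper's exact module."""
    def __init__(self, d=16):  # d must equal the flattened pooled size (4*4)
        super().__init__()
        self.gate = nn.Linear(d, 1)

    def forward(self, x):
        B, C, H, W = x.shape
        f = F.adaptive_avg_pool2d(x, 4).flatten(2)        # (B, C, 16) node features
        adj = torch.softmax(f @ f.transpose(1, 2) / f.shape[-1] ** 0.5, dim=-1)
        f = adj @ f                                       # one GCN-style propagation
        g = torch.sigmoid(self.gate(f)).view(B, C, 1, 1)  # per-channel gates
        return x * g

x = torch.rand(2, 64, 32, 32)
print(GraphChannelAttention()(x).shape)  # torch.Size([2, 64, 32, 32])
```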

Citations: 0
StackDeVNet: An Explainable Stacking Ensemble of DenseNets and Vision Transformers for Advanced Gastrointestinal Disease Detection
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-12-26 | DOI: 10.1002/ima.70275
Osman Güler

Gastrointestinal disorders include diseases that degrade patients' daily life and carry a risk of cancer, so their accurate and early diagnosis is important for treatment. Deep learning architectures, which have achieved significant success in medical image analysis, are used effectively in early diagnosis systems. This study therefore proposes a new approach that achieves higher accuracy in the detection of gastrointestinal diseases by combining DenseNet and Vision Transformer models in a stacking ensemble. In the experiments, the proposed model achieved 99.06% accuracy on a single test split and a mean accuracy of 98.64% under 5-fold cross-validation. The results on the KvasirV2 dataset demonstrate promising accuracy and reliability, and the approach has the potential to be an effective method for detecting gastrointestinal diseases. To improve interpretability, the explainable-AI technique Grad-CAM and attention-map visualizations were used, providing visual justification of the model's predictions and highlighting clinically relevant regions in endoscopic images. The model obtained by combining DenseNet and Vision Transformer models with the stacking ensemble method is expected to serve as an example for future studies in health image processing, especially for gastrointestinal diseases.
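
Stacking itself is simple to illustrate: out-of-fold class probabilities from the base models become features for a meta-learner. In the scikit-learn sketch below, the base-model outputs are random stand-ins (the real ones would come from the DenseNet and ViT members), and the logistic-regression meta-learner is an assumption, not necessarily the paper's choice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Stand-ins: out-of-fold probabilities of two base models over 8 Kvasir classes.
rng = np.random.default_rng(0)
n, k = 1000, 8
y = rng.integers(0, k, n)
p_densenet = rng.dirichlet(np.ones(k), n)   # would come from a DenseNet
p_vit = rng.dirichlet(np.ones(k), n)        # would come from a Vision Transformer

X_meta = np.hstack([p_densenet, p_vit])     # stacked features: shape (n, 2k)
meta = LogisticRegression(max_iter=1000)
oof = cross_val_predict(meta, X_meta, y, cv=5)   # 5-fold, as in the paper
print("meta-learner accuracy on random stand-ins:", (oof == y).mean())
```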

Citations: 0
An Attention-Guided Deep Learning Approach for Classifying 39 Skin Lesion Types
IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2025-12-22 | DOI: 10.1002/ima.70269
Sauda Adiv Hanum, Ashim Dey, Muhammad Ashad Kabir

The skin, the largest organ of the human body, is vulnerable to numerous pathological conditions collectively referred to as skin lesions, encompassing a wide spectrum of dermatoses. Diagnosing these lesions remains challenging for medical practitioners due to their subtle visual differences, many of which are imperceptible to the naked eye. While not all lesions are malignant, some serve as early indicators of serious diseases such as skin cancer, emphasizing the urgent need for accurate and timely diagnostic tools. This study advances dermatological diagnostics by curating a comprehensive and balanced dataset containing 9360 dermoscopic and clinical images across 39 lesion categories, synthesized from five publicly available datasets. Five state-of-the-art deep learning architectures—MobileNetV2, Xception, InceptionV3, EfficientNetB1, and Vision Transformer (ViT)—were systematically evaluated on this dataset. To enhance model precision and robustness, Efficient Channel Attention (ECA) and Convolutional Block Attention Module (CBAM) mechanisms were integrated into these architectures. Extensive evaluation across multiple performance metrics demonstrated that the Vision Transformer with CBAM achieved the best results, with 93.46% accuracy, 94% precision, 93% recall, 93% F1-score, and 93.67% specificity. These findings highlight the effectiveness of attention-guided Vision Transformers in addressing complex, large-scale, multi-class skin lesion classification. By combining dataset diversity with advanced attention mechanisms, the proposed framework provides a reliable and interpretable tool to assist medical professionals in accurate and efficient lesion diagnosis, thereby contributing to improved clinical decision-making and patient outcomes.
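
ECA, one of the two attention mechanisms integrated, replaces SE's bottleneck MLP with a single 1D convolution across channels of the globally pooled features; a minimal PyTorch sketch (kernel size is a typical default, not necessarily the study's setting):

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: a k-tap 1D convolution over the channel
    dimension of globally pooled features, avoiding SE's dimensionality
    reduction."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        B, C, _, _ = x.shape
        y = x.mean(dim=(2, 3)).view(B, 1, C)              # squeeze: (B, 1, C)
        y = torch.sigmoid(self.conv(y)).view(B, C, 1, 1)  # local cross-channel mix
        return x * y

feat = torch.rand(2, 128, 14, 14)   # e.g., an intermediate MobileNetV2 feature map
print(ECA()(feat).shape)            # torch.Size([2, 128, 14, 14])
```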

Citations: 0