
Journal of Imaging: Latest Publications

Non-Invasive Detection of Prostate Cancer with Novel Time-Dependent Diffusion MRI and AI-Enhanced Quantitative Radiological Interpretation: PROS-TD-AI.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-22 DOI: 10.3390/jimaging12010053
Baltasar Ramos, Cristian Garrido, Paulette Narváez, Santiago Gelerstein Claro, Haotian Li, Rafael Salvador, Constanza Vásquez-Venegas, Iván Gallegos, Víctor Castañeda, Cristian Acevedo, Gonzalo Cárdenas, Camilo G Sotomayor

Prostate cancer (PCa) is the most common malignancy in men worldwide. Multiparametric MRI (mpMRI) improves the detection of clinically significant PCa (csPCa); however, it remains limited by false-positive findings and inter-observer variability. Time-dependent diffusion (TDD) MRI provides microstructural information that may enhance csPCa characterization beyond standard mpMRI. This prospective observational diagnostic accuracy study protocol describes the evaluation of PROS-TD-AI, an in-house developed AI workflow integrating TDD-derived metrics for zone-aware csPCa risk prediction. PROS-TD-AI will be compared with PI-RADS v2.1 in routine clinical imaging using MRI-targeted prostate biopsy as the reference standard.
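The planned analysis compares the AI risk output with PI-RADS v2.1 against MRI-targeted biopsy as the reference standard. As a rough sketch of how such a diagnostic accuracy comparison is commonly scored (this is not the authors' pipeline; every variable below is a synthetic placeholder), one could compute bootstrapped ROC-AUCs for both readers against the biopsy label:

```python
# Illustrative only: synthetic data standing in for per-lesion biopsy outcomes,
# an AI risk score, and PI-RADS categories. Not the PROS-TD-AI implementation.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_biopsy = rng.integers(0, 2, size=200)                                      # 1 = csPCa on targeted biopsy
ai_score = np.clip(0.6 * y_biopsy + rng.normal(0.3, 0.25, 200), 0, 1)        # hypothetical AI output in [0, 1]
pirads = np.clip(np.round(1.5 * y_biopsy + rng.normal(3, 0.8, 200)), 1, 5)   # hypothetical PI-RADS 1-5

def bootstrap_auc(y, score, n_boot=2000, seed=1):
    """Percentile bootstrap 95% CI for the ROC-AUC."""
    boot_rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = boot_rng.integers(0, len(y), size=len(y))
        if len(np.unique(y[idx])) < 2:          # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y[idx], score[idx]))
    return np.percentile(aucs, [2.5, 97.5])

for name, score in [("AI model", ai_score), ("PI-RADS v2.1", pirads)]:
    lo, hi = bootstrap_auc(y_biopsy, score)
    print(f"{name}: AUC = {roc_auc_score(y_biopsy, score):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

A formal protocol would additionally test the paired difference between the two AUCs (for example with DeLong's test), which is omitted from this sketch.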

{"title":"Non-Invasive Detection of Prostate Cancer with Novel Time-Dependent Diffusion MRI and AI-Enhanced Quantitative Radiological Interpretation: PROS-TD-AI.","authors":"Baltasar Ramos, Cristian Garrido, Paulette Narváez, Santiago Gelerstein Claro, Haotian Li, Rafael Salvador, Constanza Vásquez-Venegas, Iván Gallegos, Víctor Castañeda, Cristian Acevedo, Gonzalo Cárdenas, Camilo G Sotomayor","doi":"10.3390/jimaging12010053","DOIUrl":"10.3390/jimaging12010053","url":null,"abstract":"<p><p>Prostate cancer (PCa) is the most common malignancy in men worldwide. Multiparametric MRI (mpMRI) improves the detection of clinically significant PCa (csPCa); however, it remains limited by false-positive findings and inter-observer variability. Time-dependent diffusion (TDD) MRI provides microstructural information that may enhance csPCa characterization beyond standard mpMRI. This prospective observational diagnostic accuracy study protocol describes the evaluation of PROS-TD-AI, an in-house developed AI workflow integrating TDD-derived metrics for zone-aware csPCa risk prediction. PROS-TD-AI will be compared with PI-RADS v2.1 in routine clinical imaging using MRI-targeted prostate biopsy as the reference standard.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12843277/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Frequency GPR Image Fusion Based on Convolutional Sparse Representation to Enhance Road Detection.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-22 DOI: 10.3390/jimaging12010052
Liang Fang, Feng Yang, Yuanjing Fang, Junli Nie

Single-frequency ground penetrating radar (GPR) systems are fundamentally constrained by a trade-off between penetration depth and resolution, alongside issues like narrow bandwidth and ringing interference. To break this limitation, we have developed a multi-frequency data fusion technique grounded in convolutional sparse representation (CSR). The proposed methodology involves spatially registering multi-frequency GPR signals and fusing them via a CSR framework, where the convolutional dictionaries are derived from simulated high-definition GPR data. Extensive evaluation using information entropy, average gradient, mutual information, and visual information fidelity demonstrates the superiority of our method over traditional fusion approaches (e.g., weighted average, PCA, 2D wavelets). Tests on simulated and real data confirm that our CSR-based fusion successfully synergizes the deep penetration of low frequencies with the fine resolution of high frequencies, leading to substantial gains in GPR image clarity and interpretability.
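Two of the no-reference fusion metrics named above, information entropy and average gradient, have compact definitions. The sketch below is illustrative only (it is not the paper's evaluation code, and the fused image is a random stand-in); it computes both with NumPy on an 8-bit grayscale array:

```python
# Illustrative fusion-quality metrics; "fused" here is a random placeholder image.
import numpy as np

def shannon_entropy(img, levels=256):
    """Shannon entropy (bits) of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean local gradient magnitude, a common sharpness proxy for fused images."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]          # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]          # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

fused = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(np.uint8)
print("entropy (bits):", round(shannon_entropy(fused), 3))
print("average gradient:", round(average_gradient(fused), 3))
```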

{"title":"Multi-Frequency GPR Image Fusion Based on Convolutional Sparse Representation to Enhance Road Detection.","authors":"Liang Fang, Feng Yang, Yuanjing Fang, Junli Nie","doi":"10.3390/jimaging12010052","DOIUrl":"10.3390/jimaging12010052","url":null,"abstract":"<p><p>Single-frequency ground penetrating radar (GPR) systems are fundamentally constrained by a trade-off between penetration depth and resolution, alongside issues like narrow bandwidth and ringing interference. To break this limitation, we have developed a multi-frequency data fusion technique grounded in convolutional sparse representation (CSR). The proposed methodology involves spatially registering multi-frequency GPR signals and fusing them via a CSR framework, where the convolutional dictionaries are derived from simulated high-definition GPR data. Extensive evaluation using information entropy, average gradient, mutual information, and visual information fidelity demonstrates the superiority of our method over traditional fusion approaches (e.g., weighted average, PCA, 2D wavelets). Tests on simulated and real data confirm that our CSR-based fusion successfully synergizes the deep penetration of low frequencies with the fine resolution of high frequencies, leading to substantial gains in GPR image clarity and interpretability.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12843019/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ADAM-Net: Anatomy-Guided Attentive Unsupervised Domain Adaptation for Joint MG Segmentation and MGD Grading.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-21 DOI: 10.3390/jimaging12010050
Junbin Fang, Xuan He, You Jiang, Mini Han Wang

Meibomian gland dysfunction (MGD) is a leading cause of dry eye disease, assessable through the degree of gland atrophy. While deep learning (DL) has advanced meibomian gland (MG) segmentation and MGD classification, existing methods treat these tasks independently and suffer from domain shift across multi-center imaging devices. We propose ADAM-Net, an attention-guided unsupervised domain adaptation multi-task framework that jointly models MG segmentation and MGD classification. Our model introduces structure-aware multi-task learning and anatomy-guided attention to enhance feature sharing, suppress background noise, and improve glandular region perception. For the cross-domain tasks MGD-1K→{K5M, CR-2, LV II}, this study systematically evaluates the overall performance of ADAM-Net from multiple perspectives. The experimental results show that ADAM-Net achieves classification accuracies of 77.93%, 74.86%, and 81.77% on the target domains, significantly outperforming current mainstream unsupervised domain adaptation (UDA) methods. The F1-score and the Matthews correlation coefficient (MCC) indicate that the model maintains robust discriminative capability even under class-imbalanced scenarios. t-SNE visualizations further validate its cross-domain feature alignment capability. These results demonstrate that ADAM-Net exhibits strong robustness and interpretability in multi-center scenarios and provides an effective solution for automated MGD assessment.
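The evaluation reports accuracy, the F1-score, and the Matthews correlation coefficient. A minimal sketch of how these metrics are typically computed with scikit-learn, using placeholder MGD grade labels rather than the authors' data or evaluation script:

```python
# Placeholder labels only; in the paper these would be MGD grades on a target domain.
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = [0, 1, 2, 2, 1, 0, 3, 2, 1, 0]   # hypothetical ground-truth MGD grades
y_pred = [0, 1, 2, 1, 1, 0, 3, 2, 2, 0]   # hypothetical model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print("MCC     :", matthews_corrcoef(y_true, y_pred))
```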

{"title":"ADAM-Net: Anatomy-Guided Attentive Unsupervised Domain Adaptation for Joint MG Segmentation and MGD Grading.","authors":"Junbin Fang, Xuan He, You Jiang, Mini Han Wang","doi":"10.3390/jimaging12010050","DOIUrl":"10.3390/jimaging12010050","url":null,"abstract":"<p><p>Meibomian gland dysfunction (MGD) is a leading cause of dry eye disease, assessable through gland atrophy degree. While deep learning (DL) has advanced meibomian gland (MG) segmentation and MGD classification, existing methods treat these tasks independently and suffer from domain shift across multi-center imaging devices. We propose ADAM-Net, an attention-guided unsupervised domain adaptation multi-task framework that jointly models MG segmentation and MGD classification. Our model introduces structure-aware multi-task learning and anatomy-guided attention to enhance feature sharing, suppress background noise, and improve glandular region perception. For the cross-domain tasks MGD-1K→{K5M, CR-2, LV II}, this study systematically evaluates the overall performance of ADAM-Net from multiple perspectives. The experimental results show that ADAM-Net achieves classification accuracies of 77.93%, 74.86%, and 81.77% on the target domains, significantly outperforming current mainstream unsupervised domain adaptation (UDA) methods. The F1-score and the Matthews correlation coefficient (MCC-score) indicate that the model maintains robust discriminative capability even under class-imbalanced scenarios. t-SNE visualizations further validate its cross-domain feature alignment capability. These demonstrate that ADAM-Net exhibits strong robustness and interpretability in multi-center scenarios and provide an effective solution for automated MGD assessment.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12842610/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Chest Radiography Optimization: Identifying the Optimal kV for Image Quality in a Phantom Study.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-21 DOI: 10.3390/jimaging12010049
Ioannis Antonakos, Kyriakos Kokkinogoulis, Maria Giannopoulou, Efstathios P Efstathopoulos

Chest radiography remains one of the most frequently performed imaging examinations, highlighting the need for optimization of acquisition parameters to balance image quality and radiation dose. This study presents a phantom-based quantitative evaluation of chest radiography acquisition settings using a digital radiography system (AGFA DR 600). Measurements were performed at three tube voltage levels across simulated patient-equivalent thicknesses generated using PMMA slabs, with a Leeds TOR 15FG image quality phantom positioned centrally in the imaging setup. Image quality was quantitatively assessed using signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), which were calculated from mean pixel values obtained from repeated acquisitions. Radiation exposure was evaluated through estimation of entrance surface dose (ESD). The analysis demonstrated that dose-normalized performance metrics favored intermediate tube voltages for slim and average patient-equivalent thicknesses, while higher voltages were required to maintain image quality in obese-equivalent conditions. Overall, image quality and dose were found to be strongly dependent on the combined selection of tube voltage and phantom thickness. These findings indicate that modest adjustments to tube voltage selection may improve the balance between image quality and radiation dose in chest radiography. Nevertheless, as the present work is based on phantom measurements, further validation using clinical images and observer-based studies is required before any modification of routine radiographic practice.
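SNR and CNR were computed from mean pixel values of regions of interest across repeated acquisitions. The sketch below shows the usual mean/standard-deviation forms of both metrics on toy ROI data; the study's exact ROI placement and CNR convention are not given here, so treat these definitions as assumptions:

```python
# Toy ROIs standing in for regions drawn on the Leeds TOR 15FG phantom image.
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a single ROI: mean divided by standard deviation."""
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_detail, roi_background):
    """Contrast-to-noise ratio between a detail ROI and the adjacent background."""
    return abs(roi_detail.mean() - roi_background.mean()) / roi_background.std(ddof=1)

rng = np.random.default_rng(0)
detail = rng.normal(1200, 40, size=(50, 50))        # hypothetical detail ROI pixel values
background = rng.normal(1000, 40, size=(50, 50))    # hypothetical background ROI pixel values
print(f"SNR = {snr(background):.2f}, CNR = {cnr(detail, background):.2f}")
```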

{"title":"Chest Radiography Optimization: Identifying the Optimal kV for Image Quality in a Phantom Study.","authors":"Ioannis Antonakos, Kyriakos Kokkinogoulis, Maria Giannopoulou, Efstathios P Efstathopoulos","doi":"10.3390/jimaging12010049","DOIUrl":"10.3390/jimaging12010049","url":null,"abstract":"<p><p>Chest radiography remains one of the most frequently performed imaging examinations, highlighting the need for optimization of acquisition parameters to balance image quality and radiation dose. This study presents a phantom-based quantitative evaluation of chest radiography acquisition settings using a digital radiography system (AGFA DR 600). Measurements were performed at three tube voltage levels across simulated patient-equivalent thicknesses generated using PMMA slabs, with a Leeds TOR 15FG image quality phantom positioned centrally in the imaging setup. Image quality was quantitatively assessed using signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), which were calculated from mean pixel values obtained from repeated acquisitions. Radiation exposure was evaluated through estimation of entrance surface dose (ESD). The analysis demonstrated that dose-normalized performance metrics favored intermediate tube voltages for slim and average patient-equivalent thicknesses, while higher voltages were required to maintain image quality in obese-equivalent conditions. Overall, image quality and dose were found to be strongly dependent on the combined selection of tube voltage and phantom thickness. These findings indicate that modest adjustments to tube voltage selection may improve the balance between image quality and radiation dose in chest radiography. Nevertheless, as the present work is based on phantom measurements, further validation using clinical images and observer-based studies is required before any modification of routine radiographic practice.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12843376/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interpretable Diagnosis of Pulmonary Emphysema on Low-Dose CT Using ResNet Embeddings.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-21 DOI: 10.3390/jimaging12010051
Talshyn Sarsembayeva, Madina Mansurova, Ainash Oshibayeva, Stepan Serebryakov

Accurate and interpretable detection of pulmonary emphysema on low-dose computed tomography (LDCT) remains a critical challenge for large-scale screening and population health studies. This work proposes a quality-controlled and interpretable deep learning pipeline for emphysema assessment using ResNet-152 embeddings. The pipeline integrates automated lung segmentation, quality-control filtering, and extraction of 2048-dimensional embeddings from mid-lung patches, followed by analysis using logistic regression, LASSO, and recursive feature elimination (RFE). The embeddings are further fused with quantitative CT (QCT) markers, including %LAA, Perc15, and total lung volume (TLV), to enhance robustness and interpretability. Bootstrapped validation demonstrates strong diagnostic performance (ROC-AUC = 0.996, PR-AUC = 0.962, balanced accuracy = 0.931) with low computational cost. The proposed approach shows that ResNet embeddings pretrained on CT data can be effectively reused, without retraining, for emphysema characterization, providing a reproducible and explainable framework for research and screening support in population-level LDCT analysis.
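The QCT markers fused with the embeddings have standard densitometric definitions: %LAA is the fraction of lung voxels below a low-attenuation threshold (commonly -950 HU) and Perc15 is the 15th percentile of the lung HU distribution. A minimal sketch under those assumed definitions (not the authors' implementation); computing TLV would additionally require the voxel volume from the scan header:

```python
# Toy HU volume and mask standing in for a segmented low-dose CT scan.
import numpy as np

def emphysema_markers(hu, lung_mask, laa_threshold=-950.0):
    """%LAA (fraction of lung voxels below the threshold) and Perc15 of the lung HU values."""
    lung_hu = hu[lung_mask]
    laa_percent = 100.0 * np.mean(lung_hu < laa_threshold)
    perc15 = np.percentile(lung_hu, 15)
    return laa_percent, perc15

rng = np.random.default_rng(0)
hu = rng.normal(-850, 80, size=(64, 64, 64))   # hypothetical HU values
mask = np.ones(hu.shape, dtype=bool)           # hypothetical lung mask (all voxels here)
laa, p15 = emphysema_markers(hu, mask)
print(f"%LAA-950 = {laa:.1f}%, Perc15 = {p15:.1f} HU")
```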

{"title":"Interpretable Diagnosis of Pulmonary Emphysema on Low-Dose CT Using ResNet Embeddings.","authors":"Talshyn Sarsembayeva, Madina Mansurova, Ainash Oshibayeva, Stepan Serebryakov","doi":"10.3390/jimaging12010051","DOIUrl":"10.3390/jimaging12010051","url":null,"abstract":"<p><p>Accurate and interpretable detection of pulmonary emphysema on low-dose computed tomography (LDCT) remains a critical challenge for large-scale screening and population health studies. This work proposes a quality-controlled and interpretable deep learning pipeline for emphysema assessment using ResNet-152 embeddings. The pipeline integrates automated lung segmentation, quality-control filtering, and extraction of 2048-dimensional embeddings from mid-lung patches, followed by analysis using logistic regression, LASSO, and recursive feature elimination (RFE). The embeddings are further fused with quantitative CT (QCT) markers, including %LAA, Perc15, and total lung volume (TLV), to enhance robustness and interpretability. Bootstrapped validation demonstrates strong diagnostic performance (ROC-AUC = 0.996, PR-AUC = 0.962, balanced accuracy = 0.931) with low computational cost. The proposed approach shows that ResNet embeddings pretrained on CT data can be effectively reused without retraining for emphysema characterization, providing a reproducible and explainable framework suitable as a research and screening-support framework for population-level LDCT analysis.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12843416/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Graph-Enhanced Expectation Maximization for Emission Tomography.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-20 DOI: 10.3390/jimaging12010048
Ryosuke Kasai, Hideki Otsuka

Emission tomography, including single-photon emission computed tomography (SPECT), requires image reconstruction from noisy and incomplete projection data. The maximum-likelihood expectation maximization (MLEM) algorithm is widely used due to its statistical foundation and non-negativity preservation, but it is highly sensitive to noise, particularly in low-count conditions. Although total variation (TV) regularization can reduce noise, it often oversmooths structural details and requires careful parameter tuning. We propose a Graph-Enhanced Expectation Maximization (GREM) algorithm that incorporates graph-based neighborhood information into an MLEM-type multiplicative reconstruction scheme. The method is motivated by a penalized formulation combining a Kullback-Leibler divergence term with a graph Laplacian regularization term, promoting local structural consistency while preserving edges. The resulting update retains the multiplicative structure of MLEM and preserves the non-negativity of the image estimates. Numerical experiments using synthetic phantoms under multiple noise levels, as well as clinical 99mTc-GSA liver SPECT data, demonstrate that GREM consistently outperforms conventional MLEM and TV-regularized MLEM in terms of PSNR and MS-SSIM. These results indicate that GREM provides an effective and practical approach for edge-preserving noise suppression in emission tomography without relying on external training data.
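GREM retains the multiplicative structure of MLEM, so the classical update is the natural reference point. The sketch below implements plain MLEM on a toy system matrix; the graph-Laplacian term that distinguishes GREM is deliberately omitted, so this is only the baseline the paper builds on, not the proposed algorithm:

```python
# Classical MLEM on a tiny toy problem (8 detector bins, 4 image pixels).
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """MLEM update: x <- x / (A^T 1) * A^T (y / (A x)), keeping x non-negative."""
    x = np.ones(A.shape[1])                    # uniform non-negative initial image
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        ratio = y / np.maximum(proj, eps)      # measured / estimated counts
        x = x / np.maximum(sens, eps) * (A.T @ ratio)
    return x

rng = np.random.default_rng(0)
A = rng.random((8, 4))                         # toy system matrix
x_true = np.array([1.0, 4.0, 2.0, 0.5])
y = rng.poisson(A @ x_true * 50) / 50.0        # noisy projection data
print(mlem(A, y))
```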

{"title":"Graph-Enhanced Expectation Maximization for Emission Tomography.","authors":"Ryosuke Kasai, Hideki Otsuka","doi":"10.3390/jimaging12010048","DOIUrl":"10.3390/jimaging12010048","url":null,"abstract":"<p><p>Emission tomography, including single-photon emission computed tomography (SPECT), requires image reconstruction from noisy and incomplete projection data. The maximum-likelihood expectation maximization (MLEM) algorithm is widely used due to its statistical foundation and non-negativity preservation, but it is highly sensitive to noise, particularly in low-count conditions. Although total variation (TV) regularization can reduce noise, it often oversmooths structural details and requires careful parameter tuning. We propose a Graph-Enhanced Expectation Maximization (GREM) algorithm that incorporates graph-based neighborhood information into an MLEM-type multiplicative reconstruction scheme. The method is motivated by a penalized formulation combining a Kullback-Leibler divergence term with a graph Laplacian regularization term, promoting local structural consistency while preserving edges. The resulting update retains the multiplicative structure of MLEM and preserves the non-negativity of the image estimates. Numerical experiments using synthetic phantoms under multiple noise levels, as well as clinical <sup>99m</sup>Tc-GSA liver SPECT data, demonstrate that GREM consistently outperforms conventional MLEM and TV-regularized MLEM in terms of PSNR and MS-SSIM. These results indicate that GREM provides an effective and practical approach for edge-preserving noise suppression in emission tomography without relying on external training data.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12843213/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic Retinal Nerve Fiber Segmentation and the Influence of Intersubject Variability in Ocular Parameters on the Mapping of Retinal Sites to the Pointwise Orientation Angles.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-19 DOI: 10.3390/jimaging12010047
Diego Luján Villarreal, Adriana Leticia Vera-Tizatl

The current study investigates the influence of intersubject variability in ocular characteristics on the mapping of visual field (VF) sites to the pointwise directional angles in retinal nerve fiber layer (RNFL) bundle traces. In addition, the performance of the mapping of VF sites to the optic nerve head (ONH) was compared with ground-truth baselines. Fundus photographs of 546 eyes of 546 healthy subjects (with no history of ocular disease or diabetic retinopathy) were enhanced digitally and RNFL bundle traces were segmented based on the Personalized Estimated Segmentation (PES) algorithm's core technique. A 24-2 VF grid pattern was overlaid onto the photographs in order to relate VF test points to intersecting RNFL bundles. The PES algorithm effectively traced RNFL bundles in fundus images, achieving an average accuracy of 97.6% relative to the Jansonius map through the application of 10th-order Bezier curves. The PES algorithm assembled an average of 4726 RNFL bundles per fundus image based on 4975 sampling points, obtaining a total of 2,580,505 RNFL bundles based on 2,716,321 sampling points. The influence of ocular parameters could be evaluated for 34 out of 52 VF locations. The ONH-fovea angle and the ONH position in relation to the fovea were the most prominent predictors for variations in the mapping of retinal locations to the pointwise directional angle (p < 0.001). The variation explained by the model (R2 value) ranges from 27.6% at visual field location 15 to 77.8% at location 22, with a mean of 56%. Significant individual variability was found in the mapping of VF sites to the ONH, with a mean standard deviation (95% limit) of 16.55° (median 17.68°) for 50 out of 52 VF locations, ranging from less than 1° to 44.05°. The mean entry angles differed from previous baselines by a range of less than 1° to 23.9° (average difference of 10.6° ± 5.53°), with an RMSE of 11.94.
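The RNFL bundle traces are modeled with 10th-order Bezier curves, i.e., curves defined by 11 control points in Bernstein form. The sketch below evaluates such a curve with NumPy and SciPy; it is illustrative only, is not the PES implementation, and uses random placeholder control points:

```python
# Evaluating an nth-order Bezier curve from its control points (Bernstein form).
import numpy as np
from scipy.special import comb

def bezier_curve(control_points, n_samples=200):
    """Sample a Bezier curve defined by (n+1) x 2 control points at n_samples parameter values."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]        # (n_samples, 1)
    i = np.arange(n + 1)[None, :]                         # (1, n+1)
    bernstein = comb(n, i) * t ** i * (1.0 - t) ** (n - i)
    return bernstein @ P                                  # (n_samples, 2) sampled (x, y)

ctrl = np.cumsum(np.random.default_rng(0).normal(size=(11, 2)), axis=0)  # 11 points -> 10th order
curve = bezier_curve(ctrl)
print(curve.shape)
```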

{"title":"Automatic Retinal Nerve Fiber Segmentation and the Influence of Intersubject Variability in Ocular Parameters on the Mapping of Retinal Sites to the Pointwise Orientation Angles.","authors":"Diego Luján Villarreal, Adriana Leticia Vera-Tizatl","doi":"10.3390/jimaging12010047","DOIUrl":"10.3390/jimaging12010047","url":null,"abstract":"<p><p>The current study investigates the influence of intersubject variability in ocular characteristics on the mapping of visual field (VF) sites to the pointwise directional angles in retinal nerve fiber layer (RNFL) bundle traces. In addition, the performance efficacy on the mapping of VF sites to the optic nerve head (ONH) was compared to ground truth baselines. Fundus photographs of 546 eyes of 546 healthy subjects (with no history of ocular disease or diabetic retinopathy) were enhanced digitally and RNFL bundle traces were segmented based on the Personalized Estimated Segmentation (PES) algorithm's core technique. A 24-2 VF grid pattern was overlaid onto the photographs in order to relate VF test points to intersecting RNFL bundles. The PES algorithm effectively traced RNFL bundles in fundus images, achieving an average accuracy of 97.6% relative to the Jansonius map through the application of 10th-order Bezier curves. The PES algorithm assembled an average of 4726 RNFL bundles per fundus image based on 4975 sampling points, obtaining a total of 2,580,505 RNFL bundles based on 2,716,321 sampling points. The influence of ocular parameters could be evaluated for 34 out of 52 VF locations. The ONH-fovea angle and the ONH position in relation to the fovea were the most prominent predictors for variations in the mapping of retinal locations to the pointwise directional angle (<i>p</i> < 0.001). The variation explained by the model (<i>R</i><sup>2</sup> value) ranges from 27.6% for visual field location 15 to 77.8% in location 22, with a mean of 56%. Significant individual variability was found in the mapping of VF sites to the ONH, with a mean standard deviation (95% limit) of 16.55° (median 17.68°) for 50 out of 52 VF locations, ranging from less than 1° to 44.05°. The mean entry angles differed from previous baselines by a range of less than 1° to 23.9° (average difference of 10.6° ± 5.53°), and RMSE of 11.94.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12843398/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Cross-Device and Cross-OS Benchmark of Modern Web Animation Systems.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-15 DOI: 10.3390/jimaging12010045
Tajana Koren Ivančević, Trpimir Jeronim Ježić, Nikolina Stanić Loknar

Although modern web technologies increasingly rely on high-performance rendering methods to support rich visual content across a range of devices and operating systems, the field remains significantly under-researched. The performance of animated visual elements is affected by numerous factors, including browsers, operating systems, GPU acceleration, scripting load, and device limitations. This study systematically evaluates animation performance across multiple platforms using a unified set of circle-based animations implemented with eight web-compatible technologies, including HTML, CSS, SVG, JavaScript, Canvas, and WebGL. Animations were evaluated under controlled feature combinations involving random motion, distance, colour variation, blending, and transformations, with object counts ranging from 10 to 10,000. Measurements were conducted on desktop operating systems (Windows, macOS, Linux) and mobile platforms (iOS, Android), using CPU utilisation, GPU memory usage, and frame rate (FPS) as key metrics. Results show that DOM-based approaches maintain stable performance at 100 animated objects but exhibit notable degradation by 500 objects. Canvas-based rendering extends usability to higher object counts, while WebGL demonstrates the most stable performance at large scales (5000-10,000 objects). These findings provide concrete guidance for selecting appropriate animation technologies based on scene complexity and target platform.

{"title":"A Cross-Device and Cross-OS Benchmark of Modern Web Animation Systems.","authors":"Tajana Koren Ivančević, Trpimir Jeronim Ježić, Nikolina Stanić Loknar","doi":"10.3390/jimaging12010045","DOIUrl":"10.3390/jimaging12010045","url":null,"abstract":"<p><p>Although modern web technologies increasingly rely on high-performance rendering methods to support rich visual content across a range of devices and operating systems, the field remains significantly under-researched. The performance of animated visual elements is affected by numerous factors, including browsers, operating systems, GPU acceleration, scripting load, and device limitations. This study systematically evaluates animation performance across multiple platforms using a unified set of circle-based animations implemented with eight web-compatible technologies, including HTML, CSS, SVG, JavaScript, Canvas, and WebGL. Animations were evaluated under controlled feature combinations involving random motion, distance, colour variation, blending, and transformations, with object counts ranging from 10 to 10,000. Measurements were conducted on desktop operating systems (Windows, macOS, Linux) and mobile platforms (iOS, Android), using CPU utilisation, GPU memory usage, and frame rate (FPS) as key metrics. Results show that DOM-based approaches maintain stable performance at 100 animated objects but exhibit notable degradation by 500 objects. Canvas-based rendering extends usability to higher object counts, while WebGL demonstrates the most stable performance at large scales (5000-10,000 objects). These findings provide concrete guidance for selecting appropriate animation technologies based on scene complexity and target platform.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12843483/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146053603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Dual Stream Deep Learning Framework for Alzheimer's Disease Detection Using MRI Sonification.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-15 DOI: 10.3390/jimaging12010046
Nadia A Mohsin, Mohammed H Abdul Ameer

Alzheimer's Disease (AD) is a progressive brain disorder that affects millions of individuals across the world. It causes gradual damage to brain cells, leading to memory loss and cognitive dysfunction. Although Magnetic Resonance Imaging (MRI) is widely used in AD diagnosis, existing studies rely solely on visual representations, leaving alternative features unexplored. The objective of this study is to explore whether MRI sonification can provide complementary diagnostic information when combined with conventional image-based methods. In this study, we propose a novel dual-stream multimodal framework that integrates 2D MRI slices with their corresponding audio representations. MRI images are transformed into audio signals using multi-scale, multi-orientation Gabor filtering, followed by a Hilbert space-filling curve to preserve spatial locality. The image and sound modalities are processed using a lightweight CNN and YAMNet, respectively, and then fused via logistic regression. The multimodal framework achieved its highest accuracy of 98.2% in distinguishing AD from Cognitively Normal (CN) subjects, 94% for AD vs. Mild Cognitive Impairment (MCI), and 93.2% for MCI vs. CN. This work provides a new perspective and highlights the potential of audio transformation of imaging data for feature extraction and classification.
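The sonification step maps a filtered 2D slice to a 1D signal along a Hilbert space-filling curve so that spatially adjacent pixels stay adjacent in the sequence. The sketch below shows one plausible realisation of that flattening (an assumption, not the authors' code), using the standard distance-to-coordinate Hilbert mapping on a square, power-of-two patch:

```python
# Flatten a 2D patch into a 1D "audio-like" signal in Hilbert-curve order.
import numpy as np

def d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                        # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_flatten(img):
    n = img.shape[0]                       # assumes a square patch with power-of-two side
    coords = [d2xy(n, d) for d in range(n * n)]
    return np.array([img[y, x] for x, y in coords], dtype=np.float32)

patch = np.random.default_rng(0).random((64, 64))   # stand-in for a Gabor-filtered MRI patch
signal = hilbert_flatten(patch)
print(signal.shape)                                  # (4096,) one-dimensional sequence
```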

{"title":"A Dual Stream Deep Learning Framework for Alzheimer's Disease Detection Using MRI Sonification.","authors":"Nadia A Mohsin, Mohammed H Abdul Ameer","doi":"10.3390/jimaging12010046","DOIUrl":"10.3390/jimaging12010046","url":null,"abstract":"<p><p>Alzheimer's Disease (AD) is an advanced brain illness that affects millions of individuals across the world. It causes gradual damage to the brain cells, leading to memory loss and cognitive dysfunction. Although Magnetic Resonance Imaging (MRI) is widely used in AD diagnosis, the existing studies rely solely on the visual representations, leaving alternative features unexplored. The objective of this study is to explore whether MRI sonification can provide complementary diagnostic information when combined with conventional image-based methods. In this study, we propose a novel dual-stream multimodal framework that integrates 2D MRI slices with their corresponding audio representations. MRI images are transformed into audio signals using a multi-scale, multi-orientation Gabor filtering, followed by a Hilbert space-filling curve to preserve spatial locality. The image and sound modalities are processed using a lightweight CNN and YAMNet, respectively, then fused via logistic regression. The experimental results of the multimodal achieved the highest accuracy in distinguishing AD from Cognitively Normal (CN) subjects at 98.2%, 94% for AD vs. Mild Cognitive Impairment (MCI), and 93.2% for MCI vs. CN. This work provides a new perspective and highlights the potential of audio transformation of imaging data for feature extraction and classification.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12842745/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146053827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Deep Feature Fusion Underwater Image Enhancement Model Based on Perceptual Vision Swin Transformer.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2026-01-14 DOI: 10.3390/jimaging12010044
Shasha Tian, Adisorn Sirikham, Jessada Konpang, Chuyang Wang

Underwater optical images are the primary carriers of underwater scene information, playing a crucial role in marine resource exploration, underwater environmental monitoring, and engineering inspection. However, wavelength-dependent absorption and scattering severely deteriorate underwater images, leading to reduced contrast, chromatic distortions, and loss of structural details. To address these issues, we propose a U-shaped underwater image enhancement framework that integrates Swin-Transformer blocks with lightweight attention and residual modules. A Dual-Window Multi-Head Self-Attention (DWMSA) module in the bottleneck models long-range context while preserving fine local structure. A Global-Aware Attention Map (GAMP) adaptively re-weights channels and spatial locations to focus on severely degraded regions. A Feature-Augmentation Residual Network (FARN) stabilizes deep training and emphasizes texture and color fidelity. Trained with a combination of Charbonnier, perceptual, and edge losses, our method achieves state-of-the-art results in PSNR and SSIM, the lowest LPIPS, and improvements in UIQM and UCIQE on the UFO-120 and EUVP datasets, with average metrics of PSNR 29.5 dB, SSIM 0.94, LPIPS 0.17, UIQM 3.62, and UCIQE 0.59. Qualitative results show reduced color cast, restored contrast, and sharper details. Code, weights, and evaluation scripts will be released to support reproducibility.
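Of the three training losses, the Charbonnier term has a compact standard form: a smooth, differentiable approximation of L1, sqrt((x - y)^2 + eps^2) averaged over all pixels. A minimal PyTorch sketch under that assumed form (the authors' loss weighting and epsilon are not specified here):

```python
# Charbonnier loss in its common form; the epsilon value and weighting are assumptions.
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    """Smooth L1-like penalty: mean of sqrt((pred - target)^2 + eps^2)."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

pred = torch.rand(2, 3, 64, 64, requires_grad=True)   # enhanced images (N, C, H, W)
target = torch.rand(2, 3, 64, 64)                     # reference images
loss = charbonnier_loss(pred, target)
loss.backward()
print(float(loss))
```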

{"title":"A Deep Feature Fusion Underwater Image Enhancement Model Based on Perceptual Vision Swin Transformer.","authors":"Shasha Tian, Adisorn Sirikham, Jessada Konpang, Chuyang Wang","doi":"10.3390/jimaging12010044","DOIUrl":"10.3390/jimaging12010044","url":null,"abstract":"<p><p>Underwater optical images are the primary carriers of underwater scene information, playing a crucial role in marine resource exploration, underwater environmental monitoring, and engineering inspection. However, wavelength-dependent absorption and scattering severely deteriorate underwater images, leading to reduced contrast, chromatic distortions, and loss of structural details. To address these issues, we propose a U-shaped underwater image enhancement framework that integrates Swin-Transformer blocks with lightweight attention and residual modules. A Dual-Window Multi-Head Self-Attention (DWMSA) in the bottleneck models long-range context while preserving fine local structure. A Global-Aware Attention Map (GAMP) adaptively re-weights channels and spatial locations to focus on severely degraded regions. A Feature-Augmentation Residual Network (FARN) stabilizes deep training and emphasizes texture and color fidelity. Trained with a combination of Charbonnier, perceptual, and edge losses, our method achieves state-of-the-art results in PSNR and SSIM, the lowest LPIPS, and improvements in UIQM and UCIQE on the UFO-120 and EUVP datasets, with average metrics of PSNR 29.5 dB, SSIM 0.94, LPIPS 0.17, UIQM 3.62, and UCIQE 0.59. Qualitative results show reduced color cast, restored contrast, and sharper details. Code, weights, and evaluation scripts will be released to support reproducibility.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12842990/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146053870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0