
Latest Publications in the Journal of Imaging

Explainable Radiomics-Based Model for Automatic Image Quality Assessment in Breast Cancer DCE MRI Data.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-19 DOI: 10.3390/jimaging11110417
Georgios S Ioannidis, Katerina Nikiforaki, Aikaterini Dovrou, Vassilis Kilintzis, Grigorios Kalliatakis, Oliver Diaz, Karim Lekadir, Kostas Marias

This study aims to develop an explainable radiomics-based model for the automatic assessment of image quality in breast cancer Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data. A cohort of 280 images obtained from a public database was annotated by two clinical experts, resulting in 110 high-quality and 110 low-quality images. The proposed methodology involved the extraction of 819 radiomic features and 2 No-Reference image quality metrics per patient, using both the whole image and the background as regions of interest. Feature extraction was performed under two scenarios: (i) from a sample of 12 slices per patient, and (ii) from the middle slice of each patient. Following model training, a range of machine learning classifiers were applied with explainability assessed through SHapley Additive Explanations (SHAP). The best performance was achieved in the second scenario, where combining features from the whole image and background with a support vector machine classifier yielded sensitivity, specificity, accuracy, and AUC values of 85.51%, 80.01%, 82.76%, and 89.37%, respectively. This proposed model demonstrates potential for integration into clinical practice and may also serve as a valuable resource for large-scale repositories and subgroup analyses aimed at ensuring fairness and explainability.
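The pipeline described above (per-case feature vectors fed to a support vector machine, evaluated with sensitivity, specificity, accuracy, and AUC) can be outlined with scikit-learn. This is a minimal sketch, not the authors' code: the radiomic and No-Reference quality features are assumed to be extracted already, and the random arrays below are placeholders for them.

```python
# Sketch: SVM classification of image quality from pre-extracted radiomic features.
# X and y are random placeholders; in the study each case has 819 radiomic features
# plus 2 No-Reference image quality metrics, labelled high (1) or low (0) quality.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(220, 821))                # placeholder feature matrix
y = rng.integers(0, 2, size=220)               # placeholder quality labels (110/110 in the paper)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", probability=True).fit(scaler.transform(X_tr), y_tr)

prob = clf.predict_proba(scaler.transform(X_te))[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, (prob >= 0.5).astype(int)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:   ", (tp + tn) / len(y_te))
print("AUC:        ", roc_auc_score(y_te, prob))
```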

Citations: 0
Seam Carving Forgery Detection Through Multi-Perspective Explainable AI.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-18 DOI: 10.3390/jimaging11110416
Miguel José das Neves, Felipe Rodrigues Perche Mahlow, Renato Dias de Souza, Paulo Roberto G Hernandes, José Remo Ferreira Brega, Kelton Augusto Pontara da Costa

This paper addresses the critical challenge of detecting content-aware image manipulations, specifically focusing on seam carving forgery. While deep learning models, particularly Convolutional Neural Networks (CNNs), have shown promise in this area, their black-box nature limits their trustworthiness in high-stakes domains like digital forensics. To address this gap, we propose and validate a framework for interpretable forgery detection, termed E-XAI (Ensemble Explainable AI). Conceptually inspired by Ensemble Learning, our framework's novelty lies not in combining predictive models, but in integrating a multi-perspective ensemble of explainability techniques. Specifically, we combine SHAP for fine-grained, pixel-level feature attribution with Grad-CAM for region-level localization to create a more robust and holistic interpretation of a single, custom-trained CNN's decisions. Our approach is validated on a purpose-built, balanced, binary-class dataset of 10,300 images. The results demonstrate high classification performance on an unseen test set, with a 95% accuracy and a 99% precision for the forged class. Furthermore, we analyze the model's robustness against JPEG compression, a common real-world perturbation. More importantly, the application of the E-XAI framework reveals how the model identifies subtle forgery artifacts, providing transparent, visual evidence for its decisions. This work contributes a robust end-to-end pipeline for interpretable image forgery detection, enhancing the trust and reliability of AI systems in information security.
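The "multi-perspective" idea, combining a fine-grained SHAP attribution with a coarser Grad-CAM localization for the same prediction, can be illustrated with a simple fusion of two saliency maps. This is a conceptual sketch under the assumption that both maps have already been computed and resized to the image resolution; the normalise-then-average rule and the w_pixel weight are illustrative choices, not the E-XAI formulation.

```python
# Sketch: fusing a pixel-level attribution map (e.g., SHAP) with a region-level
# heatmap (e.g., Grad-CAM) into one explanation. Random arrays stand in for real maps.
import numpy as np

def normalise(m: np.ndarray) -> np.ndarray:
    """Rescale a saliency map to [0, 1]; a flat map becomes all zeros."""
    m = m.astype(float)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def fuse_explanations(shap_map, gradcam_map, w_pixel=0.5):
    """Weighted average of the fine-grained and the region-level map (same shape)."""
    assert shap_map.shape == gradcam_map.shape
    return w_pixel * normalise(np.abs(shap_map)) + (1 - w_pixel) * normalise(gradcam_map)

fused = fuse_explanations(np.random.randn(224, 224), np.random.rand(224, 224))
print(fused.shape, float(fused.min()), float(fused.max()))
```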

Citations: 0
Few-Shot Adaptation of Foundation Vision Models for PCB Defect Inspection.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-17 DOI: 10.3390/jimaging11110415
Sang-Jeong Lee

Automated Optical Inspection (AOI) of Printed Circuit Boards (PCBs) suffers from scarce labeled data and frequent domain shifts caused by variations in camera optics, illumination, and product design. These limitations hinder the development of accurate and reliable deep-learning models in manufacturing settings. To address this challenge, this study systematically benchmarks three Parameter-Efficient Fine-Tuning (PEFT) strategies-Linear Probe, Low-Rank Adaptation (LoRA), and Visual Prompt Tuning (VPT)-applied to two representative foundation vision models: the Contrastive Language-Image Pretraining Vision Transformer (CLIP-ViT-B/16) and the Self-Distillation with No Labels Vision Transformer (DINOv2-S/14). The models are evaluated on six-class PCB defect classification tasks under few-shot (k = 5, 10, 20) and full-data regimes, analyzing both performance and reliability. Experiments show that VPT achieves 0.99 ± 0.01 accuracy and 0.998 ± 0.001 macro-Area Under the Precision-Recall Curve (macro-AUPRC), reducing classification error by approximately 65% compared with Linear and LoRA while tuning fewer than 1.5% of backbone parameters. Reliability, assessed by the stability of precision-recall behavior across different decision thresholds, improved as the number of labeled samples increased. Furthermore, class-wise and few-shot analyses revealed that VPT adapts more effectively to rare defect types such as Spur and Spurious Copper while maintaining near-ceiling performance on simpler categories (Short, Pinhole). These findings collectively demonstrate that prompt-based adaptation offers a quantitatively favorable trade-off between accuracy, efficiency, and reliability. Practically, this positions VPT as a scalable strategy for factory-level AOI, enabling the rapid deployment of robust defect inspection models even when labeled data is scarce.
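Of the three PEFT strategies benchmarked, LoRA is the easiest to show in a few lines: a pretrained linear layer is frozen and only a low-rank update B·A is trained. The sketch below is a generic PyTorch illustration (the class name, rank, and alpha are assumptions), not the configuration used in the paper.

```python
# Sketch: a frozen pretrained linear layer wrapped with a trainable low-rank (LoRA) update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base output plus the scaled low-rank correction x A^T B^T
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.2f}%)")
```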

Citations: 0
Toward Smarter Orthopedic Care: Classifying Plantar Footprints from RGB Images Using Vision Transformers and CNNs.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-16 DOI: 10.3390/jimaging11110414
Lidia Yolanda Ramírez-Rios, Jesús Everardo Olguín-Tiznado, Edgar Rene Ramos-Acosta, Everardo Inzunza-Gonzalez, Julio César Cano-Gutiérrez, Enrique Efrén García-Guerrero, Claudia Camargo-Wilson

The anatomical structure of the foot can be assessed for orthopedic intervention by examining the plantar footprint. Specific foot types are associated with multiple musculoskeletal disorders, which are among the main ailments affecting the lower extremities, so accurate classification of the footprint is essential for early diagnosis. This work aims to develop a method for accurately classifying the plantar footprint and hindfoot, specifically with respect to the sagittal plane. A custom image dataset was created, comprising 603 RGB plantar images that were modified and augmented. Six state-of-the-art models were trained and evaluated: swin_tiny_patch4_window7_224, convnextv2_tiny, deit3_base_patch16_224, xception41, inception-v4, and efficientnet_b0. Among them, the swin_tiny_patch4_window7_224 model achieved 98.013% accuracy, demonstrating its potential as a reliable and low-cost tool for clinical screening and diagnosis of foot-related conditions.
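The backbone names listed above follow the timm model registry, so a fine-tuning run can be sketched as below. This assumes timm was the source of the pretrained weights and uses three footprint classes as a placeholder; neither detail is stated in the abstract.

```python
# Sketch: one fine-tuning step of a Swin-Tiny backbone on RGB plantar images (toy batch).
# timm.create_model downloads ImageNet weights when pretrained=True.
import timm
import torch

model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True, num_classes=3)
optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)        # stand-in for a batch of plantar images
labels = torch.randint(0, 3, (4,))          # stand-in footprint class labels

model.train()
optimiser.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimiser.step()
print("loss after one step:", float(loss))
```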

Citations: 0
Sentinel-2-Based Forest Health Survey of ICP Forests Level I and II Plots in Hungary.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-14 DOI: 10.3390/jimaging11110413
Tamás Molnár, Bence Bolla, Orsolya Szabó, András Koltay

Forest damage has been recorded with increasing frequency over the past decade in both Europe and Hungary, primarily due to prolonged droughts, causing a decline in forest health. Within the ICP Forests framework, forest damage has been monitored for decades; however, this ground-based survey work is labour-intensive and time-consuming. Satellite-based remote sensing, combined with the ground-based ICP Forests datasets, offers a rapid and efficient method for assessing large-scale damage events. This study utilised cloud computing and Sentinel-2 satellite imagery to monitor forest health and detect anomalies. Standardised NDVI (Z NDVI) maps were produced for the period from 2017 to 2023 to identify disturbances in the forest. The research focused on seven active ICP Forests Level II plots and 78 Level I plots in Hungary. Z NDVI values were divided into five categories based on damage severity, and the Level II field data agreed with the satellite imagery. In 2017, severe damage was caused by late frost and wind; however, the forest recovered by 2018. Further declines were observed in 2021 due to wind and in 2022 due to drought. Data from the ICP Forests Level I plots, which represent forest condition in Hungary, indicated that 80% of the monitored stands were damaged, with 30% suffering moderate damage and 15% experiencing severe damage. The Z NDVI classifications aligned with the field data, showing widespread forest damage across the country.
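The Z NDVI anomaly behind the damage maps is a per-pixel standardisation of the current NDVI against its multi-year baseline. The sketch below shows that computation; the five-class thresholds are illustrative assumptions, not the cut points used in the study.

```python
# Sketch: standardised NDVI (Z NDVI) anomaly and an illustrative five-class severity map.
import numpy as np

def z_ndvi(ndvi_current: np.ndarray, ndvi_baseline: np.ndarray) -> np.ndarray:
    """ndvi_baseline: stack of reference years with shape (years, H, W)."""
    mean = ndvi_baseline.mean(axis=0)
    std = ndvi_baseline.std(axis=0)
    return (ndvi_current - mean) / np.where(std > 0, std, 1e-6)

def severity_class(z: np.ndarray) -> np.ndarray:
    """Bin Z scores into 5 classes: 0 = severe damage ... 4 = no apparent damage."""
    cut_points = [-2.0, -1.5, -1.0, -0.5]      # illustrative thresholds
    return np.digitize(z, cut_points)

baseline = np.random.uniform(0.5, 0.9, size=(6, 100, 100))   # toy 2017-2022 NDVI stack
current = np.random.uniform(0.3, 0.9, size=(100, 100))        # toy 2023 NDVI
classes = severity_class(z_ndvi(current, baseline))
print("pixels per severity class:", np.bincount(classes.ravel(), minlength=5))
```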

Citations: 0
Boundary-Guided Differential Attention: Enhancing Camouflaged Object Detection Accuracy.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-14 DOI: 10.3390/jimaging11110412
Hongliang Zhang, Bolin Xu, Sanxin Jiang

Camouflaged Object Detection (COD) is a challenging computer vision task aimed at accurately identifying and segmenting objects seamlessly blended into their backgrounds. This task has broad applications across medical image segmentation, defect detection, agricultural image detection, security monitoring, and scientific research. Traditional COD methods often struggle with precise segmentation due to the high similarity between camouflaged objects and their surroundings. In this study, we introduce a Boundary-Guided Differential Attention Network (BDA-Net) to address these challenges. BDA-Net first extracts boundary features by fusing multi-scale image features and applying channel attention. Subsequently, it employs a differential attention mechanism, guided by these boundary features, to highlight camouflaged objects and suppress background information. The weighted features are then progressively fused to generate accurate camouflage object masks. Experimental results on the COD10K, NC4K, and CAMO datasets demonstrate that BDA-Net outperforms most state-of-the-art COD methods, achieving higher accuracy. Here we show that our approach improves detection accuracy by up to 3.6% on key metrics, offering a robust solution for precise camouflaged object segmentation.
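A boundary-guided weighting of features can be illustrated with a small PyTorch module: the boundary map is turned into a gate that amplifies responses near the object and subtracts the rest. This is only a conceptual sketch of the idea; the module name and the exact weighting are assumptions and do not reproduce BDA-Net's architecture.

```python
# Sketch: boundary features gate the image features; background responses are suppressed.
import torch
import torch.nn as nn

class BoundaryGuidedAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # project the single-channel boundary map to a per-channel gate in [0, 1]
        self.to_gate = nn.Sequential(nn.Conv2d(1, channels, kernel_size=3, padding=1),
                                     nn.Sigmoid())

    def forward(self, feats: torch.Tensor, boundary: torch.Tensor) -> torch.Tensor:
        gate = self.to_gate(boundary)                      # (B, C, H, W)
        return feats * gate - feats * (1.0 - gate)         # differential weighting

feats = torch.randn(2, 64, 88, 88)      # toy multi-scale image features
boundary = torch.rand(2, 1, 88, 88)     # toy boundary probability map
print(BoundaryGuidedAttention(64)(feats, boundary).shape)
```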

Citations: 0
TASA: Text-Anchored State-Space Alignment for Long-Tailed Image Classification.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-13 DOI: 10.3390/jimaging11110410
Long Li, Tinglei Jia, Huaizhi Yue, Huize Cheng, Yongfeng Bu, Zhaoyang Zhang

Long-tailed image classification remains challenging for vision-language models. Head classes dominate training while tail classes are underrepresented and noisy, and short prompts with weak text supervision further amplify head bias. This paper presents TASA, an end-to-end framework that stabilizes textual supervision and enhances cross-modal fusion. A Semantic Distribution Modulation (SDM) module constructs class-specific text prototypes by cosine-weighted fusion of multiple LLM-generated descriptions with a canonical template, providing stable and diverse semantic anchors without training text parameters. Dual-Space Cross-Modal Fusion (DCF) module incorporates selective-scan state-space blocks into both image and text branches, enabling bidirectional conditioning and efficient feature fusion through a lightweight multilayer perceptron. Together with a margin-aware alignment loss, TASA aligns images with class prototypes for classification without requiring paired image-text data or per-class prompt tuning. Experiments on CIFAR-10/100-LT, ImageNet-LT, and Places-LT demonstrate consistent improvements across many-, medium-, and few-shot groups. Ablation studies confirm that DCF yields the largest single-module gain, while SDM and DCF combined provide the most robust and balanced performance. These results highlight the effectiveness of integrating text-driven prototypes with state-space fusion for long-tailed classification.
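The Semantic Distribution Modulation step, fusing several LLM-generated class descriptions into one prototype by weighting each against a canonical template, can be sketched with plain cosine arithmetic. The embeddings below are random stand-ins, and the softmax weighting is an assumption about the exact fusion rule.

```python
# Sketch: cosine-weighted fusion of description embeddings into a class text prototype.
import numpy as np

def l2norm(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def class_prototype(desc_emb: np.ndarray, template_emb: np.ndarray) -> np.ndarray:
    """desc_emb: (n_descriptions, d) LLM description embeddings; template_emb: (d,)."""
    desc_emb, template_emb = l2norm(desc_emb), l2norm(template_emb)
    sims = desc_emb @ template_emb                    # cosine similarity to the template
    weights = np.exp(sims) / np.exp(sims).sum()       # softmax-normalised weights
    return l2norm(weights @ desc_emb)                 # weighted average, re-normalised

prototype = class_prototype(np.random.randn(8, 512), np.random.randn(512))
print(prototype.shape, float(np.linalg.norm(prototype)))
```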

Citations: 0
Neural Radiance Fields: Driven Exploration of Visual Communication and Spatial Interaction Design for Immersive Digital Installations.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-13 DOI: 10.3390/jimaging11110411
Wanshu Li, Yuanhui Hu

In immersive digital devices, high environmental complexity can lead to rendering delays and loss of interactive details, resulting in a fragmented experience. This paper proposes a lightweight NeRF (Neural Radiance Fields) modeling and multimodal perception fusion method. First, a sparse hash code is constructed based on Instant-NGP (Instant Neural Graphics Primitives) to accelerate scene radiance field generation. Second, parameter distillation and channel pruning are used to reduce the model's size and reduce computational overheads. Next, multimodal data from a depth camera and an IMU (Inertial Measurement Unit) is fused, and Kalman filtering is used to improve pose tracking accuracy. Finally, the optimized NeRF model is integrated into the Unity engine, utilizing custom shaders and asynchronous rendering to achieve low-latency viewpoint responsiveness. Experiments show that the file size of this method in high-complexity scenes is only 79.5 MB ± 5.3 MB, and the first loading time is only 2.9 s ± 0.4 s, effectively reducing rendering latency. The SSIM is 0.951 ± 0.016 at 1.5 m/s, and the GME is 7.68 ± 0.15 at 1.5 m/s. It can stably restore texture details and edge sharpness under dynamic viewing angles. In scenarios that support 3-5 people interacting simultaneously, the average interaction response delay is only 16.3 ms, and the average jitter error is controlled at 0.12°, significantly improving spatial interaction performance. In conclusion, this study provides effective technical solutions for high-quality immersive interaction in complex public scenarios. Future work will explore the framework's adaptability in larger-scale dynamic environments and further optimize the network synchronization mechanism for multi-user concurrency.
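The IMU-plus-depth-camera fusion step mentioned above is, at its core, a predict-correct Kalman loop: the IMU supplies the motion prediction and the camera supplies a noisier absolute measurement. The one-dimensional filter below is a minimal sketch of that idea, not the paper's multimodal implementation; the noise parameters q and r are assumed values.

```python
# Sketch: 1-D Kalman filter blending IMU motion predictions with camera position measurements.
import numpy as np

def kalman_1d(camera_meas, imu_steps, q=1e-3, r=1e-2):
    """camera_meas: absolute positions; imu_steps: per-step displacement estimates."""
    x, p = camera_meas[0], 1.0                    # initial state and variance
    track = []
    for z, u in zip(camera_meas, imu_steps):
        x, p = x + u, p + q                       # predict with the IMU motion
        k = p / (p + r)                           # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p     # correct with the camera measurement
        track.append(x)
    return np.array(track)

t = np.linspace(0.0, 1.0, 50)
truth = np.sin(2 * np.pi * t)                     # toy trajectory
fused = kalman_1d(truth + 0.05 * np.random.randn(50), np.gradient(truth))
print("RMSE vs. truth:", float(np.sqrt(np.mean((fused - truth) ** 2))))
```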

Citations: 0
Fully Automated AI-Based Digital Workflow for Mirroring of Healthy and Defective Craniofacial Models.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-12 DOI: 10.3390/jimaging11110407
Michel Beyer, Julian Grossi, Alexandru Burde, Sead Abazi, Lukas Seifert, Joachim Polligkeit, Neha Umakant Chodankar, Florian M Thieringer

The accurate reconstruction of craniofacial defects requires the precise segmentation and mirroring of healthy anatomy. Conventional workflows rely on manual interaction, making them time-consuming and subject to operator variability. This study developed and validated a fully automated digital pipeline that integrates deep learning-based segmentation with algorithmic mirroring for craniofacial reconstruction. A total of 388 cranial CT scans were used to train a three-dimensional nnU-Net model for skull and mandible segmentation. A Principal Component Analysis-Iterative Closest Point (PCA-ICP) algorithm was then applied to compute the sagittal symmetry plane and perform mirroring. Automated results were compared with expert-generated segmentations and manually defined symmetry planes using Dice Similarity Coefficient (DSC), Mean Surface Distance (MSD), Hausdorff Distance (HD), and angular deviation. The nnU-Net achieved high segmentation accuracy for both the mandible (mean DSC 0.956) and the skull (mean DSC 0.965). Mirroring results showed minimal angular deviation from expert reference planes (mandible: 1.32° ± 0.71° in defect cases, 1.58° ± 1.12° in intact cases; skull: 1.75° ± 0.84° in defect cases, 1.15° ± 0.81° in intact cases). The presence of defects did not significantly affect accuracy. This automated workflow demonstrated robust performance and clinical applicability, offering standardized, reproducible, and time-efficient planning for craniofacial reconstruction.
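The PCA half of the PCA-ICP mirroring step can be sketched directly: a principal axis through the centroid approximates the sagittal-plane normal, and the vertices are reflected across that plane. The ICP refinement against the original anatomy is omitted here, and which principal component corresponds to the left-right axis is an assumption of the toy example.

```python
# Sketch: approximate a sagittal symmetry plane by PCA and mirror a point cloud across it.
import numpy as np

def mirror_across_pca_plane(points: np.ndarray, axis_index: int = 0) -> np.ndarray:
    """points: (N, 3) vertices; axis_index picks the principal direction used as plane normal."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    _, _, vt = np.linalg.svd(centred, full_matrices=False)   # rows of vt = principal directions
    normal = vt[axis_index]                                   # unit normal of the mirror plane
    reflected = centred - 2.0 * np.outer(centred @ normal, normal)
    return reflected + centroid

points = np.random.randn(1000, 3) * np.array([30.0, 60.0, 45.0])   # toy cranial point cloud (mm)
mirrored = mirror_across_pca_plane(points)
print(mirrored.shape)
```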

Citations: 0
High-Resolution Peripheral Quantitative Computed Tomography (HR-pQCT) for Assessment of Avascular Necrosis of the Lunate.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date: 2025-11-12 DOI: 10.3390/jimaging11110406
Esin Rothenfluh, Georg F Erbach, Léna G Dietrich, Laura De Pellegrin, Daniela A Frauchiger, Rainer J Egli

This exploratory study investigates the feasibility and diagnostic value of high-resolution peripheral quantitative computed tomography (HR-pQCT) in detecting structural and microarchitectural changes in lunate avascular necrosis (AVN), or Kienböck's disease. Five adult patients with unilateral AVN underwent either MRI or CT, alongside HR-pQCT of both wrists. Imaging features such as subchondral remodeling, joint space narrowing, and bone fragmentation were assessed across modalities. HR-pQCT detected at least one additional pathological feature not seen on MRI or CT in four of five patients and revealed early subchondral changes in two contralateral asymptomatic wrists. Quantitative measurements of bone volume fraction (BV/TV) further indicated altered trabecular structure correlating with disease stage. These findings suggest that HR-pQCT may offer enhanced sensitivity for early-stage AVN and better delineation of disease extent, which is critical for informed surgical planning. While limited by small sample size, this study provides preliminary evidence supporting HR-pQCT as a complementary imaging tool in the assessment of lunate AVN, with potential to improve early detection, staging accuracy, and individualized treatment strategies.
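The bone volume fraction quoted above is simply the share of bone voxels inside the region of interest. A minimal sketch, assuming binary arrays for the trabecular segmentation and the lunate mask already exist:

```python
# Sketch: bone volume fraction BV/TV = bone voxels inside the region / voxels in the region.
import numpy as np

def bv_tv(bone_mask: np.ndarray, region_mask: np.ndarray) -> float:
    """bone_mask, region_mask: boolean 3-D arrays of identical shape."""
    total = region_mask.sum()
    if total == 0:
        return float("nan")
    return float(np.logical_and(bone_mask, region_mask).sum()) / float(total)

region = np.zeros((64, 64, 64), dtype=bool)
region[16:48, 16:48, 16:48] = True                 # toy lunate volume of interest
bone = np.random.rand(64, 64, 64) > 0.7            # toy trabecular segmentation
print(f"BV/TV = {bv_tv(bone, region):.3f}")
```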

Citations: 0