
Journal of Imaging: Latest Publications

FF-Mamba-YOLO: An SSM-Based Benchmark for Forest Fire Detection in UAV Remote Sensing Images.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-13 DOI: 10.3390/jimaging12010043
Binhua Guo, Dinghui Liu, Zhou Shen, Tiebin Wang

Timely and accurate detection of forest fires through unmanned aerial vehicle (UAV) remote sensing target detection technology is of paramount importance. However, multiscale targets and complex environmental interference in UAV remote sensing images pose significant challenges during detection tasks. To address these obstacles, this paper presents FF-Mamba-YOLO, a novel framework based on the principles of Mamba and YOLO (You Only Look Once) that leverages innovative modules and architectures to overcome these limitations. Specifically, we introduce MFEBlock and MFFBlock based on state space models (SSMs) in the backbone and neck parts of the network, respectively, enabling the model to effectively capture global dependencies. Second, we construct CFEBlock, a module that performs feature enhancement before SSM processing, improving local feature processing capabilities. Furthermore, we propose MGBlock, which adopts a dynamic gating mechanism, enhancing the model's adaptive processing capabilities and robustness. Finally, we enhance the structure of Path Aggregation Feature Pyramid Network (PAFPN) to improve feature fusion quality and introduce DySample to enhance image resolution without significantly increasing computational costs. Experimental results on our self-constructed forest fire image dataset demonstrate that the model achieves 67.4% mAP@50, 36.3% mAP@50:95, and 64.8% precision, outperforming previous state-of-the-art methods. These results highlight the potential of FF-Mamba-YOLO in forest fire monitoring.
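As a rough illustration of the gating idea behind MGBlock mentioned in the abstract, the sketch below blends a local (convolutional) feature map with a global (SSM-style) feature map through a learned per-channel gate. The module name, layer choices, and tensor shapes are assumptions for illustration only; the abstract does not specify MGBlock's internals.

```python
import torch
import torch.nn as nn

class GatedFusionBlock(nn.Module):
    """Minimal sketch of a dynamic gating module in the spirit of the
    paper's MGBlock (hypothetical design, not the authors' code): a learned
    per-channel gate decides how much of the global (SSM-style) branch to
    blend with the local convolutional branch."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # global context per channel
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),                     # gate values in (0, 1)
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(local_feat)             # (B, C, 1, 1)
        return g * global_feat + (1 - g) * local_feat

# toy usage
x_local = torch.randn(1, 64, 80, 80)
x_global = torch.randn(1, 64, 80, 80)
print(GatedFusionBlock(64)(x_local, x_global).shape)  # torch.Size([1, 64, 80, 80])
```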

Citations: 0
GLCN: Graph-Aware Locality-Enhanced Cross-Modality Re-ID Network.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-13 DOI: 10.3390/jimaging12010042
Junjie Cao, Yuhang Yu, Rong Rong, Xing Xie

Cross-modality person re-identification faces challenges such as illumination discrepancies, local occlusions, and inconsistent modality structures, leading to misalignment and sensitivity issues. We propose GLCN, a framework that addresses these problems by enhancing representation learning through locality enhancement, cross-modality structural alignment, and intra-modality compactness. Key components include the Locality-Preserved Cross-branch Fusion (LPCF) module, which combines Local-Positional-Channel Gating (LPCG) for local region and positional sensitivity; Cross-branch Context Interpolated Attention (CCIA) for stable cross-branch consistency; and Graph-Enhanced Center Geometry Alignment (GE-CGA), which aligns class-center similarity structures across modalities to preserve category-level relationships. We also introduce Intra-Modal Prototype Discrepancy Mining Loss (IPDM-Loss) to reduce intra-class variance and improve inter-class separation, thereby creating more compact identity structures in both RGB and IR spaces. Extensive experiments on SYSU-MM01, RegDB, and other benchmarks demonstrate the effectiveness of our approach.
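A minimal sketch of the center-geometry alignment idea attributed to GE-CGA: per-identity class centers are computed in each modality, and the difference between their center-to-center cosine similarity structures is penalised. The function name and the MSE criterion are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def center_similarity_alignment(rgb_emb, ir_emb, labels):
    """Hedged sketch of a GE-CGA-style loss: align the class-center
    similarity structure of the RGB modality with that of the IR modality."""
    classes = labels.unique()
    rgb_centers = torch.stack([rgb_emb[labels == c].mean(0) for c in classes])
    ir_centers = torch.stack([ir_emb[labels == c].mean(0) for c in classes])
    # cosine similarity structure among class centers, per modality
    sim_rgb = F.normalize(rgb_centers, dim=1) @ F.normalize(rgb_centers, dim=1).T
    sim_ir = F.normalize(ir_centers, dim=1) @ F.normalize(ir_centers, dim=1).T
    return F.mse_loss(sim_rgb, sim_ir)

# toy usage: 6 samples from 3 identities, 128-D embeddings per modality
labels = torch.tensor([0, 0, 1, 1, 2, 2])
loss = center_similarity_alignment(torch.randn(6, 128), torch.randn(6, 128), labels)
print(loss.item())
```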

Citations: 0
Calibrated Transformer Fusion for Dual-View Low-Energy CESM Classification.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-13 DOI: 10.3390/jimaging12010041
Ahmed A H Alkurdi, Amira Bibo Sallow

Contrast-enhanced spectral mammography (CESM) provides low-energy images acquired in standard craniocaudal (CC) and mediolateral oblique (MLO) views, and clinical interpretation relies on integrating both views. This study proposes a dual-view classification framework that combines deep CNN feature extraction with transformer-based fusion for breast-side classification using low-energy (DM) images from CESM acquisitions (Normal vs. Tumorous; benign and malignant merged). The evaluation was conducted using 5-fold stratified group cross-validation with patient-level grouping to prevent leakage across folds. The final configuration (Model E) integrates dual-backbone feature extraction, transformer fusion, MC-dropout inference for uncertainty estimation, and post hoc logistic calibration. Across the five held-out test folds, Model E achieved a mean accuracy of 96.88% ± 2.39% and a mean F1-score of 97.68% ± 1.66%. The mean ROC-AUC and PR-AUC were 0.9915 ± 0.0098 and 0.9968 ± 0.0029, respectively. Probability quality was supported by a mean Brier score of 0.0236 ± 0.0145 and a mean expected calibration error (ECE) of 0.0334 ± 0.0171. An ablation study (Models A-E) was also reported to quantify the incremental contribution of dual-view input, transformer fusion, and uncertainty calibration. Within the limits of this retrospective single-center setting, these results suggest that dual-view transformer fusion can provide strong discrimination while also producing calibrated probabilities and uncertainty outputs that are relevant for decision support.
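The abstract reports expected calibration error (ECE) alongside the Brier score; the snippet below is a generic binned-ECE computation of the kind typically used for such reporting, not the authors' code.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Standard equal-width binned ECE for a binary classifier:
    weighted average gap between predicted confidence and accuracy per bin."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            conf = probs[mask].mean()   # mean predicted probability in bin
            acc = labels[mask].mean()   # empirical positive rate in bin
            ece += mask.mean() * abs(acc - conf)
    return ece

# toy usage: predicted tumour probabilities vs. ground-truth labels
print(expected_calibration_error([0.9, 0.8, 0.2, 0.6], [1, 1, 0, 0]))
```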

Citations: 0
A Dual-UNet Diffusion Framework for Personalized Panoramic Generation.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-11 DOI: 10.3390/jimaging12010040
Jing Shen, Leigang Huo, Chunlei Huo, Shiming Xiang

While text-to-image and customized generation methods demonstrate strong capabilities in single-image generation, they fall short in supporting immersive applications that require coherent 360° panoramas. Conversely, existing panorama generation models lack customization capabilities. In panoramic scenes, reference objects often appear as minor background elements and may be multiple in number, while reference images across different views exhibit weak correlations. To address these challenges, we propose a diffusion-based framework for customized multi-view image generation. Our approach introduces a decoupled feature injection mechanism within a dual-UNet architecture to handle weakly correlated reference images, effectively integrating spatial information by concurrently feeding both reference images and noise into the denoising branch. A hybrid attention mechanism enables deep fusion of reference features and multi-view representations. Furthermore, a data augmentation strategy facilitates viewpoint-adaptive pose adjustments, and panoramic coordinates are employed to guide multi-view attention. The experimental results demonstrate our model's effectiveness in generating coherent, high-quality customized multi-view images.
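A toy sketch of the idea of feeding reference images and noise into the denoising branch together: the noisy panorama latent is concatenated channel-wise with a reference latent before a small denoiser predicts the noise residual. The module, channel counts, and latent sizes are placeholders; the paper's dual-UNet, hybrid attention, and panoramic-coordinate guidance are not reproduced.

```python
import torch
import torch.nn as nn

class RefConditionedDenoiser(nn.Module):
    """Illustrative only: denoising step conditioned on a reference latent,
    loosely matching the abstract's joint feeding of reference and noise."""
    def __init__(self, latent_ch: int = 4):
        super().__init__()
        # input = noisy latent concatenated with reference latent
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch * 2, 64, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),  # predicts the noise residual
        )

    def forward(self, noisy_latent, ref_latent):
        return self.net(torch.cat([noisy_latent, ref_latent], dim=1))

noisy = torch.randn(1, 4, 64, 128)   # toy panoramic latent
ref = torch.randn(1, 4, 64, 128)
print(RefConditionedDenoiser()(noisy, ref).shape)  # torch.Size([1, 4, 64, 128])
```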

Citations: 0
Self-Supervised Learning of Deep Embeddings for Classification and Identification of Dental Implants.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-09 DOI: 10.3390/jimaging12010039
Amani Almalki, Abdulrahman Almalki, Longin Jan Latecki

This study proposes an automated system using deep learning-based object detection to identify implant systems, leveraging recent progress in self-supervised learning, specifically masked image modeling (MIM). We advocate for self-pre-training, emphasizing its advantages when acquiring suitable pre-training data is challenging. The proposed Masked Deep Embedding (MDE) pre-training method, extending the masked autoencoder (MAE) transformer, significantly enhances dental implant detection performance compared to baselines. Specifically, the proposed method achieves a best detection performance of AP = 96.1, outperforming supervised ViT and MAE baselines by up to +2.9 AP. In addition, we address the absence of a comprehensive dataset for implant design, enhancing an existing dataset under dental expert supervision. This augmentation includes annotations for implant design, such as coronal, middle, and apical parts, resulting in a unique Implant Design Dataset (IDD). The contributions encompass employing self-supervised learning for limited dental radiograph data, replacing MAE's patch reconstruction with patch embeddings, achieving substantial performance improvement in implant detection, and expanding possibilities through the labeling of implant design. This study paves the way for AI-driven solutions in implant dentistry, providing valuable tools for dentists and patients facing implant-related challenges.
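For orientation, the snippet below shows the standard random patch-masking step used in MAE-style self-pre-training, which MDE builds on; the MDE-specific switch of reconstruction targets to patch embeddings is not shown, and the helper is illustrative rather than the authors' code.

```python
import torch

def random_patch_mask(batch: int, num_patches: int, mask_ratio: float = 0.75):
    """Random patch masking as in MAE-style pre-training: returns the indices
    of visible patches (fed to the encoder) and a boolean mask of hidden ones."""
    noise = torch.rand(batch, num_patches)        # one random score per patch
    ids_shuffle = noise.argsort(dim=1)            # random permutation per sample
    len_keep = int(num_patches * (1 - mask_ratio))
    ids_keep = ids_shuffle[:, :len_keep]          # visible patches
    mask = torch.ones(batch, num_patches, dtype=torch.bool)  # True = masked
    batch_idx = torch.arange(batch).unsqueeze(1)
    mask[batch_idx, ids_keep] = False             # mark visible patches
    return ids_keep, mask

ids_keep, mask = random_patch_mask(batch=2, num_patches=196)
print(ids_keep.shape, mask.float().mean().item())  # roughly 0.75 of patches masked
```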

Citations: 0
SCT-Diff: Seamless Contextual Tracking via Diffusion Trajectory.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-09 DOI: 10.3390/jimaging12010038
Guohao Nie, Xingmei Wang, Debin Zhang, He Wang

Existing detection-based trackers exploit temporal contexts by updating appearance models or modeling target motion. However, the sequential one-shot integration of temporal priors risks amplifying error accumulation, as frame-level template matching restricts comprehensive spatiotemporal analysis. To address this, we propose SCT-Diff, a video-level framework that holistically estimates target trajectories. Specifically, SCT-Diff processes video clips globally via a diffusion model to incorporate bidirectional spatiotemporal awareness, where reverse diffusion steps progressively refine noisy trajectory proposals into optimal predictions. Crucially, SCT-Diff enables iterative correction of historical trajectory hypotheses by observing future contexts within a sliding time window. This closed-loop feedback from future frames preserves temporal consistency and breaks the error propagation chain under complex appearance variations. For joint modeling of appearance and motion dynamics, we formulate trajectories as unified discrete token sequences. The designed Mamba-based expert decoder bridges visual features with language-formulated trajectories, enabling lightweight yet coherent sequence modeling. Extensive experiments demonstrate SCT-Diff's superior efficiency and performance, achieving 75.4% AO on GOT-10k while maintaining real-time computational efficiency.
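The closed-loop, sliding-window behaviour described above can be pictured with the toy loop below: each new frame triggers a joint re-estimation of all boxes inside the window, so earlier hypotheses can be corrected once future context is available. The refine_window callable stands in for the reverse-diffusion refinement and is a hypothetical placeholder, not the authors' API.

```python
def track_video(frames, init_box, refine_window, window=8):
    """Sliding-window trajectory estimation sketch: past boxes inside the
    window are revised together with the newest frame's prediction."""
    trajectory = [init_box]
    for t in range(1, len(frames)):
        start = max(0, t - window + 1)
        clip = frames[start:t + 1]
        # jointly refine every box covered by the window, including past ones
        refined = refine_window(clip, trajectory[start:t])
        trajectory[start:t] = refined[:-1]   # corrected history
        trajectory.append(refined[-1])       # newest prediction
    return trajectory

# toy usage: a dummy refiner that keeps the history and carries the last box forward
dummy_refine = lambda clip, past: past + [past[-1]]
print(len(track_video(frames=list(range(5)), init_box=[0, 0, 10, 10],
                      refine_window=dummy_refine)))  # 5 boxes, one per frame
```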

Citations: 0
Empirical Evaluation of UNet for Segmentation of Applicable Surfaces for Seismic Sensor Installation.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-08 DOI: 10.3390/jimaging12010034
Mikhail Uzdiaev, Marina Astapova, Andrey Ronzhin, Aleksandra Figurek

The deployment of wireless seismic nodal systems necessitates the efficient identification of optimal locations for sensor installation, considering factors such as ground stability and the absence of interference. Semantic segmentation of satellite imagery has advanced significantly, yet its application to this specific task remains unexplored. This work presents a baseline empirical evaluation of the U-Net architecture for the semantic segmentation of surfaces applicable for seismic sensor installation. We utilize a novel dataset of Sentinel-2 multispectral images, specifically labeled for this purpose. The study investigates the impact of pretrained encoders (EfficientNetB2, Cross-Stage Partial Darknet53 (CSPDarknet53), and the Multi-Axis Vision Transformer (MaxViT)), different combinations of Sentinel-2 spectral bands (Red, Green, Blue (RGB); RGB+Near Infrared (NIR); the 10 bands with 10 and 20 m/pixel spatial resolution; and the full 13-band set), and a technique for improving small object segmentation by modifying the input convolutional layer stride. Experimental results demonstrate that the CSPDarknet53 encoder generally outperforms the others (IoU = 0.534, Precision = 0.716, Recall = 0.635). The combination of RGB and Near-Infrared bands (10 m/pixel resolution) yielded the most robust performance across most configurations. Reducing the input stride from 2 to 1 proved beneficial for segmenting small linear objects like roads. The findings establish a baseline for this novel task and provide practical insights for optimizing deep learning models in the context of automated seismic nodal network installation planning.
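The stride modification mentioned above can be illustrated as follows: the encoder's input (stem) convolution is rebuilt with stride 1 while reusing its pretrained weights, so small linear structures such as roads are not lost at the first downsampling step. Which layer acts as the stem depends on the chosen encoder, and the helper name is illustrative.

```python
import torch
import torch.nn as nn

def set_stem_stride_to_one(conv: nn.Conv2d) -> nn.Conv2d:
    """Sketch of the small-object trick: rebuild an input convolution with
    stride 1 instead of 2, copying its pretrained weights."""
    new_conv = nn.Conv2d(
        conv.in_channels, conv.out_channels,
        kernel_size=conv.kernel_size, stride=1,
        padding=conv.padding, bias=conv.bias is not None,
    )
    new_conv.weight.data.copy_(conv.weight.data)   # reuse pretrained kernel
    if conv.bias is not None:
        new_conv.bias.data.copy_(conv.bias.data)
    return new_conv

stem = nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1)  # e.g. RGB+NIR input
stem = set_stem_stride_to_one(stem)
print(stem(torch.randn(1, 4, 256, 256)).shape)  # spatial size preserved: (1, 32, 256, 256)
```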

Citations: 0
Degradation-Aware Multi-Stage Fusion for Underwater Image Enhancement.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-08 DOI: 10.3390/jimaging12010037
Lian Xie, Hao Chen, Jin Shu

Underwater images frequently suffer from color casts, low illumination, and blur due to wavelength-dependent absorption and scattering. We present a practical two-stage, modular, and degradation-aware framework designed for real-time enhancement, prioritizing deployability on edge devices. Stage I employs a lightweight CNN to classify inputs into three dominant degradation classes (color cast, low light, blur) with 91.85% accuracy on an EUVP subset. Stage II applies three scene-specific lightweight enhancement pipelines and fuses their outputs using two alternative learnable modules: a global Linear Fusion and a LiteUNetFusion (spatially adaptive weighting with optional residual correction). Compared to the three single-scene optimizers (average PSNR = 19.0 dB; mean UCIQE ≈ 0.597; mean UIQM ≈ 2.07), the Linear Fusion improves PSNR by +2.6 dB on average and yields roughly +20.7% in UCIQE and +21.0% in UIQM, while maintaining low latency (~90 ms per 640 × 480 frame on an Intel i5-13400F (Intel Corporation, Santa Clara, CA, USA)). The LiteUNetFusion further refines results: it raises PSNR by +1.5 dB over the Linear model (23.1 vs. 21.6 dB), brings modest perceptual gains (UCIQE from 0.72 to 0.74, UIQM from 2.5 to 2.8) at a runtime of ≈125 ms per 640 × 480 frame, and better preserves local texture and color consistency in mixed-degradation scenes. We release implementation details for reproducibility and discuss limitations (e.g., occasional blur/noise amplification and domain generalization) together with future directions.
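A minimal sketch of the global Linear Fusion stage described above: the three scene-specific outputs are blended with learnable scalar weights, softmax-normalised so they stay positive and sum to one. The spatially adaptive LiteUNetFusion variant is not reproduced here, and the exact parameterisation is an assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class LinearFusion(nn.Module):
    """Global fusion of K enhancement branches with learnable scalar weights."""
    def __init__(self, num_branches: int = 3):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_branches))

    def forward(self, branch_outputs):                  # list of (B, 3, H, W) images
        w = torch.softmax(self.logits, dim=0)           # positive weights summing to one
        stacked = torch.stack(branch_outputs, dim=0)    # (K, B, 3, H, W)
        return (w.view(-1, 1, 1, 1, 1) * stacked).sum(dim=0)

# toy usage with color-correction, low-light, and deblurring branch outputs
outs = [torch.rand(1, 3, 480, 640) for _ in range(3)]
print(LinearFusion()(outs).shape)  # torch.Size([1, 3, 480, 640])
```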

Citations: 0
A Hierarchical Deep Learning Architecture for Diagnosing Retinal Diseases Using Cross-Modal OCT to Fundus Translation in the Lack of Paired Data.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-08 DOI: 10.3390/jimaging12010036
Ekaterina A Lopukhova, Gulnaz M Idrisova, Timur R Mukhamadeev, Grigory S Voronkov, Ruslan V Kutluyarov, Elizaveta P Topolskaya

The paper focuses on automated diagnosis of retinal diseases, particularly Age-related Macular Degeneration (AMD) and diabetic retinopathy (DR), using optical coherence tomography (OCT), while addressing three key challenges: disease comorbidity, severe class imbalance, and the lack of strictly paired OCT and fundus data. We propose a hierarchical modular deep learning system designed for multi-label OCT screening with conditional routing to specialized staging modules. To enable DR staging when fundus images are unavailable, we use cross-modal alignment between OCT and fundus representations. This approach involves training a latent bridge that projects OCT embeddings into the fundus feature space. We enhance clinical reliability through per-class threshold calibration and implement quality control checks for OCT-only DR staging. Experiments demonstrate robust multi-label performance (macro-F1 = 0.989 ± 0.006 after per-class threshold calibration) and reliable calibration (ECE = 2.1 ± 0.4%), and OCT-only DR staging is feasible in 96.1% of cases that meet the quality control criterion.
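Per-class threshold calibration of the kind mentioned above can be sketched as follows: for each disease label, a validation grid search picks the decision threshold that maximises F1. The F1 criterion, the grid, and the function name are assumptions; the abstract only states that thresholds are calibrated per class.

```python
import numpy as np
from sklearn.metrics import f1_score

def calibrate_thresholds(val_probs, val_labels, grid=np.linspace(0.05, 0.95, 19)):
    """Per-class threshold calibration for a multi-label classifier:
    choose, for each label, the threshold maximising F1 on a validation split."""
    n_classes = val_probs.shape[1]
    thresholds = np.full(n_classes, 0.5)
    for c in range(n_classes):
        scores = [f1_score(val_labels[:, c], val_probs[:, c] >= t, zero_division=0)
                  for t in grid]
        thresholds[c] = grid[int(np.argmax(scores))]
    return thresholds

# toy usage with 3 labels (e.g. AMD, DR, other)
probs = np.random.rand(100, 3)
labels = (np.random.rand(100, 3) > 0.7).astype(int)
print(calibrate_thresholds(probs, labels))
```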

Citations: 0
Comparison of the Radiomics Features of Normal-Appearing White Matter in Persons with High or Low Perivascular Space Scores.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2026-01-08 DOI: 10.3390/jimaging12010035
Onural Ozturk, Sibel Balci, Seda Ozturk

The clinical significance of perivascular spaces (PVS) remains controversial. Radiomics refers to the extraction of quantitative features from medical images using pixel-based computational approaches. This study aimed to compare the radiomics features of normal-appearing white matter (NAWM) in patients with low and high PVS scores to reveal microstructural differences that are not visible macroscopically. Adult patients who underwent cranial MRI over a one-month period were retrospectively screened and divided into two groups according to their global PVS score. Radiomics feature extraction from NAWM was performed at the level of the centrum semiovale on FLAIR and ADC images. Radiomics features were selected using Least Absolute Shrinkage and Selection Operator (LASSO) regression during the initial model development phase, and predefined radiomics scores were evaluated for both sequences. A total of 160 patients were included in the study. Radiomics scores derived from normal-appearing white matter demonstrated good discriminative performance for differentiating high vs. low perivascular space (PVS) burden (AUC = 0.853 for FLAIR and AUC = 0.753 for ADC). In age- and scanner-adjusted multivariable models, radiomics scores remained independently associated with high PVS burden. These findings suggest that radiomics analysis of NAWM can capture subtle white matter alterations associated with PVS burden and may serve as a non-invasive biomarker for early detection of microvascular and inflammatory changes.
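As a rough illustration of LASSO-based radiomics feature selection, the sketch below fits an L1-penalised logistic model so that most feature coefficients shrink to zero and only the surviving features would enter a radiomics score. The penalty strength, variable names, and toy data are illustrative, not the study's actual settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# toy data: 160 patients x 100 standardized radiomics features,
# with a binary target for high vs. low PVS burden
rng = np.random.default_rng(0)
X = rng.normal(size=(160, 100))
y = rng.integers(0, 2, size=160)

# L1 (LASSO-style) penalty drives most coefficients to exactly zero
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_.ravel()
selected = np.flatnonzero(coefs != 0)   # indices of retained radiomics features
print(f"{selected.size} features retained out of {X.shape[1]}")
```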

Citations: 0