
Pattern Recognition: Latest Articles

LLM-informed global-local contextualization for zero-shot food detection
IF 7.6, CAS Zone 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-15. DOI: 10.1016/j.patcog.2025.112928
Xinlong Wang , Weiqing Min , Guorui Sheng , Jingru Song , Yancun Yang , Tao Yao , Shuqiang Jiang
Zero-Shot Detection (ZSD), the ability to detect novel objects without training samples, exhibits immense potential in an ever-changing world, particularly in scenarios requiring the identification of emerging categories. However, effectively applying ZSD to fine-grained domains, characterized by high inter-class similarity and notable intra-class diversity, remains a significant challenge. This is particularly pronounced in the food domain, where the intricate nature of food attributes, notably the pervasive visual ambiguity among related culinary categories and the extensive spectrum of appearances within each food category, severely constrains the performance of existing methods. To address these specific challenges in the food domain, we introduce Zero-Shot Food Detection with Semantic Space and Feature Fusion (ZeSF), a novel framework tailored for Zero-Shot Food Detection. ZeSF integrates two key modules: (1) a Multi-Scale Context Integration Module (MSCIM) that employs dilated convolutions for hierarchical feature extraction and adaptive multi-scale fusion to capture subtle, fine-grained visual distinctions; and (2) a Contextual Text Feature Enhancement Module (CTFEM) that leverages Large Language Models to generate semantically rich textual embeddings, encompassing both global attributes and discriminative local descriptors. Critically, a cross-modal alignment further harmonizes visual and textual features. Comprehensive evaluations on the UEC FOOD 256 and Food Objects With Attributes (FOWA) datasets affirm ZeSF’s superiority, achieving significant improvements in the Harmonic Mean for the Generalized ZSD setting. Crucially, we further validate the framework’s generalization capability on the MS COCO and PASCAL VOC benchmarks, where it again outperforms strong baselines. The source code will be publicly available upon publication.
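The Harmonic Mean mentioned in the abstract is the standard summary metric for the Generalized ZSD setting: it balances mAP on seen and unseen categories so a detector cannot score well by performing on seen classes alone. A minimal sketch (function name is ours, not the paper's):

```python
# Harmonic Mean (HM) over seen/unseen mAP, as commonly used to report
# Generalized Zero-Shot Detection results.

def harmonic_mean(map_seen: float, map_unseen: float) -> float:
    """Harmonic mean of seen/unseen mAP; defined as 0 if either is 0."""
    if map_seen == 0 or map_unseen == 0:
        return 0.0
    return 2 * map_seen * map_unseen / (map_seen + map_unseen)

# A detector strong only on seen classes is penalized:
# harmonic_mean(0.60, 0.20) is much closer to the weaker unseen score.
```

Because the harmonic mean is dominated by the smaller operand, improving unseen-class detection is the most effective way to raise HM.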
Pattern Recognition, Volume 173, Article 112928 (Journal Article).
Citations: 0
A regularized deep self-expression feature augmentation network for few-shot unconstrained palmprint recognition
IF 7.6, CAS Zone 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-14. DOI: 10.1016/j.patcog.2025.112904
Kunlei Jing , Hebo Ma , Chen Zhang , Zhiyuan Zha , Bihan Wen
This paper considers Few-Shot Unconstrained Palmprint Recognition (FS-UPR), a realistic problem in real-world applications, aiming to recognize palmprint images under unconstrained conditions given a few support samples. To date, broad augmentation-based Few-Shot Learning (FSL) methods have emerged to mitigate the sample scarcity. However, a large number of samples is required to train hallucinators, rendering them inapplicable to FS-UPR. This paper addresses FS-UPR via frugal augmentation learning on a few available support samples. Observing that the variations due to various acquisition conditions are transferable across samples, we decompose support samples into principles and variations for variation transfer-based feature augmentation. To this end, we devise a Deep Self-Expression Feature Augmentation Network (DSE-FAN) for simultaneous augmentation learning and FSL. In such an end-to-end manner, downstream tasks drive DSE-FAN to augment features with preserved reality and diversity. The augmented features are then used to correct the biased class prototypes so that FS-UPR generalizes. This process is named task-driven augmentation learning. A tailored Locality Graph regularizer is imposed on DSE to secure the discriminability of the augmentations and further strengthen the generalization capability. Comprehensive experimental results have verified the efficacy of DSE-FAN against competing methods.
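The decomposition described above (support samples = class principle + acquisition variation, with variations transferred across samples) can be illustrated with a toy sketch. This is our hypothetical illustration of the variation-transfer idea, not the paper's DSE-FAN; the function name and the choice of class mean as "principle" are assumptions:

```python
import numpy as np

def augment_by_variation_transfer(feats_a: np.ndarray, feats_b: np.ndarray) -> np.ndarray:
    """Toy variation transfer: graft class A's acquisition variations
    onto class B's principle (here, the class mean).

    feats_a, feats_b: (n, d) support features of two classes.
    Returns (n, d) augmented features for class B.
    """
    principle_b = feats_b.mean(axis=0)               # class B "principle"
    variations_a = feats_a - feats_a.mean(axis=0)    # zero-mean variations of class A
    return principle_b + variations_a                # transferred to class B
```

Since the transferred variations are zero-mean, the augmented set keeps class B's prototype while inheriting class A's spread, which is the intuition behind using such samples to de-bias few-shot prototypes.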
Pattern Recognition, Volume 173, Article 112904 (Journal Article).
Citations: 0
Explicit semantic guidance for single image reflection removal via perceptual influence modeling
IF 7.6, CAS Zone 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-13. DOI: 10.1016/j.patcog.2025.112881
Binghao Ren, Bin Zhao, Yuan Yuan
Reflection artifacts caused by photographing through glass often degrade image visibility and impair downstream visual tasks. Single Image Reflection Removal (SIRR) remains challenging due to its ill-posed nature and the entangled appearance of reflection and transmission layers. While recent methods explore semantic priors, most rely on implicit feature fusion without explicitly modeling the perceptual disturbance. To address this, we introduce the Perceptual Reflection Influence Map (PRIM), a luminance-based, relative measure that captures the spatial distribution and intensity of reflection-induced interference. PRIM serves as an explicit supervision signal, guiding the network to focus on perceptually sensitive regions. Building on this, we design a PRIM-Adaptive Fusion Module (PAFM) to dynamically integrate semantic and local features using PRIM-derived cues. Furthermore, we propose a physics-inspired Reflection Removal Unit (RRU) that leverages both statistical frequency-domain priors and the physical image formation model to enable robust feature disentanglement. Extensive experiments on multiple real-world SIRR benchmarks demonstrate that our method achieves state-of-the-art performance, validating the effectiveness of our semantic-guided and physics-inspired framework.
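A "luminance-based, relative measure" of reflection interference can be sketched in a few lines. The exact PRIM formulation belongs to the paper; the version below (relative luminance excess of the blended image over the transmission layer) is our hedged approximation of the idea:

```python
import numpy as np

def reflection_influence_map(blended: np.ndarray,
                             transmission: np.ndarray,
                             eps: float = 1e-6) -> np.ndarray:
    """Toy PRIM-style map: per-pixel relative luminance contributed by
    reflection, assuming blended >= transmission where reflection exists.

    blended, transmission: (H, W) luminance images in [0, 1].
    """
    excess = np.clip(blended - transmission, 0.0, None)  # reflection-induced excess
    return excess / (blended + eps)                      # relative interference
```

The map is zero in reflection-free regions and grows toward 1 where reflection dominates the observed luminance, which is the kind of spatial prior an explicit supervision signal would exploit.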
Pattern Recognition, Volume 173, Article 112881 (Journal Article).
Citations: 0
Data-driven bayesian-guided activation functions for multi-task pattern recognition
IF 7.6, CAS Zone 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-13. DOI: 10.1016/j.patcog.2025.112911
Rui-Jun Bai , Luyang Li , Zhong Li , Jia Guo , Chenkai Zhao , Weicheng Zeng , Haozhao Feng , Hanming Wei , Ping Chen
The performance of deep learning (DL) models is highly dependent on the design of activation functions. However, traditional fixed-shape activation functions often encounter issues such as gradient vanishing and convergence to local optima, particularly when dealing with complex tasks, thereby limiting their adaptability to diverse task requirements. Although trainable activation functions enhance the flexibility of DL models by incorporating learnable parameters, their optimisation process predominantly relies on gradient descent, which is prone to local optima. To address these limitations, this study proposes a data-driven, prior distributions-based optimisation framework for trainable activation functions. The proposed framework integrates a two-stage optimisation strategy, combining gradient descent and Bayesian inference, to significantly enhance neural network performance across multiple tasks. The core contributions of this paper are threefold: 1) the design of a generalised gated composite activation function that adaptively adjusts its shape according to task requirements by dynamically integrating multiple underlying activation functions; 2) the proposal of a two-stage optimisation framework that effectively alleviates the issue of local optima inherent in traditional optimisation methods; and 3) comprehensive experimental validation on tasks such as image classification, regression, denoising, segmentation, and super-resolution, demonstrating that the proposed method delivers substantial performance gains across various tasks and datasets, surpassing existing classical activation functions and their variants. This study offers novel insights into the selection and optimisation of activation functions for DL models, holding significant academic and practical implications. Our dataset is available at https://github.com/hellorjb/GCAF.
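A "gated composite activation function that dynamically integrates multiple underlying activation functions" can be sketched minimally as a softmax-gated blend of fixed basis activations. This is an illustrative simplification, not the paper's GCAF; in practice the gate logits would be learnable parameters rather than fixed inputs:

```python
import numpy as np

def gated_activation(x: np.ndarray, gate_logits: np.ndarray) -> np.ndarray:
    """Softmax-gated blend of three basis activations (ReLU, tanh, sigmoid).

    x: input array of any shape (here 1-D for simplicity).
    gate_logits: (3,) logits; in a trainable setting these are learned.
    """
    bases = np.stack([
        np.maximum(x, 0.0),        # ReLU
        np.tanh(x),                # tanh
        1.0 / (1.0 + np.exp(-x)),  # sigmoid
    ])
    w = np.exp(gate_logits) / np.exp(gate_logits).sum()  # softmax gates
    return np.tensordot(w, bases, axes=1)                # weighted blend
```

With extreme logits the blend collapses to a single basis function, while intermediate logits interpolate between shapes, which is what lets the composite adapt its shape per task.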
Pattern Recognition, Volume 173, Article 112911 (Journal Article).
Citations: 0
End-to-end susceptibility-induced distortion correction for diffusion MRI with unsupervised deep learning
IF 7.6, CAS Zone 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-13. DOI: 10.1016/j.patcog.2025.112894
Jianhui Feng , Yonggang Shi , Yuchuan Qiao
High-resolution, multi-shell diffusion MRI (dMRI) data provides exceptional advantages for studying human brain pathways. However, significant residual distortions remain in certain brain regions, such as the brainstem, even after processing with existing distortion correction methods. In this paper, we propose a novel unsupervised learning-based framework to correct the susceptibility-induced distortion in dMRI. This end-to-end, coarse-to-fine network, named the Distortion Correction Network (DiscoNet), consists of a dual-branch encoder and a multi-resolution decoder. Instead of using the b0 (b=0) image as in most methods, fiber orientation distribution (FOD) images computed from dMRI data are utilized to provide more reliable information. A dual-branch encoder integrating the advantages of Convolutional Neural Networks and the Swin Transformer is designed to capture the latent information of FOD images from opposite phase encoding (PE) directions separately, while a subsequent multi-resolution decoder decomposes the distortion fields into rigid and non-rigid components. We then evaluated our method on large-scale datasets with over 1400 cases, including data in the AP-PA and RL-LR PE directions. Comprehensive experiments have shown that our method achieves over 42% improvement in the Mean Square Difference (MSD) metric for distortion correction compared to the SOTA methods in the pons of the brainstem region.
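The Mean Square Difference (MSD) used to score distortion correction is, in its simplest reading, the mean squared intensity difference between a corrected image and a reference (lower is better). The exact definition in the paper may differ; this sketch states the assumption plainly:

```python
import numpy as np

def mean_square_difference(corrected: np.ndarray, reference: np.ndarray) -> float:
    """Mean squared voxel-wise difference between a corrected image and
    a reference; assumed here as the plain MSE form of the MSD metric."""
    return float(np.mean((corrected - reference) ** 2))
```

A "42% improvement" in such a metric means the corrected-vs-reference MSD drops to under 58% of the baseline's value in the evaluated region.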
Pattern Recognition, Volume 173, Article 112894 (Journal Article).
Citations: 0
Parallel consensus transformer for local feature matching
IF 7.6, CAS Zone 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-13. DOI: 10.1016/j.patcog.2025.112905
Xiaoyong Lu , Yuhan Chen , Bin Kang , Songlin Du
Local feature matching establishes correspondences between two sets of image features, a fundamental yet challenging task in computer vision. Existing Transformer-based methods achieve strong global modeling but suffer from high computational costs and limited locality. We propose PCMatcher, a detector-based feature matching framework that leverages parallel consensus attention to address these issues. Parallel consensus attention integrates a local consensus module to incorporate neighborhood information and a parallel attention mechanism to reuse parameters and computations efficiently. Additionally, a multi-scale fusion module combines features from different layers to improve robustness. Extensive experiments indicate that PCMatcher achieves a competitive accuracy-efficiency trade-off across various downstream tasks. The source code will be publicly released upon acceptance.
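For readers unfamiliar with the task: the classical baseline that attention-based matchers like PCMatcher improve upon is mutual-nearest-neighbor matching of two descriptor sets. The sketch below shows that baseline only; it is not PCMatcher's method:

```python
import numpy as np

def mutual_nn_matches(desc_a: np.ndarray, desc_b: np.ndarray):
    """Mutual nearest-neighbor matching under cosine similarity.

    desc_a: (n, d) and desc_b: (m, d) L2-normalized local descriptors.
    Returns a list of (i, j) index pairs where a_i and b_j are each
    other's nearest neighbors.
    """
    sim = desc_a @ desc_b.T          # (n, m) cosine similarity matrix
    nn_ab = sim.argmax(axis=1)       # best b-index for each a
    nn_ba = sim.argmax(axis=0)       # best a-index for each b
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

Transformer matchers replace the raw descriptors with context-enriched ones (via self- and cross-attention) before this kind of assignment, which is where the quadratic cost the abstract mentions comes from.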
Pattern Recognition, Volume 173, Article 112905 (Journal Article).
Citations: 0
CRB-NCE: An adaptable cohesion rule-based approach to number of clusters estimation
IF 7.6, CAS Zone 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-13. DOI: 10.1016/j.patcog.2025.112909
J. Tinguaro Rodríguez , Xabier Gonzalez-Garcia , Daniel Gómez , Humberto Bustince
Accurate number-of-clusters estimation (NCE) is a central task in many clustering applications, particularly for prototype-based k-centers methods like k-Means, which require the number of clusters k to be specified in advance. This paper presents CRB-NCE, a general cluster cohesion rule-based framework for NCE integrating three main innovations: (i) the introduction of tail ratios to reliably identify decelerations in sequences of cohesion measures, (ii) a threshold-based rule system supporting accurate NCE, and (iii) an optimization-driven approach to learn these thresholds from synthetic datasets with controlled clustering complexity. Two cohesion measures are considered: inertia (SSE) and a new, scale-invariant metric called the mean coverage index. CRB-NCE is mainly applied to derive general-purpose NCE methods, but, most importantly, it also provides an adaptable framework that enables producing specialized procedures with enhanced performance under specific conditions, such as particular clustering algorithms or overlapping cluster structures. Extensive evaluations on synthetic Gaussian datasets (both standard and high-dimensional), clustering benchmarks, and real-world datasets show that CRB-NCE methods consistently achieve robust and competitive NCE performance with efficient runtimes compared to a broad baseline of internal clustering validity indices and other NCE methods.
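The core idea of watching a cohesion sequence (e.g. k-means inertia for k = 1..K) for a sharp deceleration can be sketched with a single successive-ratio rule. This is a toy elbow detector in the spirit of the tail-ratio idea; the paper's actual rule system, tail ratios, and learned thresholds are more elaborate:

```python
import numpy as np

def elbow_from_ratios(cohesion, threshold: float = 0.9) -> int:
    """Estimate the number of clusters from a decreasing cohesion
    sequence (index k corresponds to k+1... here simply k = position).

    Returns the first k at which the successive improvement ratio
    c[k] / c[k-1] exceeds `threshold`, i.e. where adding another
    cluster stops paying off.  `threshold` here is an assumed value.
    """
    c = np.asarray(cohesion, dtype=float)
    ratios = c[1:] / c[:-1]              # close to 1 => little improvement
    for k, r in enumerate(ratios, start=1):
        if r > threshold:
            return k
    return len(c)
```

For an inertia curve like [100, 40, 15, 14.5, 14.2] the improvement collapses after the third value, so the rule returns k = 3. CRB-NCE's contribution is precisely to make such thresholds principled, learning them from synthetic datasets of controlled complexity instead of fixing them by hand.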
Pattern Recognition, Volume 173, Article 112909 (Journal Article).
Citations: 0
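The tail-ratio idea above — flagging the point where successive cohesion improvements decelerate — can be sketched as follows. This is a minimal illustrative reading under assumptions, not the paper's actual rule system: the `tail_ratio` definition, the `estimate_k` helper, and the fixed threshold of 3.0 are stand-ins for the thresholds CRB-NCE learns from synthetic data.

```python
import numpy as np

def tail_ratio(values, i, tail=2):
    """Ratio of the cohesion drop at step i to the mean drop over the next
    `tail` steps.

    `values` is a decreasing sequence of cohesion measures (e.g. k-means
    inertia for k = 1..K). A large ratio flags a deceleration ("elbow").
    Illustrative reading of the tail-ratio idea, not the paper's exact form.
    """
    drops = -np.diff(values)            # improvement gained per extra cluster
    ahead = drops[i + 1 : i + 1 + tail]
    if len(ahead) == 0 or ahead.mean() == 0:
        return np.inf
    return drops[i] / ahead.mean()

def estimate_k(inertias, threshold=3.0):
    """Pick the smallest k whose tail ratio exceeds a threshold.

    `inertias[j]` is the cohesion for k = j + 1. In CRB-NCE the threshold
    would be learned; here it is a fixed guess.
    """
    for i in range(len(inertias) - 2):
        if tail_ratio(inertias, i) >= threshold:
            # drop at step i is the gain from k = i+1 to k = i+2
            return i + 2
    return 1

# Toy inertia curve with a clear elbow at k = 3.
inertias = [1000.0, 550.0, 150.0, 130.0, 115.0, 105.0]
print(estimate_k(inertias))  # → 3
```

The loop stops at the first deceleration, which matches the usual elbow reading: improvements before k = 3 are large (450, 400) and collapse afterwards (20, 15, 10).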
NuclSeg-v2.0: Nuclei segmentation using semi-supervised stain deconvolution with real-time user feedback
IF 7.6 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-11 | DOI: 10.1016/j.patcog.2025.112823 | Pattern Recognition, vol. 173, Article 112823
Haixin Wang, Jian Yang, Ryohei Katayama, Michiya Matusaki, Tomoyuki Miyao, Ying Li, Jinjia Zhou
Deep learning-based stain deconvolution approaches translate affordable IHC slides into informative mpIF images for nuclei segmentation; however, performance drops when inputs are H&E owing to domain shift. We prepended a stain transfer from H&E to IHC, then performed stain deconvolution from IHC to mpIF. To improve deconvolution, we adopted a semi-supervised scheme with paired GANs (I2M/M2I) that combines supervised and unsupervised objectives to diversify training data and mitigate pseudo-input noise. We further integrated a user interface for manual correction and leveraged its real-time feedback to estimate adaptive weights, enabling dataset-specific refinement without retraining. Across benchmark datasets, the proposed method surpasses state-of-the-art performance while improving robustness and usability for histopathological image analysis.
{"title":"NuclSeg-v2.0: Nuclei segmentation using semi-supervised stain deconvolution with real-time user feedback","authors":"Haixin Wang ,&nbsp;Jian Yang ,&nbsp;Ryohei Katayama ,&nbsp;Michiya Matusaki ,&nbsp;Tomoyuki Miyao ,&nbsp;Ying Li ,&nbsp;Jinjia Zhou","doi":"10.1016/j.patcog.2025.112823","DOIUrl":"10.1016/j.patcog.2025.112823","url":null,"abstract":"<div><div>Deep learning-based stain deconvolution approaches translate affordable IHC slides into informative mpIF images for nuclei segmentation; however, performance drops when inputs are H&amp;E owing to domain shift. We prepended a stain transfer from H&amp;E to IHC, then performed stain deconvolution from IHC to mpIF. To improve deconvolution, we adopted a semi-supervised scheme with paired GANs (I2M/M2I) that combines supervised and unsupervised objectives to diversify training data and mitigate pseudo-input noise. We further integrated a user interface for manual correction and leveraged its real-time feedback to estimate adaptive weights, enabling dataset-specific refinement without retraining. Across benchmark datasets, the proposed method surpasses state-of-the-art performance while improving robustness and usability for histopathological image analysis.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"173 ","pages":"Article 112823"},"PeriodicalIF":7.6,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
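The real-time feedback step could, for instance, score the model's prediction against a user correction and blend the two accordingly, without retraining. A minimal sketch under assumed details: the Dice-based `feedback_weight` and the linear `refine` blend are illustrative stand-ins, since the paper's actual adaptive-weight estimation is not reproduced here.

```python
import numpy as np

def feedback_weight(pred_mask, corrected_mask):
    """Scalar reliability weight from one round of user correction.

    `pred_mask` is the model's soft nuclei probability map and
    `corrected_mask` the user's binary correction. The weight is the Dice
    agreement between the thresholded prediction and the correction —
    an assumed stand-in for NuclSeg-v2.0's weighting scheme.
    """
    pred_bin = pred_mask >= 0.5
    corr = corrected_mask.astype(bool)
    inter = np.logical_and(pred_bin, corr).sum()
    denom = pred_bin.sum() + corr.sum()
    return 2.0 * inter / denom if denom else 1.0

def refine(pred_mask, corrected_mask):
    """Blend prediction and correction by the estimated weight.

    High agreement keeps the soft prediction; low agreement pulls the
    output toward the user's correction — dataset-specific refinement
    without any retraining.
    """
    w = feedback_weight(pred_mask, corrected_mask)
    return w * pred_mask + (1.0 - w) * corrected_mask

pred = np.array([[0.9, 0.8, 0.2],
                 [0.7, 0.4, 0.1]])
corr = np.array([[1, 1, 0],
                 [0, 0, 0]])
print(round(feedback_weight(pred, corr), 3))  # → 0.8
```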
CLIP-driven rain perception: Adaptive deraining with pattern-aware network routing and mask-guided cross-attention
IF 7.6 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-11 | DOI: 10.1016/j.patcog.2025.112886 | Pattern Recognition, vol. 173, Article 112886
Cong Guan, Osamu Yoshie
Existing deraining models process all rainy images within a single network. However, different rain patterns have significant variations, which makes it challenging for a single network to handle diverse types of raindrops and streaks. To address this limitation, we propose a novel CLIP-driven rain perception network (CLIP-RPN) that leverages CLIP to automatically perceive rain patterns by computing visual-language matching scores and adaptively routing to sub-networks to handle different rain patterns, such as varying raindrop densities, streak orientations, and rainfall intensities. CLIP-RPN establishes semantic-aware rain pattern recognition through CLIP’s cross-modal visual-language alignment capabilities, enabling automatic identification of precipitation characteristics across different rain scenarios. This rain pattern awareness drives an adaptive subnetwork routing mechanism in which specialized processing branches are dynamically activated based on the detected rain type, significantly enhancing the model’s capacity to handle diverse rainfall conditions. Furthermore, within the sub-networks of CLIP-RPN, we introduce a mask-guided cross-attention mechanism (MGCA) that predicts precise rain masks at multiple scales to facilitate contextual interactions between rainy regions and clean background areas via cross-attention. We also introduce a dynamic loss scheduling mechanism (DLS) to adaptively adjust the gradients during the optimization of CLIP-RPN. Compared with the commonly used l1 or l2 loss, DLS is more compatible with the inherent dynamics of the network training process, thus achieving enhanced outcomes. Our method achieves state-of-the-art performance across multiple datasets, particularly excelling on complex mixed datasets.
{"title":"CLIP-driven rain perception: Adaptive deraining with pattern-aware network routing and mask-guided cross-attention","authors":"Cong Guan,&nbsp;Osamu Yoshie","doi":"10.1016/j.patcog.2025.112886","DOIUrl":"10.1016/j.patcog.2025.112886","url":null,"abstract":"<div><div>Existing deraining models process all rainy images within a single network. However, different rain patterns have significant variations, which makes it challenging for a single network to handle diverse types of raindrops and streaks. To address this limitation, we propose a novel CLIP-driven rain perception network (CLIP-RPN) that leverages CLIP to automatically perceive rain patterns by computing visual-language matching scores and adaptively routing to sub-networks to handle different rain patterns, such as varying raindrop densities, streak orientations, and rainfall intensity. CLIP-RPN establishes semantic-aware rain pattern recognition through CLIP’s cross-modal visual-language alignment capabilities, enabling automatic identification of precipitation characteristics across different rain scenarios. This rain pattern awareness drives an adaptive subnetwork routing mechanism where specialized processing branches are dynamically activated based on the detected rain type, significantly enhancing the model’s capacity to handle diverse rainfall conditions. Furthermore, within sub-networks of CLIP-RPN, we introduce a mask-guided cross-attention mechanism (MGCA) that predicts precise rain masks at multi-scale to facilitate contextual interactions between rainy regions and clean background areas by cross-attention. We also introduces a dynamic loss scheduling mechanism (DLS) to adaptively adjust the gradients for the optimization process of CLIP-RPN. Compared with the commonly used <em>l</em><sub>1</sub> or <em>l</em><sub>2</sub> loss, DLS is more compatible with the inherent dynamics of the network training process, thus achieving enhanced outcomes. 
Our method achieves state-of-the-art performance across multiple datasets, particularly excelling in complex mixed datasets.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"173 ","pages":"Article 112886"},"PeriodicalIF":7.6,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
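The visual-language matching and routing step can be illustrated with mock embeddings in place of real CLIP features. The prompt list, the temperature value, and softmax routing weights below are assumptions for demonstration, not the paper's configuration; in the actual model the weights would select or blend specialized deraining sub-networks.

```python
import numpy as np

# Hypothetical prompts describing rain patterns; each would correspond to a
# specialized deraining sub-network.
RAIN_PROMPTS = ["dense raindrops", "diagonal rain streaks", "heavy rainfall"]

def route(image_emb, text_embs, temperature=0.07):
    """Softmax over cosine similarities → routing weights per sub-network.

    Mirrors CLIP's image-text matching: normalize both sides, take dot
    products, and temperature-scale before the softmax.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img / temperature
    w = np.exp(logits - logits.max())   # subtract max for numerical stability
    return w / w.sum()

# Mock unit-scale embeddings standing in for CLIP encoder outputs.
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(3, 8))
image_emb = text_embs[1] + 0.1 * rng.normal(size=8)  # closest to prompt 1
weights = route(image_emb, text_embs)
print(RAIN_PROMPTS[int(np.argmax(weights))])  # → diagonal rain streaks
```

With a low temperature the routing is nearly hard (one weight dominates); raising it yields a softer mixture over sub-networks.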
A comprehensive approach for image quality assessment using quality-centric embedding and ranking networks
IF 7.6 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-11 | DOI: 10.1016/j.patcog.2025.112890 | Pattern Recognition, vol. 173, Article 112890
Zeeshan Ali Haider, Sareer Ul Amin, Muhammad Fayaz, Fida Muhammad Khan, Hyeonjoon Moon, Sanghyun Seo
This paper presents a framework for blind image quality assessment (BIQA) known as the Quality-Centric Embedding and Ranking Network (QCERN), designed to process images efficiently under a wide range of conditions. QCERN differs from contemporary BIQA techniques, which focus solely on regressing quality scores without structured embeddings. In contrast, the proposed model centers on a well-defined embedding space in which image quality is both clustered and ordered. This structure enables QCERN to apply several adaptive ranking transformers over a geometric space populated by dynamic score anchors representing images of equivalent quality. QCERN offers a distinct advantage: unlabeled images can be scored inductively by evaluating their distance to these score anchors in the embedding space, improving accuracy as well as generalization across disparate datasets. Multiple loss functions, including order and metric losses, ensure that images are positioned according to their quality while maintaining distinct quality divisions. Numerous experiments have demonstrated that QCERN outperforms existing models by consistently delivering high-quality predictions across various datasets, making it a competitive option. This quality-centric embedding and ranking methodology suits reliable quality assessment applications such as photography, medical imaging, and surveillance.
{"title":"A comprehensive approach for image quality assessment using quality-centric embedding and ranking networks","authors":"Zeeshan Ali Haider ,&nbsp;Sareer Ul Amin ,&nbsp;Muhammad Fayaz ,&nbsp;Fida Muhammad Khan ,&nbsp;Hyeonjoon Moon ,&nbsp;Sanghyun Seo","doi":"10.1016/j.patcog.2025.112890","DOIUrl":"10.1016/j.patcog.2025.112890","url":null,"abstract":"<div><div>This paper presents a new technology that focuses on blind image quality assessment (BIQA) through a framework known as Quality-Centric Embedding and Ranking Network (QCERN). The framework ensures maximum efficiency when processing images under various possible scenarios. QCERN is entirely different from contemporary BIQA techniques, which focus solely on regressing quality scores without structured embeddings. In contrast, the proposed model features a well-defined embedding space as its principal focus, in which picture quality is both clustered and ordered. This dynamic quality of images enables QCERN to utilize several adaptive ranking transformers along a geometric space populated by dynamic score anchors representing images of equivalent quality QCERN features a distinct advantage since unlabeled images of interest can be placed by evaluation of their distance to these specified score anchors inductively in the embedding space, improving accuracy as well as generalization across disparate datasets. Multiple loss functions are utilized in this instance, including order and metric loss, to ensure that images are positioned correctly according to their quality while maintaining distinct divisions of quality. With the application of QCERN, numerous experiments have demonstrated its ability to outperform existing models by consistently delivering high-quality predictions across various datasets, making it a competitive option. 
This quality-centric embedding and ranking methodology is excellent for reliable quality assessment applications, such as in photography, medical imaging, and surveillance.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"173 ","pages":"Article 112890"},"PeriodicalIF":7.6,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
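Anchor-based inductive scoring — placing an unlabeled embedding by its distance to score anchors with known quality labels — can be sketched as follows. The softmax-over-negative-distances rule, the temperature `tau`, and the toy one-dimensional anchors are illustrative assumptions, not QCERN's exact formulation.

```python
import numpy as np

def predict_quality(embedding, anchors, anchor_scores, tau=1.0):
    """Quality score for an unlabeled embedding from its distances to anchors.

    Each anchor is a point in the quality-centric embedding space with a
    known quality score; the prediction is a softmax-weighted average over
    negative distances (an assumed reading of QCERN's anchor-based
    inference, not the paper's formulation).
    """
    d = np.linalg.norm(anchors - embedding, axis=1)
    w = np.exp(-d / tau)
    w /= w.sum()
    return float(w @ anchor_scores)

# Toy anchors ordered along one axis, mimicking the ranked embedding space.
anchors = np.array([[0.0], [1.0], [2.0], [3.0]])
scores = np.array([1.0, 2.0, 3.0, 4.0])   # MOS-like quality labels

# An embedding midway between the two middle anchors scores midway too.
print(round(predict_quality(np.array([1.5]), anchors, scores), 6))  # → 2.5
```

Because the embedding space is ordered, embeddings nearer the high-quality anchors receive proportionally higher predicted scores, which is what makes the inductive placement of unseen images possible.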