
Latest Articles in Pattern Recognition

Parallel consensus transformer for local feature matching
IF 7.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-13 · DOI: 10.1016/j.patcog.2025.112905
Xiaoyong Lu, Yuhan Chen, Bin Kang, Songlin Du
Local feature matching establishes correspondences between two sets of image features, a fundamental yet challenging task in computer vision. Existing Transformer-based methods achieve strong global modeling but suffer from high computational costs and limited locality. We propose PCMatcher, a detector-based feature matching framework that leverages parallel consensus attention to address these issues. Parallel consensus attention integrates a local consensus module to incorporate neighborhood information and a parallel attention mechanism to reuse parameters and computations efficiently. Additionally, a multi-scale fusion module combines features from different layers to improve robustness. Extensive experiments indicate that PCMatcher achieves a competitive accuracy-efficiency trade-off across various downstream tasks. The source code will be publicly released upon acceptance.
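The idea of combining attention with neighborhood consensus can be illustrated with a toy sketch. The snippet below is not the authors' PCMatcher implementation; it simply smooths raw dot-product attention scores over a local 1-D neighborhood of keys before the softmax, so a correspondence is weighted strongly only when nearby keys agree. The function names and the neighborhood rule are illustrative assumptions.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def consensus_attention(q, k, v, radius=1):
    # Toy 1-D "local consensus" attention (illustrative, not PCMatcher):
    # raw dot-product scores are averaged over a neighborhood of keys
    # before the softmax, so a match is reinforced when neighbors agree.
    n, d = len(k), len(q[0])
    out = []
    for qi in q:
        raw = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        smooth = []
        for j in range(n):
            window = raw[max(0, j - radius):j + radius + 1]
            smooth.append(sum(window) / len(window))
        w = softmax(smooth)
        out.append([sum(wj * vj[t] for wj, vj in zip(w, v))
                    for t in range(len(v[0]))])
    return out
```

Each output row remains a convex combination of the value vectors; only the score map is changed by the consensus smoothing.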
Citations: 0
CRB-NCE: An adaptable cohesion rule-based approach to number of clusters estimation
IF 7.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-13 · DOI: 10.1016/j.patcog.2025.112909
J. Tinguaro Rodríguez, Xabier Gonzalez-Garcia, Daniel Gómez, Humberto Bustince
Accurate number-of-clusters estimation (NCE) is a central task in many clustering applications, particularly for prototype-based k-centers methods like k-Means, which require the number of clusters k to be specified in advance. This paper presents CRB-NCE, a general cluster cohesion rule-based framework for NCE integrating three main innovations: (i) the introduction of tail ratios to reliably identify decelerations in sequences of cohesion measures, (ii) a threshold-based rule system supporting accurate NCE, and (iii) an optimization-driven approach to learn these thresholds from synthetic datasets with controlled clustering complexity. Two cohesion measures are considered: inertia (SSE) and a new, scale-invariant metric called the mean coverage index. CRB-NCE is mainly applied to derive general-purpose NCE methods, but, most importantly, it also provides an adaptable framework that enables producing specialized procedures with enhanced performance under specific conditions, such as particular clustering algorithms or overlapping cluster structures. Extensive evaluations on synthetic Gaussian datasets (both standard and high-dimensional), clustering benchmarks, and real-world datasets show that CRB-NCE methods consistently achieve robust and competitive NCE performance with efficient runtimes compared to a broad baseline of internal clustering validity indices and other NCE methods.
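As a sketch of the tail-ratio idea, the toy function below scans a sequence of cohesion values (e.g. SSE for k = 1, 2, ...), compares each successive drop against the mean drop over the remaining tail, and keeps the last k whose drop still dominates the tail. The exact rule system and learned thresholds of CRB-NCE differ; the threshold of 3.0 here is an arbitrary illustrative choice.

```python
def estimate_num_clusters(cohesion, threshold=3.0):
    """Toy tail-ratio rule (not the paper's exact formulation).

    cohesion[i] is a cohesion measure (e.g. SSE) for k = i + 1 clusters;
    a drop whose tail ratio exceeds `threshold` signals that adding that
    cluster still clearly helped, so the estimate moves past it."""
    drops = [cohesion[i] - cohesion[i + 1] for i in range(len(cohesion) - 1)]
    best = 1
    for i in range(len(drops) - 1):
        tail = drops[i + 1:]
        tail_mean = sum(tail) / len(tail)
        if tail_mean > 0 and drops[i] / tail_mean >= threshold:
            best = i + 2  # the drop from k = i+1 to k = i+2 was decisive
    return best
```

On a cohesion curve with a sharp elbow, such as [100, 30, 28, 27, 26], the rule settles on k = 2; on a slower curve like [100, 60, 25, 10, 9, 8.5, 8] it settles on k = 4.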
Citations: 0
Selective intra- and inter-slice interaction for efficient anisotropic medical image segmentation
IF 7.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-13 · DOI: 10.1016/j.patcog.2025.112895
Xian Lin, Xiayu Guo, Zengqiang Yan, Li Yu
Volumetric medical image segmentation relies on efficient intra- and inter-slice interaction. However, 2D and 3D approaches are sub-optimal when segmenting anisotropic volumes due to missing spatial information or excessive spatial noise. Though 2.5D approaches aim to strike a balance by treating imaging dimensions differently, their rigid inter-slice interaction fails to build efficient cross-slice dependency for various objects. To address this, in this paper, we present a novel 2.5D framework named ACSFormer, allowing dense-yet-lightweight intra-slice interaction and sparse-yet-adaptive inter-slice interaction. Specifically, we propose intra-slice class-aware attention (ICA) by introducing class messengers to capture class-wise global semantics and build dependency between tokens and messengers. In this way, ICA effectively builds global intra-slice interaction with linear-level computational complexity. For inter-slice interaction, slice-wise entropy estimation is adopted to select reference slices for each target slice. To ensure flexible inter-slice interaction, we propose an inter-slice token-specific transformer (ITT) to localize cross-slice relevant regions based on feature relevance and build customized inter-slice dependency for each token. Extensive experiments on four publicly available datasets demonstrate the superiority of ACSFormer, consistently outperforming existing 2D, 2.5D, and 3D approaches with much lower model and computational complexity compared to 3D approaches. The code will be available at https://github.com/xianlin7/ACSFormer.
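The slice-wise entropy step can be sketched in a few lines: compute the mean per-pixel Shannon entropy of each slice's predicted class probabilities, then rank slices by it. The abstract does not say whether ACSFormer prefers low- or high-entropy references; the sketch below assumes the most confident (lowest-entropy) slices are chosen, and all names are illustrative.

```python
import math

def slice_entropy(prob_slice):
    """Mean per-pixel Shannon entropy of one slice, where prob_slice is a
    list of pixels and each pixel is a probability distribution over classes."""
    h = 0.0
    for pix in prob_slice:
        h -= sum(p * math.log(p) for p in pix if p > 0)
    return h / len(prob_slice)

def select_references(volume_probs, num_refs=1):
    """Rank slices by entropy and return the indices of the num_refs most
    confident (lowest-entropy) slices -- an assumption about the direction
    of the selection criterion, not the paper's exact rule."""
    ents = [(slice_entropy(s), i) for i, s in enumerate(volume_probs)]
    return [i for _, i in sorted(ents)[:num_refs]]
```

A slice whose pixels are near-one-hot (e.g. [0.99, 0.01]) has much lower entropy than one whose pixels sit at [0.5, 0.5], so it would be picked as the reference.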
Citations: 0
NuclSeg-v2.0: Nuclei segmentation using semi-supervised stain deconvolution with real-time user feedback
IF 7.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-11 · DOI: 10.1016/j.patcog.2025.112823
Haixin Wang, Jian Yang, Ryohei Katayama, Michiya Matusaki, Tomoyuki Miyao, Ying Li, Jinjia Zhou
Deep learning-based stain deconvolution approaches translate affordable IHC slides into informative mpIF images for nuclei segmentation; however, performance drops when inputs are H&E owing to domain shift. We prepended a stain transfer from H&E to IHC, then performed stain deconvolution from IHC to mpIF. To improve deconvolution, we adopted a semi-supervised scheme with paired GANs (I2M/M2I) that combines supervised and unsupervised objectives to diversify training data and mitigate pseudo-input noise. We further integrated a user interface for manual correction and leveraged its real-time feedback to estimate adaptive weights, enabling dataset-specific refinement without retraining. Across benchmark datasets, the proposed method surpasses state-of-the-art performance while improving robustness and usability for histopathological image analysis.
Citations: 0
CLIP-driven rain perception: Adaptive deraining with pattern-aware network routing and mask-guided cross-attention
IF 7.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-11 · DOI: 10.1016/j.patcog.2025.112886
Cong Guan, Osamu Yoshie
Existing deraining models process all rainy images within a single network. However, different rain patterns have significant variations, which makes it challenging for a single network to handle diverse types of raindrops and streaks. To address this limitation, we propose a novel CLIP-driven rain perception network (CLIP-RPN) that leverages CLIP to automatically perceive rain patterns by computing visual-language matching scores and adaptively routing to sub-networks to handle different rain patterns, such as varying raindrop densities, streak orientations, and rainfall intensity. CLIP-RPN establishes semantic-aware rain pattern recognition through CLIP’s cross-modal visual-language alignment capabilities, enabling automatic identification of precipitation characteristics across different rain scenarios. This rain pattern awareness drives an adaptive subnetwork routing mechanism where specialized processing branches are dynamically activated based on the detected rain type, significantly enhancing the model’s capacity to handle diverse rainfall conditions. Furthermore, within sub-networks of CLIP-RPN, we introduce a mask-guided cross-attention mechanism (MGCA) that predicts precise rain masks at multiple scales to facilitate contextual interactions between rainy regions and clean background areas by cross-attention. We also introduce a dynamic loss scheduling mechanism (DLS) to adaptively adjust the gradients during the optimization of CLIP-RPN. Compared with the commonly used l1 or l2 loss, DLS is more compatible with the inherent dynamics of the network training process, thus achieving enhanced outcomes. Our method achieves state-of-the-art performance across multiple datasets, particularly excelling in complex mixed datasets.
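The routing idea, scoring an image against a set of rain-pattern descriptions and dispatching it to the best-matching sub-network, can be sketched as below. In the actual CLIP-RPN the scores come from CLIP's image and text encoders; here the embeddings are plain vectors and the prompt list is hypothetical.

```python
import math

# Hypothetical rain-pattern prompts; in CLIP-RPN the matching scores
# would come from CLIP's image and text encoders, not raw vectors.
PROMPTS = ["light rain streaks", "dense raindrops", "heavy rainfall"]

def cosine(u, v):
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def route(image_emb, text_embs):
    """Return the index of the sub-network whose prompt embedding best
    matches the image embedding (argmax over matching scores)."""
    scores = [cosine(image_emb, t) for t in text_embs]
    return max(range(len(scores)), key=scores.__getitem__)
```

The returned index selects which specialized processing branch is activated for that image.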
Citations: 0
A comprehensive approach for image quality assessment using quality-centric embedding and ranking networks
IF 7.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-11 · DOI: 10.1016/j.patcog.2025.112890
Zeeshan Ali Haider, Sareer Ul Amin, Muhammad Fayaz, Fida Muhammad Khan, Hyeonjoon Moon, Sanghyun Seo
This paper presents a blind image quality assessment (BIQA) framework known as the Quality-Centric Embedding and Ranking Network (QCERN), designed to process images efficiently under a wide range of conditions. QCERN differs from contemporary BIQA techniques, which focus solely on regressing quality scores without structured embeddings. In contrast, the proposed model centers on a well-defined embedding space in which picture quality is both clustered and ordered. This structure enables QCERN to apply several adaptive ranking transformers over a geometric space populated by dynamic score anchors representing images of equivalent quality. A distinct advantage is that unlabeled images can be placed inductively, by evaluating their distance to these score anchors in the embedding space, improving accuracy as well as generalization across disparate datasets. Multiple loss functions, including order and metric losses, ensure that images are positioned correctly according to their quality while maintaining distinct divisions of quality. Numerous experiments demonstrate that QCERN consistently outperforms existing models, delivering high-quality predictions across various datasets. This quality-centric embedding and ranking methodology suits reliable quality assessment applications such as photography, medical imaging, and surveillance.
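The inductive placement step, scoring an unlabeled image by its distance to score anchors in the embedding space, might look like the following inverse-distance sketch. The anchor representation and weighting rule are assumptions for illustration; QCERN's actual transformers and losses are described in the paper.

```python
import math

def predict_quality(embedding, anchors, eps=1e-8):
    """Score a test embedding as the inverse-distance weighted average of
    anchor scores. `anchors` is a list of (vector, quality_score) pairs;
    this weighting rule is illustrative, not QCERN's exact procedure."""
    weighted = []
    for vec, score in anchors:
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(embedding, vec)))
        weighted.append((1.0 / (d + eps), score))
    total = sum(w for w, _ in weighted)
    return sum(w * s for w, s in weighted) / total
```

An embedding sitting on an anchor recovers that anchor's score; one equidistant between two anchors gets their mean.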
Citations: 0
Enhancing the impact of model performance gains for semi-supervised medical image segmentation
IF 7.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-11 · DOI: 10.1016/j.patcog.2025.112889
Wenbin Zuo, Hongying Liu, Huadeng Wang, Lingqi Zeng, Ningning Tang, Fanhua Shang, Liang Wan, Jingjing Deng
Semi-supervised methods aim to alleviate the high cost of annotating medical images by incorporating unlabeled data into the training set. Recently, various consistency regularization methods based on the mean-teacher model have emerged. However, their performance is limited by the small number and poor quality of confident pixels in the pseudo-labels. Based on experimental observations, we propose a new argument: the performance gains of the model do not proportionally translate into improvements in pseudo-label quality, mainly due to constraints in pixel diversity representation and model expressiveness. Therefore, we propose a novel semi-supervised framework, DOC-MLE, which consists of two key components: a dynamic orthogonal constraint (DyOrCon) method and a multi-level election (MLElect) strategy. Specifically, DyOrCon imposes orthogonal constraints on multiple intermediate projection heads to enhance pixel diversity and fully exploit the model’s potential representation capacity. MLElect is designed considering both unsupervised pixel-level and supervised feature-level strategies, to generate reliable pseudo-labels. Moreover, to generate more robust prototype representations, this paper proposes new threshold filtering, edge erosion, and dynamic convolution strategies to address errors associated with low-confidence, high-confidence, and local morphological constraints. Extensive experiments on coronary angiography, a polyp dataset, and retinal fundus images have proven the effectiveness of the proposed method.
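A generic form of the orthogonality idea behind DyOrCon is to penalize the pairwise cosine similarity between projection-head weight vectors; driving the penalty toward zero pushes the heads into mutually orthogonal, more diverse directions. The formulation below is a common stand-in for such a constraint, not the paper's exact loss.

```python
import math

def orthogonality_penalty(heads):
    """Sum of squared cosine similarities over all pairs of projection-head
    weight vectors; zero means every pair of heads is orthogonal.
    Illustrative loss term, not DyOrCon's exact formulation."""
    def cos(u, v):
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        return sum(a * b for a, b in zip(u, v)) / (nu * nv)
    n = len(heads)
    return sum(cos(heads[i], heads[j]) ** 2
               for i in range(n) for j in range(i + 1, n))
```

Adding this term to the training objective penalizes redundant heads: two parallel heads contribute a penalty of 1, two orthogonal heads contribute 0.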
Citations: 0
Multi-view fuzzy C-means clustering via multi-objective slime mould and cooperative learning
IF 7.6 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-11 · DOI: 10.1016/j.patcog.2025.112908
Lin Sun, Yiman Zhang, Weiping Ding, Jiucheng Xu
Multi-view fuzzy C-means clustering (MFCMC) can analyze samples from different views. However, it is affected by randomly initialized cluster centers and fails to comprehensively consider important differences between view and feature weights. To overcome these defects, an MFCMC methodology via multi-objective slime mould and cooperative learning is proposed. First, by combining the uniform distribution and strong ergodicity of Tent mapping, Logistic mapping and Cosine mapping, a hybrid chaotic mapping, namely Tent-Logistic-Cosine, is designed to initialize the slime mould algorithm (SMA). An adaptive step based on the cosine function is applied in the anisotropic search to arrive at an optimal trade-off between the exploration and exploitation of SMA. Second, via the changing characteristics of the exponential function, an adjustable feedback factor is applied to update the venation tube formation stage, and the global and local search of SMA is updated by the nonlinear adjustment. Then multi-objective SMA (MSMA) is studied via multiple strategies of hybrid chaotic mapping, adaptive step and adjustable feedback factor, and the optimal solution of MSMA can initialize the cluster centers and feature weights of MFCMC. Third, via important differences between features and views, view and feature weights are designed for an objective function, and a novel MFCMC model via collaborative learning is developed to identify irrelevant features in each view. Finally, an MFCMC scheme with MSMA can reduce sensitivity to initial cluster centers and improve clustering accuracy. Experiments on 24 benchmark functions for optimization and 14 multi-view datasets for clustering show the effectiveness of the developed methodology.
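The three constituent maps are standard chaotic maps; how the paper blends them into Tent-Logistic-Cosine is defined there, so the cycling rule below is only a guess meant to show the ingredients. Each map keeps iterates in [0, 1], which is what makes the hybrid usable for seeding SMA's initial population.

```python
import math

def tent(x):
    return 2 * x if x < 0.5 else 2 * (1 - x)

def logistic(x):
    return 4 * x * (1 - x)

def cosine_map(x):
    return abs(math.cos(math.pi * x))

def hybrid_sequence(x0, n):
    """Iterate Tent -> Logistic -> Cosine in rotation (an assumed blending
    rule, not the paper's definition); returns n chaotic values in [0, 1]
    that could seed initial slime mould positions."""
    maps = (tent, logistic, cosine_map)
    xs, x = [], x0
    for i in range(n):
        x = maps[i % 3](x)
        xs.append(x)
    return xs
```

Sequences like this are used in place of uniform random draws because their ergodicity spreads initial candidates more evenly over the search space.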
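The abstract describes a Tent-Logistic-Cosine hybrid chaotic mapping for initializing the slime mould population, but the paper's exact formula is not given in this listing. The sketch below is a minimal stand-in, assuming the textbook tent map, the logistic map at r = 4, and a simple cosine map; the extra Weyl-style shift and the names `chaotic_population` and `x0` are illustrative choices of this sketch, not the authors' construction.

```python
import numpy as np

def tent(x):
    # Classic tent map on [0, 1).
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def logistic(x):
    # Logistic map at r = 4 (fully chaotic regime).
    return 4.0 * x * (1.0 - x)

def cosine(x):
    # A simple cosine-based map on [0, 1].
    return abs(np.cos(np.pi * x))

def chaotic_population(n_agents, dim, lb, ub, x0=0.37):
    """Initialize an SMA-style population by cycling three chaotic maps."""
    maps = (tent, logistic, cosine)
    pop = np.empty((n_agents, dim))
    x = x0
    for i in range(n_agents):
        for j in range(dim):
            x = maps[(i * dim + j) % 3](x)
            # Weyl-style golden-ratio shift keeps the orbit away from the
            # degenerate fixed points near 0 and 1 (a safeguard of this
            # sketch, not part of the paper's mapping).
            x = (x + 0.61803398875) % 1.0
            pop[i, j] = lb + (ub - lb) * x
    return pop
```

Compared with uniform random initialization, a chaotic sequence of this kind spreads the initial agents more evenly over the search box, which is the property the abstract attributes to the hybrid mapping.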
Citations: 0
Towards desiderata-driven design of visual counterfactual explainers
IF 7.6 CAS Tier 1, Computer Science, Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-11 DOI: 10.1016/j.patcog.2025.112811
Sidney Bender , Jan Herrmann , Klaus-Robert Müller , Grégoire Montavon
Visual counterfactual explainers (VCEs) are a straightforward and promising approach to enhancing the transparency of image classifiers. VCEs complement other types of explanations, such as feature attribution, by revealing the specific data transformations to which a machine learning model responds most strongly. In this paper, we argue that existing VCEs tend to focus too narrowly on optimizing sample quality or minimality of change; they do not consider more holistic desiderata for an explanation, such as fidelity, understandability, and sufficiency. To address this shortcoming, we explore new mechanisms for counterfactual generation and investigate how they can help fulfill these desiderata. We combine these mechanisms into a novel ‘smooth counterfactual explorer’ (SCE) algorithm and demonstrate its effectiveness through systematic evaluations on synthetic and real data.
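The SCE algorithm itself is not described in this listing. As background for what a counterfactual explainer optimizes, the sketch below shows a minimal gradient-based counterfactual search for a linear classifier, in the spirit of standard change-minimality baselines the abstract critiques rather than the authors' method; `counterfactual` and its parameters (`lam`, `lr`, `steps`) are illustrative names.

```python
import numpy as np

def counterfactual(x, w, b, lam=0.1, lr=0.5, steps=200):
    """Gradient-based counterfactual for a linear classifier sign(w.x + b).

    Minimizes lam * ||x' - x||^2 plus a hinge on the flipped class, i.e.
    pushes the decision score across the boundary while staying close to x.
    """
    target = -np.sign(w @ x + b)          # flip the current prediction
    xc = x.astype(float).copy()
    for _ in range(steps):
        score = w @ xc + b
        grad = lam * 2.0 * (xc - x)       # proximity term
        if target * score < 1.0:          # hinge active until margin reached
            grad -= target * w
        xc -= lr * grad
    return xc
```

The abstract's point is that optimizing only this proximity/validity trade-off ignores desiderata such as fidelity and understandability, which the proposed SCE is designed to address.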
Citations: 0
Boosting the patch-based self-supervised learning through past-to-present smoothing
IF 7.6 CAS Tier 1, Computer Science, Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-11 DOI: 10.1016/j.patcog.2025.112871
Hanpeng Liu, Shuoxi Zhang, Kaiyuan Gao, Kun He
Self-supervised learning (SSL) has recently achieved remarkable success in computer vision, primarily through joint embedding architectures. These models train dual networks by aligning different augmentations of the same image while preventing feature-space collapse. Building upon this, previous work establishes a mathematical connection between joint embedding SSL and the co-occurrences of image patches. Moreover, there have been a number of efforts to scale patch-based SSL to a vast number of image patches, demonstrating rapid convergence and notable performance. However, the efficiency of these methods is hindered by the excessive use of cropped patches. Addressing this issue, we propose a novel framework named Past-to-Present (P2P) smoothing that leverages the model's previous outputs as a supervisory signal. Specifically, we divide the patch augmentations of a single image into two portions. One portion is used to update the model at iteration t−1 and retained as past information for iteration t. The other portion is used for comparison at iteration t, serving as present information complementary to the past. This design allows us to spread the patches of the same image across different batches, thereby enhancing the utilization rate of patch-based learning in our model. Through extensive experimentation and validation, our method achieves outstanding accuracy, scoring 94.2 % on CIFAR-10, 74.2 % on CIFAR-100, 49.5 % on TinyImageNet, and 78.2 % on ImageNet-100. Additional experiments further demonstrate enhanced transferability to out-of-domain datasets compared to other SSL baselines.
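The core P2P mechanism, splitting one image's patch augmentations so that half serve as "past" targets stored at iteration t−1 and the other half are compared against them as "present" information at iteration t, can be sketched as follows. The toy linear `encoder`, `p2p_step`, and their arguments are illustrative assumptions of this sketch; the paper uses a learned network and a full joint-embedding SSL objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(patches, W):
    # Toy linear patch encoder followed by L2 normalization.
    z = patches @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def p2p_step(patches, memory, key, W):
    """One Past-to-Present step for a single image's patch augmentations.

    The first half of the patches refreshes the memory (the 'past' portion);
    the second half is compared against the embeddings stored at the
    previous iteration. Returns None on the first visit to an image.
    """
    half = len(patches) // 2
    past, present = patches[:half], patches[half:]
    loss = None
    if key in memory:                     # present vs. past-iteration targets
        z_now = encoder(present, W)
        loss = float(np.mean(np.sum((z_now - memory[key]) ** 2, axis=1)))
    memory[key] = encoder(past, W)        # retained for the next iteration
    return loss
```

Because each call consumes only half the patches for the loss and banks the other half, patches of the same image end up spread across consecutive steps, which is the utilization gain the abstract describes.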
Citations: 0