
Pattern Recognition: Latest Publications

No-reference dehazed image quality assessment via perception-driven interactive feature representation learning
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-14 DOI: 10.1016/j.patcog.2026.113085
Hangyu Nie, Ziqiang Huang, Miao Qi, Junjun Jiang, Jiayi Ma, Wei Liu
The advancement in image dehazing research has increased the demand for effective dehazed image quality assessment (DQA) methods. However, existing DQA approaches suffer from limitations due to scarce labeled data, resulting in insufficient representation of quality-related information. Most current methods focus on distortion artifacts introduced by dehazing algorithms or rely on single quality factors, limiting their performance and generalizability. In this work, we propose a novel no-reference DQA model that leverages self-supervised reconstruction and pseudo-label learning to extract three complementary perceptual features: image Content, Distortion, and Fog Density (CDFD-DQA). The framework includes four key components: Feature Extraction Module (FEM), Perceptual Feature Representation Module (PFRM), Feature Self-Interaction Module (FSIM), and Dual-branch Quality Predictor (DQP). The FEM uses pre-trained content-aware and distortion-aware encoders, along with a fog density predictor, to capture quality-discriminative features related to content preservation, distortion artifacts, and fog density. These features are refined through PFRM to enhance expressive capacity. To capture dependencies among features, FSIM incorporates Content-Distortion-Fog Density Feature Self-Interaction (CDFD-FSI), adaptively integrating interrelated and independent representations. Finally, DQP maps fused features to perceptual quality scores. Extensive experiments on five publicly available DQA datasets demonstrate that CDFD-DQA generally aligns well with human subjective perception and outperforms several existing state-of-the-art methods.
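To make the pipeline in this abstract concrete, the sketch below shows the general shape of the idea: three perceptual feature branches (content, distortion, fog density) are re-weighted by a learned self-interaction and mapped to a quality score by a two-branch head. Every module name, dimension, and the gated-score readout is an assumption made for illustration; it is not the authors' CDFD-DQA implementation.

```python
import torch
import torch.nn as nn

class ToyCDFDFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # one projection per perceptual factor (stand-ins for the FEM + PFRM outputs)
        self.proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(3)])
        # self-interaction: softmax weights over the three factor embeddings
        self.interact = nn.Sequential(nn.Linear(3 * dim, 3), nn.Softmax(dim=-1))
        # two-branch readout: one branch scores, one branch gates the score
        self.score = nn.Linear(3 * dim, 1)
        self.gate = nn.Sequential(nn.Linear(3 * dim, 1), nn.Sigmoid())

    def forward(self, content, distortion, fog):
        feats = [p(f) for p, f in zip(self.proj, (content, distortion, fog))]
        stacked = torch.stack(feats, dim=1)             # (B, 3, dim)
        w = self.interact(stacked.flatten(1)).unsqueeze(-1)
        fused = (w * stacked).flatten(1)                # re-weighted, concatenated factors
        return self.score(fused) * self.gate(fused)     # gated quality score

q = ToyCDFDFusion()(torch.randn(2, 128), torch.randn(2, 128), torch.randn(2, 128))
print(q.shape)   # torch.Size([2, 1])
```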
Citations: 0
Vision-language adaptation with imbalance mitigation for generalizable face anti-spoofing
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-14 DOI: 10.1016/j.patcog.2026.113101
Fan Cheng, Yuze Qiao, Fanjun Meng, Xianliang Wang, Mingsha Peng, Kaixuan Li, Zhize Wu, Meiwen Chen
Face Anti-Spoofing (FAS) has emerged as a critical research area aimed at detecting spoofing attacks in facial recognition systems. Existing deep learning-based methods, while powerful, can suffer from overfitting to specific datasets and may not perform well on unseen domains. Some of these methods fine-tuned the entire pre-trained vision-language model and assigned equal weight to different domains and classes in FAS datasets. However, this training strategy consumes substantial computing resources and ignores data imbalance. In this paper, we propose a framework based on vision-language adaptation with imbalance mitigation for generalizable FAS (AIM-FAS). Our method builds upon the success of CLIP, a cutting-edge vision-language model, and introduces several key innovations. Specifically, we propose an adaptive transformer block to fine-tune the model on FAS data. Additionally, we design a multi-modal dual-weighting focal loss to address the data imbalance commonly encountered in FAS experiments. Furthermore, we introduce a generalizable optimization method based on Sharpness-Aware Minimization (SAM) to flatten the loss landscape and enhance the generalization ability of our model. Extensive experiments show that the proposed AIM-FAS is effective and outperforms state-of-the-art methods on several challenging cross-domain datasets.
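The focal loss is the most self-contained piece of this abstract. The sketch below shows one plausible reading of a "dual-weighting" focal loss, with a per-class factor and a per-sample (domain-derived) factor on top of the standard focal term; the exact weighting used in AIM-FAS may differ, and the tensor names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def dual_weighted_focal_loss(logits, targets, class_w, domain_w, gamma=2.0):
    """logits: (B, C); targets: (B,) long; class_w: (C,) per-class weights;
    domain_w: (B,) per-sample weights derived from each sample's source domain."""
    log_p = F.log_softmax(logits, dim=-1)
    log_p_true = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log-prob of the true class
    focal = (1.0 - log_p_true.exp()) ** gamma                        # down-weights easy samples
    loss = -focal * log_p_true * class_w[targets] * domain_w         # dual re-weighting
    return loss.mean()

logits = torch.randn(4, 2)
targets = torch.tensor([0, 1, 0, 1])
print(dual_weighted_focal_loss(logits, targets,
                               class_w=torch.tensor([1.0, 3.0]),   # e.g. up-weight the rarer class
                               domain_w=torch.ones(4)))            # uniform domain weights here
```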
Citations: 0
Tri-SEM: A shape-aware robust regression method via chain-like segmentation and residual analysis
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-14 DOI: 10.1016/j.patcog.2026.113092
Zhilin Xiong, Aihua Han, Tiefeng Ma, Shuangzhe Liu
Outliers pose a significant threat to the reliability of regression analysis. Unlike traditional robust methods that primarily rely on numerical optimization, this paper introduces Tri-SEM, a shape-aware robust regression framework that leverages the geometric and morphological structure of data through a flexible three-stage architecture: Split, Extraction, and Merge. In the Split stage, data are partitioned into chain-like segments using the Anderson-Darling test, projection analysis, and convex hull detection to isolate potential outliers, with clustering performed in a 2-D projected space for computational efficiency. In the Extraction stage, a subset of clean segments is selected by jointly considering their size and median squared residuals. In the Merge stage, reliable inliers are integrated using a histogram transition detector on 1-D residuals, capturing residual distribution patterns to construct the final regression estimate. Comprehensive experiments on diverse datasets demonstrate Tri-SEM’s clear superiority in both prediction accuracy and estimation bias: it achieved the best overall rank and the highest prediction accuracy on 30 of the 35 datasets, while consistently outperforming the second-ranked method (MM-estimator) in estimation bias, achieving a relative improvement exceeding 90% on more than half (54.3%) of the datasets. Extensive ablation, sensitivity, convergence, and runtime analyses confirm the method’s robustness, efficiency, and adaptability across a wide range of data scenarios.
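The Extraction stage is easy to illustrate in isolation: score each candidate segment by its size and the median squared residual of a simple fit, then keep the cleanest ones. The sketch below mirrors only that selection step under assumed inputs; the Split and Merge machinery (Anderson-Darling testing, convex hulls, histogram transition detection) is not reproduced.

```python
import numpy as np

def median_squared_residual(X, y):
    # ordinary least-squares fit on one segment, then the median of its squared residuals
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.median((y - A @ beta) ** 2)

def select_clean_segments(segments, keep=2):
    """segments: list of (X, y) chain-like pieces; keep those with the best
    size-to-median-squared-residual ratio (a simple joint criterion)."""
    scores = [len(X) / (median_squared_residual(X, y) + 1e-12) for X, y in segments]
    order = np.argsort(scores)[::-1]
    return [segments[i] for i in order[:keep]]

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 60)
y = 2 * X + rng.normal(0, 0.1, 60)                                  # clean linear trend
outlier_seg = (rng.uniform(0, 10, 15), rng.uniform(-20, 20, 15))    # a contaminated segment
clean = select_clean_segments([(X[:30], y[:30]), (X[30:], y[30:]), outlier_seg])
print(len(clean))   # 2: the clean halves are kept, the contaminated segment is dropped
```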
Citations: 0
A spectral difference preservation network based on Mamba pyramid for hyperspectral image compression
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-14 DOI: 10.1016/j.patcog.2026.113088
Kaijie Shi, Cuiping Shi, Weiwei Sun, Liguo Wang
In lossy hyperspectral image compression, reconstructed images often suffer from blocking or blurring artifacts. This phenomenon is even more pronounced at high compression ratios. Low-quality frequency-domain features and coarse feature extraction are important contributors to this problem. In this paper, a spectral difference preservation network based on a Mamba pyramid for hyperspectral image compression (SDMNet) is proposed. First, a spectral difference preservation head (SDPH) is designed, integrating grouped element-wise reconstruction and complex frequency-domain feature analysis, to retain high-quality frequency-domain features. Second, Mamba is introduced into the field of hyperspectral compression for the first time: a Mamba pyramid feature enhancement module (MPFM) is developed, incorporating dimensional squeeze and pyramid Mamba for refined feature extraction. Finally, building on these components, an efficient SDMNet is constructed within the Variational Autoencoder (VAE) framework for hyperspectral image compression. Experimental results on the Pavia Centre, Chikusei, and Houston datasets show that SDMNet consistently outperforms several advanced compression models; on Pavia Centre in particular, it improves PSNR by up to 11%. The related code will be released at https://github.com/shikaijieskj/SDMNet.
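The PSNR figures quoted above are computed between the original and reconstructed hyperspectral cubes. As a point of reference, a minimal per-band PSNR evaluation looks like the sketch below; it uses synthetic data in place of a real cube and is not the authors' evaluation code.

```python
import numpy as np

def psnr(original, reconstructed, data_range=1.0):
    mse = np.mean((original - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

cube = np.random.rand(64, 64, 102)                        # H x W x bands stand-in for a real scene
recon = np.clip(cube + np.random.normal(0, 0.01, cube.shape), 0, 1)
band_psnr = [psnr(cube[..., b], recon[..., b]) for b in range(cube.shape[-1])]
print(f"mean PSNR over bands: {np.mean(band_psnr):.2f} dB")
```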
Citations: 0
Outlier-robust learning with continuously differentiable least trimmed squares
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-14 DOI: 10.1016/j.patcog.2026.113099
Lei Xing, Yufei Liu, Linhai Xu, Badong Chen
Robust estimation is a fundamental task in statistical analysis, aimed at identifying models that can effectively eliminate the impact of noise, especially in the presence of outliers. The Least Trimmed Squares (LTS) estimation approach is widely recognized for its robustness in such scenarios. However, selecting a representative subset of samples for LTS estimation is computationally demanding, and the effectiveness of LTS is sensitive to the number of samples selected. In this study, we propose a novel approach, continuously differentiable LTS (CD-LTS), which employs a continuous function to approximate the original LTS. Due to its continuity and differentiability properties, CD-LTS can be used as a cost function for a range of learning models and avoids the need for additional sorting steps, thereby addressing the difficulty of applying traditional LTS directly. We utilize CD-LTS to develop four robust learning algorithms, including random vector functional link (RVFL), principal component analysis (PCA), iterative closest point (ICP), and orthogonal iterative (OI). The experimental results indicate that the proposed algorithms exhibit superior performance compared to existing methods.
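The core construction can be illustrated directly: classic LTS sums the h smallest squared residuals, which involves a hard, non-differentiable selection, and a smooth surrogate replaces that selection with a soft gate. The sigmoid gate below is one illustrative way to do this and is not necessarily the CD-LTS formulation; tau and beta are hypothetical parameters.

```python
import torch

def lts_loss(residuals, h):
    # classic LTS objective: sum of the h smallest squared residuals (hard selection)
    r2, _ = torch.sort(residuals ** 2)
    return r2[:h].sum()

def smooth_lts_loss(residuals, tau, beta=20.0):
    # soft gate: residuals with squared value well below tau get weight ~1, large ones ~0
    r2 = residuals ** 2
    gate = torch.sigmoid(beta * (tau - r2))
    return (gate * r2).sum()

r = torch.tensor([0.10, 0.20, 0.15, 5.00, 0.05], requires_grad=True)
hard = lts_loss(r.detach(), h=4)                    # drops the obvious outlier exactly
soft = smooth_lts_loss(r, tau=torch.tensor(1.0))    # drops it smoothly, and is differentiable
soft.backward()                                     # gradients flow through the soft selection
print(hard.item(), soft.item(), r.grad)
```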
Citations: 0
Support tensor ring kernel machine with dual-stage acceleration
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-14 DOI: 10.1016/j.patcog.2026.113100
Xiao Li, Yitian Xu
Amidst the growing scale of high-dimensional data, efficient tensorial classification has become increasingly critical. Support Tensor Machines (STMs) are powerful tools, yet existing STMs that leverage CANDECOMP/PARAFAC, Tucker, or Tensor Train decompositions often struggle to capture high-order structures, limiting model expressiveness. Tensor Ring (TR) decomposition offers stronger representational power but incurs prohibitive computational overhead. To address these challenges, we propose a novel Support Tensor Ring Kernel Model (STRKM), which integrates TR decomposition with kernel learning to fully exploit the structural information and enhance classification performance. Moreover, a tailored dual-stage acceleration method for STRKM (Dual-STRKM) is developed. In the tensor decomposition stage, a fast TR decomposition algorithm is designed and its error bound is rigorously analyzed. In the model optimization stage, a safe screening rule based on the duality gap is constructed to dynamically eliminate redundant samples and accelerate the training process. Extensive experiments demonstrate the efficiency and superiority of Dual-STRKM in high-dimensional classification tasks.
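For readers unfamiliar with the Tensor Ring format, the sketch below reconstructs a single tensor entry from TR cores: each mode k has a core of shape (r_{k-1}, n_k, r_k) with r_0 = r_d, and an entry is the trace of the product of the selected core slices. Ranks and dimensions are illustrative only; the STRKM kernel and screening machinery are not shown.

```python
import numpy as np

def tr_entry(cores, index):
    """Reconstruct one tensor entry T[index] from Tensor Ring cores via a trace of slice products."""
    mat = np.eye(cores[0].shape[0])
    for core, i in zip(cores, index):
        mat = mat @ core[:, i, :]          # multiply in the lateral slice chosen by this index
    return np.trace(mat)

rng = np.random.default_rng(0)
ranks, dims = [2, 3, 4, 2], [5, 6, 7]      # r_0 = r_3 = 2 closes the ring
cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1])) for k in range(3)]
print(tr_entry(cores, (1, 2, 3)))          # one entry of the implied 5 x 6 x 7 tensor
```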
Citations: 0
Adaptive saliency based contextual metric learning for few-shot open-set recognition
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-14 DOI: 10.1016/j.patcog.2026.113096
Ping Li, Jiajun Chen, Lijie Shang, Chenhao Ping
Few-Shot Open-set Recognition (FSOR) aims to recognize samples from known classes while rejecting those from unknown (unseen) classes. It faces two primary challenges that previous methods do not address well: the decision boundaries of known classes change dynamically across episodes (tasks), and visually similar samples from known and unknown classes are difficult to discriminate. This inspires us to propose an Adaptive Saliency based Contextual Metric learning framework, termed ASCM. The framework consists of two main components: an adaptive saliency fusion module and a contextual metric learning module. The former adaptively models the importance of spatial saliency features, indexed by the spatial positions of the feature map most relevant to the known classes, and adopts an adaptive saliency fusion strategy that dynamically calibrates class prototypes by leveraging the global semantic similarity between classes to re-weight the spatial saliency features. The latter captures contextual similarity among neighboring embedding features by considering both shared and non-shared neighbors between the query sample and the class prototypes under a contextual metric. This alleviates confusion between samples with similar appearance, because dissimilar samples in the neighborhood are also taken into account. Extensive experiments on four benchmarks, i.e., mini-ImageNet, tiered-ImageNet, CIFAR-FS, and FC100, validate the advantage of the proposed approach. Our code is available at https://github.com/mlvccn/ASCM_FewshotOpenset.
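As background for the metric-learning components above, the sketch below shows the prototype-matching skeleton that few-shot open-set recognition typically builds on: a query is assigned to its nearest class prototype and rejected as unknown when even that prototype is too far away. The saliency weighting and the shared/non-shared neighbour metric of ASCM are not reproduced; the threshold and feature sizes are assumptions.

```python
import torch
import torch.nn.functional as F

def classify_or_reject(query, prototypes, threshold):
    """query: (D,); prototypes: (C, D); returns a class index, or -1 for 'unknown'."""
    dists = torch.cdist(query.unsqueeze(0), prototypes).squeeze(0)   # (C,) Euclidean distances
    best = torch.argmin(dists)
    return -1 if dists[best] > threshold else int(best)

protos = F.normalize(torch.randn(5, 64), dim=-1)            # 5 known-class prototypes
q_known = protos[2] + 0.01 * torch.randn(64)                # near a known prototype
q_unknown = F.normalize(torch.randn(64), dim=-1)            # far from every prototype
print(classify_or_reject(q_known, protos, threshold=0.5),     # -> 2
      classify_or_reject(q_unknown, protos, threshold=0.5))   # -> -1 (rejected as unknown)
```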
Citations: 0
Degradation-aware feature collaboration for underwater image stitching
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-13 DOI: 10.1016/j.patcog.2026.113082
Jiahao Zhang, Xiaoke Shang, Zengxi Zhang, Jinyuan Liu
Image stitching plays a crucial role in panoramic perception by seamlessly merging images captured from different viewpoints to create a wide field-of-view (FOV) image. However, factors such as water depth, suspended particles, and light scattering in underwater environments significantly degrade image quality, rendering conventional stitching methods ineffective. In this study, we propose an underwater image stitching network (DFCNet). The network consists of an alignment stage and a composition stage. In the alignment stage, we employ multi-differential convolutions to enhance feature extraction, enabling the network to better capture fine details and edge features in underwater images. Additionally, we couple visual enhancement tasks with the alignment process to improve the robustness of the feature representation. Furthermore, we incorporate semantic features through a feature fusion strategy to enrich the feature representation. In the composition stage, we design a loss function that optimizes the seam region and addresses issues such as brightness variations and suspended particles. Qualitative and quantitative results on a supplemented real-world underwater image stitching dataset demonstrate that the proposed method outperforms existing approaches, achieving higher accuracy and robustness in underwater image stitching.
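One concrete instance of a "differential" convolution is the central difference convolution from prior work, which blends a vanilla convolution with a term computed against the patch centre. It is shown below only as a plausible building block of the multi-differential idea, assuming a blending factor theta; DFCNet's actual operators may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDiffConv2d(nn.Module):
    """Blend of a vanilla 3x3 convolution and a central-difference term, controlled by theta."""
    def __init__(self, in_ch, out_ch, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x):
        vanilla = self.conv(x)
        # convolving (x - centre pixel) with the same kernel reduces to subtracting a 1x1
        # convolution whose kernel is the 3x3 kernel summed over its spatial window
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        centre = F.conv2d(x, kernel_sum)
        return vanilla - self.theta * centre

y = CentralDiffConv2d(3, 8)(torch.randn(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 8, 64, 64])
```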
Citations: 0
DSVTformer: Dual-stream spatial-view-temporal transformer for multi-view 3D human pose estimation
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-13 DOI: 10.1016/j.patcog.2026.113072
Wanruo Zhang, Mengyuan Liu, Wenhao Li, Hong Liu
Current multi-view 3D human pose estimation methods typically use single-stream architectures that elevate 2D pose sequences into 3D space. However, their sole reliance on 2D joint coordinates limits performance by neglecting valuable complementary visual context from images. To address this, we propose a dual-stream approach that integrates both 2D pose and visual context, enhancing pose estimation without relying on camera parameters or body priors. Furthermore, existing transformer-based methods fail to adequately model the complex correlations across Spatial-View-Temporal (SVT) dimensions, focusing only on per-frame spatial modeling, local cross-view fusion, and temporal modeling over a single fused view. To overcome this, we introduce the Dual-stream Spatial-View-Temporal Transformer (DSVTformer) that fully captures SVT correlations, making it suitable for our dual-stream design. Its encoder extracts multi-view, multi-frame features from both image and pose modalities, while a dual-stream decoder fuses them. Specifically, each decoder layer incorporates three axis-aware correlation blocks to model temporally enhanced spatial structures, global cross-view relations, and intra-view temporal dependencies. These blocks are grounded in a basic dual-stream interactive and enhancement mechanism, consisting of bidirectional cascaded cross-modal fusion modules and self-modal enhancement modules. This design allows DSVTformer to perform progressive, complementary cross-modal reasoning across spatial, inter-view, and temporal dimensions, significantly improving multi-view 3D human pose estimation. Extensive experiments on Human3.6M, MPI-INF-3DHP, and Ski-Pose demonstrate that DSVTformer achieves state-of-the-art performance in both accuracy and robustness across diverse multi-view settings. The code is available at https://github.com/Rowenazhang/DSVTformer.
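The bidirectional cross-modal fusion mentioned above can be sketched with two standard cross-attention calls, one letting pose tokens attend to image tokens and one in the opposite direction, each followed by a residual connection. Layer sizes, token counts, and the residual scheme are assumptions for illustration, not the DSVTformer implementation.

```python
import torch
import torch.nn as nn

class BiCrossFusion(nn.Module):
    """Each stream queries the other modality with cross-attention, then keeps a residual of itself."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.pose_from_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_from_pose = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, pose_tokens, img_tokens):
        p, _ = self.pose_from_img(pose_tokens, img_tokens, img_tokens)   # pose attends to image
        i, _ = self.img_from_pose(img_tokens, pose_tokens, pose_tokens)  # image attends to pose
        return pose_tokens + p, img_tokens + i

pose, img = torch.randn(2, 17, 256), torch.randn(2, 196, 256)   # e.g. 17 joints, 14x14 image patches
p, i = BiCrossFusion()(pose, img)
print(p.shape, i.shape)  # torch.Size([2, 17, 256]) torch.Size([2, 196, 256])
```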
Citations: 0
SCRTN: Enhancing multi-modal 3D object detection in complex environments
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-13 DOI: 10.1016/j.patcog.2026.113068
Xiufeng Zhu, Qing Shen, Zhenfang Liu, Kang Zhao, Jungang Lou
In complex application scenarios, noise and environmental interference significantly challenge the accurate association of multi-modal features for 3D object detection. To tackle this issue, this study introduces an advanced multi-modal framework, the Sparse Convolutional Residual Network. The framework integrates two key innovations: first, a region-of-interest feature fusion module called ResTransfusion, which enhances global feature associations between voxel point clouds and augmented color-based point clouds; second, a distant voxel retention sampling strategy that strategically reduces voxel count while maintaining key spatial information, thereby improving computational efficiency. Extensive experiments on the KITTI, NuScenes, and Waymo Open datasets demonstrate the effectiveness of the proposed approach. Notably, it achieves a state-of-the-art mean average precision (mAP) of 89.67% on the KITTI Hard benchmark and delivers competitive performance on NuScenes and Waymo, particularly in noisy and occluded real-world settings where it surpasses existing methods. Our project page is available at https://github.com/zhuxzhuif/SCRTN.
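The distant-voxel retention idea is simple to sketch: when the voxel count must be reduced, drop only nearby voxels (where point density is high) and always keep far-away ones, which are sparse and easily lost. The threshold, budget, and coordinate layout below are illustrative assumptions, not the SCRTN configuration.

```python
import numpy as np

def retain_distant_voxels(coords, budget, far_dist=40.0, rng=None):
    """coords: (N, 3) voxel centres in metres; returns indices of the voxels to keep."""
    rng = rng or np.random.default_rng(0)
    dist = np.linalg.norm(coords[:, :2], axis=1)          # range in the ground plane
    far_idx = np.flatnonzero(dist >= far_dist)            # distant voxels are always retained
    near_idx = np.flatnonzero(dist < far_dist)
    n_near = max(budget - len(far_idx), 0)                # spend the remaining budget nearby
    keep_near = rng.choice(near_idx, size=min(n_near, len(near_idx)), replace=False)
    return np.concatenate([far_idx, keep_near])

coords = np.random.default_rng(1).uniform(-50, 50, size=(20000, 3))
kept = retain_distant_voxels(coords, budget=8000, far_dist=45.0)
print(len(kept))   # 8000 here; if distant voxels alone exceeded the budget they would all be kept
```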
Citations: 0