Pattern Recognition Letters: Latest Publications

Enhanced facial expression manipulation through domain-aware transformation and dual-level classification with expression awareness loss in the CLIP space
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.patrec.2025.11.045
Qi Guo, Xiaodong Gu
Accurate facial expression manipulation, particularly transforming complex, non-neutral expressions into specific target states, remains challenging due to substantial disparities among expression domains. Existing methods often struggle with such domain shifts, leading to suboptimal editing results. To address these challenges, we propose a novel framework called Domain-Aware Expression Transformation with Dual-Level Label Information Classifier (DAET-DLIC). The DAET-DLIC architecture consists of two major modules. The Domain-Aware Expression Transformation module enhances domain awareness by processing latent codes to model expression-domain distributions. The Dual-Level Label Information Classifier performs classification at both the latent and image levels to ensure comprehensive and reliable label supervision. Furthermore, the Expression Awareness Loss Function provides precise control over the directionality of expression transformations, effectively reducing the risk of expression semantic drift in the CLIP (Contrastive Language-Image Pretraining) space. We validate our method through extensive quantitative and qualitative experiments on the Radboud Faces Database and CelebA-HQ datasets and introduce a comprehensive quantitative metric to assess manipulation efficacy.
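The abstract does not spell out the Expression Awareness Loss Function; as a rough illustration of how directionality is typically controlled in CLIP space, the following PyTorch sketch aligns the image-edit direction with the text direction between source and target expression prompts. This follows the common CLIP directional-loss formulation, not necessarily the paper's; all names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def directional_expression_loss(img_src, img_edit, txt_src, txt_tgt):
    """Hedged sketch of a CLIP-space directional loss (not the paper's exact
    formulation): penalize misalignment between the image editing direction
    and the text direction from source to target expression.
    All inputs are precomputed CLIP embeddings of shape (B, D) or (1, D)."""
    d_img = F.normalize(img_edit - img_src, dim=-1)    # image-space edit direction
    d_txt = F.normalize(txt_tgt - txt_src, dim=-1)     # text-space expression direction
    return (1.0 - (d_img * d_txt).sum(dim=-1)).mean()  # 1 - cosine similarity

# toy usage with random embeddings standing in for CLIP outputs
B, D = 4, 512
loss = directional_expression_loss(torch.randn(B, D), torch.randn(B, D),
                                   torch.randn(1, D), torch.randn(1, D))
print(loss.item())
```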
Citations: 0
Psychology-informed safety attributes recognition in dense crowds
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.patrec.2025.12.006
Jiaqi Yu, Yanshan Zhou, Renjie Pan, Cunyan Li, Hua Yang
Understanding dense crowd scenes requires analyzing multiple spatial and behavioral attributes. However, existing attributes often fall short of identifying potential safety risks such as panic. To address this, we propose two safety-aware crowd attributes: Crowd Motion Stability (CMS) and Individual Comfort Distance (ICD). CMS characterizes macro-level motion coordination based on the spatial-temporal consistency of crowd movement. In contrast, ICD is grounded in social psychology and captures individuals’ preferred interpersonal distance under varying densities. To accurately recognize these attributes, we propose a Psychology-Guided Safety-Aware Network (PGSAN), which integrates the Spatial-Temporal Consistency Network (STCN) and the Spatial Distance Network (SDN). Specifically, STCN is constructed based on behavioral coherence theory to measure CMS. Meanwhile, SDN models ICD by integrating dynamic crowd states and dual perceptual mechanisms (intuitive and analytical) in psychology, enabling adaptive comfort distance extraction. Features from both sub-networks are fused to support attribute recognition across diverse video scenes. Experimental results demonstrate the proposed method’s superior performance in recognizing safety attributes in dense crowds.
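The abstract defines CMS only conceptually, as spatial-temporal consistency of crowd movement. Purely as a hedged sketch of that intuition (not the paper's STCN), one could score coherence as the cosine similarity between each optical-flow vector and the mean flow of its spatial neighborhood, averaged over time:

```python
import torch
import torch.nn.functional as F

def crowd_motion_stability(flow, eps=1e-6):
    """Hedged proxy for Crowd Motion Stability (an assumption, not the
    paper's measure): mean cosine similarity between each optical-flow
    vector and the average flow of its 3x3 spatial neighborhood, averaged
    over all frames. flow: (T, 2, H, W) dense optical flow."""
    kernel = torch.ones(2, 1, 3, 3) / 9.0                   # per-channel box filter
    neigh = F.conv2d(flow, kernel, padding=1, groups=2)     # neighborhood mean flow
    cos = F.cosine_similarity(flow, neigh, dim=1, eps=eps)  # (T, H, W)
    return cos.mean().item()                                # 1.0 = fully coherent

# toy usage: uniform rightward flow scores 1.0, random flow scores much lower
coherent = torch.zeros(8, 2, 32, 32)
coherent[:, 0] = 1.0
print(crowd_motion_stability(coherent), crowd_motion_stability(torch.randn(8, 2, 32, 32)))
```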
Citations: 0
Bounds on the Natarajan dimension of a class of linear multi-class predictors
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-12-25 | DOI: 10.1016/j.patrec.2025.12.012
Yanru Pan, Benchong Li
The Natarajan dimension is a crucial metric for measuring the capacity of a learning model and analyzing the generalization ability of a classifier in multi-class classification tasks. In this paper, we present a tight upper bound on the Natarajan dimension of linear multi-class predictors based on class-sensitive feature mappings for multi-vector construction, and derive the exact Natarajan dimension when the feature dimension is 2.
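The abstract uses the Natarajan dimension without restating it; for context, the standard definition from the literature (a well-known fact, not taken from this paper) is:

```latex
% Standard definition (Natarajan, 1989). Let H \subseteq Y^X be a multi-class
% hypothesis class. A set S \subseteq X is N-shattered by H if there exist
% two labelings f_0, f_1 : S \to Y with f_0(x) \neq f_1(x) for every x \in S,
% such that every mixture of the two is realized by some hypothesis:
\[
\forall\, T \subseteq S \;\; \exists\, h \in H :\quad
h(x) = f_0(x)\ \text{for } x \in T,
\qquad
h(x) = f_1(x)\ \text{for } x \in S \setminus T .
\]
% The Natarajan dimension d_N(H) is the size of the largest N-shattered set;
% when |Y| = 2 it coincides with the VC dimension.
```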
Citations: 0
Cross-Domain detection of AI-Generated text: Integrating linguistic richness and lexical pair dispersion via deep learning
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-12-25 | DOI: 10.1016/j.patrec.2025.12.010
Jingang Wang , Tong Xiao , Hui Du , Cheng Zhang , Peng Liu
Cross-domain detection of AI-generated text is a crucial task for cybersecurity. In practical scenarios, after being trained on one or multiple known text generation sources (source domain), a detection model must be capable of effectively identifying text generated by unknown and unseen sources (target domain). Current approaches suffer from limited cross-domain generalization due to insufficient structural adaptation to domain discrepancies. To address this critical limitation, we propose RiDis, a classification model that synergizes Linguistic Richness and Lexical Pair Dispersion for cross-domain AI-generated text detection. Through comprehensive statistical analysis, we establish Linguistic Richness and Lexical Pair Dispersion as discriminative indicators for distinguishing human-authored and machine-generated texts. Our architecture features two innovative components: a Semantic Coherence Extraction Module employing long-range receptive fields to capture linguistic richness through global semantic trend analysis, and a Contextual Dependency Extraction Module utilizing localized receptive fields to quantify lexical pair dispersion via fine-grained word association patterns. The framework further incorporates domain adaptation learning to enhance cross-domain detection robustness. Extensive evaluations demonstrate that our method achieves superior detection accuracy compared to state-of-the-art baselines across multiple domains, with experimental results showing significant performance improvements on cross-domain test scenarios.
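The abstract names the two indicators without defining them. As a toy illustration only (these simple proxies are assumptions, not the paper's statistics), linguistic richness is often approximated by a type-token ratio, and lexical pair dispersion can be proxied by how widely the positional gaps of recurring word pairs spread:

```python
from collections import defaultdict
from statistics import pstdev

def type_token_ratio(tokens):
    """Crude richness proxy: distinct words / total words."""
    return len(set(tokens)) / max(len(tokens), 1)

def lexical_pair_dispersion(tokens, window=5):
    """Toy dispersion proxy (an assumption, not the paper's measure):
    std. deviation of the positional gaps at which each co-occurring
    word pair recurs, averaged over pairs seen more than once."""
    gaps = defaultdict(list)
    for i, left in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            gaps[(left, tokens[j])].append(j - i)
    spreads = [pstdev(g) for g in gaps.values() if len(g) > 1]
    return sum(spreads) / len(spreads) if spreads else 0.0

text = "the cat sat on the mat and the cat sat on the rug".split()
print(type_token_ratio(text), lexical_pair_dispersion(text))
```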
Citations: 0
The uncertainty advantage: Enhancing large language models’ reliability through chain of uncertainty reasoning
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-11-28 | DOI: 10.1016/j.patrec.2025.11.040
Zirong Peng , Xiaoming Liu , Guan Yang , Jie Liu , Xueping Peng , Yang Long
The rapid evolution of large language models (LLMs) has significantly advanced the capabilities of natural language processing (NLP), enabling a broad range of applications from text generation to complex problem-solving. However, these models often struggle with verifying the reliability of their outputs for complex tasks. Chain-of-Thought (CoT) reasoning, a technique that asks LLMs to generate step-by-step reasoning paths, attempts to address the challenge by making reasoning steps explicit, yet it falls short when assumptions of process faithfulness are unmet, leading to inaccuracies. This reveals a critical gap: the absence of a mechanism to handle inherent uncertainties in reasoning processes. To bridge this gap, we propose a novel approach, the Chain of Uncertainty Reasoning (CUR), which integrates uncertainty management into LLMs’ reasoning. CUR employs prompt-based techniques to express uncertainty effectively and leverages a structured approach to introduce uncertainty through a small number of samples. This enables the model to self-assess its uncertainty and adapt to different perspectives, thus enhancing the faithfulness of its outputs. Experimental results on the datasets of StrategyQA, HotpotQA, and FEVER demonstrate that our method significantly improves performance compared to baselines, confirming the utility of incorporating uncertainty into LLM reasoning processes. This approach offers a promising direction for enhancing the reliability and trustworthiness of LLMs’ applications in various domains. Our code is publicly available at: https://github.com/PengZirong/ChainofUncertaintyReasoning.
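The paper's actual prompts are in the linked repository; the snippet below is only a hypothetical illustration of the general idea of prompting a model to annotate each reasoning step with an explicit uncertainty level:

```python
# Hypothetical prompt template illustrating uncertainty-annotated reasoning.
# This is NOT the paper's prompt; see the linked repository for the real one.
CUR_STYLE_PROMPT = """Answer the question by reasoning step by step.
After each step, state your confidence in that step as high, medium, or low.
If any step is low-confidence, reconsider it from another perspective
before giving the final answer.

Question: {question}

Step 1:"""

def build_prompt(question: str) -> str:
    return CUR_STYLE_PROMPT.format(question=question)

print(build_prompt("Did Aristotle use a laptop?"))
```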
Citations: 0
E2GenF: Universal AIGC image detection based on edge enhanced generalizable features
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-12-09 | DOI: 10.1016/j.patrec.2025.12.001
Jian Zou , Jun Wang , Kezhong Lu , Yingxin Lai , Kaiwen Luo , Zitong Yu
Generative models, such as GANs and Diffusion models, have achieved remarkable advancements in Artificial Intelligence Generated Content (AIGC), creating images that are nearly indistinguishable from real ones. However, existing detection methods often face challenges in identifying images generated by unseen models and exhibit limited generalization across different domains. In this paper, our aim is to improve the generalization capacity of AIGC image detectors by leveraging artifact features exposed during the upsampling process. Specifically, we reexamine the upsampling operations employed by generative models and observe that, in high-frequency regions of an image (e.g., edge areas with significant pixel intensity differences), generative models often struggle to accurately replicate the pixel distributions of real images, thereby leaving behind unavoidable artifact information. Based on this observation, we propose to utilize edge detection operators to enrich edge-aware detailed clues, enabling the model to focus on these critical features. Furthermore, we design a module that combines upsampling and downsampling to analyze pixel-correlation changes introduced by interpolation artifacts. The integrated approach effectively enhances the detection of subtle generative traces, thereby improving generalization across diverse generative models. Extensive experiments on three benchmark datasets demonstrate the superior performance of the proposed approach against previous state-of-the-art methods under cross-domain testing scenarios. The code is available at https://github.com/zj56/EdgeEnhanced-DeepfakeDetection.
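As a hedged sketch of the two ideas in the abstract (edge operators to expose high-frequency regions, and up/down-sampling to probe interpolation artifacts), and not the paper's actual module, one might write:

```python
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)

def edge_magnitude(gray):
    """Sobel gradient magnitude for a (N, 1, H, W) grayscale batch: one
    standard edge-detection operator of the kind the abstract mentions."""
    gx = F.conv2d(gray, SOBEL_X, padding=1)
    gy = F.conv2d(gray, SOBEL_X.transpose(2, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def resample_residual(img, scale=2):
    """Hedged sketch of the up/down-sampling idea (not the paper's module):
    bilinear up- then down-sampling and taking the residual highlights
    pixel-correlation changes introduced by interpolation."""
    up = F.interpolate(img, scale_factor=scale, mode="bilinear", align_corners=False)
    down = F.interpolate(up, scale_factor=1 / scale, mode="bilinear", align_corners=False)
    return (img - down).abs()

x = torch.rand(1, 1, 64, 64)
print(edge_magnitude(x).shape, resample_residual(x).shape)
```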
Citations: 0
PE-ViT: Parameter-efficient vision transformer with dimension-adaptive experts and economical attention
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-12-26 | DOI: 10.1016/j.patrec.2025.12.013
Qun Li , Jiru He , Tiancheng Guo , Xinping Gao , Bir Bhanu
Recent advances in Mixture of Experts (MoE) have improved the representational capacity of Vision Transformer (ViT), but most existing methods remain constrained to token-level routing or homogeneous expert scaling, overlooking the diverse representation requirements across different layers and the parameter redundancy within attention modules. To address these problems, we propose PE-ViT, a novel parameter-efficient architecture that integrates the Dimension-adaptive Mixture of Experts (DMoE) and the Selective and Shared Attention (SSA) mechanisms to improve both computational efficiency and model performance. Specifically, DMoE adaptively allocates expert dimensions through layer-wise representation analysis and incorporates shared experts to enhance parameter utilization, while SSA reduces the parameter overhead of attention by dynamically selecting attention heads and sharing query-key projections. Experimental results demonstrate that PE-ViT consistently outperforms existing MoE methods across multiple benchmark datasets.
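The abstract mentions sharing query-key projections without giving details; below is a minimal sketch of one plausible realization (an assumption, not the paper's SSA module), where a single linear layer produces both queries and keys, roughly halving the QK parameter count relative to standard multi-head attention:

```python
import torch
import torch.nn as nn

class SharedQKAttention(nn.Module):
    """Minimal sketch of a shared query-key projection (assumed realization,
    not the paper's exact module)."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.qk = nn.Linear(dim, dim)    # one projection serves both Q and K
        self.v = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                # x: (B, N, dim)
        B, N, D = x.shape
        split = lambda t: t.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        q = k = split(self.qk(x))        # identical Q and K from the shared layer
        v = split(self.v(x))
        attn = (q @ k.transpose(-2, -1) / self.head_dim ** 0.5).softmax(dim=-1)
        return self.proj((attn @ v).transpose(1, 2).reshape(B, N, D))

print(SharedQKAttention(64)(torch.randn(2, 16, 64)).shape)  # (2, 16, 64)
```

Note that tying Q and K makes the attention-score matrix symmetric before the softmax; whether the paper accepts that trade-off or mitigates it is not stated in the abstract.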
Citations: 0
CAMN-FSOD: Class-aware memory network for few-shot infrared object detection
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-11-29 | DOI: 10.1016/j.patrec.2025.11.033
Jing Hu , Hengkang Ye , Weiwei Zhong , Zican Shi , Yifan Chen , Jie Ren , Xiaohui Zhu , Li Fan
Cross-Domain Few-Shot Object Detection (CD-FSOD) from visible to infrared domains faces a critical challenge: object classification proves significantly more error-prone than localization under fine-tuning adaptation. This stems from substantial representational discrepancies in internal object features between domains, which hinder effective transfer. To enhance the saliency of infrared internal object features and mitigate classification errors in few-shot visible-to-infrared transfer, we propose the Class-Aware Memory Network for Few-Shot Object Detection (CAMN-FSOD). CAMN explicitly memorizes high-quality internal object features during fine-tuning and leverages this memory to augment features, boosting recognition accuracy during inference. Furthermore, we introduce a two-stage Decoupled-Coupled Fine-tuning Approach (DCFA) to combat CAMN overfitting in few-shot training and maximize its effectiveness. We establish a visible-infrared FSOD benchmark dataset for evaluation. Extensive experiments demonstrate that CAMN-FSOD significantly enhances the few-shot learning capability of the base model without increasing trainable parameters. In the 1-shot setting, our method achieves 42.0 mAP50, which is 14.4 points higher than the baseline, and an overall mAP of 25.2, an improvement of 2.3 points, outperforming existing methods.
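The abstract describes the memory mechanism only at a high level. A minimal sketch of a class-aware feature memory under assumed details (EMA prototypes per class, blended back into query features at inference; not the paper's CAMN):

```python
import torch

class ClassAwareMemory:
    """Hedged sketch of a class-aware feature memory (assumed design, not
    the paper's CAMN): keep one EMA prototype per class from features seen
    during fine-tuning, then blend prototypes back into matching query
    features at inference to sharpen classification."""

    def __init__(self, num_classes, dim, momentum=0.9):
        self.proto = torch.zeros(num_classes, dim)
        self.momentum = momentum

    def write(self, feats, labels):      # feats: (N, dim), labels: (N,)
        for c in labels.unique():
            mean = feats[labels == c].mean(dim=0)
            self.proto[c] = self.momentum * self.proto[c] + (1 - self.momentum) * mean

    def augment(self, feats, pred_labels, alpha=0.5):
        """Blend each feature with the prototype of its predicted class."""
        return alpha * feats + (1 - alpha) * self.proto[pred_labels]

mem = ClassAwareMemory(num_classes=5, dim=8)
mem.write(torch.randn(10, 8), torch.randint(0, 5, (10,)))
print(mem.augment(torch.randn(3, 8), torch.tensor([0, 2, 4])).shape)  # (3, 8)
```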
Citations: 0
Special section: CIARP-24
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-11-27 | DOI: 10.1016/j.patrec.2025.11.039
Sergio A. Velastin , Ruber Hernández-García
The Iberoamerican Congress on Pattern Recognition (CIARP) is a well-established scientific event, endorsed by the International Association for Pattern Recognition (IAPR), that focuses on all aspects of pattern recognition, computer vision, artificial intelligence, data mining, and related areas. Since 1995, it has provided an important forum for researchers in IberoAmerica and beyond to present ongoing research, scientific results, and experiences on mathematical models, computational methods, and their applications in areas such as robotics, industry, health, space exploration, telecommunications, document analysis, and natural language processing. CIARP has helped strengthen regional cooperation and has contributed to the development of emerging research groups across Iberoamerica. The 27th edition was held at Universidad Católica del Maule in Talca, Chile, from November 26-29, 2024, and comprised an engaging four-day program of single-track sessions, tutorials, and invited keynotes. I had the privilege to be its Program Chair. As guest editor of this Special Section, I am pleased to introduce fully extended and peer-reviewed versions of the two papers that were awarded best paper prizes at CIARP-24. In the first one, from Argentina and Uruguay, [1] expand their work to describe a multi-sensor approach for automatic remote-sensing detection of precipitation using Conditional GANs and Recurrent Networks, of special relevance in places where precipitation events are not very common. They integrate satellite infrared brightness temperature (IR-BT) with lightning temporal signals and argue that their proposed architecture achieves better precision than alternative methods. They suggest that their results have potential applications in predicting cyanobacteria bloom events and in helping to set social policies for water resource management. This is a good example of how pattern recognition research may have a clear impact. In the second paper, from Chile, [2] extend their previous work and consider the problem of dealing with Out-Of-Distribution (OOD) data in text classification. They propose a new method, BBMOE, based on a bimodal Beta mixture distribution, which fine-tunes pre-trained models using labeled OOD data with a bimodal Beta mixture distribution regularization that enhances differentiation between near-OOD and far-OOD data in multi-class text classification. Their results show improvements over the state of the art on various datasets. We thank the authors and the reviewers for their thorough work and hope that you enjoy reading these papers and perhaps consider submitting work to a future CIARP.
Citations: 0
Quantized DiT with Hadamard transformation: A technical report
IF 3.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-01 | Epub Date: 2025-12-07 | DOI: 10.1016/j.patrec.2025.12.003
Yue Liu, Wenxi Yang, Jianbin Jiao
Diffusion Transformers (DiTs) combine the scalability of transformers with the fidelity of diffusion models, achieving state-of-the-art image generation performance. However, their high computational cost hinders efficient deployment. Post-Training Quantization (PTQ) offers a remedy, yet existing methods struggle with the temporal and spatial dynamics of DiTs. We propose a simplified PTQ framework that combines computationally efficient rotation and randomness for stable and effective DiT quantization. By replacing block-wise rotations with Hadamard transforms and zigzag permutations with random permutations, our method preserves the decorrelation effect while greatly reducing computational overhead. Experiments show that our approach maintains near full-precision performance at 8-bit and 6-bit precision levels. This work demonstrates that lightweight PTQ with structured randomness can effectively balance efficiency and fidelity, enabling practical deployment of DiTs in resource-constrained environments.
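Concretely, a hedged sketch of the recipe the abstract describes: an orthonormal Hadamard rotation plus a random permutation to decorrelate outliers, symmetric fake quantization, then inversion of both transforms. The quantization details (per-tensor symmetric scaling, bit width handling) are assumptions, not taken from the paper:

```python
import torch

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], 1), torch.cat([H, -H], 1)], 0)
    return H

def fake_quant_with_rotation(W, bits=8):
    """Hedged sketch: rotate with an orthonormal Hadamard transform plus a
    random column permutation, fake-quantize symmetrically, invert both."""
    n = W.shape[-1]
    H = hadamard(n) / n ** 0.5                       # orthonormal: H @ H.T = I
    perm = torch.randperm(n)
    inv = torch.empty_like(perm)
    inv[perm] = torch.arange(n)
    W_rot = (W @ H)[:, perm]
    qmax = 2 ** (bits - 1) - 1
    scale = W_rot.abs().max() / qmax                 # per-tensor symmetric scale
    W_q = torch.round(W_rot / scale).clamp(-qmax, qmax) * scale
    return W_q[:, inv] @ H.T                         # undo permutation and rotation

# toy comparison: an injected outlier hurts naive quantization far more
W = torch.randn(256, 256)
W[0, 0] = 50.0
scale = W.abs().max() / 127
naive = torch.round(W / scale).clamp(-127, 127) * scale
err = lambda A: (A - W).pow(2).mean().item()
print(err(naive), err(fake_quant_with_rotation(W)))
```

The rotation spreads the outlier's mass across all coordinates, shrinking the per-tensor scale and hence the rounding error, which is the decorrelation effect the abstract refers to.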
Citations: 0