
Latest Articles in Pattern Recognition

Semantic change detection of roads and bridges: A fine-grained dataset and multimodal frequency-driven detector
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-01-29 DOI: 10.1016/j.patcog.2026.113191
Qing-Ling Shu, Si-Bao Chen, Xiao Wang, Zhi-Hui You, Wei Lu, Jin Tang, Bin Luo
Accurate detection of road and bridge changes is crucial for urban planning and transportation management, yet presents unique challenges for general change detection (CD). Key difficulties arise from maintaining the continuity of roads and bridges as linear structures and disambiguating visually similar land covers (e.g., road construction vs. bare land). Existing spatial-domain models struggle with these issues, further hindered by the lack of specialized, semantically rich datasets. To fill these gaps, we introduce the Road and Bridge Semantic Change Detection (RB-SCD) dataset. Unlike existing benchmarks that primarily focus on general land cover changes, RB-SCD is the first to systematically target 11 specific semantic change transition types (e.g., water → bridge) anchored to traffic infrastructure. This enables a detailed analysis of traffic infrastructure evolution. Building on this, we propose a novel framework, the Multimodal Frequency-Driven Change Detector (MFDCD). MFDCD integrates multimodal features in the frequency domain through two key components: (1) the Dynamic Frequency Coupler (DFC), which leverages wavelet transform to decompose visual features, enabling it to robustly model the continuity of linear transitions; and (2) the Textual Frequency Filter (TFF), which encodes semantic priors into frequency-domain graphs and applies filter banks to align them with visual features, resolving semantic ambiguities. Experiments demonstrate the state-of-the-art performance of MFDCD on RB-SCD and three public CD datasets. The code will be available at https://github.com/DaGuangDaGuang/RB-SCD.
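To make the frequency-domain idea concrete, here is a minimal sketch (not the authors' code; function names and parameters are illustrative) of how a visual feature map can be split into low- and high-frequency sub-bands with a 2D wavelet transform, the operation the Dynamic Frequency Coupler builds on.

```python
# Illustrative sketch: per-channel 2D wavelet decomposition of a feature map.
# The low-frequency (approximation) band can be fused separately from the
# high-frequency (detail) bands, as in the DFC idea described above.
import numpy as np
import pywt

def decompose_feature_map(feat: np.ndarray, wavelet: str = "haar"):
    """Split a (C, H, W) feature map into low- and high-frequency sub-bands."""
    lows, highs = [], []
    for channel in feat:                       # per-channel 2D DWT
        cA, (cH, cV, cD) = pywt.dwt2(channel, wavelet)
        lows.append(cA)                        # approximation = low frequency
        highs.append(np.stack([cH, cV, cD]))   # detail = high frequency
    return np.stack(lows), np.stack(highs)

feat = np.random.rand(64, 32, 32).astype(np.float32)   # toy feature map
low, high = decompose_feature_map(feat)
print(low.shape, high.shape)                            # (64, 16, 16) (64, 3, 16, 16)
```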
Citations: 0
DAFS: A distribution-aware hierarchical feature selection method for long-tailed classification
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-02-06 DOI: 10.1016/j.patcog.2026.113218
Yang Zhang, Jie Shi, Yanfang Liu, Hong Zhao
Feature selection for long-tailed data has become a research hotspot due to the high-dimensional features and imbalanced distributions of real-world data. Although some existing methods effectively balance the data, correctly classifying tail classes and distinguishing easily confused classes in long-tailed data remain two significant challenges. To address these issues, we propose a distribution-aware hierarchical feature selection method for long-tailed classification (DAFS). First, we embed sample-distribution-based punishment coefficients into the loss and regularization terms to balance feature weights for head and tail classes, which enhances the accuracy of classifying tail classes. Then, we use multi-granularity knowledge and similarities among classes to design feature differentiation regularization terms that improve the distinguishability of easily confused classes. Finally, extensive experimental results demonstrate that DAFS outperforms ten traditional and state-of-the-art feature selection methods on different datasets.
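A minimal sketch of the general idea behind distribution-based punishment coefficients, assuming nothing beyond the abstract: the per-class loss weight grows as the class frequency shrinks, so errors on tail classes are penalized more heavily. The weighting scheme below is an illustrative stand-in, not the paper's exact formulation.

```python
# Illustrative: inverse-frequency punishment coefficients in a classification loss.
import torch
import torch.nn.functional as F

def distribution_aware_loss(logits, targets, class_counts):
    freq = class_counts / class_counts.sum()
    weights = 1.0 / freq                    # rarer class -> larger punishment
    weights = weights / weights.mean()      # normalize around 1 for stability
    return F.cross_entropy(logits, targets, weight=weights)

class_counts = torch.tensor([500.0, 100.0, 10.0])   # long-tailed distribution
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
print(distribution_aware_loss(logits, targets, class_counts))
```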
Citations: 0
CD-DPC: Centrifugal degree based density peaks clustering algorithm
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-02-06 DOI: 10.1016/j.patcog.2026.113223
Linlin Ma, Hui Li, Xincheng Liu, Huihui Chu, Yue Guan, Yuzhen Zhao, Yawen Chen, Da Wang, Wenke Zang
The Density Peak Clustering (DPC) algorithm is simple and efficient. However, DPC and its variants identify clusters only by locating the centers of single or multiple sparse clusters, without considering the coherence of the clustering structure, which often prevents clusters from being captured accurately. In addition, relative distance and density are used only to identify cluster centers and do not describe the relative positions of the remaining sample points. To address these issues, this paper proposes an adaptive density peak clustering algorithm based on centrifugal degree (CD-DPC). The centrifugal degree reflects the relative position of a sample point within its cluster. CD-DPC categorizes sample points into support, structural, coherent, and decoration points based on centrifugal degree. On this basis, the number of clusters is obtained automatically by applying different association methods to sample points with different centrifugal degrees, which greatly reduces the influence of human factors. Finally, the clustering results are further improved by introducing shared nearest neighbors for the final association of decoration points. Extensive experiments on synthetic and UCI datasets show that this algorithm outperforms other comparative algorithms.
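For context, a minimal sketch of the standard DPC quantities the abstract refers to, local density rho and relative distance delta, from which cluster centers are picked; the centrifugal degree itself is the paper's contribution and is not reproduced here. Parameter choices are illustrative.

```python
# Illustrative: the two classical DPC quantities (rho, delta) with a Gaussian kernel.
import numpy as np
from scipy.spatial.distance import cdist

def dpc_rho_delta(X, dc=0.5):
    d = cdist(X, X)
    rho = np.exp(-((d / dc) ** 2)).sum(axis=1) - 1.0    # exclude the self term
    delta = np.zeros(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]               # points of higher density
        delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
    return rho, delta

X = np.random.rand(100, 2)
rho, delta = dpc_rho_delta(X)
centers = np.argsort(rho * delta)[-3:]   # candidate centers: largest rho * delta
print(centers)
```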
Citations: 0
MonoTDF: Temporal deep feature learning for generalizable monocular 3D object detection
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-01-28 DOI: 10.1016/j.patcog.2026.113184
Xiu-Zhi Chen, Yi-Kai Chiu, Chih-Sheng Huang, Yen-Lin Chen
Monocular 3D object detection has gained significant attention due to its cost-effectiveness and practicality in real-world applications. However, existing monocular methods often struggle with depth estimation and spatial consistency, limiting their accuracy in complex environments. In this work, we introduce a Temporal Deep Feature Learning framework, which enhances monocular 3D object detection by integrating temporal features across sequential frames. Our approach leverages a novel deep feature auxiliary module based on convolutional recurrent structures, effectively capturing spatiotemporal information to improve depth perception and detection robustness. The proposed module is model-agnostic and can be seamlessly integrated into various existing monocular detection frameworks. Extensive experiments across multiple state-of-the-art monocular 3D object detection models demonstrate consistent performance improvements, particularly in detecting small or partially occluded objects. Our results highlight the effectiveness and generalizability of the proposed approach, making it a promising solution for real-world autonomous perception systems. The source code of this work is at: https://github.com/Shuray36/MonoTDF-Temporal-Deep-Feature-Learning-for-Generalizable-Monocular-3D-Object-Detection.
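A minimal sketch (an assumption about the general mechanism, not the paper's architecture) of how a convolutional recurrent cell can fuse per-frame feature maps into a temporally enriched representation, in the spirit of the deep feature auxiliary module described above.

```python
# Illustrative ConvGRU cell that aggregates per-frame CNN feature maps over time.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        p = kernel_size // 2
        self.gates = nn.Conv2d(2 * channels, 2 * channels, kernel_size, padding=p)
        self.cand = nn.Conv2d(2 * channels, channels, kernel_size, padding=p)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde       # updated temporal state

frames = torch.randn(4, 2, 64, 40, 40)          # (T, B, C, H, W) feature maps
cell, h = ConvGRUCell(64), torch.zeros(2, 64, 40, 40)
for x in frames:                                # fuse features across sequential frames
    h = cell(x, h)
print(h.shape)                                  # temporally aggregated features
```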
Citations: 0
Multimodal behavioral analysis for autism spectrum disorder assessment
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-02-07 DOI: 10.1016/j.patcog.2026.113155
Yunxiu Zhao, Shigang Wang, Feiyong Jia, Honghua Li, Jinyang Wu, Jian Wei, Yan Zhao
Scale-dependent approaches have shown great potential in diagnosing autism spectrum disorder (ASD). However, such methods often involve lengthy evaluation procedures and require substantial resources, including trained professionals and specialized equipment, which significantly limit their scalability and feasibility for large-scale or routine clinical assessments. In this paper, we propose a novel multimodal behavioral signal analysis (MBSA) approach for the intelligent assessment of ASD. Specifically, we first leverage speech and visual cues to identify the Target Movement Area (TMA), thereby enhancing recognition efficiency. Then, an adaptive fine-tuning strategy is employed to improve the generalization and efficiency of pre-trained models in small-sample action recognition tasks. An attention-based detection method is further incorporated to strengthen the semantic understanding of observed behavioral patterns. To enable effective ASD classification, we develop a behavioral quantification scoring method that structurally models the relationship between behavioral features and disease indicators. We collected a multimodal behavioral database of 160 participants in a real clinical setting and assessed ASD using this data. Extensive experiments demonstrate that the proposed MBSA approach significantly outperforms many state-of-the-art methods. With competitive performance and a solid theoretical foundation, MBSA provides a practical and scalable solution for ASD screening and holds promise for broader applications in the intelligent diagnosis of other neurodevelopmental disorders.
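As a heavily hedged illustration of behavioral quantification scoring in general, the snippet below maps a toy behavioral feature vector to an assessment probability with plain logistic regression; the paper's actual scoring model is structured and multimodal, and all data, dimensions, and names below are synthetic assumptions.

```python
# Toy stand-in for a behavioral quantification score (not the MBSA method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(160, 12))                 # 160 participants, 12 toy behavioral features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ASD / non-ASD labels
scorer = LogisticRegression(max_iter=1000).fit(X, y)
print(scorer.predict_proba(X[:3])[:, 1])       # per-participant quantification scores
```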
Citations: 0
Continual relation extraction with wake-sleep memory consolidation
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-01-29 DOI: 10.1016/j.patcog.2026.113192
Tingting Hang, Ya Guo, Jun Huang, Yirui Wu, Umapada Pal, Shivakumara Palaiahnakote
Continual Relation Extraction (CRE) has achieved significant success due to its ability to adapt to new relations without frequent retraining. However, existing methods still face challenges such as overfitting and representation bias. Inspired by the wake-sleep memory consolidation process of the human brain, this paper proposes a Wake-Sleep Memory Consolidation (WSMC) framework to address these issues systematically. During the wake phase, the model simulates the brain’s information processing mechanism, quickly encoding new relations and storing them in short-term memory. We also introduce the Experience Iterative Learning (EIL) approach, which dynamically adjusts the distribution of relation samples. This approach corrects the model’s representation bias and enhances memory stability through experience replay. During the sleep phase, the model consolidates existing knowledge by replaying long-term memory. Moreover, the framework generates diverse dream data from existing memory sets, thereby increasing the diversity of the training data and improving the model’s generalization capability. Experimental results show that WSMC significantly outperforms other CRE baseline methods on FewRel and TACRED datasets, demonstrating its superior performance compared to baseline methods. Our source code is available at https://github.com/Gyanis9/WSMC.git.
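A minimal sketch (assumed, not the authors' implementation) of the experience-replay memory that wake-sleep style consolidation relies on: samples of new relations are stored during the wake phase and replayed alongside old ones during the sleep phase.

```python
# Illustrative bounded replay memory for continual relation extraction.
import random
from collections import defaultdict

class ReplayMemory:
    def __init__(self, per_relation=20):
        self.per_relation = per_relation
        self.store = defaultdict(list)

    def add(self, relation, sample):            # wake phase: memorize new relations
        bucket = self.store[relation]
        bucket.append(sample)
        if len(bucket) > self.per_relation:     # keep a bounded prototype set
            bucket.pop(random.randrange(len(bucket)))

    def replay(self, k=32):                     # sleep phase: consolidate old knowledge
        pool = [s for b in self.store.values() for s in b]
        return random.sample(pool, min(k, len(pool)))

mem = ReplayMemory()
for i in range(100):
    mem.add(relation=i % 5, sample={"text": f"sentence {i}", "label": i % 5})
print(len(mem.replay(16)))
```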
Citations: 0
FPMT: Fast and precise high-resolution makeup transfer via Laplacian pyramid
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-02-02 DOI: 10.1016/j.patcog.2026.113221
Zhaoyang Sun, Shengwu Xiong, Yi Rong
In this paper, we focus on accelerating the high-resolution makeup transfer process without compromising generative performance. To this end, we propose a Fast and Precise Makeup Transfer (FPMT) framework based on the Laplacian pyramid. In FPMT, we reveal that most makeup changes are concentrated in the low-frequency component, while a small amount of color- and texture-related detail is contained in the high-frequency components. Leveraging this insight, FPMT employs a lightweight encoder-decoder network to perform makeup transfer on the low-frequency component of the inputs, thus improving efficiency. For each high-frequency component, FPMT implements a tiny refinement network that progressively predicts a mask and adaptively refines the makeup details to ensure transfer quality. By stacking the computationally efficient refinement network, FPMT can process higher-resolution images, demonstrating its flexibility and scalability. Using a single GTX 1660Ti GPU, FPMT can achieve an inference speed of about 42 FPS for input images with 1024 × 1024 resolution, which is much faster than state-of-the-art methods. Extensive quantitative and qualitative analyses validate the efficiency and effectiveness of the proposed FPMT framework. The source code is available at: https://github.com/Snowfallingplum/FPMT.
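To illustrate the decomposition FPMT builds on, the sketch below constructs a Laplacian pyramid with OpenCV and returns the high-frequency detail bands plus the low-frequency base; it is illustrative only and not the paper's pipeline.

```python
# Illustrative Laplacian-pyramid decomposition of an image.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    gaussians = [img.astype(np.float32)]
    for _ in range(levels):
        gaussians.append(cv2.pyrDown(gaussians[-1]))
    laplacians = []
    for i in range(levels):
        up = cv2.pyrUp(gaussians[i + 1], dstsize=gaussians[i].shape[1::-1])
        laplacians.append(gaussians[i] - up)        # high-frequency detail bands
    return laplacians, gaussians[-1]                # details + low-frequency base

img = np.random.randint(0, 255, (1024, 1024, 3), np.uint8)   # toy 1024x1024 image
highs, low = laplacian_pyramid(img)
print([h.shape for h in highs], low.shape)
```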
Citations: 0
MC-MVSNet: When multi-view stereo meets monocular cues
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-01-27 DOI: 10.1016/j.patcog.2026.113166
Xincheng Tang, Mengqi Rong, Bin Fan, Hongmin Liu, Shuhan Shen
Learning-based Multi-View Stereo (MVS) has become a key technique for reconstructing dense 3D point clouds from multiple calibrated images. However, real-world challenges such as occlusions and textureless regions often hinder accurate depth estimation. Recent advances in monocular Vision Foundation Models (VFMs) have demonstrated strong generalization capabilities in scene understanding, offering new opportunities to enhance the robustness of MVS. In this paper, we present MC-MVSNet, a novel MVS framework that integrates diverse monocular cues to improve depth estimation under challenging conditions. During feature extraction, we fuse conventional CNN features with VFM-derived representations through a hybrid feature fusion module, effectively combining local details and global context for more discriminative feature matching. We also propose a cost volume filtering module that enforces cross-view geometric consistency on monocular depth predictions, pruning redundant depth hypotheses to reduce the depth search space and mitigate matching ambiguity. Additionally, we leverage monocular surface normals to construct a curved patch cost aggregation module that aggregates costs over geometry-aligned curved patches, which improves depth estimation accuracy in curved and textureless regions. Extensive experiments on the DTU, Tanks and Temples, and ETH3D benchmarks demonstrate that MC-MVSNet achieves state-of-the-art performance and exhibits strong generalization capabilities, validating the effectiveness and robustness of the proposed method.
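A minimal sketch (an assumption about the mechanism, not the released code) of monocular-guided hypothesis pruning for a plane-sweep cost volume: only depth hypotheses within a relative band around the monocular depth prediction are kept, shrinking the per-pixel search space.

```python
# Illustrative pruning of plane-sweep depth hypotheses with a monocular prior.
import numpy as np

def prune_hypotheses(depth_hyps, mono_depth, rel_band=0.2):
    """depth_hyps: (D,) candidate depths; mono_depth: (H, W) monocular prediction."""
    d = depth_hyps[:, None, None]                       # (D, 1, 1)
    lower = mono_depth * (1.0 - rel_band)
    upper = mono_depth * (1.0 + rel_band)
    return (d >= lower) & (d <= upper)                  # (D, H, W) keep-mask

depth_hyps = np.linspace(0.5, 10.0, 64)                 # plane-sweep hypotheses
mono_depth = np.full((32, 32), 4.0)                     # toy monocular depth prior
mask = prune_hypotheses(depth_hyps, mono_depth)
print(mask.sum(axis=0).mean())                          # avg. hypotheses kept per pixel
```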
Citations: 0
Attribute graph adjusted trace ratio linear discriminant analysis for feature extraction
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-01-22 DOI: 10.1016/j.patcog.2026.113136
Quan Wang, Hao Lei, Fei Wang, Xinpei Wen, Zhiping Lin, Feiping Nie
Trace Ratio Linear Discriminant Analysis (TRLDA) is an appealing supervised feature extraction method because it explicitly reflects the Euclidean distances between and within classes of projected samples while preserving data similarity through its orthogonal constraint. However, TRLDA fails to account for inter-attribute correlations, which may limit its discriminant capability. To overcome this limitation, we propose Attribute Graph Adjusted Trace Ratio Linear Discriminant Analysis (AGATRLDA), a novel method that incorporates attribute-level relationships into the discriminant projection matrix. In our approach, each attribute is represented as a point formed by the values of that attribute across all samples. An attribute graph is then constructed by connecting these attribute points with edges weighted according to their pairwise similarity. By integrating the Laplacian matrix of this attribute graph into the optimization framework, AGATRLDA adjusts the discriminant projection matrix to account for inter-attribute correlations. This adjustment encourages attributes with higher similarity to have more aligned coefficients in the projection matrix, thereby improving discriminative performance. Experimental results demonstrate that AGATRLDA consistently outperforms the original TRLDA method as well as several state-of-the-art feature extraction techniques, validating the benefit of incorporating inter-attribute correlations in the discriminant learning process.
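A minimal sketch of the attribute graph described above, under the assumption of a Gaussian similarity between attribute points: each attribute is represented by its column of values over all samples, pairwise similarities form an adjacency matrix W, and the graph Laplacian L = D - W follows.

```python
# Illustrative construction of the attribute graph and its Laplacian.
import numpy as np
from scipy.spatial.distance import cdist

def attribute_graph_laplacian(X, sigma=1.0):
    A = X.T                                        # rows = attribute points (values across samples)
    d2 = cdist(A, A, "sqeuclidean")
    W = np.exp(-d2 / (2 * sigma ** 2))             # pairwise attribute similarity
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian L = D - W
    return W, L

X = np.random.rand(100, 8)                         # 100 samples, 8 attributes
W, L = attribute_graph_laplacian(X)
print(W.shape, L.shape)                            # (8, 8) (8, 8)
```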
Citations: 0
A two-stage learning framework with a beam image dataset for automatic laser resonator alignment
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-08-01 Epub Date: 2026-01-27 DOI: 10.1016/j.patcog.2026.113145
Shaoxiang Guo, Donald Risbridger, David A. Robb, Xianwen Kong, M. J. Daniel Esser, Michael J. Chantler, Richard M. Carter, Mustafa Suphi Erden
Accurate alignment of a laser resonator is essential for upscaling industrial laser manufacturing and precision processing. However, traditional manual or semi-automatic methods depend heavily on operator expertise and struggle with the interdependence among multiple alignment parameters. To tackle this, we introduce the first real-world image dataset for automatic laser resonator alignment, collected on a laboratory-built resonator setup. It comprises over 6000 beam profiler images annotated with four key alignment parameters (intracavity iris aperture diameter, output coupler pitch and yaw actuator displacements, and axial position of the output coupler), with over 500,000 paired samples for data-driven alignment. Given a pair of beam profiler images exhibiting distinct beam patterns under different configurations, the system predicts the control-parameter changes required to realign the resonator. Leveraging this dataset, we propose a novel two-stage deep learning framework for automatic resonator alignment. In Stage 1, a multi-scale CNN, augmented with cross-attention and correlation-difference modules, extracts features and outputs an initial coarse prediction of the alignment parameters. In Stage 2, a feature-difference map is computed by subtracting the paired feature representations and fed into an iterative refinement module to correct residual misalignments. The final prediction combines the coarse and refined estimates, integrating global context with fine-grained corrections for accurate inference. Experiments on our dataset, and on a different instance of the same physical system from the one used to train the CNN, suggest accuracy and practicality superior to manual alignment.
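A minimal sketch (assumed, not the paper's Stage 1 network) of the coarse-prediction idea: a shared CNN encodes the two beam-profiler images, the features and their difference are concatenated, and a small head regresses the four alignment-parameter changes.

```python
# Illustrative siamese-style coarse regressor for paired beam images.
import torch
import torch.nn as nn

class CoarseAlignNet(nn.Module):
    def __init__(self, n_params=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32 * 3, 64), nn.ReLU(),
                                  nn.Linear(64, n_params))

    def forward(self, img_a, img_b):
        fa, fb = self.encoder(img_a), self.encoder(img_b)
        return self.head(torch.cat([fa, fb, fa - fb], dim=1))

net = CoarseAlignNet()
a, b = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)   # toy beam images
print(net(a, b).shape)                      # (2, 4) predicted parameter changes
```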
Citations: 0