
Latest Publications in Pattern Recognition

Fast and robust outlier detection: A granular-ball center isolation and region consistency approach
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-03 | DOI: 10.1016/j.patcog.2026.113212
Rongxiang Wang, Jihong Wan, Xiaoping Li, Shuaishuai Tan
Outlier detection is an essential task in data mining that focuses on identifying abnormal objects deviating from the normal data distribution. The k-nearest neighbors-based detection method is one of the most widely used techniques. However, as the data scale increases, finding the k nearest neighbors of every object becomes extremely time-consuming. Additionally, if an object's neighbors contain noise, the computation of its relationships with those neighbors can be distorted, which degrades detection performance. To address these issues, this paper proposes FROD, a fast and robust outlier detection method based on granular-ball (GB) center isolation and region consistency. Specifically, GB generation is the first step: the dataset is covered by GBs of different granularities. Then, the GB center isolation (GBCI) is calculated to evaluate how isolated each GB center is relative to the other GB centers; from a global perspective, GBCI indirectly reflects the position and isolation of each GB center within the overall data distribution. Furthermore, the GB center region consistency (GBCRC) of an object measures the closeness between the object and its GB center neighborhood; from a local perspective, GBCRC reflects the correlation between the object and the data distribution within the GB center neighborhood to which it belongs. Finally, GBCI and GBCRC are combined into an outlier factor for each object, and a corresponding detection algorithm is designed. Experimental results show that FROD performs excellently in terms of detection efficiency and accuracy, and demonstrates robustness in noisy environments.
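The abstract only sketches the pipeline, so the following minimal NumPy example illustrates the general idea rather than the authors' algorithm: cover the data with crude granular balls, score each ball center with a global isolation term (GBCI-like) and each object with a local region-consistency term (GBCRC-like), then combine the two. The ball-generation heuristic, the scoring formulas, and the way the two terms are combined are all assumptions made for illustration.

```python
import numpy as np

def generate_granular_balls(X, max_radius=1.0, rng=np.random.default_rng(0)):
    """Crude granular-ball generation: recursively split a point set in two
    (2-means style) until each ball's mean radius is small enough. This is
    only a stand-in for the GB generation step described in the paper."""
    balls = []
    def split(points):
        center = points.mean(axis=0)
        radius = np.linalg.norm(points - center, axis=1).mean()
        if radius <= max_radius or len(points) <= 2:
            balls.append((center, radius))
            return
        seeds = points[rng.choice(len(points), 2, replace=False)]
        labels = np.argmin(
            np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=2), axis=1)
        left, right = points[labels == 0], points[labels == 1]
        if len(left) == 0 or len(right) == 0:   # degenerate split: stop here
            balls.append((center, radius))
            return
        split(left)
        split(right)
    split(X)
    return balls

def outlier_factors(X, balls):
    centers = np.array([c for c, _ in balls])
    radii = np.array([r for _, r in balls])
    # global term (GBCI-like): mean distance of each ball center to the others
    d_cc = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    gbci = d_cc.sum(axis=1) / max(len(centers) - 1, 1)
    # local term (GBCRC-like): distance of an object to its nearest ball center,
    # scaled by that ball's radius (larger => less consistent with the region)
    d_xc = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    nearest = d_xc.argmin(axis=1)
    gbcrc = d_xc[np.arange(len(X)), nearest] / (radii[nearest] + 1e-12)
    # combine the global and local evidence into one outlier factor per object
    return gbci[nearest] * (1.0 + gbcrc)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), [[8.0, 8.0]]])  # one far-away point
scores = outlier_factors(X, generate_granular_balls(X))
print(np.argsort(scores)[-3:])   # the injected point (index 200) should rank on top
```

In this toy run the lone point ends up in a small, highly isolated ball, so the global GBCI term dominates its score; scoring against a modest number of ball centers instead of running a per-object k-nearest-neighbor search is what the abstract identifies as the source of the speedup.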
Citations: 0
SDC-Net: Semi-supervised breast ultrasound lesion segmentation via semantic decoupling
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-03 | DOI: 10.1016/j.patcog.2026.113216
Jiansong Zhang, Zhuoqin Yang, Xiaoling Luo, Shaozheng He, Linlin Shen
Semi-supervised breast ultrasound (BUS) lesion boundary segmentation is a promising technique for enhancing model generalization, with the potential to address the high annotation costs and data scarcity of medical imaging. However, existing semi-supervised strategies face significant challenges in this domain because of the low contrast of BUS images and the difficulty of differentiating lesion areas from the background. To address this, we propose a novel semi-supervised strategy for breast ultrasound lesion boundary segmentation. Unlike traditional approaches that rely on visual feature understanding, we redefine the semi-supervised segmentation problem as semantic decoupling between foreground and background in lesion images. Based on this insight, we propose a text-guided semi-supervised segmentation framework for breast ultrasound lesions. It first learns disentangled representations through contrastive learning between the text and image features of foreground and background, then strengthens the image encoder's semantic understanding through supervised learning on the partially labeled data. Subsequently, the pretrained encoder-decoder is guided by weak prompts on unlabeled data to generate robust pseudo-labels, progressively achieving semantic decoupling of foreground and background for lesion segmentation. We validated the effectiveness of this method on three publicly available breast ultrasound datasets, achieving consistently superior segmentation performance compared with existing semi-supervised approaches. The code will be released.
Citations: 0
SCALAR: Spatial-concept alignment for robust vision in harsh open world
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-03 | DOI: 10.1016/j.patcog.2026.113203
Xiaoyu Yang, Lijian Xu, Xingyu Zeng, Xiaosong Wang, Hongsheng Li, Shaoting Zhang
Foundation models have recently transformed visual-linguistic representation learning, yet their robustness under the adverse imaging conditions of open worlds remains insufficiently understood. In this work, we introduce SCALAR, a scene-aware framework that endows multi-modal large language models with an enhanced capability for robust spatial-concept alignment in the degraded visual environments of open worlds. SCALAR proceeds in two complementary stages. The supervised alignment stage reconstructs hierarchical concept chains from visual-linguistic corpora, thereby enabling efficient spatial relationship decoding. The subsequent reinforced fine-tuning stage dispenses with annotations and leverages a consistency-driven reward to facilitate open-world self-evolution, yielding improved adaptability across diverse degraded domains. Crucially, SCALAR jointly optimizes multi-dimensional spatial representations and heterogeneous knowledge structures, thereby fostering resilience and generalization beyond canonical benchmarks. Extensive evaluations across five tasks and eight large-scale datasets demonstrate the efficacy of SCALAR in advancing the state of the art in visual grounding and complex scene understanding, even in challenging open-world environments with harsh visual conditions. Comprehensive ablation studies further elucidate the contributions of reinforced fine-tuning and multi-task joint optimization. Finally, to encourage future research, we provide a new multi-task visual grounding dataset emphasizing fine-grained scene-object relations under degradation, along with code: https://github.com/AnonymGiant/SCALAR.
Citations: 0
MG-TVMF: Multi-grained text-video matching and fusing for weakly supervised video anomaly detection
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-03 | DOI: 10.1016/j.patcog.2026.113201
Ping He, Xiaonan Gao, Huibin Li
Weakly supervised video anomaly detection (WS-VAD) often suffers from false alarms and incomplete localization due to the lack of precise temporal annotations. To address these limitations, we propose a novel method, multi-grained text-video matching and fusing (MG-TVMF), which leverages semantic cues from anomaly category text labels to enhance both the accuracy and the completeness of anomaly localization. MG-TVMF integrates two complementary branches. The MG-TVM branch improves localization accuracy through a hierarchical structure comprising a coarse-grained classification module and two fine-grained matching modules: a video-text matching (VTM) module for global semantic alignment and a segment-text matching (STM) module that performs local (i.e., segment-level) text alignment via an optimal transport algorithm. Meanwhile, the MG-TVF branch enhances localization completeness by prepending a global video-level text prompt to each segment-level caption for multi-grained textual fusion, and by reconstructing the masked anomaly-related caption of the top-scoring segment using video segment features and anomaly scores. Extensive experiments on the UCF-Crime and XD-Violence datasets demonstrate the effectiveness of the proposed VTM and STM modules as well as the MG-TVF branch, and the proposed MG-TVMF method achieves state-of-the-art performance on the UCF-Crime, XD-Violence, and ShanghaiTech datasets.
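The abstract states that the STM module aligns video segments with caption tokens via optimal transport but gives no formulation, so the sketch below shows a generic entropy-regularized Sinkhorn solver in NumPy as one way such a soft segment-to-token assignment could be computed. The cosine-distance cost, the regularization strength eps, and the uniform marginals are assumptions, not MG-TVMF's actual choices.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport with uniform marginals.
    Returns a soft assignment (transport plan) between the two sets."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):                 # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]       # transport plan P = diag(u) K diag(v)

# Toy example: 4 video-segment embeddings matched against 3 caption-token embeddings.
rng = np.random.default_rng(0)
seg = rng.normal(size=(4, 16))
txt = rng.normal(size=(3, 16))
seg /= np.linalg.norm(seg, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
cost = 1.0 - seg @ txt.T                     # cosine distance as the transport cost
P = sinkhorn(cost)
print(P.sum(axis=1), P.sum(axis=0))          # row sums ~1/4, column sums ~1/3
```

The resulting plan P can serve as a soft segment-text correspondence; how MG-TVMF turns it into a matching loss is not specified in the abstract.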
Citations: 0
Learning modality knowledge with proxy for RGB-Infrared object detection
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-03 | DOI: 10.1016/j.patcog.2026.113227
You Ma, Lin Chai, Shihan Mao, Yucheng Zhang
RGB-infrared object detection aims to improve detection performance in complex environments by integrating complementary information from RGB and infrared images. While transformer-based methods have advanced this field by directly modeling dense relationships between modality tokens to enable cross-modality long-range interactions, they neglect the inherent discrepancies in feature distributions across modalities. Such discrepancies weaken the reliability of the established relationships, thereby restricting the effective exploitation of complementary information between modalities. To alleviate this problem, we propose a framework for learning modality knowledge with a proxy. The core innovation lies in the design of a proxy-guided cross-modality feature fusion module, which realizes dual-modality interactions by using lightweight proxy tokens as intermediate representations. Specifically, self-attention is first applied so that the proxy tokens learn the global information of each modality; then, the relationship between the dual-modality proxy tokens is constructed to capture complementary cross-modality information while mitigating the interference of modality discrepancies; finally, the knowledge in the updated proxy tokens is fed back to each modality through cross-attention to enhance its features. Additionally, a knowledge-decoupled mixture-of-experts module is designed to effectively fuse the enhanced features of the two modalities. This module leverages multiple gating networks to assign modality-specific and modality-shared knowledge to separate expert groups for learning, thus highlighting the advantageous features of the different modalities. Extensive experiments on four RGB-infrared datasets demonstrate that our method outperforms existing state-of-the-art methods.
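The three-step proxy interaction described above can be illustrated with a small NumPy sketch of single-head scaled dot-product attention: proxies first summarize their own modality, the two small proxy sets then exchange information, and the modality tokens finally cross-attend to the updated proxies. This is a simplification for illustration only; the single-head formulation, the absence of learned projections and normalization, and the token counts are assumptions rather than the paper's architecture.

```python
import numpy as np

def attention(q, k, v):
    """Single-head scaled dot-product attention (no learned projections)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
d, n_tokens, n_proxy = 32, 196, 8
rgb = rng.normal(size=(n_tokens, d))          # RGB patch tokens
ir = rng.normal(size=(n_tokens, d))           # infrared patch tokens
proxy_rgb = rng.normal(size=(n_proxy, d))     # lightweight proxy tokens
proxy_ir = rng.normal(size=(n_proxy, d))

# 1) proxies summarize their own modality: self-attention over [proxy; patch tokens]
all_rgb = np.vstack([proxy_rgb, rgb])
proxy_rgb = attention(all_rgb, all_rgb, all_rgb)[:n_proxy]
all_ir = np.vstack([proxy_ir, ir])
proxy_ir = attention(all_ir, all_ir, all_ir)[:n_proxy]

# 2) cross-modal exchange happens only between the two small proxy sets
proxy_rgb, proxy_ir = (proxy_rgb + attention(proxy_rgb, proxy_ir, proxy_ir),
                       proxy_ir + attention(proxy_ir, proxy_rgb, proxy_rgb))

# 3) knowledge flows back: modality tokens cross-attend to the updated proxies
rgb_enhanced = rgb + attention(rgb, proxy_rgb, proxy_rgb)
ir_enhanced = ir + attention(ir, proxy_ir, proxy_ir)
print(rgb_enhanced.shape, ir_enhanced.shape)   # (196, 32) (196, 32)
```

Routing the cross-modal step through a handful of proxy tokens avoids a dense token-to-token interaction between the two modalities, which is one practical benefit of using lightweight proxies as intermediaries.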
Citations: 0
Beyond similarity: Mutual information-guided retrieval for in-context learning in VQA
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-03 | DOI: 10.1016/j.patcog.2026.113214
Jun Zhang, Zezhong Lv, Jian Zhao, Yan Wang, Tianle Zhang, Yuchen Yuan, Yuchu Jiang, Chi Zhang, Wenqi Ren, Xuelong Li
Visual Question Answering (VQA) is a challenging multi-modal task. In-context Learning (ICL) has shown promise in improving the generalization of pre-trained models on VQA by retrieving image-text pairs that are similar to the given query. However, existing approaches overlook two critical issues: (i) the effectiveness of an In-context Demonstration (ICD) in prompting a pre-trained model is not strictly correlated with feature similarity; and (ii) as a multi-modal task involving both vision and language, VQA requires a joint understanding of the visual and textual modalities, which is difficult to achieve when retrieval is based on a single modality. To address these limitations, we propose a novel Mutual Information-Guided Retrieval (MIGR) model. Specifically, we annotate a small subset of the data (5% of the dataset) with ICD quality scores based on VQA performance, and train our model to maximize the multi-modal mutual information between each query and its corresponding high-quality ICDs. This enables the model to capture more complex relationships beyond feature-level similarity, leading to improved generalization in ICL. Extensive experiments demonstrate that our mutual information-based retrieval strategy significantly outperforms conventional similarity-based retrieval methods on VQA tasks.
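The abstract says the retriever is trained to maximize the mutual information between a query and its high-quality ICDs but does not name the estimator; the NumPy sketch below uses an InfoNCE-style contrastive bound, which is a common (assumed, not confirmed) way to maximize mutual information between paired embeddings. The batch construction, temperature, and embedding dimensions are illustrative placeholders.

```python
import numpy as np

def info_nce(query_emb, icd_emb, temperature=0.07):
    """InfoNCE loss over a batch: the i-th query is paired with the i-th
    (high-quality) ICD embedding; all other ICDs in the batch act as negatives.
    Minimizing this loss maximizes a lower bound on the mutual information."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = icd_emb / np.linalg.norm(icd_emb, axis=1, keepdims=True)
    logits = q @ d.T / temperature                     # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # positives sit on the diagonal

rng = np.random.default_rng(0)
queries = rng.normal(size=(16, 128))                   # fused image+question embeddings
good_icds = queries + 0.1 * rng.normal(size=(16, 128)) # matched demonstrations
print(info_nce(queries, good_icds))                    # small loss for aligned pairs
```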
Citations: 0
FPMT: Fast and precise high-resolution makeup transfer via Laplacian pyramid
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-02 | DOI: 10.1016/j.patcog.2026.113221
Zhaoyang Sun, Shengwu Xiong, Yi Rong
In this paper, we focus on accelerating the high-resolution makeup transfer process without compromising generative performance. To this end, we propose a Fast and Precise Makeup Transfer (FPMT) framework based on the Laplacian pyramid. In FPMT, we reveal that most makeup changes are concentrated in the low-frequency component, while only a small amount of color- and texture-related detail resides in the high-frequency components. Leveraging this insight, FPMT employs a lightweight encoder-decoder network to perform makeup transfer on the low-frequency component of the inputs, thus improving efficiency. For each high-frequency component, FPMT implements a tiny refinement network that progressively predicts a mask and adaptively refines the makeup details to ensure transfer quality. By stacking the computationally efficient refinement networks, FPMT can process higher-resolution images, demonstrating its flexibility and scalability. Using a single GTX 1660Ti GPU, FPMT achieves an inference speed of about 42 FPS for input images at 1024 × 1024 resolution, which is much faster than state-of-the-art methods. Extensive quantitative and qualitative analyses validate the efficiency and effectiveness of the proposed FPMT framework. The source code is available at: https://github.com/Snowfallingplum/FPMT.
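FPMT's efficiency argument rests on the standard Laplacian pyramid decomposition: a coarse low-frequency base plus a stack of high-frequency residuals that reconstruct the input exactly. The NumPy/SciPy sketch below shows that decomposition and its lossless reconstruction; it illustrates the representation FPMT operates on, not the FPMT networks themselves, and the blur sigma, number of levels, and nearest-neighbor upsampling are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_laplacian_pyramid(img, levels=3, sigma=1.0):
    """Decompose a 2-D image into high-frequency residuals plus a coarse base."""
    pyramid, current = [], img.astype(np.float64)
    for _ in range(levels):
        down = gaussian_filter(current, sigma)[::2, ::2]                    # coarse level
        up = np.kron(down, np.ones((2, 2)))[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)                                        # high-frequency residual
        current = down
    pyramid.append(current)                                                 # final low-frequency base
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition exactly by upsampling and adding residuals back."""
    current = pyramid[-1]
    for high in reversed(pyramid[:-1]):
        up = np.kron(current, np.ones((2, 2)))[:high.shape[0], :high.shape[1]]
        current = up + high
    return current

img = np.random.default_rng(0).random((65, 97))       # odd sizes are handled by cropping
pyr = build_laplacian_pyramid(img)
print([level.shape for level in pyr])                  # (65, 97), (33, 49), (17, 25), (9, 13)
print(np.abs(reconstruct(pyr) - img).max())            # ~0: the decomposition is lossless
```

The makeup edit then only needs a compact network on the tiny low-frequency base, with cheap per-level refinements of the residuals, which is where the reported speedup comes from.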
Citations: 0
Adversarial training with attention-guided feature fusion and inclusive contrastive learning
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-02 | DOI: 10.1016/j.patcog.2026.113220
Xiao Sun, Song Wang, Jucheng Yang
Numerous studies show that deep neural networks (DNNs) are vulnerable to adversarial patch attacks. Many existing adversarial defense strategies have two major drawbacks. First, they cannot handle adversarial patches of random locations and sizes. Second, they attempt to improve defense performance by integrating information from clean and adversarial examples, but this integration is susceptible to salient and camouflaged features, which weakens generalization and natural accuracy. To address these issues, this paper proposes an adversarial training method equipped with a novel mechanism of attention-guided feature fusion (AttFus for short) and inclusive contrastive learning (ICL). By generating attention difference maps from clean and adversarial examples and performing piecewise fusion of features, AttFus enables the DNN model to refocus on key areas of the image and overcome the negative effect of adversarial patches, thereby achieving highly accurate image classification. Moreover, the proposed ICL, which uses both clean and adversarial examples as positives, allows a smooth transition between similar examples in the representation space and better discriminates between signal and noise, thus improving the model's natural accuracy and resistance to adversarial attacks. Compared with state-of-the-art adversarial defense methods on benchmark datasets, the proposed method demonstrates competitive performance. When faced with cross-attack, cross-model, and cross-dataset challenges, it shows excellent robustness and generalization. Our code is available at https://github.com/SunX81/AT-with-AttFus-and-ICL.
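The abstract describes attention difference maps between the clean and adversarial branches followed by piecewise feature fusion, but not their exact construction. The NumPy sketch below shows one plausible reading: a channel-energy map per branch, a relative-difference mask that flags regions the patch has disturbed, and a masked fusion that falls back on clean-branch features there. The attention definition, the threshold tau, and the fusion rule are assumptions for illustration, not the AttFus formulation.

```python
import numpy as np

def attention_difference(feat_clean, feat_adv, eps=1e-8):
    """Relative difference between the spatial energy maps of the two branches."""
    a_c = (feat_clean ** 2).sum(axis=0)               # (H, W) channel-energy maps
    a_a = (feat_adv ** 2).sum(axis=0)
    return np.abs(a_c - a_a) / (a_c + a_a + eps)

def attention_guided_fusion(feat_clean, feat_adv, tau=0.25):
    """Piecewise fusion: where the two attention maps disagree (regions likely
    disturbed by the adversarial patch), fall back on clean-branch features."""
    mask = (attention_difference(feat_clean, feat_adv) > tau).astype(feat_adv.dtype)
    return mask * feat_clean + (1.0 - mask) * feat_adv

rng = np.random.default_rng(0)
feat_clean = rng.normal(size=(64, 14, 14))            # (C, H, W) backbone features
feat_adv = feat_clean.copy()
feat_adv[:, 3:7, 3:7] += 5.0                          # a localized "patch" disturbance
fused = attention_guided_fusion(feat_clean, feat_adv)
print(np.abs(fused - feat_clean).max())               # ~0: the disturbed block is swapped out
```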
Citations: 0
DyC-CLIP: Dynamic context-aware multi-modal prompt learning for zero-shot anomaly detection
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-02-02 | DOI: 10.1016/j.patcog.2026.113215
Peng Chen, Fangjun Huang, Chao Huang
Vision-language models (VLMs) have demonstrated remarkable potential in zero-shot anomaly detection (ZSAD) tasks due to their strong generalization capabilities, enabling the identification of anomalies in unseen categories without additional supervision. However, their robustness and adaptability under challenging visual conditions remain limited, as existing approaches typically rely on meticulously designed textual prompts, which require extensive domain expertise and manual effort. Moreover, simple prompt formulations struggle to capture the complex structural characteristics inherent in images. To address these limitations, we propose DyC-CLIP, a novel dynamic context-aware prompt learning method for ZSAD. DyC-CLIP enhances anomaly localization by enabling text embeddings to dynamically adapt to fine-grained patch features. Specifically, we propose a Frequency-domain Dynamic Adapter (FDA) that integrates global visual information into textual prompts, reducing the reliance on product-specific prompts. To further facilitate cross-modal alignment, we develop a Cross-Modal Guided Sparse Attention (CGSA) module, which dynamically refines text embeddings based on fine-grained image features. Additionally, we design an Anomaly-Aware Semantic Aggregation (ASA) module to integrate local contextual information and enhance the model’s ability to discriminate anomalous patterns. Extensive experiments on 14 datasets spanning industrial and medical domains demonstrate that DyC-CLIP achieves state-of-the-art performance.
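For context, zero-shot anomaly scoring with a CLIP-like model is typically posed as comparing each patch embedding against text embeddings for "normal" and "anomalous" states; prompt-learning methods such as DyC-CLIP refine how those text embeddings are produced. The NumPy sketch below shows only that generic scoring step, not the FDA, CGSA, or ASA modules; the temperature and the placeholder prompt and patch embeddings are assumptions.

```python
import numpy as np

def anomaly_map(patch_emb, text_normal, text_anomalous, temperature=0.07):
    """Per-patch anomaly probability from similarities to two text embeddings,
    the standard CLIP-style scoring that prompt-learning methods build on."""
    patches = patch_emb / np.linalg.norm(patch_emb, axis=-1, keepdims=True)
    prompts = np.stack([text_normal, text_anomalous])
    prompts /= np.linalg.norm(prompts, axis=-1, keepdims=True)
    logits = patches @ prompts.T / temperature         # (N_patches, 2)
    logits -= logits.max(axis=-1, keepdims=True)       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    return probs[:, 1]                                 # probability of "anomalous"

rng = np.random.default_rng(0)
text_normal = rng.normal(size=512)                     # placeholder prompt embeddings
text_anomalous = rng.normal(size=512)
patches = np.vstack([np.tile(text_normal, (48, 1)),    # 48 normal-looking patches
                     np.tile(text_anomalous, (16, 1))])  # 16 anomalous-looking patches
patches += 0.3 * rng.normal(size=patches.shape)
scores = anomaly_map(patches, text_normal, text_anomalous)
print(scores[:48].mean(), scores[48:].mean())          # low vs. high anomaly scores
```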
Citations: 0
Corrigendum to “Motional Foreground Attention-based Video Crowd Counting” [Pattern Recognition 144 (2023) 109891]
IF 7.6 | Zone 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-31 | DOI: 10.1016/j.patcog.2026.113153
Miaogen Ling, Tianhang Pan, Yi Ren, Ke Wang, Xin Geng
{"title":"Corrigendum to “Motional Foreground Attention-based Video Crowd Counting” [Pattern Recognition 144 (2023) 109891]","authors":"Miaogen Ling ,&nbsp;Tianhang Pan ,&nbsp;Yi Ren ,&nbsp;Ke Wang ,&nbsp;Xin Geng","doi":"10.1016/j.patcog.2026.113153","DOIUrl":"10.1016/j.patcog.2026.113153","url":null,"abstract":"","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"176 ","pages":"Article 113153"},"PeriodicalIF":7.6,"publicationDate":"2026-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146174577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0