
Latest Publications in Pattern Recognition

Multi-scale temporal correlation multi-dimensional decomposition network for time series analysis
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-22, DOI: 10.1016/j.patcog.2026.113140
Fan Zhang, Lele Yuan, Wenchang Zhang, Mingli Zhang, Hua Wang
Time series analysis plays a crucial role in practical applications such as traffic flow prediction, weather forecasting, electricity demand prediction, and stock market forecasting. Complex time series contain both long-term and short-term cyclical patterns, yet previous research has predominantly focused on one-dimensional temporal domains, where the ability to capture such cyclical variations is limited. To address this, we propose the Multi-Scale Temporal Correlation Multi-Dimensional Decomposition Network (MTCMD). Our approach transforms one-dimensional time series into multi-dimensional tensors that represent multiple long-term and short-term cycles. This multi-dimensional representation allows us to extract trend components and seasonal components more effectively. Moreover, in real-world scenarios, the interactions between time series cycles change dynamically and exhibit significant differences when observed at different temporal scales. Therefore, we introduce the Multi-Scale Temporal Correlation Learner to extract features of seasonal components at various scales, thereby enhancing our ability to learn the correlations of cyclical variations. Experimental results demonstrate that our proposed MTCMD model outperforms existing methods in mainstream time series analysis tasks. These results validate the rationality and effectiveness of transforming one-dimensional time series into multi-dimensional temporal domains.
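The core step described above — turning a one-dimensional series into multi-dimensional tensors that expose several cycle lengths — can be illustrated with a short folding routine. The sketch below is not the authors' code: picking candidate periods from the FFT amplitude spectrum, the helper name fold_by_periods, and the toy series are all assumptions made only for illustration.

```python
# Minimal sketch: fold a 1-D series into 2-D (num_cycles x period) tensors for
# several candidate periods. Period selection via the FFT spectrum is assumed.
import numpy as np

def fold_by_periods(x, num_periods=2):
    """Return {period: (cycles, period) array} for the strongest FFT periods."""
    n = len(x)
    amp = np.abs(np.fft.rfft(x - x.mean()))
    amp[0] = 0.0                                  # ignore the DC component
    top = np.argsort(amp)[::-1][:num_periods]     # strongest frequency bins
    folded = {}
    for f in top:
        if f == 0:
            continue
        period = max(1, n // int(f))              # approximate cycle length
        cycles = n // period
        folded[period] = x[: cycles * period].reshape(cycles, period)
    return folded

t = np.arange(512)
series = np.sin(2 * np.pi * t / 24) + 0.5 * np.sin(2 * np.pi * t / 168)
for period, tensor in fold_by_periods(series).items():
    print(f"period {period}: folded shape {tensor.shape}")
```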
Citations: 0
APCIFormer: Adaptive perception and cross-scale interaction transformer for image super-resolution
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-22, DOI: 10.1016/j.patcog.2026.113138
Xiaofeng Wang, Kai Ran, Wenshuo Zhang, Lin Zhang, Jianghua Li
Image Super-Resolution (SR) reconstruction constitutes one of the significant research subjects in the domain of computer vision. Existing methods predominantly emphasize the global structure recovery of images and have achieved significant breakthroughs in texture and detail reconstruction. However, these methods struggle with complex image scenes and with effectively capturing multi-scale dependencies. Specifically, they continue to encounter substantial challenges in spatial perception, high-fidelity texture detail recovery, and cross-scale feature interaction. To address these issues, we propose an adaptive spatial perception and cross-scale feature interaction transformer for SR. Firstly, we propose an Adaptive Spatial-Aware Attention (ASA) mechanism that dynamically perceives multi-scale information in accordance with the characteristics of diverse image regions. This facilitates the effective capture of multi-scale dependencies, thus achieving adaptive spatial perception capabilities. Next, we integrate the Sobel operator into ASA and propose an Adaptive Texture-driven Attention mechanism. This mechanism not only refines fine-grained texture restoration but also preserves global features, thereby ensuring the integrity of the overall image structure. Finally, we devise a Cross-scale Feature Interaction Module, which constructs feature transmission routes across diverse receptive fields to enhance the complementarity among multi-scale features, thereby further enhancing image quality. Experimental results demonstrate that the proposed method effectively enhances both texture details and overall image quality for SR.
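As a rough illustration of the texture-driven idea (Sobel responses used to steer attention), the sketch below builds a soft spatial gate from the Sobel gradient magnitude of a feature map. It is a minimal stand-in, not the APCIFormer module: the channel-averaging step, the sigmoid gating, and the function name sobel_texture_gate are assumptions for this example.

```python
# Minimal sketch: a Sobel-based spatial attention gate over feature maps.
import torch
import torch.nn.functional as F

def sobel_texture_gate(feat):
    """feat: (B, C, H, W) features -> (B, 1, H, W) soft attention gate."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    kx = kx.reshape(1, 1, 3, 3).to(feat)
    ky = ky.reshape(1, 1, 3, 3).to(feat)
    g = feat.mean(dim=1, keepdim=True)            # channel-averaged map
    gx = F.conv2d(g, kx, padding=1)               # horizontal gradients
    gy = F.conv2d(g, ky, padding=1)               # vertical gradients
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)    # gradient magnitude
    return torch.sigmoid(mag)                     # soft gate in (0, 1)

feat = torch.randn(2, 64, 32, 32)
gated = feat * sobel_texture_gate(feat)           # emphasise edge/texture regions
print(gated.shape)                                # torch.Size([2, 64, 32, 32])
```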
Citations: 0
Learning spatio-temporal consistency in spiking neural networks by self-distillation
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-22, DOI: 10.1016/j.patcog.2026.113108
Lin Zuo, Yongqi Ding, Mengmeng Jing, Kunshan Yang, Hanpu Deng
Low-power spiking neural networks (SNNs) have received widespread attention for their efficient spatio-temporal modeling properties. Recent SNNs rely on artificial neural networks (ANNs) for knowledge distillation, but suffer from two major limitations: (1) the difficulty of optimizing the spike temporal dynamics with static data-oriented ANNs, and (2) the prohibitively high cost of pre-training teacher models. In this paper, we propose Temporal-Spatial Self-Distillation (TSSD), which eliminates explicit teacher overhead while simultaneously optimizing the spatio-temporal properties of SNNs. On the one hand, by extending the training timestep to construct the implicit temporal teacher, temporal self-distillation enables the SNN to autonomously learn hierarchical temporal patterns. On the other hand, spatial self-distillation embeds a lightweight weak classifier into the SNN, which propagates discriminative spatial representations and mitigates gradient vanishing. By incorporating stochastic latency training, TSSD significantly improves the spatio-temporal performance of SNNs with modest overhead and can be flexibly used for reduced timesteps and early exit inference. Theoretical analysis shows that TSSD reduces empirical risk, and extensive experiments on both static and neuromorphic datasets demonstrate its superior performance. In particular, TSSD improves recognition accuracy by up to 5.3% on the challenging CIFAR10-DVS benchmark. This work provides new insights into the efficient self-distillation of SNNs and advances the exploration of their spatio-temporal properties.
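A minimal sketch of the two self-distillation terms described above is given below, under my own simplifying assumptions (plain KL distillation with a temperature, the extended-timestep output acting as the implicit temporal teacher, and the main head teaching the weak classifier); the released TSSD code may differ in detail.

```python
# Minimal sketch of temporal + spatial self-distillation losses (simplified).
import torch
import torch.nn.functional as F

def tssd_losses(logits_short, logits_long, logits_weak, labels, tau=2.0):
    ce = F.cross_entropy(logits_short, labels)            # task loss
    # temporal self-distillation: long-timestep output is the implicit teacher
    t = F.softmax(logits_long.detach() / tau, dim=1)
    temporal = F.kl_div(F.log_softmax(logits_short / tau, dim=1), t,
                        reduction="batchmean") * tau ** 2
    # spatial self-distillation: main head teaches the lightweight weak classifier
    s = F.softmax(logits_short.detach() / tau, dim=1)
    spatial = F.kl_div(F.log_softmax(logits_weak / tau, dim=1), s,
                       reduction="batchmean") * tau ** 2
    return ce + temporal + spatial

labels = torch.randint(0, 10, (8,))
loss = tssd_losses(torch.randn(8, 10), torch.randn(8, 10), torch.randn(8, 10), labels)
print(loss.item())
```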
Citations: 0
Boundary-recovering network for temporal action detection
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-22, DOI: 10.1016/j.patcog.2026.113141
Jihwan Kim, Jaehyun Choi, Yerim Jeon, Jae-Pil Heo
Temporal action detection (TAD) is challenging, yet fundamental for real-world video applications. Large temporal scale variation of actions is one of the primary difficulties in TAD. Naturally, multi-scale features, as widely used in object detection, have potential for localizing actions of diverse lengths. Nevertheless, unlike objects in images, actions have more ambiguity in their boundaries. That is, small neighboring objects are not considered as a large one, while short adjoining actions can be misunderstood as a long one. In the coarse-to-fine feature pyramid built via pooling, these vague action boundaries can fade out, which we call the 'vanishing boundary problem'. To this end, we propose the Boundary-Recovering Network (BRN) to address the vanishing boundary problem. BRN constructs scale-time features by introducing a new axis, called the scale dimension, obtained by interpolating multi-scale features to the same temporal length. On top of scale-time features, scale-time blocks learn to exchange features across scale levels, which effectively mitigates the issue. Our extensive experiments demonstrate that our model outperforms the state-of-the-art on the two challenging benchmarks, ActivityNet-v1.3 and THUMOS14, with a remarkably reduced degree of the vanishing boundary problem.
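The scale-time construction — interpolating pyramid features to a common temporal length and stacking them along a new scale axis — can be sketched in a few lines. The tensor layout, the helper name build_scale_time, and the toy sizes below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: align pyramid features in time and stack along a scale axis.
import torch
import torch.nn.functional as F

def build_scale_time(pyramid, target_len):
    """pyramid: list of (B, C, T_i) tensors -> (B, C, S, T) scale-time tensor."""
    aligned = [F.interpolate(f, size=target_len, mode="linear", align_corners=False)
               for f in pyramid]
    return torch.stack(aligned, dim=2)              # new scale dimension S

# toy pyramid with temporal lengths 256, 128, 64 (e.g. from repeated pooling)
pyramid = [torch.randn(2, 32, t) for t in (256, 128, 64)]
scale_time = build_scale_time(pyramid, target_len=256)
print(scale_time.shape)                              # torch.Size([2, 32, 3, 256])
```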
Citations: 0
Uncertainty-aware calibrated 3D human motion forecasting with latent conformal prediction
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-22, DOI: 10.1016/j.patcog.2026.113144
Yue Ma, Frederick W.B. Li, Xiaohui Liang
3D human motion forecasting aims to predict the future dynamics of observed human movements, with applications ranging from autonomous driving to robotics. Estimating the uncertainty of each individual prediction is crucial for risk-bounded planning and control to ensure safety. However, generative model-based approaches struggle with uncertainty quantification due to their implicit probabilistic representations. To address this, we propose an uncertainty-aware probabilistic forecasting framework that parameterizes complex human motions using invertible networks and forecasts the parameters of the future human motion distribution. This explicit probabilistic representation offers effective uncertainty quantification based on probability density. Additionally, to transform heuristic notions of uncertainty into statistically grounded estimates, we introduce a copula-based latent conformal prediction method for calibrating the predicted distribution. Experiments demonstrate the strong predictive performance of our approach in both deterministic and diverse setups, and validate the effectiveness of the uncertainty estimates.
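For intuition, the sketch below shows plain split conformal calibration of a scalar error radius; the paper's copula-based latent variant is more involved, and the nonconformity score, the helper name conformal_radius, and the synthetic calibration errors used here are assumptions for illustration only.

```python
# Minimal sketch of split conformal calibration: the (1 - alpha) quantile of
# held-out prediction errors gives a coverage-calibrated interval radius.
import numpy as np

def conformal_radius(errors_calib, alpha=0.1):
    """errors_calib: per-sample prediction errors on the calibration split."""
    n = len(errors_calib)
    q = np.ceil((n + 1) * (1 - alpha)) / n          # finite-sample correction
    return np.quantile(errors_calib, min(q, 1.0))

rng = np.random.default_rng(0)
calib_errors = np.abs(rng.normal(0.0, 0.05, size=500))   # stand-in for joint errors
radius = conformal_radius(calib_errors, alpha=0.1)
print(f"predicted pose +/- {radius:.3f} covers ~90% of calibration errors")
```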
Citations: 0
Frequency-aware spatio-temporal topology learning for skeleton-based human activity recognition
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-22, DOI: 10.1016/j.patcog.2026.113146
Yi Xia, Sira Yongchareon, Raymond Lutui, Quan Z. Sheng
Skeleton-based human activity recognition (HAR) has made significant progress through graph convolutional networks (GCNs) and Transformer architectures for spatiotemporal modeling. However, existing methods either employ predefined static graph topologies that cannot adapt to heterogeneous skeleton data or learn dynamic topologies based solely on local spatiotemporal features, thereby overlooking the global temporal frequency features of joint movements that are important for discovering semantically meaningful spatial relationships. We propose the Frequency-Aware Topology Learning Graph Convolutional Network (FATL-GCN), a novel architecture that integrates frequency-aware temporal context to guide adaptive learning of spatial topology. Our approach leverages Time-to-Vector linear frequency encoding to capture both periodic and non-periodic motion patterns, employs frequency-guided topology learning to generate action-specific graphs through temporal-context-driven attention, and incorporates hierarchical multi-scale fusion for robust feature extraction across scales. In extensive experiments, FATL-GCN achieves top-1 accuracies of 93.8% (cross-subject) and 97.5% (cross-view) on NTU-60, 91.9% (cross-subject) and 93.1% (cross-setup) on NTU-120, and 51.7% on Kinetics-Skeleton. Ablation studies confirm the critical role of our components: removing the dynamic graph topology causes a 3.5% accuracy drop, and removing frequency-aware encoding causes a 2.1% drop.
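The "Time-to-Vector linear frequency encoding" mentioned above can be sketched as a standard Time2Vec-style layer: one linear term for non-periodic trends plus sinusoidal terms for periodic motion. The layer sizes and the class name Time2Vec below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a Time2Vec-style temporal encoding for frame indices.
import torch
import torch.nn as nn

class Time2Vec(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w0 = nn.Linear(1, 1)            # non-periodic (linear) component
        self.wp = nn.Linear(1, dim - 1)      # periodic components

    def forward(self, t):
        # t: (B, T, 1) frame indices or timestamps
        return torch.cat([self.w0(t), torch.sin(self.wp(t))], dim=-1)

t = torch.arange(64, dtype=torch.float32).view(1, 64, 1)
enc = Time2Vec(dim=16)(t)
print(enc.shape)                              # torch.Size([1, 64, 16])
```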
Citations: 0
Unsupervised PolSAR image classification based on deep clustering and scattering mechanism
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-22, DOI: 10.1016/j.patcog.2026.113150
Wenqiang Hua, Sijia Yang, Junfei Shi, Chen Ding, Yizhuo Dong
In recent years, deep learning methods have received extensive attention in the field of polarimetric synthetic aperture radar (PolSAR) image interpretation and understanding. However, traditional deep-learning-based PolSAR image classification methods require a large number of labeled samples, which are very difficult and costly to obtain in practice. To solve this problem, an unsupervised PolSAR image classification method based on deep clustering and the scattering mechanism is proposed to realize deep learning with unlabeled samples. Firstly, the PolSAR images are initially divided into three categories according to the scattering characteristics of ground objects. Secondly, in order to reduce the impact of initial segmentation errors on subsequent clustering, a class contrast optimization (CCO) algorithm is proposed to screen the initial segmentation results. Thirdly, a deep network incorporating an attention module is proposed to extract deep feature information. Finally, the K-means method is used to cluster the extracted deep features and output the final clustering results. Experimental results on three real PolSAR datasets demonstrate the effectiveness of the proposed method, which significantly improves classification accuracy without any labeled samples.
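The final clustering step is straightforward to sketch: K-means over the per-pixel deep features. The scattering-based initial division, the CCO screening, and the attention network are specific to the paper and are not reproduced here; deep_feats below is a random stand-in for the features they would produce.

```python
# Minimal sketch of the last stage only: K-means over deep pixel features.
import numpy as np
from sklearn.cluster import KMeans

deep_feats = np.random.rand(10000, 64)        # (num_pixels, feature_dim) stand-in
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(deep_feats)
label_map = labels.reshape(100, 100)          # back to the image layout
print(np.bincount(labels))                    # cluster sizes
```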
Citations: 0
SC2R: similarity cues-aware evolutionary relationship mining for fine-grained bird image classification
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-21, DOI: 10.1016/j.patcog.2026.113128
Hai Liu, Feifei Li, Zhibing Liu, Qiang Chen, Zhiyi Du, Tingting Liu, Zhaoli Zhang, You-Fu Li
Fine-grained bird image classification (FBIC) faces challenges such as similar species, arbitrary postures, and occlusion. To address these challenges, we carefully observe bird images and identify two crucial relationships: structural correlation within an individual bird and evolutionary relationships among birds. On the basis of these relationships, we propose a similarity cues-aware evolutionary relationship mining framework for FBIC (SC2R). SC2R is designed to describe evolutionary relationships among birds. It comprises two modules: Evolutionary Relationship Mining (ERM) and Evolution-based Bird Prediction (EBP). The ERM module encodes structural correlation information and phylogenetic knowledge across taxonomic hierarchies (class, order, family, genus, and species) by constructing structural and evolutionary tokens, respectively. The EBP module leverages homology and cooperative losses to guide the model in learning evolutionary priors, thereby promoting discriminative feature learning. Experiments conducted on two FBIC datasets show that SC2R consistently outperforms state-of-the-art methods. This result demonstrates the effectiveness of leveraging structural correlation and evolutionary relationships for FBIC and suggests a promising direction for fine-grained recognition.
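One plausible way to read "evolutionary tokens across taxonomic hierarchies" is an embedding summed over the taxonomy path, so species that share ancestry share part of their token. The sketch below follows that reading; the class name EvoToken, the number of levels, and all sizes are my assumptions, not the authors' design.

```python
# Minimal sketch: a taxonomy-path token built from per-level embedding tables.
import torch
import torch.nn as nn

class EvoToken(nn.Module):
    def __init__(self, level_sizes, dim=256):
        super().__init__()
        self.tables = nn.ModuleList(nn.Embedding(n, dim) for n in level_sizes)

    def forward(self, taxo_ids):
        # taxo_ids: (B, L) integer ids, one column per taxonomic level
        return sum(tab(taxo_ids[:, i]) for i, tab in enumerate(self.tables))

# e.g. counts of orders, families, genera, species (illustrative sizes only)
token = EvoToken([13, 40, 120, 200])(torch.tensor([[2, 11, 57, 143]]))
print(token.shape)                             # torch.Size([1, 256])
```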
Citations: 0
VRDNet: Visual restoration dehazing network with triple color space feature fusion for clustered haze scenarios
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-21, DOI: 10.1016/j.patcog.2026.113142
Zhiyu Lyu, Yan Chen, Yimin Hou
Clustered haze refers to high-concentration haze that forms in localized regions due to the non-homogeneous distribution of haze. It can cause a sudden drop in visibility within the affected area. However, most existing dehazing models struggle with clustered haze due to the limitations of the RGB color space, where the loss of visibility results in the loss of crucial feature information. To address this concern, we propose a Visual Restoration Dehazing Network (VRDNet) specifically designed for clustered haze scenarios. The network is primarily divided into two major components: a color space feature progressive fusion network and a visual reconstruction network. In the progressive fusion network, we leverage image priors to extract prior features in the HSV and YCrCb color spaces, focusing particularly on visually obscured areas. This approach aims to mitigate the loss of feature information caused by reduced visibility. In the visual reconstruction network, fused features from different color spaces are combined with upsampling to reconstruct lost scenes. Feature attention units are incorporated to ensure the preservation of crucial information throughout the feature fusion and reconstruction process. Extensive experiments on diverse benchmark datasets demonstrate the superiority of our model over SOTA image dehazing models for visual restoration in clustered haze scenarios. The code is available at https://github.com/Chain98/VRDNet.
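A minimal sketch of the triple color space input is shown below: the hazy frame is converted to HSV and YCrCb and the channels are stacked so that downstream branches can draw on all three spaces. The nine-channel stacking, the normalization, and the function name triple_color_stack are illustrative assumptions rather than VRDNet's exact pipeline.

```python
# Minimal sketch: stack BGR, HSV and YCrCb channels of a hazy frame.
import cv2
import numpy as np

def triple_color_stack(bgr):
    """bgr: (H, W, 3) uint8 image -> (H, W, 9) float32 stack of BGR/HSV/YCrCb."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    stack = np.concatenate([bgr, hsv, ycrcb], axis=-1)
    return stack.astype(np.float32) / 255.0       # crude normalization for the demo

hazy = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)   # stand-in frame
print(triple_color_stack(hazy).shape)                          # (240, 320, 9)
```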
Citations: 0
FreeStyle: Free lunch for text-guided style transfer using diffusion models
IF 7.6, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2026-01-21, DOI: 10.1016/j.patcog.2026.113093
Feihong He, Gang Li, Fuhui Sun, Mengyuan Zhang, Lingyu Si, Xiaoyan Wang, Li Shen
The rapid development of generative diffusion models has significantly advanced the field of style transfer. However, most current style transfer methods based on diffusion models typically involve a slow iterative optimization process, e.g., model fine-tuning or textual inversion of the style concept. In this paper, we introduce FreeStyle, an innovative style transfer method built upon a pre-trained large diffusion model, requiring no further optimization. Besides, our method enables style transfer through only a text description of the desired style, eliminating the need for style images. Specifically, we propose a dual-stream encoder and single-stream decoder architecture, replacing the conventional U-Net in diffusion models. In the dual-stream encoder, two distinct branches take the content image and the style text prompt as inputs, achieving content and style decoupling. In the decoder, we further modulate features from the dual streams based on a given content image and the corresponding style text prompt for precise style transfer. Our experimental results demonstrate the high-quality synthesis and fidelity of our method across various content images and style text prompts. Compared with state-of-the-art methods that require training, our FreeStyle approach notably reduces the computational burden by thousands of iterations, while achieving comparable or superior performance across multiple evaluation metrics, including CLIP Aesthetic Score, CLIP Score, and Preference. We have released the code at: https://github.com/FreeStyleFreeLunch/FreeStyle.
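Conceptually, the decoder-side modulation can be sketched as content tokens re-weighted and shifted by a projection of the style-text embedding. The scale-and-shift form, the layer sizes, and the class name DualStreamModulation below are assumptions for illustration; they are not the released FreeStyle implementation.

```python
# Minimal sketch: modulate content tokens with a style-text embedding.
import torch
import torch.nn as nn

class DualStreamModulation(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_scale = nn.Linear(dim, dim)
        self.to_shift = nn.Linear(dim, dim)

    def forward(self, content_feat, style_feat):
        # content_feat: (B, N, D) spatial tokens, style_feat: (B, D) text embedding
        scale = torch.sigmoid(self.to_scale(style_feat)).unsqueeze(1)
        shift = self.to_shift(style_feat).unsqueeze(1)
        return content_feat * scale + shift

mod = DualStreamModulation(dim=320)
out = mod(torch.randn(1, 64 * 64, 320), torch.randn(1, 320))
print(out.shape)                               # torch.Size([1, 4096, 320])
```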
Citations: 0