
Latest publications in Pattern Recognition

Neural network-based framework for wide visibility dehazing with synthetic benchmarks
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-07 DOI: 10.1016/j.patcog.2026.113056
Lin Li, Ru Lei, Kun Zhang, Lingchen Sun, Rustam Stolkin
Hazy images caused by atmospheric scattering significantly degrade the visibility and performance of computer vision systems, especially in long-range applications. Existing synthetic haze datasets are usually limited to short visibility ranges and fail to adequately model wavelength-dependent scattering effects, leading to suboptimal evaluation of dehazing algorithms. In this study, we propose a physically motivated synthesis method that combines the atmospheric scattering model with channel-specific extinction coefficients for the RGB channels and depth information ranging from 0 to 10 km. This approach enables the construction of the Wide Visibility Synthetic Haze (WVSH) dataset, which spans visibility distances from 50 m to 2 km. Based on WVSH, we design WVDehazeNet, a convolutional neural network that effectively leverages multi-scale spatial features and wavelength-dependent haze priors. Extensive experiments on both WVSH and real-world hazy images demonstrate that WVDehazeNet achieves competitive or superior performance compared with eight state-of-the-art methods in both quantitative and qualitative evaluations. The WVSH dataset and WVDehazeNet provide valuable benchmarks and references for long-range image dehazing research, helping to advance the field.
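As a purely illustrative sketch of the synthesis idea this abstract describes (the standard atmospheric scattering model I = J·t + A·(1 − t), evaluated with a separate extinction coefficient per RGB channel over metric depth), the Python snippet below shows one plausible implementation; the coefficient values, air-light, and function names are assumptions for illustration, not taken from the paper or the WVSH pipeline.

```python
import numpy as np

def synthesize_haze(clean_rgb, depth_m, betas_per_km=(0.9, 1.0, 1.2), airlight=(0.8, 0.8, 0.8)):
    """Atmospheric scattering model I = J*t + A*(1 - t), with a separate
    (illustrative) extinction coefficient beta per RGB channel.

    clean_rgb : float array (H, W, 3) in [0, 1], the haze-free image J
    depth_m   : float array (H, W), scene depth in metres (e.g. 0-10 km)
    """
    depth_km = depth_m / 1000.0
    hazy = np.empty_like(clean_rgb)
    for c in range(3):
        # Channel-specific transmission t_c(x) = exp(-beta_c * d(x))
        t = np.exp(-betas_per_km[c] * depth_km)
        hazy[..., c] = clean_rgb[..., c] * t + airlight[c] * (1.0 - t)
    return np.clip(hazy, 0.0, 1.0)

# Toy usage: a 4x4 scene with depth spanning 0-10 km
J = np.random.rand(4, 4, 3)
d = np.linspace(0, 10_000, 16).reshape(4, 4)
I = synthesize_haze(J, d)
```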
Citations: 0
FST: Improving adversarial robustness via feature similarity-based targeted adversarial training
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-07 DOI: 10.1016/j.patcog.2025.113010
Yibo Xu, Dawei Zhou, Decheng Liu, Nannan Wang, Xinbo Gao
Deep learning models have been found to be vulnerable to adversarial noise. Adversarial training is a major defense strategy to mitigate the interference caused by adversarial noise. However, the correlations between different categories on deep features in the model have not been fully considered in adversarial training. Our multi-perspective investigations indicate that adversarial noise can disrupt this correlation, resulting in undesirable close inter-class feature distances and far intra-class feature distances, thus degrading accuracy. To solve this problem, in this work, we propose a Feature Similarity-based Targeted adversarial training (FST), which guides the model to learn an appropriate feature distribution among categories under the adversarial environment for making rational decisions. Specifically, we first design a Feature Obfuscation Attack to obfuscate the natural state of feature similarity among categories, and then it is leveraged to generate specific adversarial training examples. Next, we construct target feature similarity matrices as supervision information to prompt the model to learn clean deep features for adversarial data and thereby achieve accurate classification. The target matrix is initialized based on the features learned from natural examples by a naturally pre-trained model. To further enhance the feature similarity between examples with the same category, we directly assign the highest similarity value to the region with the same category in the target matrix. Experimental results on popular datasets show the superior performance of our method, and ablation studies are conducted to demonstrate the effectiveness of designed modules.
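The following PyTorch sketch illustrates, in minimal form, what supervising adversarial features with a target feature similarity matrix could look like: class-mean features give a class-by-class cosine similarity matrix, the target matrix is built from clean features with its same-class (diagonal) region set to the highest similarity, and an MSE term pulls the adversarial matrix towards it. Shapes, the MSE choice, and all names are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def class_similarity_matrix(features, labels, num_classes):
    """Cosine similarity between class-mean features; every class is assumed
    to be present in the batch."""
    feats = F.normalize(features, dim=1)
    centers = torch.stack([feats[labels == c].mean(dim=0) for c in range(num_classes)])
    centers = F.normalize(centers, dim=1)
    return centers @ centers.t()                                    # (C, C)

def similarity_supervision_loss(adv_features, labels, target_matrix, num_classes):
    """Pull the similarity matrix of adversarial features towards the target matrix."""
    sim = class_similarity_matrix(adv_features, labels, num_classes)
    return F.mse_loss(sim, target_matrix)

# Toy usage with random stand-ins for clean / adversarial features
torch.manual_seed(0)
C, D = 4, 16
labels = torch.arange(32) % C                                       # each class present
target = class_similarity_matrix(torch.randn(32, D), labels, C).detach()
target.fill_diagonal_(1.0)                                          # same-class region gets the highest similarity
loss = similarity_supervision_loss(torch.randn(32, D), labels, target, C)
```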
Citations: 0
HiViTrack: Hierarchical vision transformer with efficient target-prompt update for visual object tracking
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-07 DOI: 10.1016/j.patcog.2025.112992
Yang Fang, Yujie Hu, Bailian Xie, Yujie Wang, Zongyi Xu, Weisheng Li, Xinbo Gao
Transformer tracking methods have become the mainstream tracking paradigm due to their excellent ability to capture global context and long-range dependencies. Among them, plain Transformer tracking directly divides the image into 16 × 16 patches to shorten the token length and reduce computational complexity; this is computationally efficient, but performance is limited by single-scale feature learning and relational modeling. In contrast, hierarchical Transformer tracking learns both low-level details and high-level semantics hierarchically and shows stronger tracking performance, but it typically introduces complicated and asymmetric attention operations. To this end, this paper proposes a simple yet powerful hierarchical Transformer tracking framework, HiViTrack, which enjoys both the efficiency of plain models and the strong representation capabilities of hierarchical models. Specifically, HiViTrack consists mainly of the following modules: a two-stage shallow spatial details retention (SSDR) module that efficiently captures shallow spatial details to facilitate accurate target localization; a two-stage deep semantic mutual integration (DSMI) module designed to simultaneously modulate and integrate high-level semantics to enhance discrimination ability and model robustness; and a target-prompt update (TPU) mechanism that first applies template scoring attention to rank the historical templates, followed by target-prompt attention to generate a target-aware token, before feeding the enriched features into the prediction head. Experimental results on six datasets demonstrate that the proposed HiViTrack achieves state-of-the-art (SOTA) performance while maintaining real-time efficiency, establishing a strong baseline for hierarchical Transformer tracking. Code will be available at https://github.com/huyj2001-ship-it/HiViTrack.
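Below is a small, hypothetical sketch of the template-scoring / target-prompt idea: historical template tokens are ranked against a coarse query from the search region, the top-ranked ones are kept, and an attention step aggregates them into a single target-aware prompt token. Tensor shapes, the pooling choice, and the scoring function are assumptions; the actual TPU mechanism in HiViTrack is more elaborate.

```python
import torch
import torch.nn.functional as F

def target_prompt_update(template_tokens, search_tokens, keep=3):
    """template_tokens: (M, D) pooled features of M historical templates
       search_tokens:   (N, D) tokens of the current search region
       Returns a single target-aware prompt token of shape (D,)."""
    query = search_tokens.mean(dim=0, keepdim=True)                  # (1, D) coarse target query
    # Template scoring: rank historical templates by cosine similarity to the query
    scores = (F.normalize(query, dim=1) @ F.normalize(template_tokens, dim=1).t()).squeeze(0)
    top_idx = scores.topk(min(keep, template_tokens.shape[0])).indices
    selected = template_tokens[top_idx]                              # (keep, D)
    # Target-prompt attention: aggregate the selected templates into one token
    attn = torch.softmax(query @ selected.t() / selected.shape[1] ** 0.5, dim=-1)
    return (attn @ selected).squeeze(0)

prompt = target_prompt_update(torch.randn(10, 256), torch.randn(64, 256))
```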
Citations: 0
A Bayesian deep prior-based quaternion matrix completion for color image inpainting
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-07 DOI: 10.1016/j.patcog.2026.113054
Jin-Ping Zou, Huan Ren, Hongjia Chen, Xu-Yun Xu, Xiang Wang
Color image inpainting, which aims to reconstruct missing regions of an image from the available information, plays an important role in computer vision. Existing quaternion-based deep inpainting methods often struggle to restore both global structure and natural textures, especially when only a single corrupted image is available for training. To address these challenges, we propose BQAE-TV, a novel model that integrates a quaternion fully connected network to capture global features while incorporating total variation regularization to optimize quaternion matrix completion, producing structurally coherent and visually natural images. Furthermore, a Bayesian inference mechanism is employed to regularize the deep image prior and mitigate overfitting. Experiments demonstrate that BQAE-TV outperforms both traditional and state-of-the-art methods in terms of visual quality and quantitative metrics, validating its effectiveness and robustness.
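As a minimal illustration of the total variation regularization mentioned above, the snippet below computes the standard anisotropic TV of an image tensor; applying it per colour channel (or per quaternion component) is an assumption here, and the weighting within BQAE-TV's overall objective is not reproduced.

```python
import torch

def total_variation(img):
    """Anisotropic total variation of an image tensor (..., H, W): the sum of
    absolute differences between vertically and horizontally adjacent pixels,
    used as a smoothness regulariser."""
    dv = (img[..., 1:, :] - img[..., :-1, :]).abs().sum()
    dh = (img[..., :, 1:] - img[..., :, :-1]).abs().sum()
    return dv + dh

# Toy usage: TV of a random 3-channel image (the same operator could be applied
# to each component of a quaternion-valued image)
tv = total_variation(torch.rand(3, 64, 64))
```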
Citations: 0
3D temporal-spatial convolutional LSTM network for assessing drug addiction treatment
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-06 DOI: 10.1016/j.patcog.2026.113059
Haiping Ma, Jiyuan Huang, Chenxu Shen, Jin Liu, Qingming Liu
Drug addiction (DA) is a chronic and relapsing brain disorder with limited effective treatments. The combined use of repetitive transcranial magnetic stimulation and electroencephalography (rTMS-EEG) presents a highly promising approach for DA treatment. This paper proposes an effective 3D temporal-spatial convolutional long short-term memory (LSTM) network for DA assessment using rTMS-EEG signals. First, the multi-channel EEG time series after rTMS treatment are converted into multiple topomaps with non-uniform sample times, to enhance spatial features of rTMS-EEG signals. Then these topomaps are sequentially fed into a convolutional module to extract spatial features of brain activity under DA conditions. Next, considering the temporal correlation of rTMS-EEG signals, an LSTM module is introduced to adaptively capture significant sequential time information. Further, a contrastive loss function is defined to reinforce the temporal-spatial features, thereby enhancing DA assessment. Finally, to evaluate the performance of the proposed network, the first rTMS-EEG dataset for DA treatment is constructed. The results of extensive experiments indicate that the α and β rhythms are likely to be major brain physiological markers of DA disorder, and the rTMS is a safe and effective treatment for DA. Meanwhile, the proposed network achieves the assessing accuracies of 85% and 83% for sham/pre-DA subjects and pre/post-DA subjects respectively, outperforming several existing approaches.
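A minimal PyTorch skeleton of the convolution-then-LSTM pipeline sketched in this abstract is given below: a shared 2D convolutional encoder processes each EEG topomap, an LSTM consumes the resulting frame features, and a linear head produces the assessment logits. All layer sizes, the pooling choices, and the two-class head are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class TopomapConvLSTMNet(nn.Module):
    """Sketch: per-frame conv encoder over EEG topomaps + LSTM over time + classifier."""
    def __init__(self, hidden=64, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, topomaps):                                     # (B, T, 1, H, W)
        b, t = topomaps.shape[:2]
        frames = self.encoder(topomaps.flatten(0, 1)).flatten(1)     # (B*T, 32) per-frame features
        _, (h_n, _) = self.lstm(frames.view(b, t, -1))               # last hidden state summarises the sequence
        return self.head(h_n[-1])                                    # (B, num_classes)

logits = TopomapConvLSTMNet()(torch.randn(2, 20, 1, 32, 32))
```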
Citations: 0
Pairwise joint symmetric uncertainty based on macro-neighborhood entropy for heterogeneous feature selection
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-06 DOI: 10.1016/j.patcog.2026.113051
Zhilin Zhu, Jianhua Dai
High-dimensional heterogeneous data often contain redundant and irrelevant features, hindering pattern recognition and data mining. Feature selection enhances data quality and model generalization capabilities by eliminating redundant features. Although information entropy is effective for symbolic data, heterogeneous datasets with both symbolic and numerical features pose new challenges. The neighborhood rough set (NRS) model provides a solution, but existing NRS-based methods suffer from non-monotonicity in entropy and mutual information measures, and insufficient redundancy handling. To address these problems, we propose a macro-neighborhood entropy framework with monotonic measures and a Pairwise Joint Symmetric Uncertainty (PJSU) method that jointly evaluates decision relevance and feature redundancy. Experiments conducted on 15 benchmark datasets using the Naive Bayes (NB) and CART classifiers demonstrate that PJSU achieves the best performance, with accuracies of 84.61% on NB and 83.00% on CART. Results represent improvements of 14.38% and 4.89%, respectively, compared with the original datasets. Meanwhile, the average dimensionality was effectively reduced from 5390.8 to 5.67 and 6.27 for the two classifiers, respectively. These results demonstrate the effectiveness of the proposed method in heterogeneous feature selection.
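For reference, the classical symmetric uncertainty that PJSU builds on is SU(X;Y) = 2·I(X;Y) / (H(X) + H(Y)); the short NumPy sketch below computes it for discrete variables. The paper's macro-neighborhood entropy, which extends these measures to heterogeneous (mixed symbolic/numerical) data, is not reproduced here.

```python
import numpy as np

def entropy(x):
    """Shannon entropy (nats) of a discrete array."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def mutual_information(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for discrete arrays."""
    joint = np.array([f"{a}|{b}" for a, b in zip(x, y)])
    return entropy(x) + entropy(y) - entropy(joint)

def symmetric_uncertainty(x, y):
    """SU(X;Y) = 2 * I(X;Y) / (H(X) + H(Y)), a normalised score in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    return 0.0 if hx + hy == 0 else 2.0 * mutual_information(x, y) / (hx + hy)

x = np.array([0, 0, 1, 1, 2, 2])
y = np.array([0, 0, 1, 1, 1, 1])
print(symmetric_uncertainty(x, y))                                   # relevance of feature x to label y
```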
Citations: 0
FG-MoE: Heterogeneous mixture of experts model for fine-grained visual classification
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-05 DOI: 10.1016/j.patcog.2026.113050
Songming Yang, Jing Wen, Bin Fang
Fine-grained visual classification (FGVC) is a challenging task due to subtle inter-class differences and significant intra-class variations. Most existing approaches struggle to simultaneously capture multi-level discriminative features and effectively integrate complementary visual information. To address these challenges, we propose Fine-Grained Mixture of Expert (FG-MoE), a novel heterogeneous mixture-of-experts model for fine-grained visual classification. Our approach introduces a specialized multi-scale pyramid module that aggregates multi-scale information and enhances feature representation through spatial and channel attention mechanisms. Inspired by neuroscientific insights into visual processing mechanisms of the human brain, FG-MoE employs five specialized experts that focus on different visual cues: global structures, regional semantics, local details, textures, and part-level interactions. A spatial-aware gating mechanism dynamically selects appropriate expert combinations for each input image. We further design a novel multi-stage training strategy and employ balance constraints along with diversity and orthogonality regularization to ensure balanced learning and promote diverse expert specialization. The final classification leverages fused features from all selected experts. Extensive experiments on three widely used FGVC datasets demonstrate that FG-MoE achieves substantial performance improvements over backbone models and establishes state-of-the-art results across all these benchmarks, validating the effectiveness and robustness of our approach.
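The snippet below is a deliberately simplified mixture-of-experts head in the spirit of the description above: a gating network scores the five experts per input and the final logits are the gate-weighted sum of the expert outputs. The spatial-aware gating, top-k routing, and the distinct expert designs (global, regional, local, texture, part-level) are abstracted away; all dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

class SimpleMoEHead(nn.Module):
    """Sketch: 5 expert MLPs plus a softmax gate that weights their logits."""
    def __init__(self, feat_dim=256, num_experts=5, num_classes=200):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, num_classes))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(feat_dim, num_experts)

    def forward(self, feats):                                        # (B, feat_dim)
        weights = torch.softmax(self.gate(feats), dim=-1)            # (B, E) expert weights
        expert_out = torch.stack([e(feats) for e in self.experts], dim=1)   # (B, E, C)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)       # (B, C) fused logits

logits = SimpleMoEHead()(torch.randn(4, 256))
```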
Citations: 0
FocalGaussian: Improving text-driven 3D human generation with body part focus
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-05 DOI: 10.1016/j.patcog.2025.112923
Yifan Yang, Zeshuai Deng, Dong Liu, Zixiong Huang, Kai Zhou, Hailin Luo, Qing Du, Mingkui Tan
Text-driven 3D human generation significantly reduces manual labor for professionals and enables non-professionals to create 3D assets, facilitating applications across various fields such as digital games, advertising, and films. Conventional methods usually follow the paradigm of optimizing 3D representations such as neural radiance fields and 3D Gaussian Splatting via Score Distillation Sampling (SDS) with a diffusion model. However, existing methods struggle to generate delicate and 3D-consistent human body parts, primarily because they neglect stable topology control and precise local view control. Our key idea is to focus on the critical components of the human body parts to impose precise control while optimizing the 3D model. Following this, we propose FocalGaussian. Specifically, to generate delicate body parts, we propose a focal depth loss that recovers delicate human body parts by aligning the depth of local body parts in the 3D human model and SMPL-X at local and global scales. Moreover, to achieve 3D-consistent local body parts, we propose a focal view-dependent SDS that emphasizes key body-part features and provides finer control over local geometry. Extensive experiments demonstrate the superiority of our FocalGaussian across a variety of prompts. Critically, our generated 3D humans accurately capture complex features of human body parts, particularly the hands. For more results, please check our project page.
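One way the depth-alignment idea could be written down is sketched below: an L1 term between the rendered depth of the generated human and the SMPL-X depth inside each body-part mask, plus a global L1 term over the whole map. The mask-based formulation, the L1 choice, and the weights are assumptions made for illustration; they are not the paper's exact focal depth loss.

```python
import torch
import torch.nn.functional as F

def focal_depth_loss(pred_depth, smplx_depth, part_masks, part_weight=1.0, global_weight=0.5):
    """pred_depth, smplx_depth : (H, W) depth maps rendered from the generated
    human and from the SMPL-X body model; part_masks : (P, H, W) boolean masks
    of local body parts (e.g. hands). Aligns depth at local and global scales."""
    local = 0.0
    for mask in part_masks:
        if mask.any():
            local = local + F.l1_loss(pred_depth[mask], smplx_depth[mask])
    return part_weight * local + global_weight * F.l1_loss(pred_depth, smplx_depth)

# Toy usage with two dummy part masks
H = W = 64
masks = torch.zeros(2, H, W, dtype=torch.bool)
masks[0, :16, :16] = True
masks[1, -16:, -16:] = True
loss = focal_depth_loss(torch.rand(H, W), torch.rand(H, W), masks)
```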
Citations: 0
HybridCount: Multi-scale transformer with knowledge distillation for object counting
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-05 DOI: 10.1016/j.patcog.2026.113043
Jayanthan K S, Domnic S
This work introduces a novel architecture that integrates a multi-scale Vision Transformer (ViT) encoder with a graph attention network decoder to model contextual relationships in visual scenes. Our approach achieves real-time, parameter-efficient object counting through an innovative Knowledge Distillation framework that integrates density estimation maps with regression-based counting mechanisms. The distillation process optimizes performance through a three-component loss function: encoder loss, decoder loss, and our proposed Dual-Domain Density-Regression Loss (DD-R Loss). This novel loss formulation simultaneously supervises both the spatial density distribution and direct count regression, providing complementary learning signals for robust object quantification. A key contribution is our scale-aware token embedding technique and cross-attention fusion across varying receptive fields within the ViT architecture, enabling precise counting in cluttered visual environments. Experiments are conducted on four crowd-counting datasets and two vehicle-counting datasets. Our detailed experimental evaluation shows that the proposed method delivers outcomes comparable to SOTA methods in terms of counting accuracy and density estimation precision. The detailed comparisons presented in our results and discussion sections highlight the significant strengths and advantages of our methodology within the challenging domain of visual object counting. Our framework bridges the gap between the representational power of transformer-based models and graph network architectures. The efficiency of our approach enables real-time performance comparable to other CNN-based approaches. This combination delivers a comprehensive solution for object counting tasks that performs effectively even in resource-constrained environments.
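A minimal reading of the dual-domain idea, supervising both the density map and the scalar count it integrates to, could look like the sketch below: pixel-wise MSE on the density map plus an L1 term between predicted and ground-truth counts. The weighting and the exact form of the published DD-R Loss are not reproduced; this is only an assumed illustration.

```python
import torch
import torch.nn.functional as F

def dd_r_style_loss(pred_density, gt_density, lam=1.0):
    """Illustrative dual-domain loss: density-map MSE plus an L1 penalty on the
    counts obtained by summing (integrating) each density map."""
    density_term = F.mse_loss(pred_density, gt_density)
    count_term = F.l1_loss(pred_density.sum(dim=(-2, -1)), gt_density.sum(dim=(-2, -1)))
    return density_term + lam * count_term

loss = dd_r_style_loss(torch.rand(2, 1, 96, 96), torch.rand(2, 1, 96, 96))
```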
Citations: 0
LTSTrack: Visual tracking with long-term temporal sequence
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-04 DOI: 10.1016/j.patcog.2026.113052
Zhaochuan Zeng, Shilei Wang, Yidong Song, Zhenhua Wang, Jifeng Ning
The utilization of temporal sequences is crucial for tracking in complex scenarios, particularly when addressing challenges such as occlusion and deformation. However, existing methods are often constrained by limitations such as the use of unrefined raw images or computationally expensive temporal fusion modules, both of which restrict the scale of temporal sequences that can be utilized. This study proposes a novel appearance compression strategy and a temporal feature fusion module, which together significantly enhance the tracker’s ability to utilize long-term temporal sequences. Based on these designs, we propose a tracker that can leverage a Long-term Temporal Sequence that contains historical context across 300 frames, which we name LTSTrack. First, we present a simple yet effective appearance compression strategy to extract target appearance features from each frame and compress them into compact summary tokens, which constitute a long-term temporal sequence. Then, the Mamba block is introduced to efficiently fuse the long-term temporal sequence, generating a fusion token containing the historical representation of the target. Finally, this fusion token is used to enhance the search-region features, thereby achieving more accurate tracking. Extensive experiments demonstrate that the proposed method achieves significant performance improvements across the GOT-10K, TrackingNet, TNL2K, LaSOT, UAV123 and LaSOText datasets. Notably, it achieves remarkable scores of 75.1% AO on GOT-10K and 84.6% AUC on TrackingNet, substantially outperforming previous state-of-the-art methods.
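The sketch below illustrates, under stated assumptions, the two-step idea of compressing each frame's target-region tokens into one compact summary token (here via simple attention pooling) and then fusing the 300-token history into a single fusion token. A GRU is used purely as a stand-in for the Mamba block the paper employs, and all shapes and names are illustrative.

```python
import torch
import torch.nn as nn

class TemporalSummaryFusion(nn.Module):
    """Sketch: attention-pool per-frame tokens into summary tokens, then fuse
    the long history sequentially (GRU used here as a stand-in for Mamba)."""
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)                                # attention-pooling scores
        self.fuser = nn.GRU(dim, dim, batch_first=True)

    def compress(self, frame_tokens):                                 # (T, N, D) tokens per frame
        w = torch.softmax(self.score(frame_tokens), dim=1)            # (T, N, 1) weights over tokens
        return (w * frame_tokens).sum(dim=1)                          # (T, D) one summary token per frame

    def forward(self, frame_tokens):
        summaries = self.compress(frame_tokens).unsqueeze(0)          # (1, T, D) temporal sequence
        _, h_n = self.fuser(summaries)
        return h_n[-1].squeeze(0)                                     # (D,) fusion token with historical context

fusion_token = TemporalSummaryFusion()(torch.randn(300, 49, 256))     # 300-frame history, 49 tokens/frame
```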
Citations: 0