
Latest articles in Neurocomputing

AHIR: Deep learning-based autoencoder hashing image retrieval
IF 6.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-08 DOI: 10.1016/j.neucom.2026.132639
Ahmet Yilmaz , Uğur Erkan , Abdurrahim Toktas , Qiang Lai , Suo Gao
Deep learning-based image retrieval (IR) approaches, though promising for automatic feature extraction, suffer from several limitations, including insufficient semantic representation, suboptimal retrieval performance, and limited evaluation across different hash code lengths. To address these limitations, a novel deep learning-based Autoencoder Hashing IR (AHIR) algorithm is proposed, employing the strengths of ResNet50 and autoencoder architectures. In this integrated model, ResNet50 is responsible for extracting the semantic features of images, while the autoencoder compresses these features to the required dimensions and transforms them into hash codes. The study's contributions include the ability to capture both low-level and high-level features, streamline IR for large-scale databases, and enhance efficiency in supervised learning scenarios. Furthermore, a comparative analysis of various reported IR algorithms is presented, highlighting the performance of AHIR against its counterparts on the MS-COCO, NUS-WIDE, and MIRFLICKR-25K datasets. AHIR outperforms the existing methods with the highest mAP scores of 0.9103, 0.9007, and 0.9136 for MS-COCO, NUS-WIDE, and MIRFLICKR-25K, respectively. The results manifest the superior IR performance of AHIR thanks to the novel integrated autoencoder-based hashing mechanism.
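As a rough illustration of the deep-hashing retrieval pipeline the abstract describes (extract features, compress to a bottleneck, binarize, rank by Hamming distance): the sketch below stands in a random linear projection for the trained ResNet50-plus-autoencoder encoder, so every dimension and name here is invented for illustration, not AHIR's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for ResNet50 embeddings: 2048-d feature vectors for a small gallery.
gallery = rng.normal(size=(100, 2048))
query = gallery[7] + 0.01 * rng.normal(size=2048)  # near-duplicate of item 7

# Stand-in for the trained encoder: a linear projection to the hash length
# (AHIR learns this compression with an autoencoder; a random projection
# only illustrates the binarize-and-rank mechanics).
W = rng.normal(size=(2048, 64))

def to_hash(x):
    """Project to the bottleneck dimension and binarize into a 64-bit code."""
    return (x @ W > 0).astype(np.uint8)

codes = to_hash(gallery)
q = to_hash(query)

# Retrieval = ranking gallery codes by Hamming distance to the query code.
hamming = (codes != q).sum(axis=1)
best = int(np.argmin(hamming))
print(best)  # the near-duplicate, item 7
```

The appeal of hashing retrieval is exactly what this shows: after encoding, ranking a large gallery reduces to cheap bitwise comparisons.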
Neurocomputing, Volume 671, Article 132639.
Citations: 0
Affine non-negative discriminative representation for biomedical image classification
IF 6.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-08 DOI: 10.1016/j.neucom.2026.132663
Dan Hu , Junwei Jin , Songbo Zhou , Xiang Du , Yanting Li
Biomedical imaging plays a vital role in modern healthcare by supporting accurate diagnosis and informed clinical decision-making. Although recent advances in artificial intelligence have significantly improved biomedical image analysis, persistent challenges remain, including high data variability, limited availability of annotated samples, and the need for transparent and interpretable classification models. To address these issues, this paper proposes a novel classification framework termed affine non-negative discriminative representation-based classification (ANDRC) for robust biomedical image recognition. In the proposed framework, a non-negativity constraint is imposed on the representation coefficients, which enforces the test sample to be reconstructed as a purely additive combination of training samples. By preventing subtractive contributions induced by negative coefficients, this constraint encourages stronger contributions from homogeneous samples while naturally suppressing misleading representations from heterogeneous ones, thereby enhancing both interpretability and discriminative stability. In addition, an affine constraint is incorporated to effectively model the intrinsic structure of data distributed over a union of affine subspaces, which commonly arises in real-world biomedical imaging scenarios. The resulting optimization problem is efficiently solved using the Alternating Direction Method of Multipliers, ensuring stable convergence and computational efficiency. Extensive experiments conducted on multiple biomedical image classification benchmarks demonstrate that the proposed ANDRC method consistently outperforms several state-of-the-art approaches. These results highlight the effectiveness of ANDRC in handling data variability and label scarcity, as well as its potential for practical deployment in biomedical image analysis.
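The core idea — reconstruct a test sample as a purely additive combination of training samples — can be sketched with a toy non-negative least-squares solve. This uses plain projected gradient descent rather than the paper's ADMM solver, and omits the affine (sum-to-one) constraint ANDRC additionally imposes; all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dictionary: 5 training samples (columns) in 8 dimensions.
D = rng.normal(size=(8, 5))
# Test sample built as an additive mix of training samples 0 and 2.
y = 0.7 * D[:, 0] + 0.3 * D[:, 2]

# Projected gradient for: min ||D c - y||^2  s.t.  c >= 0.
# The non-negativity projection is what rules out subtractive contributions.
c = np.zeros(5)
step = 1.0 / np.linalg.norm(D.T @ D, 2)   # 1 / Lipschitz constant of the gradient
for _ in range(2000):
    c -= step * (D.T @ (D @ c - y))
    c = np.maximum(c, 0.0)                # projection onto the non-negative orthant

print(np.round(c, 2))
```

The recovered coefficients concentrate on the two samples that actually compose `y`, which is the interpretability argument the abstract makes: homogeneous samples get weight, heterogeneous ones are suppressed rather than cancelled by negative terms.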
Neurocomputing, Volume 671, Article 132663.
Citations: 0
HEIGHTS: Hierarchical graph structure learning for time series anomaly detection
IF 6.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-08 DOI: 10.1016/j.neucom.2026.132638
Vinitha M Rajan, Sahely Bhadra
The time series anomaly detection task aims to identify instances of abnormal system behavior by analyzing time series data. For multivariate time series data, it is crucial to learn both temporal and inter-variable relationships. Graph-based methods explicitly learn inter-variable relationships alongside temporal patterns extracted from the data. However, two major limitations exist in these models. First, they are typically trained on data that is assumed to be free of anomalies, which are challenging to obtain in real-world scenarios. Second, they learn relationships only at the variable level, overlooking valuable information about inherent groups of variables. We adopt a more realistic approach by training on data that include anomalies. Our model learns inter-variable and temporal relationships by simultaneously performing forecasting and reconstruction, which prevents overfitting to anomalies present in the training data. Additionally, we enhance the model by leveraging information about groups of variables through a Hierarchical Graph Neural Network, enabling more effective learning of inter-variable relationships. Our method demonstrates a significant improvement in time series anomaly detection performance, as evaluated on benchmark datasets, outperforming state-of-the-art baselines by a margin of up to 39%. We also present an in-depth analysis of the model’s behavioral efficiency through extensive experiments using synthetic and real datasets.
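The joint forecasting-plus-reconstruction scoring the abstract describes can be illustrated on a univariate toy series: score each point by a weighted sum of its forecast error and its reconstruction error, and flag the maximum. The forecaster (previous value) and reconstructor (moving average) below are trivial stand-ins, not the paper's graph neural networks.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic series with an injected anomaly at t = 150.
x = np.sin(np.linspace(0, 20, 300)) + 0.05 * rng.normal(size=300)
x[150] += 3.0

# Stand-in forecaster: previous value; stand-in reconstructor: moving average.
forecast = np.roll(x, 1)
recon = np.convolve(x, np.ones(5) / 5, mode="same")

# HEIGHTS combines forecasting and reconstruction errors; equal weights here.
score = 0.5 * np.abs(x - forecast) + 0.5 * np.abs(x - recon)
flagged = int(np.argmax(score[1:])) + 1   # skip t = 0 (np.roll wrap-around)
print(flagged)
```

Combining the two error signals is the point: a model that only reconstructs can learn to reproduce anomalies in the training data, while the forecasting term keeps the score anchored to expected dynamics.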
Neurocomputing, Volume 671, Article 132638.
Citations: 0
Meta-evaluation of robustness metrics: An in-depth analysis
IF 6.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-08 DOI: 10.1016/j.neucom.2026.132651
Miquel Miró-Nicolau , Antoni Jaume-i-Capó , Gabriel Moyà-Alcover
Robustness is a pivotal attribute of trustworthy explainable artificial intelligence, ensuring that explanations remain consistent when inputs are perturbed slightly. Although numerous metrics have been proposed to quantify this robustness, their reliability remains largely unverified. This study introduces a novel meta-evaluation framework for robustness metrics, grounded in controlled, verifiable experimental setups. We propose three sanity tests: perfect explanation, normal explanation, and random output. These tests facilitate the systematic assessment of the validity of robustness metrics using transparent models. By evaluating seven state-of-the-art robustness metrics across four benchmark datasets, our results reveal significant shortcomings: no single metric consistently achieves the expected outcomes across all tests. These findings underscore fundamental flaws in robustness metrics and emphasise the necessity for improved evaluation frameworks. Our methodology provides a reproducible, assumption-light benchmark for identifying unreliable metrics before deployment in critical applications.
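The flavor of such a sanity test can be sketched with a local-Lipschitz-style robustness estimate (max change in the explanation under small input perturbations — one common formulation, not necessarily the seven metrics the paper evaluates). A perfectly robust explainer (the constant gradient of a linear model) should score 0; a random-output explainer should score badly.

```python
import numpy as np

rng = np.random.default_rng(3)

def robustness(explainer, x, eps=1e-2, trials=20):
    """Max L2 change in the explanation under small input perturbations
    (a local-Lipschitz-style estimate; lower = more robust)."""
    e0 = explainer(x)
    return max(np.linalg.norm(explainer(x + eps * rng.normal(size=x.shape)) - e0)
               for _ in range(trials))

w = rng.normal(size=10)
x = rng.normal(size=10)

perfect = lambda z: w                         # linear model's gradient: constant
random_expl = lambda z: rng.normal(size=10)   # "random output" sanity test

print(robustness(perfect, x), robustness(random_expl, x))
```

A metric failing either check on such a transparent setup is exactly the kind of shortcoming the paper's meta-evaluation is designed to expose.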
Neurocomputing, Volume 671, Article 132651.
Citations: 0
Towards alleviating hallucination in text-to-image retrieval for CLIP in zero-shot learning
IF 6.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-08 DOI: 10.1016/j.neucom.2026.132640
Hanyao Wang , Yibing Zhan , Liu Liu , Liang Ding , Yan Yang , Jun Yu
Pretrained cross-modal models, such as the representative CLIP model, have recently led to a boom in the use of pretrained models for cross-modal zero-shot tasks due to their strong generalization abilities. However, we experimentally discovered that CLIP suffers from text-to-image retrieval hallucination, which adversely limits its capabilities under zero-shot learning. Specifically, in retrieval tasks, CLIP often assigns the highest score to an incorrect image, even when it correctly understands the image’s semantic content in classification tasks. Accordingly, we propose the Balanced Score with Auxiliary Prompts (BSAP) method to address this problem. BSAP introduces auxiliary prompts that provide multiple reference outcomes for each image retrieval task. These outcomes, derived from the image and the target text, are normalized to compute a final similarity score, thereby reducing hallucinations. We further combine the original results with BSAP to generate a more robust hybrid outcome, termed BSAP-H. Extensive experiments on Referring Expression Comprehension (REC) and Referring Image Segmentation (RIS) tasks demonstrate that BSAP significantly improves the performance of CLIP and state-of-the-art vision-language models (VLMs). Code available at https://github.com/WangHanyao/BSAP.
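A toy numerical illustration of the normalization idea (the similarity values and the softmax formulation below are invented for illustration; the paper's exact BSAP score may differ): an image that scores high against *every* prompt loses to one that is specifically similar to the target text once each image's scores are normalized across the target and auxiliary prompts.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy CLIP-style similarities: rows = 3 candidate images,
# columns = [target text, aux prompt 1, aux prompt 2]  (made-up values).
sims = np.array([
    [0.9, 0.8, 0.8],   # image 0: highest raw score, but high on every prompt
    [0.7, 0.1, 0.1],   # image 1: lower raw score, specific to the target text
    [0.2, 0.3, 0.2],
])

raw_choice = int(np.argmax(sims[:, 0]))                  # plain CLIP: max raw score
balanced = np.array([softmax(row)[0] for row in sims])   # normalize each image's
bsap_choice = int(np.argmax(balanced))                   # score against aux prompts

print(raw_choice, bsap_choice)
```

The raw ranking picks image 0 (the hallucination case the abstract describes); the normalized ranking picks image 1, whose high score is specific to the target text.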
Neurocomputing, Volume 671, Article 132640.
Citations: 0
EEG-TFX: An interactive MATLAB toolbox for EEG feature engineering via multi-scale temporal windowing and filter banks
IF 6.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-08 DOI: 10.1016/j.neucom.2026.132650
Qingyue Xin , Rui Zhang , Yaoqi Hu , Weidong Zhou , Lan Tian , Guoyang Liu
This paper presents an intelligent Electroencephalogram (EEG) analysis platform (EEG-TFX) that provides a comprehensive solution for EEG signal processing. The platform adopts a modular design, integrating data preprocessing, time-frequency segmentation, feature extraction, model training, and visualization functions. It supports flexible configuration of time windows, frequency band segmentation, and multiple filter order selections. The platform offers various feature extraction methods, including entropy-based features and other popular EEG features, together with multiple classification algorithms, such as artificial neural networks, support vector machines, and decision trees. In addition, the system supports user-defined function extensions. It enables joint feature processing, dynamic visualization, and result export, making it suitable for multiple neuro-computing research fields such as brain-computer interfaces, neural rehabilitation, and EEG-based emotion recognition. EEG-TFX significantly enhances the efficiency and reliability of EEG signal analysis by providing standardized workflows, offering a convenient and integrated tool for related research. The toolbox is open-source and available at: https://github.com/Xin-qy/EEG-TFX.
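The filter-bank band-power features such a toolbox computes can be illustrated outside MATLAB with a minimal FFT-based version (sampling rate, band edges, and the synthetic 10 Hz "alpha" signal below are illustrative choices, not EEG-TFX defaults):

```python
import numpy as np

rng = np.random.default_rng(5)

fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(fs * 4) / fs                 # 4 s of synthetic single-channel "EEG"
sig = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)  # 10 Hz + noise

# Filter-bank-style band powers from the FFT (EEG-TFX builds such features with
# configurable windows and filter orders; this is a minimal stand-in).
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
freqs = np.fft.rfftfreq(sig.size, 1 / fs)
psd = np.abs(np.fft.rfft(sig)) ** 2

powers = {name: psd[(freqs >= lo) & (freqs < hi)].sum()
          for name, (lo, hi) in bands.items()}
dominant = max(powers, key=powers.get)
print(dominant)
```

The 10 Hz component lands in the alpha band, so the alpha power dominates — the same per-band feature vector would then feed the toolbox's classifiers (ANN, SVM, decision tree).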
Neurocomputing, Volume 671, Article 132650.
Citations: 0
Multi-scale BiTemporal fusion for dynamic facial expression recognition in the wild
IF 6.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-08 DOI: 10.1016/j.neucom.2026.132658
Zixiang Fei , Hao Liu , Wenju Zhou , Minrui Fei
Dynamic Facial Expression Recognition (DFER) in real-world scenarios remains a difficult problem for emotion analysis, as it requires capturing subtle temporal variations while resisting the interference of spatial noise. To address this issue, we design a new framework, Multi-Scale BiTemporal Fusion Network (MSBTFN), which progressively refines spatiotemporal representations through dual-path processing combined with adaptive attention. The proposed architecture integrates multi-scale convolutions to capture both local and global motion characteristics from paired temporal segments. In addition, a dual-pooling mechanism fuses channel statistics with spatial operations to highlight discriminative and transient emotional cues. Coordinated 1D and 2D attention layers are then applied to construct adaptive channel weights and spatial response maps, effectively suppressing noise and enhancing feature quality. Furthermore, the Temporal Transformer Module (TTM) models temporal dependencies on the refined spatial features through an encoder built upon dual window and shifted-window attention blocks. Comprehensive experiments and ablation studies on three large-scale in-the-wild datasets—AFEW, FERV39K, and DFEW—demonstrate the robustness and accuracy of the proposed method.
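The dual-pooling channel-attention step can be sketched in a few lines: fuse per-channel mean and max statistics into weights that rescale the feature map. This follows the common channel-attention recipe (cf. CBAM-style designs); MSBTFN's exact fusion may differ, and the tensor sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# Feature map: (channels, H, W).
feat = rng.normal(size=(8, 4, 4))

# Dual pooling: channel-wise average (stable statistics) and max
# (transient, salient cues) over the spatial dimensions.
avg = feat.mean(axis=(1, 2))
mx = feat.max(axis=(1, 2))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

weights = sigmoid(avg + mx)            # fused channel weights in (0, 1)
out = feat * weights[:, None, None]    # reweighted feature map, same shape

print(out.shape)
```

Averaging captures the stable per-channel statistics while max pooling preserves transient peaks — matching the abstract's motivation of highlighting both discriminative and transient emotional cues.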
Neurocomputing, Volume 671, Article 132658.
Citations: 0
Differential evolutionary architecture search with dynamic similarity-aware weight sharing for optimization of GANs
IF 6.5 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-08 DOI: 10.1016/j.neucom.2026.132619
Atifa Rafique , Xue Yu , Musaed Alhussein , Kashif Iqbal , Mohammad Kamrul Hasan , Khursheed Aurangzeb
Generative adversarial networks (GANs) have demonstrated remarkable success in image synthesis but often suffer from mode collapse, training instability, and computational inefficiency. To overcome these challenges, we propose a dynamic evolutionary architecture search approach for GANs, which employs differential evolution (DE) to optimize the generator architecture through genetic operations such as crossover and mutation. Furthermore, it incorporates dynamic layer-wise weight sharing (DLWS) with an adaptive similarity threshold (AST) to enhance parameter efficiency and training stability. Unlike traditional weight-sharing techniques, our dynamic mechanism adjusts based on structural similarities between layers, improving both specialization and stability. Additionally, we integrate fair single-path sampling and an operation discard strategy to ensure smoother training and faster convergence. Based on extensive experiments on CIFAR-10, STL-10, CIFAR-100, and CelebA datasets, our proposed method achieves an inception score (IS) of 8.99 ± 0.06 on CIFAR-10, a Fréchet inception distance (FID) of 21.75 on STL-10, and an IS of 8.89 on the CIFAR-100 dataset. These results demonstrate the superior performance of our method, while significantly reducing GPU hours compared to existing approaches. This paper provides a scalable, stable, and efficient solution for optimizing GAN architectures, opening new possibilities for advanced generative modeling tasks.
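The DE loop at the heart of the search (mutation, crossover, greedy selection) can be shown on a toy continuous objective; in the paper the "fitness" would instead score candidate generator architectures, which is far too expensive to reproduce here. Population size, F, and CR below are conventional DE defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy fitness (stand-in for evaluating a generator architecture):
# minimum at v = [0.5, 0.5, 0.5].
def fitness(v):
    return ((v - 0.5) ** 2).sum()

pop = rng.uniform(0, 1, size=(20, 3))
F, CR = 0.8, 0.9                          # mutation scale and crossover rate
for _ in range(200):
    for i in range(20):
        idx = [j for j in range(20) if j != i]
        a, b, c = pop[rng.choice(idx, 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0, 1)   # differential mutation
        cross = rng.random(3) < CR                 # binomial crossover mask
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) < fitness(pop[i]):       # greedy selection
            pop[i] = trial

best = pop[np.argmin([fitness(v) for v in pop])]
print(np.round(best, 2))
```

The population converges to the optimum; in the architecture-search setting each `fitness` call would train and score a sampled generator, which is why the paper's weight sharing and single-path sampling matter for keeping GPU hours down.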
(Neurocomputing, vol. 671, Article 132619)
A cost-effective dynamic physical watermarking strategy for replay attack detection in cyber-physical systems
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-08 · DOI: 10.1016/j.neucom.2026.132655
Qilin Zhu, Yulong Ding, Shuang-Hua Yang
Cyber-Physical Systems (CPS) are intelligent control systems that integrate computation, physical processes, and network communication. They are widely applied in critical infrastructures such as industrial manufacturing, power grids, and water treatment plants. In recent years, CPS have increasingly become targets of malicious attacks, posing severe threats to both society and the environment. To safeguard the security of CPS, anomaly detection has emerged as a key approach to ensuring normal system operation. However, existing anomaly detection methods for CPS often suffer from limitations: many focus solely on cyber-level anomalies, without accounting for the unique integration of cyber and physical components in CPS, and they lack authentication mechanisms for physical components. Physical watermarking, which leverages the inherent control logic of the system, is capable of detecting replay attacks that are difficult to detect through traditional means. Nevertheless, conventional persistent watermarking strategies lead to substantial increases in control cost. This study proposes a dual-threshold watermarking method that, without assuming a specific replay attack model, aims to add watermarks only during attacks, thereby effectively reducing control cost. In addition, a novel anomaly detection statistic is introduced to address a limitation of existing intermittent watermarking schemes, in which insufficient watermark addition during the attacker's data recording phase weakens the residual anomalies. Furthermore, this study employs reinforcement learning to dynamically regulate both the timing and strength of watermark addition. Simulation experiments on the widely used linearized quadruple-tank system demonstrate the effectiveness of the proposed methods in accurately identifying replay attacks while minimizing control cost.
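To make the watermarking idea concrete, here is a hypothetical toy sketch on a scalar plant: a Gaussian watermark is injected into the control input, and a residual-variance check flags a replay because replayed measurements cannot reflect the live watermark. All parameter values, the observer gain, and the variance detector are illustrative assumptions of mine; the paper's dual-threshold strategy, detection statistic, and RL-based scheduling are not reproduced here.

```python
import random
import statistics

# Toy plant: x' = a*x + b*(u_fb + d) + w, measurement y = x + v,
# where d is the injected Gaussian physical watermark.
random.seed(1)
a, b = 0.9, 1.0
q, r_noise, wm_std = 0.05, 0.05, 0.5  # process/sensor noise, watermark std

def simulate(steps, replay_from=None):
    """Run the closed loop; if replay_from is given, feed those recorded
    measurements to the detector instead of the live sensor (a replay)."""
    x, x_hat = 0.0, 0.0
    recorded, residuals = [], []
    for k in range(steps):
        d = random.gauss(0, wm_std)          # physical watermark
        u = -0.5 * x_hat + d                 # stabilizing feedback + watermark
        x = a * x + b * u + random.gauss(0, q)
        y_true = x + random.gauss(0, r_noise)
        y = replay_from[k] if replay_from is not None else y_true
        recorded.append(y_true)
        y_pred = a * x_hat + b * u           # one-step prediction (knows d)
        residuals.append(y - y_pred)         # innovation seen by detector
        x_hat = y_pred + 0.8 * (y - y_pred)  # simple fixed-gain observer
    return recorded, statistics.pvariance(residuals)

recorded, var_normal = simulate(500)
_, var_replay = simulate(500, replay_from=recorded)
# Replayed data ignores the live watermark, so the residual variance
# inflates by roughly the watermark's contribution; a threshold on it
# separates the two regimes.
print(var_normal, var_replay)
```

The key effect: under replay the residual contains the fresh watermark term with nothing in the replayed measurement to cancel it, so its variance jumps by about `b**2 * wm_std**2`, which is what the detector exploits.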
(Neurocomputing, vol. 671, Article 132655)
scMFE: A multi-view fusion enhanced graph contrastive learning method for scRNA-seq data clustering
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-08 · DOI: 10.1016/j.neucom.2026.132662
Yun Bai, Zhenqiu Shu, Kaiwen Tan, Yongbing Zhang, Zhengtao Yu
Graph contrastive learning has become an advanced paradigm for clustering scRNA-seq data by effectively mitigating the issue of false negative sample pairs in traditional contrastive learning. However, applying graph contrastive learning to scRNA-seq data clustering still faces two issues: first, scRNA-seq data is high-dimensional, noisy, and sparse, making it difficult for current graph construction methods to capture the inherent complex relationships between cells; second, existing methods rely on data perturbation to generate contrastive views, inevitably introducing additional noise and limiting model performance. To address these challenges, we propose scMFE, a multi-view fusion enhanced graph contrastive learning method for scRNA-seq data clustering. scMFE constructs cell graphs by fusing multi-view features, thereby capturing linear, nonlinear, and biological relationships between cells. scMFE generates contrastive views by applying topological structure constraints to the fused graph, thereby avoiding the introduction of additional noise. Building upon this, scMFE utilizes graph attention autoencoders to learn cell representations, which are then optimized through reconstruction loss and dual contrastive learning at both the cell and cluster levels. Comparative experiments on 14 real datasets against 14 baseline methods demonstrate scMFE’s superior performance in scRNA-seq data clustering. Further experiments, including analysis of cell graph quality, imbalanced cluster identification, ablation study, hyperparameter analysis, runtime analysis, cell type annotation, batch effect removal, analysis of multi-omics data, sensitivity analysis of the number of clusters, sensitivity analysis of the number of neighbors K in the KNN graph, and generalization to unseen cell types, validate the method’s effectiveness and biological rationality. The source code for scMFE is available at https://github.com/Thirty-Six-Stratagems/scMFE.
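A minimal sketch of the fusion-plus-KNN-graph step, under simplifying assumptions: two hand-made feature views per cell are fused by concatenation, and each cell is linked to its K most cosine-similar neighbours. The toy views and the concatenation fusion are my stand-ins; scMFE fuses richer linear, nonlinear, and biological views, and its graph-attention autoencoder and dual contrastive losses are not modeled here.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fuse(view1, view2):
    # Simplest possible multi-view fusion: feature concatenation.
    return view1 + view2

def knn_graph(cells, k=2):
    # Connect each cell to its k most cosine-similar neighbours.
    edges = {}
    for i, ci in enumerate(cells):
        sims = [(cosine(ci, cj), j) for j, cj in enumerate(cells) if j != i]
        sims.sort(reverse=True)
        edges[i] = [j for _, j in sims[:k]]
    return edges

# Four toy cells in two views: cells 0,1 form one cluster, 2,3 another.
view_a = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
view_b = [[0.8, 0.0], [1.0, 0.1], [0.0, 0.9], [0.1, 1.0]]
cells = [fuse(va, vb) for va, vb in zip(view_a, view_b)]
g = knn_graph(cells, k=1)
print(g)  # → {0: [1], 1: [0], 2: [3], 3: [2]}
```

On real scRNA-seq data the per-view features would come from dimensionality reduction or an encoder, and the resulting graph would feed the contrastive stages described in the abstract.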
(Neurocomputing, vol. 671, Article 132662)