
Latest Articles in Neurocomputing

AVSCNet: A dual-branch network for synchronization detection and content consistency learning in audio-video forgery detection
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-05-01 · Epub Date: 2026-02-13 · DOI: 10.1016/j.neucom.2026.133067
Guangwei Zhu, Hai-Sheng Li, Haiying Xia, Shuxiang Song
Deepfake videos significantly threaten digital media credibility and public trust. While existing multimodal detection methods have advanced, they struggle to generalize across diverse real-world scenarios. Most current approaches focus exclusively on either synchronization detection or content consistency checking, limiting their effectiveness. To tackle these challenges, this study introduces a new dual-branch architecture that simultaneously learns synchronization features and content consistency representations. The model includes a synchronization branch to capture temporal misalignments and a content branch to detect semantic anomalies, with decoupling loss to enhance task specificity. In the content branch, a conditional generation task is introduced to reconstruct the fused feature sequence based on the content token, enhancing the resilience of feature representations through self-supervised learning. The proposed method also includes a hierarchical cross-modal interaction mechanism with cross-attention and fine-grained embeddings. Cross-attention combines features from different modalities to improve feature representations. Fine-grained embeddings provide the model with detailed information. Experimental results show that our approach attains an AUC of 98.30% on the FakeAVCeleb dataset, approaching the current SOTA. When evaluated across datasets, it outperformed the SOTA approaches by 0.08%, 13.46%, and 10.12% on the DeepfakeTIMIT, LAV-DF, and MAVOS-DD datasets, respectively, with AUC scores of 99.11%, 86.97%, and 67.23%. Our code is available at https://github.com/zhudedede5-droid/AVSCNet.
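The cross-attention step described in the abstract, combining features from one modality with another, is standard scaled dot-product attention. A minimal pure-Python sketch with toy dimensions (the actual model also uses learned Q/K/V projections and fine-grained embeddings, which are omitted here):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query row (one modality)
    attends over the key/value rows of the other modality."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy example: 2 audio-frame queries attend over 3 video-frame keys/values
audio_q = [[1.0, 0.0], [0.0, 1.0]]
video_kv = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
fused = cross_attention(audio_q, video_kv, video_kv)
```

Each fused row is a convex combination of the video features, weighted by audio-video similarity; the first audio query ends up weighting the first (most similar) video frame highest.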
Citations: 0
A novel dendritic neuron model enhanced by the synaptic-attention mechanism and fusion-dendritic layer
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-05-01 · Epub Date: 2026-02-12 · DOI: 10.1016/j.neucom.2026.133030
Runcong Ma, Yonghua Pang, Canlong Zhang, Xudong Luo
Among various neural network architectures, the Dendritic Neuron Model (DNM) provides a biologically plausible framework with unique computational properties owing to its nonlinearity, interpretability, and efficiency. The nonlinearity stems from the multiplicative aggregation of dendritic synaptic features; however, with high-dimensional data, multiplying many normalized features causes the output to decay exponentially and gradients to vanish catastrophically during backpropagation. To alleviate this issue, we propose the Fusion-Dendritic Layer, which combines mean and multiplicative aggregation in place of the original purely multiplicative aggregation; theoretical and empirical analyses confirm the effectiveness of this improvement. Furthermore, we integrate a novel Synaptic-Attention module after the synaptic layer, enabling the model to focus on task-relevant information and accelerating convergence. Experiments on 31 public datasets show that the improved DNM achieves remarkable efficiency with very few parameters and effectively addresses the stability limitations of standard DNMs. Compared with various recent DNM variants, our DNM also attains higher accuracy and faster convergence when processing high-dimensional data. The code is available at https://github.com/PPOMZ/SAM-FDL-DNM.
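The exponential-decay problem the abstract describes is easy to see numerically: a product of many normalized (below-one) synaptic activations collapses to zero, while mixing in a mean term keeps the output at a usable magnitude. A small sketch; the 0.5/0.5 blend used for the fusion step is an illustrative assumption, not the paper's exact formulation:

```python
def multiplicative_aggregate(features):
    """Purely multiplicative aggregation, as in the standard DNM dendrite:
    the product of many normalized (<1) activations decays exponentially
    with dimensionality."""
    out = 1.0
    for f in features:
        out *= f
    return out

def fusion_aggregate(features):
    """Sketch of the Fusion-Dendritic idea: blend mean and multiplicative
    aggregation so the output no longer collapses in high dimensions.
    The equal-weight blend here is a hypothetical choice."""
    prod = multiplicative_aggregate(features)
    mean = sum(features) / len(features)
    return 0.5 * (mean + prod)

# 500 normalized synaptic activations around 0.9
features = [0.9] * 500
vanished = multiplicative_aggregate(features)  # ~1.3e-23: signal gone
fused = fusion_aggregate(features)             # ~0.45: usable magnitude
```

With 500 features the pure product is on the order of 1e-23 (and its gradient is correspondingly tiny), whereas the fused output stays near the feature mean.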
Citations: 0
TabNSA: Native sparse attention for efficient tabular data learning
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-04-28 · Epub Date: 2026-02-05 · DOI: 10.1016/j.neucom.2026.132928
Ali Eslamian, Qiang Cheng
Tabular data poses unique challenges for deep learning due to its heterogeneous feature types, lack of spatial structure, and often limited sample sizes. We propose TabNSA, a novel deep learning framework that integrates Native Sparse Attention (NSA) with a TabMixer backbone to efficiently model tabular data. TabNSA tackles computational and representational challenges by dynamically focusing on relevant feature subsets per instance. The NSA module employs a hierarchical sparse attention mechanism, including token compression, selective preservation, and localized sliding windows, to significantly reduce the quadratic complexity of standard attention operations while addressing feature heterogeneity. Complementing this, the TabMixer backbone captures complex, non-linear dependencies through parallel multilayer perceptron (MLP) branches with independent parameters. These modules are synergistically combined via element-wise summation and mean pooling, enabling TabNSA to model both global context and fine-grained interactions. Extensive experiments across supervised and transfer learning settings show that TabNSA consistently outperforms state-of-the-art deep learning models. Furthermore, by augmenting TabNSA with a fine-tuned large language model (LLM), we enable it to effectively address Few-Shot Learning challenges through language-guided generalization on diverse tabular benchmarks. Code available on: https://github.com/aseslamian/TabNSA.
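The hierarchical sparse attention the abstract describes (compressed global tokens plus localized sliding windows) can be illustrated as an attention mask that lets each token see only a few positions instead of all of them. A simplified sketch, not the NSA module's exact selection logic:

```python
def sparse_attention_mask(n_tokens, window=2, n_global=1):
    """Boolean attend-mask in the spirit of hierarchical sparse attention:
    every token sees a few global (compressed-summary) tokens plus a local
    sliding window, instead of all n_tokens positions."""
    mask = [[False] * n_tokens for _ in range(n_tokens)]
    for i in range(n_tokens):
        for j in range(n_tokens):
            local = abs(i - j) <= window
            is_global = j < n_global or i < n_global
            mask[i][j] = local or is_global
    return mask

mask = sparse_attention_mask(8, window=1, n_global=1)
total = sum(sum(row) for row in mask)  # 34 of 64 positions attended
```

The attended-position count grows roughly linearly in sequence length (window plus global terms per token), which is what reduces the quadratic cost of dense attention.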
Citations: 0
SPEED: Structured kernel block pruning with filter groups for efficient and elastic SW-HW co-design in FPGA-based CNN accelerators
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-04-28 · Epub Date: 2026-02-07 · DOI: 10.1016/j.neucom.2026.132958
Kwanghyun Koo, Sunwoong Kim, Hyun Kim
On-device AI has received increasing attention due to its ability to provide personalized performance, reduce server load, and address privacy concerns. In this context, efforts have been made to deploy deep learning models on power-efficient hardware platforms, such as field-programmable gate arrays (FPGAs). Specifically, various pruning techniques have been devised to improve performance and reduce energy consumption. However, prior pruning methods fail to achieve balanced hardware utilization, which limits actual performance gains. This paper proposes SPEED, a hardware-aware structured pruning framework integrated into FPGA-based convolutional neural network (CNN) accelerators. SPEED introduces a novel processing unit (PU)-aware kernel block pruning technique for balanced computation across a PU array. Additionally, it proposes an adaptive kernel merging technique to minimize information loss during pruning. Experiments on ResNet18, ResNet50, and YOLACT using ImageNet and Pascal VOC2012 datasets show that SPEED achieves comparable accuracy to software-based pruning methods while achieving higher throughput and lower latency, validated on two types of processing elements. Specifically, for ResNet18, SPEED removes 57.9% of parameters and 44.6% of FLOPs with only a 0.91% drop in Top-1 accuracy, and for ResNet50, it removes 73.2% of parameters and 66.0% of FLOPs with a 1.20% drop in Top-1 accuracy. FPGA benchmarking results show that SPEED efficiently converts reductions in floating-point operations into actual speedups, with little increase in hardware resource usage. When deployed on an FPGA board, SPEED improves FPS by 42.2% and enhances power efficiency by 42.7% compared to the baseline. Case studies in CNN classification and instance segmentation models demonstrate the effectiveness of SPEED as a practical pruning solution for FPGA-based CNN accelerators.
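Structured kernel-block pruning can be sketched as ranking 2-D kernel blocks by a saliency score and zeroing the weakest ones. The sketch below uses a plain L1-norm criterion as a simplified stand-in; SPEED's actual criterion is PU-aware and balances the surviving work across the processing-unit array, which this toy version does not model:

```python
def prune_kernel_blocks(kernels, keep_ratio=0.5):
    """Rank 2-D kernel blocks by L1 norm and zero out the weakest ones.
    (Simplified stand-in: the paper's PU-aware criterion additionally
    balances computation across the PU array.)"""
    norms = [(sum(abs(w) for row in k for w in row), idx)
             for idx, k in enumerate(kernels)]
    norms.sort(reverse=True)
    n_keep = max(1, int(len(kernels) * keep_ratio))
    keep = {idx for _, idx in norms[:n_keep]}
    # Zero pruned blocks instead of deleting them, preserving tensor shape
    return [k if idx in keep else [[0.0] * len(k[0]) for _ in k]
            for idx, k in enumerate(kernels)]

kernels = [[[0.9, -0.8], [0.7, 0.6]],    # strong kernel: kept
           [[0.01, 0.02], [0.0, 0.01]],  # weak kernel: pruned
           [[0.5, 0.4], [-0.3, 0.2]],    # kept
           [[0.05, 0.0], [0.02, 0.01]]]  # pruned
pruned = prune_kernel_blocks(kernels, keep_ratio=0.5)
```

Because whole blocks are zeroed (rather than individual weights), the resulting sparsity pattern maps directly onto hardware compute units, which is what lets FLOP reductions turn into real speedups.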
Citations: 0
Tuning metaheuristic parameters with the use of Large Language Models
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-04-28 · Epub Date: 2026-02-05 · DOI: 10.1016/j.neucom.2026.132976
Alicja Martinek, Ewelina Bartuzi-Trokielewicz, Szymon Łukasik, Amir H. Gandomi
Since their explosion in popularity, the impact of Large Language Models (LLMs) has been evident in almost every aspect of life. This study examines whether LLMs can be utilized for tuning metaheuristic algorithms through the selection of their parameters. To verify this hypothesis, ten instances each of three well-known combinatorial optimization problems, Graph Coloring, Job-Shop Scheduling, and Traveling Salesman, were solved using heuristic optimizers guided by LLMs, including genetic algorithm, ant colony optimization, particle swarm optimization, and simulated annealing. Parameter values were generated by prompting several state-of-the-art LLMs with problem complexity descriptors and the set of tunable parameters. A two-stage procedure was employed: an initial run based on general problem characteristics, followed by a feedback run that used performance metrics such as average fitness, variance, and convergence behavior. Default settings from the Python-based Mealpy library served as the baseline for comparison.
Results, aggregated over 900 optimizer runs, show that LLMs are capable of proposing parameter configurations that outperform defaults in terms of final objective value and convergence speed. This effect is particularly pronounced in simulated annealing and Traveling Salesman problem settings. The findings suggest that LLMs possess a high degree of generalization and contextual understanding in the domain of optimization and can serve as practical assistants in heuristic algorithm design and tuning.
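The two-stage procedure described above (an initial prompt built from problem descriptors, then a feedback prompt carrying run metrics) can be sketched as plain prompt construction and reply parsing. All names, prompt wording, and the canned reply below are illustrative assumptions, not the paper's actual prompts or an actual LLM call:

```python
import json

def build_prompt(problem, descriptors, tunable, feedback=None):
    """Stage 1 uses problem-complexity descriptors only; stage 2 appends
    performance metrics (avg fitness, variance, convergence) from the
    previous run so the LLM can revise its proposal."""
    lines = [f"Problem: {problem}",
             f"Descriptors: {descriptors}",
             f"Propose values for: {sorted(tunable)}",
             "Answer as a JSON object."]
    if feedback:
        lines.append(f"Previous run metrics: {feedback}")
    return "\n".join(lines)

def parse_params(llm_reply, tunable):
    """Keep only the tunable parameters, discarding anything extra the
    model emitted."""
    params = json.loads(llm_reply)
    return {k: v for k, v in params.items() if k in tunable}

tunable = {"pop_size", "mutation_rate"}
prompt = build_prompt("Traveling Salesman (100 cities)",
                      {"n_cities": 100}, tunable)
# Canned reply standing in for a real LLM call:
reply = '{"pop_size": 200, "mutation_rate": 0.05, "extra": 1}'
params = parse_params(reply, tunable)
```

The parsed dictionary would then be passed to the optimizer (the study uses the Mealpy library's defaults as the baseline for comparison).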
Citations: 0
Adaptive finite-time tracking control for stochastic nonlinear systems based on IT2FNN
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-04-28 · Epub Date: 2026-02-09 · DOI: 10.1016/j.neucom.2026.133013
Shuangyun Xing, Mingchen Wei, Feiqi Deng, Xueyan Zhao, Fengjun Xiao
The paper focuses on finite-time tracking problems for a class of stochastic nonlinear systems. First, we address the trajectory tracking control problem for stochastic nonlinear systems with entirely unknown nonlinear functions and propose a finite-time control strategy that leverages the approximation capability of interval type-2 fuzzy neural networks. Second, an adaptive interval type-2 fuzzy neural network controller is designed based on an improved backstepping method and Lyapunov stability theory, and a finite-time stability criterion is established by applying the integral mean value theorem, Jensen’s inequality, and Young’s inequality. Finally, a practical simulation example for nonlinear systems subject to stochastic disturbances demonstrates the validity of the proposed methods.
Citations: 0
Odin: Oriented dual-module integration for text-rich network representation learning
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-04-28 · Epub Date: 2026-02-09 · DOI: 10.1016/j.neucom.2026.133018
Kaifeng Hong, Yinglong Zhang, Xiaoying Hong, Xuewen Xia, Xing Xu
Text-attributed graphs require models to effectively combine strong textual understanding with structurally informed reasoning. Existing approaches either rely on GNNs—limited by over-smoothing and hop-dependent diffusion—or employ Transformers that largely overlook graph topology and treat nodes as isolated sequences. We propose Odin (Oriented dual-module integration), a new architecture that injects graph structure into Transformers at selected depths through an oriented dual-module mechanism. Unlike message-passing GNNs, Odin does not rely on multi-hop diffusion; instead, multi-hop structures are integrated at specific Transformer layers, yielding low-, mid-, and high-level structural abstraction aligned with the model’s semantic hierarchy. Because aggregation operates on node-specific [CLS] representations induced by textual tokens, Odin mitigates over-smoothing by preventing the iterative diffusion of homogeneous hidden states, and decouples structural abstraction from neighborhood size or graph topology. We further establish that Odin’s expressive power strictly contains that of both pure Transformers and GNNs. To make the design efficient in large-scale or low-resource settings, we introduce Light Odin, a lightweight variant that preserves the same layer-aligned structural abstraction for faster training and inference. Experiments on multiple text-rich graph benchmarks show that Odin achieves state-of-the-art accuracy, while Light Odin delivers competitive performance with significantly reduced computational cost. Together, Odin and Light Odin form a unified, hop-free framework for principled structure–text integration. The source code for this model has been released at https://github.com/hongkaifeng/Odin.
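Aggregation over node-specific [CLS] representations at selected depths, as the abstract describes, can be pictured as blending each node's [CLS] vector with the mean of its neighbours' vectors at a chosen layer. This is a loose illustrative sketch (the blend weight and mean aggregator are assumptions, not Odin's actual module), but it shows why the operation avoids iterative multi-hop diffusion:

```python
def aggregate_cls(cls_vectors, adjacency, alpha=0.5):
    """One structural-injection step (simplified): blend each node's
    [CLS] vector with the mean of its neighbours' [CLS] vectors.
    In Odin this happens only at selected Transformer depths, not at
    every layer, so homogeneous states are not diffused iteratively."""
    out = []
    for i, vec in enumerate(cls_vectors):
        nbrs = adjacency.get(i, [])
        if not nbrs:
            out.append(list(vec))
            continue
        mean = [sum(cls_vectors[j][d] for j in nbrs) / len(nbrs)
                for d in range(len(vec))]
        out.append([(1 - alpha) * v + alpha * m for v, m in zip(vec, mean)])
    return out

# Tiny graph: node 0 linked to nodes 1 and 2
cls = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = {0: [1, 2], 1: [0], 2: [0]}
mixed = aggregate_cls(cls, adj, alpha=0.5)
```

Because the step is applied at a handful of fixed layers rather than repeated per hop, the structural abstraction level is tied to model depth, not to neighbourhood size.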
引用次数: 0
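The injection mechanism the abstract describes (aggregating node-specific [CLS] vectors with one-hop structure at a chosen depth, rather than diffusing over many hops) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the mean aggregation, the self-loop handling, and the `alpha` mixing weight are all assumptions.

```python
import numpy as np

def structural_injection(H, A, alpha=0.5):
    """One structure-injection step: mix each node's [CLS] vector with the
    mean of its neighbors' [CLS] vectors. Self-loops are added so that an
    isolated node simply keeps its own state.

    H     : (N, d) node-level [CLS] embeddings from a text encoder
    A     : (N, N) binary adjacency matrix
    alpha : mixing weight for the structural term (assumed hyperparameter)
    """
    A_hat = A + np.eye(len(A))            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    neighbor_mean = (A_hat @ H) / deg     # a single aggregation, no multi-hop diffusion
    return (1 - alpha) * H + alpha * neighbor_mean

# Toy graph: nodes 0 and 1 are connected, node 2 is isolated.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

H_out = structural_injection(H, A, alpha=0.5)
print(H_out)   # node 2 is unchanged; nodes 0 and 1 pull toward each other
```

In the paper this kind of mixing would happen inside selected Transformer layers; here it is isolated as a standalone function only to make the single-step (hop-free) nature of the aggregation visible.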
Robust domain adaptation using gram optimal transport for high variance environments
IF 6.5 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-04-28 Epub Date : 2026-02-10 DOI: 10.1016/j.neucom.2026.132965
Khushboo Mishra, Tanima Dutta
High-variance environments characterized by substantial distribution shifts, label noise, and complex internal data relationships pose a fundamental challenge for reliable machine learning. In such settings, domain adaptation (DA) becomes especially difficult, as conventional methods often assume moderate divergence between source and target domains and overlook structural noise and data heterogeneity. These assumptions lead to performance degradation and instability when applied to high-variance scenarios. To address this, we propose a robust DA framework that leverages the expressive power of Gram matrices and the flexibility of Optimal Transport (OT) to align distributions while preserving intra-domain structure. Unlike traditional OT methods, which rely primarily on pairwise geometric distances and often ignore higher-order feature dependencies, our approach embeds Gram-based similarity networks to model relational patterns within and across domains. This enables the capture of semantic consistency beyond mere pointwise alignment. A key innovation of our method is the introduction of a low-rank constraint on the Gram matrix, which acts as a structural regularizer to suppress noise and highlight dominant data subspaces. Traditional OT formulations are susceptible to corruption in high-variance settings, as they treat all feature dimensions equally and can overfit to noisy or outlier-dominated regions. In contrast, our rank-constrained transport plan selectively emphasizes coherent, low-dimensional structures within the source domain, effectively filtering out corrupted subspaces and enhancing the robustness of cross-domain alignment. Experimental results across multiple benchmarks demonstrate that our approach significantly improves adaptation robustness, achieving an average accuracy of 89.36% on Office-Caltech, with gains of +3.85% on Office-Home, +2.5% on DomainNet, and +5.08% on the imbalanced VisDA dataset.
Citations: 0
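The two ingredients named in the abstract, a low-rank-constrained Gram matrix and an optimal-transport alignment, can be roughly sketched with plain NumPy. This is a stand-in, not the paper's method: the entropic Sinkhorn solver, uniform marginals, squared-Euclidean cost, and all sizes and hyperparameters here are assumptions, and the paper's Gram-embedded transport cost is not reproduced.

```python
import numpy as np

def low_rank_gram(X, rank):
    """Gram matrix of X truncated to its top-`rank` spectral components;
    the truncation plays the role of a structural regularizer that
    suppresses noisy subspaces."""
    G = X @ X.T
    U, s, Vt = np.linalg.svd(G)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def sinkhorn_plan(C, reg=0.5, iters=200):
    """Entropic-regularized optimal transport plan between uniform
    marginals, computed with plain Sinkhorn iterations."""
    n, m = C.shape
    a = np.ones(n) / n                 # uniform source marginal
    b = np.ones(m) / m                 # uniform target marginal
    K = np.exp(-C / reg)
    v = np.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
Xs = rng.normal(size=(8, 5))           # source-domain features
Xt = rng.normal(size=(10, 5)) + 0.5    # target-domain features under a shift

Gs = low_rank_gram(Xs, rank=3)         # rank-3 view of the source structure
C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)   # squared-Euclidean cost
P = sinkhorn_plan(C)
print(np.linalg.matrix_rank(Gs), P.sum())
```

With uniform marginals, the final v-update makes the column sums of `P` match the target marginal exactly, which gives a convenient sanity check on the solver.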
A linear encryption privacy protection strategy against eavesdroppers in cyber-physical systems
IF 6.5 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-04-28 Epub Date : 2026-02-10 DOI: 10.1016/j.neucom.2026.132937
Shaojie Xu, Dan Ye, Dongsheng Yang, Guangdi Li
In this paper, a linear coding scheme is proposed to enhance the privacy of data in cyber-physical systems. Firstly, the expression of the estimation error covariance under this linear coding scheme is presented. Secondly, a necessary and sufficient condition is proposed to guarantee the estimation performance of legitimate users. The optimal design parameters are provided by maximizing the estimation error covariance of the eavesdropper. Finally, the different estimation states of legitimate users and eavesdroppers under packet loss are described by using Markov chains, and the steady-state expected value of the estimation error covariance under packet loss is given. The above coding scheme has been verified through simulations using a load frequency control system.
Citations: 0
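The core intuition of a linear coding scheme, where a secretly shared invertible matrix lets the legitimate user decode exactly while an eavesdropper who ignores the coding incurs a large estimation error, can be demonstrated in a few lines. The coding matrix, signal model, and sample size below are illustrative assumptions, not the paper's construction, and the Markov-chain packet-loss analysis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invertible coding matrix, shared only with the legitimate user.
T = np.array([[2.0, 1.0],
              [1.0, 1.0]])
T_inv = np.linalg.inv(T)

x = rng.normal(size=(2, 500))    # measurement sequence to be protected
y = T @ x                        # what is actually transmitted

x_legit = T_inv @ y              # legitimate user inverts the coding exactly
x_eaves = y                      # eavesdropper treats y as the plain signal

err_legit = np.mean((x_legit - x) ** 2)
err_eaves = np.mean((x_eaves - x) ** 2)
print(err_legit, err_eaves)      # near machine precision vs. a large error
```

The legitimate user's residual error is purely floating-point round-off, while the eavesdropper's mean-squared error is driven by how strongly `T` distorts the signal, which is exactly the quantity the paper's design maximizes (via the eavesdropper's estimation error covariance).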
CaPGNN: Optimizing parallel graph neural network training with joint caching and resource-aware graph partitioning
IF 6.5 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-04-28 Epub Date : 2026-02-07 DOI: 10.1016/j.neucom.2026.132978
Xianfeng Song, Yi Zou, Zheng Shi
Graph-structured data is ubiquitous in the real world, and Graph Neural Networks (GNNs) have become increasingly popular in various fields due to their ability to process such irregular data directly. However, as data scales, GNNs become inefficient. Although parallel training offers performance improvements, increased communication costs often offset these advantages. To address this, this paper introduces CaPGNN, a novel parallel full-batch GNN training framework on single server with multi-GPU. Firstly, considering the fact that the number of remote vertices in a partition is often greater than or equal to the number of local vertices and there may exist many duplicate vertices, we propose a joint adaptive caching algorithm that leverages both CPU and GPU memory, integrating lightweight cache update and prefetch techniques to effectively reduce redundant communication costs. Furthermore, taking into account the varying computational and communication capabilities among GPUs, we propose a communication- and computation-aware heuristic graph partitioning algorithm inspired by graph sparsification. Additionally, we implement a pipeline to overlap computation and communication. Extensive experiments show that CaPGNN improves training efficiency by up to 18.98x and reduces communication costs by up to 99%, with minimal accuracy loss or even accuracy improvement in some cases. Finally, we extend CaPGNN to multi-machine multi-GPU environments. The code is available at https://github.com/songxf1024/CaPGNN.
Citations: 0
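The caching idea, serving repeated accesses to duplicated remote vertices from local memory so that only the first access pays a transfer, can be sketched with a toy cache. The class and names are hypothetical, and this sketch omits the paper's joint CPU/GPU tiering, lightweight cache update, and prefetching.

```python
class RemoteFeatureCache:
    """Toy cache for remote vertex features: fetch a vertex's feature from
    the 'remote' store only on a miss, and serve all repeats locally."""

    def __init__(self, remote_store):
        self.remote_store = remote_store   # e.g. features held on another GPU
        self.local = {}
        self.fetches = 0                   # number of actual remote transfers

    def get(self, vid):
        if vid not in self.local:
            self.local[vid] = self.remote_store[vid]
            self.fetches += 1
        return self.local[vid]

remote = {v: [float(v)] * 4 for v in range(100)}   # stand-in feature store
cache = RemoteFeatureCache(remote)

# One training pass touching duplicated remote vertices:
accesses = [3, 7, 3, 3, 7, 9, 9, 3]
for v in accesses:
    cache.get(v)

print(len(accesses), cache.fetches)   # 8 accesses, only 3 remote fetches
```

Eight accesses collapse to three transfers because the access list contains only three distinct vertices; this redundancy in remote-vertex accesses is precisely what the abstract's reported communication savings exploit.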
Journal: Neurocomputing