
Latest Publications in Neurocomputing

Kronecker reparameterized large kernel for image compressed sensing
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-23 | DOI: 10.1016/j.neucom.2026.132833
Jiao Xie, Lingfu Jiang, Heming Jia, Shaohui Lin, Yinqi Zhang, Linlin Yang, Fang Fang, Junjun Jiang
Image compressed sensing methods utilizing deep neural networks have achieved remarkable performance improvements compared with traditional ones, regarding both reconstruction quality and efficiency. However, the factors that affect the reconstruction quality of such deep compressed sensing methods remain unclear. In this paper, we reveal that one important factor is the size of the Effective Receptive Field (ERF), based on which we propose a novel Convolutional Compressed Sensing Network with the Kronecker Reparameterized Large Kernel (KR-CCSNet). Specifically, to enlarge the ERF and achieve superior reconstruction quality, we propose the Kronecker Reparameterized Large Kernel Sampling Network (KR-LKSN) for the sampling phase. KR-LKSN not only delivers better reconstruction quality, reduced computation, and fewer parameters, but also shows great potential for deployment on resource-constrained edge sensors, owing to the lightweight design of its sampling module. For the reconstruction network, we design an Adaptive Reconstruction Module (ARM) to leverage multi-scale information from measurements via gated attention, which further enlarges the ERF during the reconstruction phase to generate high-quality images. Extensive experiments demonstrate the effectiveness of KR-CCSNet on Set5, Set14, and BSDS100. For instance, our method outperforms MR-CCSNet by an average PSNR of 0.35 dB on Set5 and Set14 across six compression ratios. Our source codes are released at https://github.com/Will0x6c5f/KRCCSNet.
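As a concrete illustration of the reparameterization idea behind KR-LKSN — composing a large sampling kernel as the Kronecker product of two much smaller factors — the following minimal PyTorch sketch builds a 32×32 kernel from 4×4 and 8×8 factors. The kernel sizes, single-channel setup, and block-wise stride are illustrative assumptions; this is not the authors' released implementation.

import torch
import torch.nn.functional as F

# Two small learnable factors; their Kronecker product forms one large kernel,
# so a 32x32 kernel costs only 16 + 64 parameters instead of 1024.
a = torch.randn(4, 4, requires_grad=True)
b = torch.randn(8, 8, requires_grad=True)
large_kernel = torch.kron(a, b).view(1, 1, 32, 32)

x = torch.randn(1, 1, 96, 96)              # toy single-channel image block
y = F.conv2d(x, large_kernel, stride=32)   # block-wise sampling with a large receptive field
print(y.shape)                             # torch.Size([1, 1, 3, 3])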
Citations: 0
Dual hypergraph regularized nonnegative matrix factorization with nonsmooth and orthogonality constraints for data clustering
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-23 | DOI: 10.1016/j.neucom.2026.132800
Chunli Song, Linzhang Lu, Chengbin Zeng
Nonnegative Matrix Factorization (NMF) has emerged as a powerful tool for data clustering, largely due to its ability to yield interpretable low-dimensional representations. However, existing NMF-based methods struggle to fully model high-order relationships across both sample and feature spaces, and they also fail to simultaneously enforce feature sparsity and preserve intrinsic geometric structures, which are key factors for clustering complex datasets. To address these challenges, this paper proposes a novel framework, namely Dual Hypergraph Regularized Nonsmooth Nonnegative Matrix Factorization with Orthogonality Constraints (DHNNMF). The model employs dual hypergraph regularization to capture high-order correlations, a nonsmooth constraint via a smoothing matrix to enhance feature sparsity and interpretability, and orthogonality constraints on the factor matrices to prevent degenerate solutions and improve clustering quality. An efficient multiplicative optimization algorithm is developed for the non-convex objective function, supported by rigorous theoretical analysis that guarantees monotonic convergence. Extensive experiments on benchmark datasets demonstrate that DHNNMF achieves superior or comparable performance compared to baseline methods.
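For readers unfamiliar with the multiplicative-update machinery this model extends, a plain Frobenius-norm NMF with the classical Lee–Seung updates is sketched below; the actual DHNNMF objective additionally carries dual hypergraph Laplacian, smoothing-matrix, and orthogonality terms that are omitted here.

import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-9):
    # Plain NMF, X ~= W @ H with W, H >= 0, via classical multiplicative updates.
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], rank))
    H = rng.random((rank, X.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficient matrix
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis matrix
    return W, H

X = np.abs(np.random.default_rng(1).standard_normal((50, 40)))
W, H = nmf_multiplicative(X, rank=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # relative reconstruction error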
Citations: 0
Recurrence mimicking learning: Eliminating sequential rollouts in offline recurrent reinforcement learning
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-23 | DOI: 10.1016/j.neucom.2026.132807
Tomasz Witkowski, Krzysztof Kania, Tomasz Wachowicz
Recurrent Reinforcement Learning (RRL) is widely used in settings where actions depend on previous decisions, such as dynamic decision-making. However, offline RRL suffers from a major computational drawback: it evaluates trajectories step by step, making training inefficient for long horizons, complex models, and high-dimensional features. To address this, we propose Recurrence Mimicking Learning (RML), an approach that reorders offline RRL rollouts to require only two batched forward passes per epoch, independent of horizon length. RML enumerates all previous actions in a single pass and reconstructs the exact recurrent path through a lightweight selection step. Experiments show that RML preserves the exact final action trajectory of standard offline RRL, allows direct optimization of global rewards, and reduces training computation time to approximately 5% of the conventional approach, while scaling efficiently with both sequence length and action space size.
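The computational gap RML targets can be seen in the call pattern below: standard offline RRL issues one recurrent call per timestep, whereas a fully batched pass over the horizon needs a single call. The sketch only contrasts the two regimes; RML's actual enumerate-then-select reconstruction of the recurrent path is not reproduced here, and all sizes are illustrative.

import torch
import torch.nn as nn

gru = nn.GRU(input_size=4, hidden_size=16, batch_first=True)
head = nn.Linear(16, 2)
obs = torch.randn(32, 500, 4)   # offline trajectories: batch 32, horizon 500

# Step-by-step rollout: 500 GRU calls. In offline RRL the input at step t also
# depends on the action produced at step t-1, which is what forces this loop.
h, stepwise = None, []
for t in range(obs.size(1)):
    out, h = gru(obs[:, t:t + 1, :], h)
    stepwise.append(head(out))

# Batched pass: one GRU call over the whole horizon, available only when all
# inputs are known up front — the regime RML recovers for offline data.
out, _ = gru(obs)
batched = head(out)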
Citations: 0
PS-Seg: Learning from partial scribbles for 3D multiple abdominal organ segmentation
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-23 | DOI: 10.1016/j.neucom.2026.132837
Meng Han, Xiaochuan Ma, Xiangde Luo, Wenjun Liao, Shichuan Zhang, Shaoting Zhang, Guotai Wang
Accurate multi-organ segmentation in abdominal Computed Tomography (CT) images is crucial for clinical diagnosis and treatment planning. However, existing deep learning approaches rely heavily on dense pixel-level annotations, which are costly and time-consuming to obtain. Scribble-based supervision offers a promising alternative to reduce annotation burden, but suffers from insufficient supervision due to the extremely limited number of labeled pixels. To address this challenge, we propose an efficient scribble-supervised 3D medical image segmentation framework based on a Triple-branch multi-Dilated Network (TDNet). TDNet employs a shared encoder and three decoders with heterogeneous dilation rates and feature-level perturbations to capture complementary contextual information and to generate reliable soft pseudo-labels for unlabeled voxels, which are further refined using voxel-wise uncertainty estimation for decoder supervision. In addition, a multi-scale Cross-Class Affinity Contrastive (CCAC) loss is introduced to enhance intra-class compactness and inter-class separability in the learned embedding space. Extensive experiments showed that our method obtained an average Dice of 88.38% and 79.24% on the public WORD and Synapse datasets with scribble supervision, respectively. It consistently outperformed eight state-of-the-art scribble-supervised segmentation approaches, and maintained strong performance even under extremely sparse scribble annotations. The results indicate that our method provides an effective and robust solution for scribble-supervised multi-organ segmentation.
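The abstract does not spell out the pseudo-label refinement formula, but the general recipe — average the decoders' soft predictions and down-weight voxels where the ensemble is uncertain — can be sketched as below. The entropy-based weighting, tensor shapes, and class count are illustrative assumptions rather than the paper's exact rule.

import torch
import torch.nn.functional as F

def refine_pseudo_labels(probs_list):
    # Average soft predictions from the three decoders, then derive a voxel-wise
    # confidence weight from the (normalized) entropy of the averaged distribution.
    mean_p = torch.stack(probs_list).mean(dim=0)                        # (B, C, D, H, W)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(dim=1)           # voxel-wise uncertainty
    weight = 1.0 - entropy / torch.log(torch.tensor(float(mean_p.size(1))))
    return mean_p, weight.clamp(min=0.0)

p1, p2, p3 = (F.softmax(torch.randn(2, 5, 16, 64, 64), dim=1) for _ in range(3))
pseudo, confidence = refine_pseudo_labels([p1, p2, p3])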
Citations: 0
Minion gated recurrent unit for continual learning
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-23 | DOI: 10.1016/j.neucom.2026.132847
Abdullah M. Zyarah, Dhireesha Kudithipudi
The increasing demand for continual learning in sequential data processing has led to progressively complex training methodologies and larger recurrent network architectures. Consequently, this has widened the knowledge gap between continual learning with recurrent neural networks (RNNs) and their ability to operate on devices with limited memory and compute. To address this challenge, we investigate the effectiveness of simplifying RNN architectures, particularly gated recurrent unit (GRU), and its impact on both single-task and multitask sequential learning. We propose a new variant of GRU, namely the minion recurrent unit (MiRU). MiRU replaces conventional gating mechanisms with scaling coefficients to regulate dynamic updates of hidden states and historical context, reducing computational costs and memory requirements. Despite its simplified architecture, MiRU maintains performance comparable to the standard GRU while achieving more than 1.92× speed-up and reducing parameter usage by 2.88×, as demonstrated through evaluations on sequential image classification and natural language processing benchmarks. The impact of model simplification on its learning capacity is also investigated by performing continual learning tasks with a rehearsal-based strategy and global inhibition. We find that MiRU demonstrates stable performance in multitask learning even when using only rehearsal, unlike the standard GRU and its variants. These features position MiRU as a promising candidate for edge-device applications.
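The abstract does not give MiRU's update equations, so the cell below is only a speculative illustration of the stated idea — replacing the GRU's input-dependent gates with learnable scaling coefficients — and should not be read as the actual MiRU design; all names and dimensions are hypothetical.

import torch
import torch.nn as nn

class ScaledRecurrentCell(nn.Module):
    # Hypothetical gate-free cell: fixed learnable coefficients scale the carried
    # history and the new candidate state instead of computing per-step gates.
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.wx = nn.Linear(input_size, hidden_size)
        self.wh = nn.Linear(hidden_size, hidden_size)
        self.alpha = nn.Parameter(torch.full((hidden_size,), 0.5))   # scales new content
        self.beta = nn.Parameter(torch.full((hidden_size,), 0.5))    # scales history

    def forward(self, x, h):
        candidate = torch.tanh(self.wx(x) + self.wh(h))
        return self.beta * h + self.alpha * candidate

cell = ScaledRecurrentCell(8, 32)
h = torch.zeros(4, 32)
for t in range(10):
    h = cell(torch.randn(4, 8), h)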
Citations: 0
Neuroadaptive consensus control with prescribed performance for nonlinear multi-agent systems with input delay and input quantization
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.neucom.2026.132813
Baiqi Qiu, Zhi Liu, Licheng Zheng, C.L.Philip Chen
This paper proposes a novel adaptive neural control scheme for uncertain nonlinear multi-agent systems to achieve prescribed performance consensus under simultaneous input delay and quantized input. First, a hybrid strategy combining Padé approximation with the bounded nature of the uniform quantizer is introduced to handle coupled input delay and quantization effects, with the boundedness of the resulting quantization error rigorously proved. Second, by incorporating prescribed performance functions and an error transformation mechanism, the constrained error dynamics are converted into an unconstrained form. This transformation guarantees that the synchronization error remains strictly within predefined bounds, converges to a user-defined accuracy within a preset time, and is independent of initial conditions. Furthermore, a neural state observer using quantized inputs is designed to estimate unmeasurable states, while a first-order filter is employed to resolve the “complexity explosion” issue in traditional backstepping design. Two simulation examples demonstrate the effectiveness and practicality of the proposed approach.
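For context, the standard prescribed-performance construction referenced above pairs a decaying performance envelope with an error transformation that maps the constrained error onto the whole real line. The sketch below uses the common exponential envelope and symmetric bounds; the paper's preset-time, initial-condition-independent functions may take a different form.

import numpy as np

def rho(t, rho0=2.0, rho_inf=0.05, kappa=1.5):
    # Exponential performance envelope: the bound shrinks from rho0 to rho_inf.
    return (rho0 - rho_inf) * np.exp(-kappa * t) + rho_inf

def transformed_error(e, t):
    # With -rho(t) < e(t) < rho(t), the normalized error z = e/rho lies in (-1, 1),
    # and atanh(z) = 0.5*ln((1+z)/(1-z)) gives an unconstrained surrogate error.
    z = e / rho(t)
    return np.arctanh(z)

print(rho(np.array([0.0, 1.0, 5.0])))   # shrinking bound over time
print(transformed_error(0.3, 1.0))      # finite as long as |e| stays inside the envelope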
Citations: 0
Mutual-guidance framework for audio DeepFake detection via multi-dimensional feature interaction
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.neucom.2026.132732
Dengtai Tan, Boao Tan, Deyi Yang, Yang Yang, Chengyu Niu
With the rapid advancement of text-to-speech (TTS) and voice cloning (VC) technologies, the perceptual quality of synthetic speech has approached that of natural speech, posing substantial challenges to conventional auditory-based detection methods. To address this issue, a novel Mutual-Guidance Framework (MGF) is introduced, designed to integrate both physical and high-level semantic representations for enhanced detection accuracy. In this framework, Mel-Frequency Cepstral Coefficients (MFCCs) and Linear-Frequency Cepstral Coefficients (LFCCs) are employed to capture low-level acoustic characteristics, while a pre-trained Wav2Vec encoder is utilized to extract semantic embeddings, thereby constructing a multi-level representational hierarchy. To achieve precise cross-modal alignment and deep interaction, a Hierarchical Cross-Attention Fusion (HCAF) module is incorporated, enabling multi-level information exchange between physical and semantic features. Furthermore, a mutual-guidance strategy is embedded, facilitating bidirectional adaptation and dynamic interaction between the two modalities, thereby reinforcing their representational consistency while exploiting complementary strengths. To evaluate the efficacy of the proposed approach, a multilingual and multi-generation cloned speech dataset, BA, was constructed, comprising paired genuine and spoofed utterances generated by both TTS and VC systems. Experimental results on the ASVspoof 2019 dataset indicate that the MGF framework achieves an equal error rate (EER) of 5.94%, demonstrating substantial robustness in cross-lingual and cross-model detection scenarios. Analysis on the BA dataset further reveals that both linguistic variations and differences in synthesis techniques exert significant influence on detection performance, highlighting the bottlenecks in cross-scenario generalization.
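The fusion step described above can be pictured with a single cross-attention layer in which one feature stream queries the other; this is a minimal sketch of that pattern, not the paper's HCAF module, and the feature extraction (MFCC/LFCC projection, Wav2Vec embedding) plus all dimensions are placeholder assumptions.

import torch
import torch.nn as nn

d_model = 256
acoustic = torch.randn(8, 120, d_model)   # e.g., projected MFCC/LFCC frame features
semantic = torch.randn(8, 120, d_model)   # e.g., projected Wav2Vec embeddings

cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
# One direction of a mutual-guidance scheme: acoustic features query the semantic stream.
fused, _ = cross_attn(query=acoustic, key=semantic, value=semantic)
logits = nn.Linear(d_model, 2)(fused.mean(dim=1))   # bona fide vs. spoofed scores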
Citations: 0
Knowledge concept cold-start approach for cognitive diagnosis
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.neucom.2026.132770
Miao Zhang, Huihuan Li, Lele Zheng, Shunfeng Tan, Chao Yang, Kui Xiao, Zhifang Huang, Zhifei Li
Domain-level zero-shot cognitive diagnosis aims to assess students in the absence of interaction records. However, existing methods in this area primarily analyze student-exercise interactions while neglecting the cold-start problem at the knowledge concept level. For newly introduced knowledge concepts in a course, interaction records are missing, making it challenging for the system to accurately infer students’ proficiency based on historical data. To address this issue, we propose the Knowledge Concept Cold-Start Cognitive Diagnosis Model (KCSCD). KCSCD comprehensively considers both knowledge concepts that have interaction records and those for which interactions are missing. It introduces a Knowledge Concept Domain Inference Module to model students’ deep knowledge states and a Deep Learning Ability Assessment Module to evaluate their deep learning ability. These components are then integrated through a gating mechanism for weighted fusion. Experiments conducted on the ASSIST17, ASSIST09, and Junyi datasets demonstrate that KCSCD outperforms existing methods in both accuracy and interpretability. Moreover, the model exhibits a significant advantage in predicting students’ knowledge mastery in cold-start scenarios. Our code is available at https://github.com/lihihuan1213/KCSCD.
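The gating mechanism used to fuse the two modules' outputs is a standard construction; a generic version is sketched below, with module names, dimensions, and the sigmoid gate itself being illustrative assumptions rather than KCSCD's exact design.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # Weighted fusion of two state estimates via a learned sigmoid gate.
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, domain_state, ability_state):
        g = torch.sigmoid(self.gate(torch.cat([domain_state, ability_state], dim=-1)))
        return g * domain_state + (1.0 - g) * ability_state

fuse = GatedFusion(64)
fused = fuse(torch.randn(16, 64), torch.randn(16, 64))   # (batch, dim)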
Citations: 0
Advanced deep features fusion network for partial overlapping registration
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.neucom.2026.132835
Zhuoran Tian, Jun Lu, Chengtao Cai
Learning-based point cloud registration methods have garnered significant attention in industrial fields such as autonomous driving and environmental perception. However, in practical applications, acquired point cloud pairs are often affected by occlusion and other issues, resulting in incomplete data, which significantly degrades the registration accuracy of most mature models. To overcome these challenges, we propose an Advanced Deep Features Fusion Network (ADFfNet) designed to precisely describe correspondences under low overlap rates and filter out the influence of incorrect correspondences. The network primarily consists of a Deep Feature fusion (DF) module, a Spatial Feature embedding (SF) module, and a Non-correspondence Filtering (NF) module. To accurately establish correspondences under low overlap, we design the DF Module, which enhances local geometric interaction between point clouds to identify potential overlapping regions. Additionally, the SF module aims to embed global positional information into a consistent spatial representation to better analyze overlapping point pairs. Concurrently, we propose an attention-based NF module that leverages both spatial positional information and deep interaction features to identify highly confident and discriminative point correspondences, thereby achieving a robust registration task. Comprehensive evaluations demonstrate that our method achieves superior accuracy and robustness compared to existing state-of-the-art approaches. Furthermore, we conducted comparative experiments on the ModelNet40 and ShapeNet datasets under varying overlap rates, proving our method effectively enhances registration performance in low-overlap scenarios. We also conducted generalization experiments on the KITTI dataset to demonstrate the practical application ability of the model.
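Once high-confidence correspondences have been filtered, registration pipelines typically recover the rigid transform with a weighted SVD (Kabsch) solve; the generic step is shown below and is not specific to ADFfNet's modules.

import numpy as np

def weighted_kabsch(src, dst, w):
    # Least-squares rigid transform (R, t) aligning weighted correspondences src -> dst.
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))               # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])    # reflection guard
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(0)
src = rng.random((100, 3))
dst = src + np.array([0.1, -0.2, 0.3])                             # pure translation for the check
R, t = weighted_kabsch(src, dst, np.ones(100))
print(np.allclose(R, np.eye(3), atol=1e-6), np.round(t, 3))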
Citations: 0
PU-FHN: Detail-preserving indoor scene point cloud upsampling via frequency-guided hybrid network
IF 6.5 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.neucom.2026.132840
Yilin Hou, Jin Wang, Jiade Chen, Yunhui Shi, Nam Ling, Baocai Yin
Reconstructing high-fidelity surfaces from sparse point clouds remains a core challenge in 3D vision, especially in complex indoor environments where preserving fine geometric details is essential. The primary challenge lies in designing a network that can effectively capture global context while preserving detailed local features. To address this, we introduce PU-FHN, a novel framework centered on a Hybrid Feature Enhancement Unit (HFEU) that follows a two-stage hierarchical design. First, the Multi-Scale Residual Convolution Block (MSRC) captures broad spatial context. Then, the High-Frequency Aware Transformer (HFAT) leverages frequency-guided attention to recover and enhance high-frequency details that are often lost in early processing. This hybrid architecture is further strengthened by a Cross-Scale Feature Recalibration Fusion (CSFRF) module, which adaptively integrates features across multiple network scales. To accurately reconstruct local geometry, we introduce a Detail Restoration Block (DRB) with a Dual-Path Contextual Refinement (DPCR) mechanism. Extensive experiments on challenging indoor scene datasets demonstrate that PU-FHN outperforms existing state-of-the-art methods. Quantitatively, our method consistently achieves the lowest Chamfer Distance (CD) and Density-Aware Chamfer Distance (DCD) across all datasets and upsampling rates, surpassing recent diffusion and flow-based baselines. Furthermore, PU-FHN demonstrates exceptional efficiency, achieving inference speeds an order of magnitude faster than patch-based approaches while preserving intricate high-frequency geometric details.
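Since the evaluation above is reported in Chamfer Distance, the plain symmetric squared-distance form of that metric is sketched here for reference; the density-aware variant (DCD) adds per-point density weighting and is not reproduced.

import numpy as np

def chamfer_distance(P, Q):
    # Symmetric Chamfer Distance between point sets P (n, 3) and Q (m, 3),
    # using squared nearest-neighbour distances in both directions.
    d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
sparse = rng.random((256, 3))
upsampled = rng.random((1024, 3))
print(chamfer_distance(upsampled, sparse))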
Citations: 0