
Neurocomputing: Latest Publications

CRColor: Cycle reference learning for exemplar-based image colorization
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132932
Siqi Chen, Mingdao Wang, Xianlin Zhang, Xueming Li, Yue Zhang
Exemplar-based image colorization colorizes a grayscale image based on a color reference image. Although recent advances have significantly improved color matching and generation techniques, little research addresses color fidelity, i.e., whether the colorized image accurately preserves the guidance information from the reference image. The absence of a ground-truth colored target image for each reference-target pair makes color fidelity difficult for models to quantify or learn. Motivated by this, this paper introduces a cyclic strategy into the exemplar-based colorization task. First, we propose the cycle reference peak signal-to-noise ratio (CRPSNR): the colorization output is used as guidance to recolor the reference image, and the original color reference image then serves as ground truth, enabling color fidelity to be quantified. Furthermore, cycle reference learning for exemplar-based image colorization (CRColor) is proposed. CRColor uses a main branch to colorize the target image and a training-only cycle branch that draws the result closer to the guidance, enabling the model to learn color fidelity. Experiments demonstrate that our method maintains image quality comparable to recent state-of-the-art methods while outperforming them in color fidelity to the reference image, both quantitatively and qualitatively. Our code will be published for academic research.
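To make the cycle-reference idea concrete, here is a minimal numpy sketch of how a CRPSNR-style score could be computed, assuming a colorize(gray, ref) callable (hypothetical) that stands in for the colorization model; it illustrates the metric's structure, not the paper's implementation.

import numpy as np

def rgb_to_gray(img):
    # Luminance approximation; img is a float array in [0, 1] with shape (H, W, 3).
    return img @ np.array([0.299, 0.587, 0.114])

def psnr(a, b, peak=1.0):
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def cycle_reference_psnr(target_gray, reference_rgb, colorize):
    # Forward pass: colorize the target using the reference as guidance.
    colored_target = colorize(target_gray, reference_rgb)
    # Cycle pass: recolor the grayscale reference, guided by the colored target.
    recolored_ref = colorize(rgb_to_gray(reference_rgb), colored_target)
    # The original color reference doubles as ground truth for color fidelity.
    return psnr(recolored_ref, reference_rgb)

The key point is that the original reference serves as its own ground truth after the cycle, so no paired colored target image is required.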
Citations: 0
Tensor-to-tensor models with fast iterated sum features
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132884
Joscha Diehl, Rasheed Ibraheem, Leonard Schmitz, Yue Wu
Designing expressive yet computationally efficient layers for high-dimensional tensor data (e.g., images) remains a significant challenge. While sequence modeling has seen a shift toward linear-time architectures, extending these benefits to higher-order tensors is non-trivial.
In this work, we introduce the Fast Iterated Sums (FIS) layer, a novel tensor-to-tensor primitive with linear time and space complexity relative to the input size.
Theoretically, our framework bridges deep learning and algorithmic combinatorics: it leverages “corner tree” structures from permutation pattern counting to efficiently compute 2D iterated sums. This formulation admits dual interpretations as both a higher-order state-space model (SSM) and a multiparameter extension of the Signature Transform.
Practically, the FIS layer serves as a drop-in replacement for standard layers in vision backbones. We evaluate its performance on image classification and anomaly detection. When replacing layers in a smaller ResNet, the FIS-based model matches the accuracy of a larger ResNet baseline while reducing both trainable parameters and multiply-add operations. When replacing layers in ConvNeXt tiny, the FIS-based model saves around 2% of parameters, shortens time per epoch by around 8%, and improves accuracy by around 0.6% on CIFAR-10 and around 2% on CIFAR-100. Furthermore, on the texture subset of MVTec AD, it attains an average AUROC of 97.3%. The code is available at https://github.com/diehlj/fast-iterated-sums.
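As a rough illustration of why 2D iterated sums admit linear-time evaluation, the sketch below computes a single second-order iterated sum for the simplest "chain" pattern using 2D prefix sums; the FIS layer and its corner-tree formulation are more general, so treat this only as an intuition aid.

import numpy as np

def prefix_sum_2d(x):
    # Inclusive 2D prefix sum: S[i, j] = sum of x[:i+1, :j+1]; linear in the input size.
    return np.cumsum(np.cumsum(x, axis=0), axis=1)

def second_order_iterated_sum(x):
    # Sum over pairs of positions p, q with both coordinates of p strictly smaller
    # than those of q, of x[p] * x[q]. Each x[i, j] is paired with the prefix sum
    # S[i-1, j-1] of everything strictly above and to its left, so the whole
    # quantity is computed in one linear pass rather than over all pairs.
    S = prefix_sum_2d(x)
    return float(np.sum(x[1:, 1:] * S[:-1, :-1]))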
Citations: 0
Sparse assemblies of recurrent neural networks with stability guarantees
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132952
Andrea Ceni, Valerio De Caro, Davide Bacciu, Claudio Gallicchio
We introduce AdaDiag, a framework for constructing sparse assemblies of recurrent neural networks (RNNs) with formal stability guarantees. Our approach builds upon contraction theory by designing RNN modules that are inherently contractive through adaptive diagonal parametrization and learnable characteristic time scales. This formulation enables each module to remain fully trainable while preserving global stability under skew-symmetric coupling. We provide rigorous theoretical analysis of contractivity, along with a complexity discussion showing that stability is achieved without additional computational burden. Experiments on ten heterogeneous time series benchmarks demonstrate that AdaDiag consistently surpasses SCN, LSTM, and Vanilla RNN baselines, and achieves competitive performance with state-of-the-art models, all while requiring substantially fewer trainable parameters. These results highlight the effectiveness of sparse and stable assemblies for efficient, adaptive, and generalizable sequence modeling.
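The following PyTorch sketch shows one common way to build a contraction-friendly recurrent cell from a learnable negative diagonal plus skew-symmetric coupling; the exact AdaDiag parametrization, its learnable time scales, and the formal stability analysis are not reproduced here, and the class name is ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiagonalContractiveRNNCell(nn.Module):
    # Continuous-time style update: dh/dt = -softplus(d) * h + (A - A^T) h + tanh(B x).
    # The strictly negative learnable diagonal pushes the state toward contraction,
    # while the skew-symmetric coupling only rotates the state and does not expand it.
    def __init__(self, input_size, hidden_size, dt=0.1):
        super().__init__()
        self.d = nn.Parameter(torch.zeros(hidden_size))            # diagonal decay rates
        self.A = nn.Parameter(0.01 * torch.randn(hidden_size, hidden_size))
        self.B = nn.Linear(input_size, hidden_size)
        self.dt = dt                                               # characteristic time scale

    def forward(self, x, h):
        skew = self.A - self.A.t()                                 # skew-symmetric coupling
        dh = -F.softplus(self.d) * h + h @ skew.t() + torch.tanh(self.B(x))
        return h + self.dt * dh                                    # explicit Euler step

For contraction of the discretized system the step size dt has to be small enough relative to the learned decay rates; this sketch leaves that choice to the user.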
Citations: 0
Sign language translation via cross-modal alignment and graph convolution
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132949
Ming Yu, Pengfei Zhang, Cuihong Xue, Yingchun Guo
Sign language translation (SLT) converts sign language videos into textual sentences. This process is essential for enabling communication between deaf and hearing individuals. However, the inherent modal gap between visual sign sequences and textual linguistics severely limits performance. Existing methods rely on costly gloss annotations for intermediate supervision, restricting scalability; unsupervised alternatives lack fine-grained alignment or semantic learning capabilities. To address this, we introduce CMAG-Net, a framework integrating cross-modal alignment pre-training and dynamic graph convolutions. The architecture comprises two modules: (1) A cross-modal alignment pre-training module. Optimized with a multi-objective loss, it learns to align visual features with textual semantics, effectively bridging the modality gap without gloss supervision; (2) A dynamic dual-graph spatiotemporal module. It consists of a temporal graph that captures local sign dynamics and a similarity graph that aggregates global semantic relationships. This design suppresses noise, enhances discriminative features, and addresses the challenges of redundant frames and complex spatiotemporal dependencies. Experiments show CMAG-Net outperforms all gloss-free methods on PHOENIX-2014T, CSL-Daily and How2Sign, approaching gloss-based state-of-the-art performance. Versus GFSLT-VLP (gloss-free) on PHOENIX-2014T dev/test sets, BLEU-4 improves by +5.19/+5.95. Compared to MMTLB (gloss-based), the gap narrows to 0.37/0.22 BLEU-4.
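A hedged sketch of the kind of cross-modal alignment objective such a pre-training stage could use: a symmetric InfoNCE term between pooled video and sentence embeddings. CMAG-Net's actual multi-objective loss contains more than this single term; the function below is a generic illustration.

import torch
import torch.nn.functional as F

def contrastive_alignment_loss(video_feat, text_feat, temperature=0.07):
    # video_feat, text_feat: (batch, dim) pooled representations of paired
    # sign-video clips and target sentences. Matched pairs sit on the diagonal
    # of the similarity matrix; the symmetric cross-entropy pulls them together
    # and pushes mismatched pairs apart.
    v = F.normalize(video_feat, dim=-1)
    t = F.normalize(text_feat, dim=-1)
    logits = v @ t.t() / temperature
    labels = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))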
Citations: 0
CDINet: A cascaded dual-domain interaction network for vapor degraded thermal infrared image restoration
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132930
Kailun Wei, Xiaoyan Liu, Wei Zhao
Infrared thermography allows imaging in dark and smoky environments and is widely used in firefighting and industrial scenarios. However, high-temperature water vapor in such scenarios can significantly degrade the quality of thermal infrared (TIR) images, leading to errors in subsequent visual tasks. The non-uniform distribution of high-temperature water vapor and the resulting severe information loss in TIR images pose significant challenges to restoration. To address this issue, we propose a cascaded dual-domain interaction network (CDINet) for TIR image restoration. The Dual-domain Interaction Block (DIB) is designed as the basic unit of CDINet. This module enhances feature representation through spatial-frequency interaction, thereby improving the model’s ability to perceive and restore non-uniform vapor-degraded regions. In addition, we introduce Long Short-Term Memory (LSTM) and design CDINet as a cascade structure to progressively restore and refine the information lost to vapor interference in an iterative manner. Furthermore, we have constructed a benchmark dataset comprising 12,500 vapor-degraded TIR images to evaluate the restoration performance of different models. Extensive experiments comparing our CDINet with 12 state-of-the-art methods show that CDINet can effectively eliminate vapor interference from scenes with varying distributions. It outperforms other methods, especially in challenging scenarios with large non-uniform dense and localized non-uniform vapor degradation. The dataset and code are publicly available at: https://github.com/wkl1996/CDINet-TIR-Restoration.
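One straightforward reading of "spatial-frequency interaction" is a block with a convolutional spatial branch and an FFT-domain branch whose output gates the spatial features, as in the PyTorch sketch below; the actual DIB design in CDINet may differ, and the module name here is hypothetical.

import torch
import torch.nn as nn

class SpatialFrequencyBlock(nn.Module):
    # Couples a spatial convolution branch with a frequency-domain branch,
    # then fuses them with a residual connection.
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        # 1x1 convolutions applied to stacked real/imaginary parts in the Fourier domain.
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1), nn.GELU(),
            nn.Conv2d(2 * channels, 2 * channels, 1))
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        s = self.spatial(x)
        spec = torch.fft.rfft2(x, norm="ortho")
        z = torch.cat([spec.real, spec.imag], dim=1)
        z = self.freq(z)
        real, imag = torch.chunk(z, 2, dim=1)
        f = torch.fft.irfft2(torch.complex(real, imag), s=x.shape[-2:], norm="ortho")
        # The frequency branch gates the spatial branch before the residual fusion.
        return x + self.fuse(s * torch.sigmoid(f))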
Citations: 0
ESGME: Generating metaphor explanations from event-related potential signals using large language models
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132896
Dongyu Zhang, Wanqiu Liao, Haojia Li, Hongfei Lin
Metaphor plays a fundamental role in human cognition, involving the construction of conceptual mappings that unfold through dynamic neural processes. However, current natural language processing (NLP) systems largely overlook the brain signals engaged during metaphor production, limiting their ability to capture cognitively grounded mechanisms. To address this gap, we introduce ESGME (ERP-Signal-Guided Metaphor Explanation), a framework that integrates event-related potential (ERP) recordings with large language models (LLMs) to generate metaphor explanations conditioned on neural activity. ESGME employs a two-stage design: in Stage 1, an ERP encoder is trained to align ERP signals with the semantic embedding space of the target LLM, enabling neural representations to be mapped to conceptual-level meaning; in Stage 2, the aligned ERP embeddings serve as cognitive cue prompts that guide LLMs in producing metaphor explanations. The framework further incorporates text-based guiding factors to stabilize conceptual mapping during explanation generation. Experiments across multiple LLMs demonstrate that aligned ERP signals provide meaningful cognitive information beyond textual cues. These results highlight the feasibility of translating metaphor-related neural activity into coherent explanatory text and establish a new pathway for bridging cognitive neuroscience with generative NLP. Dataset and code: https://github.com/xinyu706/ESGME.
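A minimal sketch of what the Stage-1 alignment could look like: an ERP encoder mapped into the LLM embedding dimension and trained with a cosine alignment term against the text embedding of the associated stimulus. The architecture and loss below are assumptions for illustration, not ESGME's reported design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ERPEncoder(nn.Module):
    # Maps an ERP epoch of shape (batch, channels, time_samples) into the
    # semantic embedding space of the target LLM.
    def __init__(self, n_channels, n_samples, llm_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_samples, 512), nn.GELU(),
            nn.Linear(512, llm_dim))

    def forward(self, erp):
        return self.net(erp)

def alignment_loss(erp_emb, text_emb):
    # Stage-1 objective (sketch): pull the ERP embedding toward the LLM's
    # embedding of the associated text with a simple cosine term.
    return 1.0 - F.cosine_similarity(erp_emb, text_emb, dim=-1).mean()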
Citations: 0
DEM-WGAN: A new data evaluation method based on Wasserstein Generative Adversarial Network for imbalanced data classification
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132817
Gang Chen, Binjie Hou
Imbalanced data classification is a common challenge in fields such as medical diagnosis and financial risk management. However, the traditional Synthetic Minority Oversampling Technique (SMOTE) and its variants exhibit certain limitations, particularly a tendency to introduce noise during sample generation and the lack of a robust mechanism for assessing the quality of the synthetic data. To address these issues, we propose a novel data evaluation method based on the Wasserstein Generative Adversarial Network (DEM-WGAN). DEM-WGAN first learns the distribution characteristics of the majority class by feeding majority-class samples into a Wasserstein Generative Adversarial Network (WGAN). Then, the trained discriminator is used to evaluate the similarity between synthetic data and the majority-class distribution. Finally, high-quality data that better conform to the minority class are generated through this evaluation process until the number of minority-class samples equals that of the majority class. Experimental results demonstrate that DEM-WGAN significantly improves classification performance compared to several SMOTE algorithms. Source code for the applications discussed in this paper is available at https://github.com/ithbjgit1/DEM-WGAN.git.
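The sketch below illustrates one way a trained critic could gate synthetic minority samples, assuming hypothetical generate_minority and critic_score callables; the acceptance rule (rejecting candidates that score too close to the majority distribution) is our assumption, since the exact criterion is not spelled out in the abstract.

import numpy as np

def oversample_with_critic(generate_minority, critic_score, n_needed,
                           threshold, batch=256, max_rounds=100):
    # generate_minority(k): returns k candidate minority samples as a numpy array
    #                       (e.g. from SMOTE or a generator) -- hypothetical callable.
    # critic_score(X): Wasserstein critic trained on the majority class; higher
    #                  means "more like the majority" -- hypothetical callable.
    # Candidates scoring above the threshold are treated as drifting toward the
    # majority distribution and rejected.
    kept = []
    for _ in range(max_rounds):
        cand = generate_minority(batch)
        scores = critic_score(cand)
        kept.append(cand[scores < threshold])
        if sum(len(k) for k in kept) >= n_needed:
            break
    return np.concatenate(kept, axis=0)[:n_needed]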
Citations: 0
Dynamic view synthesis with topologically-varying neural radiance fields from sparse input views
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132942
Kangkan Wang, Kejie Wei, Shao-Yuan Li
This paper addresses the challenge of dynamic view synthesis from sparse input views with topologically-varying neural radiance fields (NeRFs). Previous methods estimate a NeRF at each time step or learn a hyperspace of templates to represent topology-changing scenes. However, the time-conditioned NeRF is highly ill-posed, as it must predict the radiance field and the motion simultaneously with a single model, while hyperspace template approaches suffer degraded performance when the number of input views is insufficient. To address these issues, we propose a topologically-varying NeRF that learns sparse templates in canonical space. The sparse template NeRFs are learned to represent different topology-changing states of dynamic scenes, which is realized through a variance constraint on the hyper-coordinates of the templates. By composing the deformation fields with inverse deformation fields, we obtain 3D scene flows between different time instances and constrain the per-frame deformation with 2D optical flows, which also implicitly imposes multi-view constraints on the NeRF model from sparse input views. Compared to existing methods for dynamic view synthesis, our method handles sparse-view data with large topology changes more effectively, owing to the constrained space of sparse template NeRFs and the constraints from forward-inverse deformation fields. Extensive experiments on various datasets demonstrate that our method improves the quality of novel-view synthesis compared with previous works.
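The forward-inverse composition that yields scene flow can be written in a few lines; in the sketch below, to_canonical and from_canonical are hypothetical stand-ins for the paper's deformation and inverse deformation fields.

import torch

def scene_flow(x, t1, t2, to_canonical, from_canonical):
    # x: (N, 3) points observed at time t1.
    # to_canonical(x, t): deformation field mapping observation space at time t
    #                     into the canonical space; from_canonical maps back out.
    #                     Both networks are assumptions of this sketch.
    x_canonical = to_canonical(x, t1)
    x_at_t2 = from_canonical(x_canonical, t2)
    return x_at_t2 - x          # 3D scene flow from time t1 to time t2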
Citations: 0
An explainable multi-view representation fusion learning framework with hybrid MetaFormer for EEG-based epileptic seizure detection
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132929
Jingyue Wang, Lu Wei, Zheng Qian, Chengyao Shi, Yuwen Liu, Yinglan Xu
Multi-view learning (MVL), a paradigm of deep learning, has greatly facilitated the detection of epileptic seizures from electroencephalograms (EEGs) owing to its remarkable capability to learn generalization features. However, existing MVL-based seizure detection methods rely on decision strategies to aggregate the discriminative outputs of separate learners, leading to insufficient extraction of inter-view complementarity and limiting the detection performance. To address this issue, this paper focuses on two aspects and proposes a multi-view representation fusion learning framework, which enables direct information fusion at the feature encoding level. Firstly, to enhance discriminability, we construct hierarchical multi-view representations based on the Gramian Angular Summation Field and an improved Stockwell transform by introducing the spatial characteristics of EEG montages and temporal dependency dynamics. Secondly, to process both local and global features comprehensively, we propose a hybrid MetaFormer network that incorporates inverted depth-wise separable convolutions and sparsity-enhanced shifted-window attention mechanisms. Specifically, the fusion unit with cross-attention mechanisms exploits the Key and Value matrices to achieve effective inter-view information exchange. The experimental results on the public CHB-MIT and Siena datasets demonstrate that the proposed method outperforms competing techniques in both sample-based and event-based evaluations for EEG seizure detection. In addition, an explanation module is devised based on feature importance scoring. In this way, our method enables post-hoc explanations for the multi-view fusion learning process and discriminative results utilizing topographic maps, indicating an explainable computational solution for EEG seizure detection.
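Of the two view constructions mentioned, the Gramian Angular Summation Field has a simple closed form; a numpy sketch for a single EEG channel segment is given below. The improved Stockwell transform and the montage-based spatial construction are not reproduced here.

import numpy as np

def gramian_angular_summation_field(x):
    # x: 1D EEG channel segment. Rescale to [-1, 1], map samples to angles
    # phi_i = arccos(x_i) on the unit circle, and form
    # GASF[i, j] = cos(phi_i + phi_j) = x_i * x_j - sqrt(1 - x_i^2) * sqrt(1 - x_j^2).
    x = np.asarray(x, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
    x = np.clip(x, -1.0, 1.0)
    comp = np.sqrt(1.0 - x ** 2)
    return np.outer(x, x) - np.outer(comp, comp)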
Citations: 0
COMMANDing anomalies: Continual video anomaly detection via dual-memory and temporal mamba modeling
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 (Computer Science, Artificial Intelligence) | Pub Date: 2026-02-03 | DOI: 10.1016/j.neucom.2026.132943
Yan Liu, Kaiju Li, Md Sabuj Khan, Jian Lang, Rongpei Hong, Kunpeng Zhang, Fan Zhou
Weakly supervised video anomaly detection (WSVAD) aims to localize frame-level anomalies using only video-level labels, offering scalability for large-scale surveillance systems. However, existing methods often struggle to adapt to previously unseen and continuously evolving anomaly patterns, limiting their practical applicability. This challenge necessitates the development of continual learning (CL) frameworks that support incremental adaptation while preserving previously acquired knowledge. To this end, we propose a novel CL-based framework, dubbed COMMAND, for WSVAD that enables robust and adaptive anomaly detection in dynamic environments. COMMAND incorporates TempMamba, a temporal modeling unit based on Mamba blocks, which effectively captures both short-range and long-range temporal dependencies essential for distinguishing normal and abnormal behavior. In addition, MemDualNet introduces a dual-memory mechanism that retains both short-term variations and long-term contextual information, facilitating more expressive temporal representations. The framework further adopts a continual learning strategy that integrates memory replay with a composite loss function comprising contrastive, focal, and multiple-instance objectives to alleviate catastrophic forgetting. Experimental results on benchmark datasets such as UCF-Crime and ShanghaiTech validate the effectiveness of the proposed approach, demonstrating superior performance in adaptability, generalization, and anomaly localization compared to existing state-of-the-art methods.
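As an illustration of the memory-replay ingredient, here is a small reservoir-sampling buffer of the kind commonly used for rehearsal in continual learning; COMMAND's actual buffer policy and its composite (contrastive, focal, multiple-instance) loss are not reproduced, and the class is a generic sketch.

import random

class ReplayBuffer:
    # Reservoir sampling keeps a bounded, uniformly sampled memory of past
    # task examples for rehearsal during later tasks.
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

During training on a new task, each update would typically mix a current-task batch with a batch drawn from this buffer before applying the composite loss.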
Citations: 0