
Neural Networks: Latest Publications

GIN-transformer based pairwise graph contrastive learning framework
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-18 | DOI: 10.1016/j.neunet.2026.108621
Shufeng Zhou , Lina Zhou , Yueying Zhou , Hongyan Han , Hongxia Zheng , Lishan Qiao
Resting-state functional magnetic resonance imaging (rs-fMRI) provides critical biomarkers for diagnosing neuropsychiatric disorders such as autism spectrum disorder (ASD) and major depressive disorder (MDD). However, existing deep learning models heavily rely on labeled data, limiting their clinical applicability. This study proposes a GIN-Transformer-based pairwise graph contrastive learning framework (GITrans-PairCL) that integrates a Graph Isomorphism Network (GIN) and Transformer to address data scarcity through unsupervised graph contrastive learning. The framework comprises two key components: a Dual-modal Contrastive Learning (DCL) module and a Task-Driven Fine-tuning (TDF) module. DCL employs sliding-window augmented rs-fMRI time series, combining GIN for modeling local spatial connectivity and Transformer for capturing global temporal dynamics, enabling multi-scale feature extraction via cross-view contrastive learning. TDF adapts the pre-trained model to downstream classification tasks. We conducted single-site and cross-site evaluations on two publicly available datasets, and the experimental results showed that GITrans-PairCL outperforms both traditional machine learning and deep learning baseline methods in the automatic diagnosis of brain diseases. The model combines local and global features and uses pre-trained contrastive learning to reduce dependence on label information and improve generalization.
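The cross-view contrastive objective in DCL is not spelled out above; the following is a minimal sketch of a standard InfoNCE-style loss between paired spatial (GIN) and temporal (Transformer) embeddings, assuming row-aligned batches. The function name, temperature value, and symmetric formulation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(z_spatial, z_temporal, temperature=0.1):
    """Contrast paired spatial (GIN) and temporal (Transformer) embeddings.

    z_spatial, z_temporal: (batch, dim) embeddings of the same subjects;
    row i of each tensor is treated as a positive pair, all other rows
    as negatives. Names and temperature are illustrative assumptions.
    """
    z_s = F.normalize(z_spatial, dim=-1)
    z_t = F.normalize(z_temporal, dim=-1)
    logits = z_s @ z_t.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(z_s.size(0), device=z_s.device)
    # Symmetric loss: spatial-to-temporal and temporal-to-spatial directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example usage with random embeddings for 8 subjects.
loss = cross_view_infonce(torch.randn(8, 128), torch.randn(8, 128))
```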
Citations: 0
Multi-Source Temporal-Depth fusion for robust end-to-end visual odometry
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-17 | DOI: 10.1016/j.neunet.2026.108598
Sihang Zhang , Congqi Cao , Qiang Gao , Ganchao Liu
End-to-end visual odometry models have recently achieved localization accuracy on par with conventional techniques, while effectively reducing the occurrence of catastrophic failures. However, these models cannot leverage the complete time-series data for pose adjustment and optimization. Moreover, they are limited to using joint depth prediction tasks merely as a means of scale constraint, lacking effective utilization of depth information. In this paper, we propose an end-to-end multi-source visual odometry (MVO) model that dynamically integrates the key components of hybrid visual odometry pipelines into a unified, learnable deep framework. Specifically, we propose TimePoseNet to model the mapping relationship from time to pose, capturing temporal dependencies across the entire sequence. Additionally, a wavelet convolutional attention mechanism is employed to extract global depth information from the depth map, which is then directly embedded into the pose features to dynamically constrain scale ambiguity. Furthermore, temporal and depth cues are jointly incorporated into the post-processing stage of pose estimation. The proposed method attains state-of-the-art performance on both the KITTI benchmark and the newly introduced UAV-2025 dataset, while preserving computational efficiency during inference.
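As a rough illustration of how a pooled global depth cue can be embedded into pose features to constrain scale (the wavelet convolutional attention itself is not reproduced), the sketch below projects a globally pooled depth feature and adds it to the pose features; the module name and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class DepthScaleEmbedding(nn.Module):
    """Illustrative sketch: embed a pooled global depth feature into pose features."""

    def __init__(self, depth_channels=64, pose_dim=256):
        super().__init__()
        self.proj = nn.Linear(depth_channels, pose_dim)

    def forward(self, depth_feat, pose_feat):
        # depth_feat: (B, C, H, W) features from the depth branch
        # pose_feat:  (B, pose_dim) features from the pose branch
        global_depth = depth_feat.mean(dim=(2, 3))   # global average pooling
        return pose_feat + self.proj(global_depth)   # scale-aware pose features

fused = DepthScaleEmbedding()(torch.randn(2, 64, 32, 96), torch.randn(2, 256))
```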
Citations: 0
FRM-PTQ: Feature relationship matching enhanced low-bit post-training quantization for large language models
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-17 | DOI: 10.1016/j.neunet.2026.108619
Chao Zeng , Jiaqi Zhao , Miao Zhang , Li Wang , Weili Guan , Liqiang Nie
Post-Training Quantization (PTQ) has emerged as an effective approach to reduce memory and computational demands during LLM inference. However, existing PTQ methods are highly sensitive to ultra-low-bit quantization with significant performance loss, which is further exacerbated by recently released advanced models like LLaMA-3 and LLaMA-3.1. To address this challenge, we propose a novel PTQ framework, termed FRM-PTQ, by introducing feature relationship matching. This approach integrates token-level relationship modeling and structure-level distribution alignment based on the intra-block self-distillation framework to effectively mitigate significant performance degradation caused by low-bit quantization. Unlike conventional MSE loss methods, which focus solely on point-to-point discrepancies, feature relationship matching captures feature representations in high-dimensional spaces to effectively bridge the representation gap between quantized and full-precision blocks. Additionally, we propose a multi-granularity per-group quantization technique featuring a customized kernel, designed based on the quantization sensitivity of the decoder block, to further relieve the quantization performance degradation. Extensive experimental results demonstrate that our method achieves outstanding performance in the W4A4 low-bit scenario, maintaining near full-precision accuracy while delivering a 2× throughput improvement and a 3.17× memory reduction. This advantage is particularly evident in the latest models such as LLaMA-3, LLaMA-3.1, and Qwen2.5, as well as in the W3A3 extreme low-bit scenarios. Codes are available at https://github.com/HITSZ-Miao-Group/FRM.
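The per-group quantization idea, one scale per small group of weights along each row, can be illustrated with a plain symmetric uniform fake-quantizer; the bit width, group size, and function below are assumptions, and the customized kernel and feature-relationship-matching loss are not shown.

```python
import torch

def quantize_per_group(w, n_bits=4, group_size=128):
    """Symmetric uniform quantization with one scale per contiguous weight group.

    w: (rows, cols) weight matrix; cols must be divisible by group_size.
    Returns the dequantized ("fake-quantized") weights for simulation.
    """
    qmax = 2 ** (n_bits - 1) - 1
    rows, cols = w.shape
    groups = w.reshape(rows, cols // group_size, group_size)
    scale = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(groups / scale), -qmax - 1, qmax)
    return (q * scale).reshape(rows, cols)

w = torch.randn(4096, 4096)
w_q = quantize_per_group(w, n_bits=4, group_size=128)
print((w - w_q).abs().mean())  # average quantization error
```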
Citations: 0
Graph-enhanced dual low-rank correlation embedding for spatio-temporal EEG fusion in depression recognition
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-17 | DOI: 10.1016/j.neunet.2026.108609
Lu Zhang, Jisheng Dang, Shu Zhang, Wencheng Gan, Juan Wang, Bin Hu, Gang Feng, Hong Peng

Electroencephalography (EEG) signals contain rich spatiotemporal information reflecting brain activity, making them valuable for analyzing cognitive, emotional, and neurological disorders. However, effectively integrating these two types of information to capture both discriminative and complementary features remains a significant challenge. To address this, we propose a Graph-Enhanced Dual Low-Rank Correlation Embedding (GEDLCE) method, which integrates spatiotemporal EEG features to improve depression recognition. GEDLCE enforces low-rank constraints at both feature and sample levels, enabling extraction of shared latent factors across multiple feature sets. To preserve the intrinsic geometric structure of the data, GEDLCE employs two graph Laplacian terms to model local relationships in the sample space. Furthermore, GEDLCE introduces a graph embedding term that utilizes label information to enhance its discriminative capability. In addition, GEDLCE incorporates an enhanced correlation analysis to exploit inter-view correlations while reducing intra-view redundancy. Finally, GEDLCE jointly optimizes low-rank representations, correlation constraints, and graph embedding within a unified framework. Experiments on EEG datasets show that GEDLCE effectively captures critical information, achieves superior performance in depression recognition, and shows promise for early diagnosis and disease monitoring.
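The graph Laplacian terms mentioned above penalize latent factors that vary across strongly connected samples; a minimal sketch of such a regularizer, with illustrative variable names and omitting the low-rank, correlation, and label-graph embedding terms, is shown below.

```python
import torch

def laplacian_regularizer(f_latent, adjacency):
    """tr(F^T L F): penalizes latent factors that differ across connected samples.

    f_latent:  (n_samples, k) shared latent representation.
    adjacency: (n_samples, n_samples) symmetric sample-affinity matrix.
    Variable names are illustrative; the full GEDLCE objective also includes
    low-rank, correlation, and label-graph embedding terms not shown here.
    """
    laplacian = torch.diag(adjacency.sum(dim=1)) - adjacency  # unnormalized Laplacian
    return torch.trace(f_latent.t() @ laplacian @ f_latent)

# Toy example: three samples on a chain-shaped affinity graph, two latent factors.
adjacency = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(laplacian_regularizer(torch.randn(3, 2), adjacency))
```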

Citations: 0
CoCoFR: Collaborative codebooks learning with soft matching strategy for blind face restoration
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-16 | DOI: 10.1016/j.neunet.2026.108607
Teng Feng , Junwei Xu , Tao Huang , Zhenyu Wang , Fangfang Wu , Weisheng Dong , Xin Li , Guangming Shi
Blind Face Restoration (BFR) has garnered considerable attention for its practical applicability to recover high-quality (HQ) facial images from their degraded versions. Existing BFR methods primarily incorporate diverse priors to mitigate its ill-posed nature. Notably, the codebook prior, which aggregates facial representations from HQ images, has achieved impressive results. However, two performance constraints remain: i) the reliance on a single spatial-domain codebook neglects the potential information in the frequency domain; ii) the commonly used feature-matching strategies overlook the valid information encapsulated within the low-quality (LQ) identity features. To address these issues, we propose CoCoFR, which learns collaborative codebooks in both spatial and frequency domains and implements adaptive matching between LQ and HQ features with a designed Dual Codebooks Cross Attention (DCCA) module. Additionally, benefiting from its global receptive fields and linear complexity, CoCoFR facilitates coarse-to-fine feature fusion via a simple yet effective state space model (Mamba)-based fusion block (MFB). Extensive experiments on both synthetic and real-world datasets validate the superiority of our CoCoFR in terms of realness and fidelity compared to state-of-the-art methods.
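Soft matching between degraded features and codebook entries can be sketched as an attention-weighted codebook lookup; the single-codebook function below is an illustrative simplification of the DCCA idea, with assumed names and temperature.

```python
import torch

def soft_codebook_match(lq_feat, codebook, temperature=1.0):
    """Soft matching: each LQ feature becomes an attention-weighted mix of code entries.

    lq_feat:  (n_tokens, dim) degraded-image features.
    codebook: (n_codes, dim) learned high-quality code entries.
    A hard nearest-code lookup would instead take the argmax of the weights.
    """
    logits = lq_feat @ codebook.t() / temperature   # (n_tokens, n_codes) similarities
    weights = logits.softmax(dim=-1)
    return weights @ codebook                       # (n_tokens, dim) restored features

restored = soft_codebook_match(torch.randn(256, 512), torch.randn(1024, 512))
```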
Citations: 0
EEG-to-gait decoding via phase-aware representation learning
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-16 | DOI: 10.1016/j.neunet.2026.108608
Xi Fu , Weibang Jiang , Rui Liu , Gernot R. Müller-Putz , Cuntai Guan
Accurate decoding of lower-limb motion from EEG signals is essential for advancing brain-computer interface (BCI) applications in movement intent recognition and control. This study presents NeuroDyGait, a two-stage, phase-aware EEG-to-gait decoding framework that explicitly models temporal continuity and domain relationships. To address challenges of causal, phase-consistent prediction and cross-subject variability, Stage I learns semantically aligned EEG-motion embeddings via relative contrastive learning with a cross-attention-based metric, while Stage II performs domain relation-aware decoding through dynamic fusion of session-specific heads. Comprehensive experiments on two benchmark datasets (GED and FMD) show substantial gains over baselines, including a recent 2025 model EEG2GAIT. The framework generalizes to unseen subjects and maintains inference latency below 5 ms per window, satisfying real-time BCI requirements. Visualization of learned attention and phase-specific cortical saliency maps further reveals interpretable neural correlates of gait phases. Future extensions will target rehabilitation populations and multimodal integration.
Citations: 0
Graph-Agnostic Linear Transformers
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-16 | DOI: 10.1016/j.neunet.2026.108595
Zhiyu Guo , Yang Liu , Xiang Ao , Yateng Tang , Xinhuan Chen , Xuehao Zheng , Qing He
Graph Transformers (GTs), as emerging foundational encoders for graph-structured data, have shown promising performance due to the integration of local graph structures with global attention mechanisms. However, the complex attention functions and their coupling with graph structures incur significant computational overhead, particularly in large-scale graphs. In this paper, we decouple graph structures from Transformers and propose the Graph-Agnostic Linear Transformer (GALiT). In GALiT, graph structures are solely utilized to denoise raw node features before training, as our findings reveal that these denoised features already integrate the main information of the graph structure and can replace it in guiding the Transformer. By excluding graph structures from the training and inference stages, GALiT serves as a graph-agnostic model, which significantly reduces computational complexity. Additionally, we simplify the linear attention functions inherited from traditional Transformers, which further reduces computational overhead while still capturing the relationships between nodes. Through weighted combination, we integrate the denoised features into the attention mechanism, as our theoretical analysis reveals the key role of the synergy between linear attention and denoised features in enhancing representation diversity. Despite decoupling graph structures and simplifying attention mechanisms, our model surprisingly outperforms most GNNs and GTs on benchmark graphs. Experimental results indicate that GALiT achieves high efficiency while maintaining or even enhancing performance.
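One common way to realize linear attention, computing the K^T V product first via kernel feature maps so the cost is linear in the number of nodes, is sketched below; the exact attention function used in GALiT may differ, and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """O(n) attention: compute K^T V once instead of the n x n score matrix.

    q, k: (n_nodes, d_k), v: (n_nodes, d_v). Uses elu(x)+1 feature maps, a common
    choice for linear attention; the exact form in GALiT may differ.
    """
    q = F.elu(q) + 1.0
    k = F.elu(k) + 1.0
    kv = k.t() @ v                                     # (d_k, d_v), independent of n
    normalizer = q @ k.sum(dim=0, keepdim=True).t()    # (n_nodes, 1)
    return (q @ kv) / (normalizer + eps)

out = linear_attention(torch.randn(10000, 64), torch.randn(10000, 64), torch.randn(10000, 64))
```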
Citations: 0
Adversarial contrastive with leveraging negative knowledge for point of interest sequence learning
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-16 | DOI: 10.1016/j.neunet.2026.108599
Jinhui Zhu , Xiangfeng Luo , Xiao Wei , Xin Yao
The core of mining point-of-interest (POI) data is learning user preference representations. However, existing POI sequence learning methods often serve downstream tasks in an end-to-end manner, which cannot support multiple downstream tasks and results in unsatisfactory generalization and poor performance. Moreover, although POI sequence learning uses contrastive learning to learn user preference features from positive and negative samples, existing methods fail to consider that negative samples also contain useful characteristics. To improve the generalization and performance of POI sequence learning methods on various downstream tasks, we propose an Adversarial Contrastive with Leveraging Negative Knowledge model (ACLNK). First, we design an adversarial generalizing representation module that captures users' long-term preferences to generate a generalized user historical representation incorporating user social circles. Second, to capture comprehensive short-term preferences from a limited input sequence, we design a negative-sample knowledge extraction attention mechanism to absorb knowledge from negative data. Finally, the learned short- and long-term preferences serve as the input of the contrastive module to generate an accurate, generalized user representation. We demonstrate the effectiveness and generality of ACLNK on three check-in sequence datasets for two kinds of downstream tasks. Extensive experiments demonstrate that our proposed model significantly outperforms previous state-of-the-art models. Our code is available at https://github.com/Lucas-Z9277/ACLNK_main.
Citations: 0
DSA-Diff: Dynamic schedule alignment for training-inference consistent modality translation in x-prediction diffusion model
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-16 | DOI: 10.1016/j.neunet.2026.108611
Xianhua Zeng , Yixin Xiang , Jian Zhang , Bowen Lu
For modality translation tasks, diffusion models based on x-prediction offer faster and more accurate image generation compared to traditional ϵ-prediction. However, they often suffer from training-inference inconsistency (TII), which arises from a mismatch between the Gaussian distribution assumed by the preset noise schedule and the true data distribution. To address this, we propose DSA-Diff, a novel framework that employs dual noise schedules to decouple the training and inference processes. Our approach decomposes the noise schedule along three dimensions: noise sequence, timestep, and correction matrix, and introduces a Bayesian-Greedy Alignment Scheduler (BGAS) to dynamically reconstruct the inference schedule. BGAS combines greedy initialization and Bayesian optimization to align the generated data manifold with the true one. Additionally, we introduce progressive target prediction and multi-scale perceptual alignment to enhance the robustness and detail fidelity of the x-prediction model. Experiments on four datasets show that DSA-Diff achieves high-fidelity image synthesis in only 4–10 adaptive inference steps, with minimal computational cost (68 GFLOPS). It improves the SSIM metric by up to 2.56% on the TFW dataset using only one additional algorithmic module, effectively mitigating TII. Code and models are available at: https://github.com/ElephantOH/DSA-Diff.
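The difference between ϵ-prediction and x-prediction can be made concrete with a single DDPM-style reverse step in which the network predicts the clean image directly; the sketch below is a generic x-prediction update under an assumed schedule interface and does not reproduce DSA-Diff's dual schedules or BGAS.

```python
import torch

def ddpm_step_x_prediction(model, x_t, t, alphas_cumprod, betas):
    """One reverse step where `model` predicts x0 (x-prediction) instead of noise.

    x_t: current noisy sample; t: integer timestep; alphas_cumprod, betas: noise
    schedule tensors. The interface and schedule handling are illustrative.
    """
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    x0_pred = model(x_t, t).clamp(-1.0, 1.0)          # network directly predicts x0
    # DDPM posterior mean q(x_{t-1} | x_t, x0), written in terms of x0_pred.
    coef_x0 = betas[t] * a_prev.sqrt() / (1.0 - a_t)
    coef_xt = (1.0 - a_prev) * (1.0 - betas[t]).sqrt() / (1.0 - a_t)
    mean = coef_x0 * x0_pred + coef_xt * x_t
    if t == 0:
        return mean
    var = betas[t] * (1.0 - a_prev) / (1.0 - a_t)     # posterior variance
    return mean + var.sqrt() * torch.randn_like(x_t)

# Toy usage with a dummy "model" that ignores the timestep.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
x = torch.randn(1, 3, 64, 64)
x = ddpm_step_x_prediction(lambda x_t, t: torch.zeros_like(x_t), x, 999, alphas_cumprod, betas)
```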
Citations: 0
G2CL: Gradient-guided graph contrastive learning for eliminating the message contrastive conflict
IF 6.3 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-16 | DOI: 10.1016/j.neunet.2026.108605
Shuai Zhang, Shan Yang, Wenyu Zhang, Jiahao Nie, Shan Ji
Graph contrastive learning methods based on the information noise contrastive estimation (InfoNCE) loss have made significant advances in graph representation learning. However, existing methods primarily focus on optimizing graph augmentation strategies or contrastive objectives. They cannot effectively eliminate the message contrastive conflict (MCC) that arises from the collaboration between the InfoNCE loss and the message-passing mechanism of graph neural networks. The MCC prevents the effective minimization of similarity among negative samples, thereby undermining the efficacy of graph contrastive learning. Furthermore, the issues of false negative samples and long-tail conflict effect (LCE) under the MCC remain unresolved. To this end, a novel method termed gradient-guided graph contrastive learning for eliminating the message contrastive conflict (G2CL) is proposed. First, this study theoretically demonstrates the existence of the MCC and analyzes in detail the impact of false negative samples and LCE on the MCC. In addition, a new gradient-guided dynamic capturer is proposed to eliminate the MCC. Next, based on the semantic and topological information of the graph, a new false negative strategy is proposed to address the issue of false negative samples. Furthermore, a new pheromone-based message-passing mechanism is proposed to address the issue of LCE. Finally, extensive experiments on 11 datasets demonstrate that the G2CL outperforms state-of-the-art baselines.
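A simple way to illustrate false-negative handling in an InfoNCE objective is to mask negatives whose similarity to the anchor exceeds a threshold; the sketch below uses that proxy rule, which is an assumption and not G2CL's semantics- and topology-based strategy.

```python
import torch
import torch.nn.functional as F

def infonce_with_false_negative_mask(z1, z2, temperature=0.2, fn_threshold=0.9):
    """InfoNCE between two views, masking out suspected false negatives.

    z1, z2: (n_nodes, dim) embeddings of the same nodes under two augmentations.
    Negatives whose cosine similarity to the anchor exceeds fn_threshold are
    excluded from the denominator; this threshold rule is an illustrative proxy
    for a semantics/topology-based false-negative detector.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / temperature                      # (n, n) scaled similarities
    n = z1.size(0)
    positives = torch.eye(n, dtype=torch.bool, device=z1.device)
    suspected_fn = (z1 @ z2.t() > fn_threshold) & ~positives
    sim = sim.masked_fill(suspected_fn, float("-inf"))   # drop suspected false negatives
    return F.cross_entropy(sim, torch.arange(n, device=z1.device))

loss = infonce_with_false_negative_mask(torch.randn(64, 128), torch.randn(64, 128))
```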
Citations: 0