
Latest publications in Neural Networks

Lifelong knowledge graph embedding via diffusion model
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-21 | DOI: 10.1016/j.neunet.2026.108630
Deyu Chen, Caicai Guo, Qiyuan Li, Jinguang Gu, Meiyi Xie, Hong Zhu
Lifelong knowledge graph embedding (KGE) methods aim to learn new knowledge continuously while retaining old knowledge. This line of work has received much attention for its potential to enable knowledge retention and transfer and to reduce training costs as knowledge graphs grow in scale and flexibility. However, embedding-space drift across different contexts is a crucial cause of catastrophic forgetting and of inefficient learning of new facts, and existing work ignores this perspective. To address these issues, we propose a novel lifelong KGE framework that treats learning new facts and preserving old facts from a unified perspective. We propose a diffusion-based embedding method that captures the contextual variation of entity representations and obtains transferable embeddings. To handle the drift of the embedding space and balance learning efficiency, we adopt a reconstruction and generation strategy based on contrastive learning. To avoid catastrophic forgetting and maintain the stability of the embedding distribution, we propose an effective distribution regularization method. We conduct extensive experiments on seven benchmark datasets with different construction strategies and incremental speeds. Experimental results show that our proposed framework outperforms existing lifelong KGE methods.
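The abstract leaves the loss design at a high level; as a concrete illustration, the sketch below shows one way a contrastive reconstruction objective and a distribution regularizer could be combined. All function names, weights, and shapes here are hypothetical, not taken from the paper.

```python
# A minimal, hypothetical sketch of two ingredients the abstract describes:
# (1) a contrastive reconstruction loss that pulls an entity's reconstructed
#     embedding toward its original, and (2) a distribution regularizer that
#     keeps batch statistics close to a snapshot from the previous task.
import torch
import torch.nn.functional as F

def contrastive_reconstruction_loss(recon, orig, temperature=0.1):
    """InfoNCE over a batch: recon[i] should match orig[i], not orig[j]."""
    recon = F.normalize(recon, dim=-1)
    orig = F.normalize(orig, dim=-1)
    logits = recon @ orig.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(recon.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, targets)

def distribution_regularizer(emb, old_mean, old_std):
    """Penalize drift of the batch statistics away from the stored snapshot."""
    return F.mse_loss(emb.mean(dim=0), old_mean) + F.mse_loss(emb.std(dim=0), old_std)

# Usage with random stand-ins for one incremental snapshot's embeddings:
B, d = 32, 128
orig = torch.randn(B, d)
recon = orig + 0.1 * torch.randn(B, d)           # pretend diffusion output
loss = contrastive_reconstruction_loss(recon, orig) + \
       0.1 * distribution_regularizer(recon, torch.zeros(d), torch.ones(d))
print(loss.item())
```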
Citations: 0
FluidFormer: Transformer with continuous convolution for particle-based fluid simulation
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-21 | DOI: 10.1016/j.neunet.2026.108631
Nianyi Wang, Shuai Zheng, Yu Chen, Hai Zhao, Zhou Fang
Learning-based fluid simulation has emerged as an efficient alternative to traditional Navier-Stokes solvers. However, existing neural methods built upon Smoothed Particle Hydrodynamics (SPH) predominantly rely on local particle interactions, which induces instability in complex scenarios due to error accumulation. To address this, we introduce FluidFormer, a novel architecture that establishes a hierarchical local-global modeling paradigm. The core of our model is the Fluid Attention Block (FAB), a co-design that couples continuous convolution for locality with self-attention for global correction of long-range hydrodynamic phenomena. Embedded in a dual-pipeline network, our approach seamlessly fuses inductive physical biases with structured global reasoning. Extensive experiments show that FluidFormer achieves state-of-the-art performance, with significantly improved stability and generalization in challenging fluid scenes, demonstrating its potential as a robust simulator for complex physical systems.
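As an illustration of the local-global co-design the FAB describes, here is a heavily simplified PyTorch sketch pairing a continuous-convolution branch (kernel weights produced by an MLP on relative particle positions within a radius) with a self-attention branch. The class name, radius cutoff, and fusion layer are assumptions for exposition, not the paper's implementation.

```python
# Hypothetical, simplified Fluid Attention Block: local continuous
# convolution fused with global self-attention over particles.
import torch
import torch.nn as nn

class FluidAttentionBlockSketch(nn.Module):
    def __init__(self, dim, radius=0.5):
        super().__init__()
        self.radius = radius
        self.kernel = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, feats, pos):
        # feats: (B, N, dim) particle features; pos: (B, N, 3) positions.
        rel = pos.unsqueeze(2) - pos.unsqueeze(1)           # (B, N, N, 3) offsets
        w = self.kernel(rel)                                # continuous kernel weights
        mask = (rel.norm(dim=-1, keepdim=True) < self.radius).float()
        local = (w * mask * feats.unsqueeze(1)).sum(dim=2)  # local aggregation
        glob, _ = self.attn(feats, feats, feats)            # global self-attention
        return self.fuse(torch.cat([local, glob], dim=-1))

block = FluidAttentionBlockSketch(dim=64)
out = block(torch.randn(2, 100, 64), torch.rand(2, 100, 3))
print(out.shape)  # torch.Size([2, 100, 64])
```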
Citations: 0
Relation-aware pre-trained network with hierarchical aggregation mechanism for cold-start drug recommendation
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-21 | DOI: 10.1016/j.neunet.2026.108618
Xiaobo Li, Xiaodi Hou, Shilong Wang, Hongfei Lin, Yijia Zhang
Drug recommendation systems have garnered considerable interest in healthcare, striving to offer precise, customized drug prescriptions that align with patients’ specific health needs. However, existing methods primarily focus on modeling temporal dependencies between visits for patients with multiple encounters, often neglecting the challenge of data sparsity for single-visit patients. To address this limitation, we propose a novel Relation-aware Pre-trained Network with a hierarchical aggregation mechanism for drug recommendation (RPNet), which employs a pre-training and fine-tuning framework to enhance drug recommendation in cold-start scenarios. Specifically, we introduce: 1) a code matching discrimination task during pre-training, designed to model the complex relationships between diagnosis and procedure entities. This task employs a mask-replace contrastive learning strategy, which pulls similar samples closer while pushing dissimilar ones apart, thereby capturing robust feature representations; 2) a hierarchical aggregation mechanism that enhances drug information integration by first selecting relevant visits based on rarity discrimination and then retrieving similar patients’ drug insights via similarity matching during fine-tuning. Extensive experiments on two real-world datasets demonstrate the superiority of the proposed RPNet, which improves the F1 metric by 1.32% and 1.19%. The code of our model is available at https://github.com/Lxb0102/RPNet.
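The mask-replace contrastive strategy resembles a replaced-token detection objective; the sketch below illustrates that general pattern on synthetic medical codes. Every name and hyperparameter here is a placeholder, not RPNet's actual pre-training code.

```python
# Hypothetical mask-replace discrimination task: randomly replace some
# diagnosis/procedure codes in a visit with random codes, then train a
# per-position classifier to spot the replaced ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim, seq_len, batch = 1000, 64, 16, 8
embed = nn.Embedding(vocab_size, dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
detector = nn.Linear(dim, 1)                       # replaced-vs-real logit

codes = torch.randint(0, vocab_size, (batch, seq_len))
replace_mask = torch.rand(batch, seq_len) < 0.15   # ~15% of codes get replaced
random_codes = torch.randint(0, vocab_size, (batch, seq_len))
corrupted = torch.where(replace_mask, random_codes, codes)

hidden = encoder(embed(corrupted))                 # contextual code representations
logits = detector(hidden).squeeze(-1)
loss = F.binary_cross_entropy_with_logits(logits, replace_mask.float())
print(loss.item())
```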
Citations: 0
MMFormer: Multi-Modality semi-Supervised vision transformer in remote sensing imagery classification
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-21 | DOI: 10.1016/j.neunet.2026.108628
Daixun Li, Weiying Xie, Leyuan Fang, Yunke Wang, Zirui Li, Mingxiang Cao, Jitao Ma, Yunsong Li, Chang Xu
Significant progress has been made in applying transformer architectures to multimodal tasks. However, current methods such as the self-attention mechanism rarely exploit the complementarity and consistency of features across modalities during fusion, leading to obstacles such as redundant fusion or incomplete representation. Inspired by topological homology groups, we introduce MMFormer, a novel semi-supervised algorithm for high-dimensional multimodal fusion. The method is engineered to capture comprehensive representations by enhancing the interactivity between modal mappings. Specifically, we enforce representational consistency between these heterogeneous representations through a complete dictionary lookup and a homology space in the encoder, and establish an exclusivity-aware mapping of the two modalities to emphasize their complementary information, serving as a powerful supplement for multimodal feature interpretation. Moreover, the model alleviates the challenge of sparse annotations in high-dimensional multimodal data by introducing a consistency joint regularization term. We formulate these components in a unified end-to-end optimization framework and are the first to explore and derive the application of semi-supervised vision transformers in high-dimensional multimodal data fusion. Extensive experiments across three benchmarks demonstrate the superiority of MMFormer. Specifically, the model improves overall accuracy by 3.12% on Houston2013, 1.86% on Augsburg, and 1.66% on MUUFL compared with the strongest existing methods, confirming its robustness and effectiveness under sparse annotation conditions. The code is available at https://github.com/LDXDU/MMFormer.
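The abstract does not spell out the consistency joint regularization term; the snippet below shows one common form such a term takes for unlabeled data, purely as a hedged illustration: predictions from two stochastic views of the same sample are pushed toward agreement.

```python
# Generic consistency regularization sketch (not MMFormer's actual term):
# KL divergence between predictions under two views, with a stop-gradient
# on the target view.
import torch
import torch.nn.functional as F

def consistency_loss(logits_view1, logits_view2):
    p1 = F.log_softmax(logits_view1, dim=-1)
    p2 = F.softmax(logits_view2.detach(), dim=-1)   # target view: no gradient
    return F.kl_div(p1, p2, reduction="batchmean")

logits_a = torch.randn(16, 10, requires_grad=True)  # view 1 of an unlabeled batch
logits_b = logits_a + 0.05 * torch.randn(16, 10)    # view 2 (e.g. other modality)
print(consistency_loss(logits_a, logits_b).item())
```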
Citations: 0
PHoM: Effective pan-sharpening via higher-order state-space model
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-21 | DOI: 10.1016/j.neunet.2026.108616
Penglian Gao, Hongwei Ge, Shuzhi Su
Pan-sharpening aims to generate high-resolution multi-spectral images from pairs of low-resolution multi-spectral and high-resolution panchromatic images. Recently, Mamba-based pan-sharpening models have achieved state-of-the-art performance thanks to their efficient long-range relational modeling. However, Mamba inherently realizes a first-order state-space high-dimensional nonlinear mapping, which fails to efficiently encode higher-order expressive interactions among spectral features. In this study, we propose a novel higher-order state-space model for pan-sharpening (PHoM). Our PHoM follows the concept of splitting, interaction, and aggregation for higher-order spatially adaptive interaction and discriminative learning without introducing excessive computational overhead. To model the fusion process between multi-spectral and panchromatic images, we further extend the PHoM into a cross-modal PHoM, which improves the representation capability by exploiting higher-order cross-modal correlations. We conduct extensive experiments on different datasets. Experimental results show that our method achieves significant performance improvements, outperforming previous state-of-the-art methods on public datasets.
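As a rough illustration of the splitting, interaction, and aggregation recipe, the sketch below follows known higher-order gated-convolution designs: channels are split into groups, each order gates the running state with the next group, and a projection aggregates the result. This mirrors the stated concept only; PHoM's actual state-space formulation is not reproduced here, and all names are assumptions.

```python
# Hypothetical split-interact-aggregate block for higher-order spatial
# interactions over image features.
import torch
import torch.nn as nn

class HigherOrderInteractionSketch(nn.Module):
    def __init__(self, dim, order=3):
        super().__init__()
        self.order = order
        self.split = nn.Conv2d(dim, dim * order, kernel_size=1)        # splitting
        self.mix = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.out = nn.Conv2d(dim, dim, kernel_size=1)                  # aggregation

    def forward(self, x):                       # x: (B, dim, H, W)
        chunks = self.split(x).chunk(self.order, dim=1)
        state = chunks[0]
        for c in chunks[1:]:                    # order-by-order gated interaction
            state = self.mix(state) * c
        return self.out(state)

m = HigherOrderInteractionSketch(dim=32)
print(m(torch.randn(1, 32, 16, 16)).shape)      # torch.Size([1, 32, 16, 16])
```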
Citations: 0
Cross-category spatiotemporal consensus and discriminative networks for weakly-supervised temporal action localization
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-21 | DOI: 10.1016/j.neunet.2026.108627
Kunlun Wu, Donghai Zhai
Weakly-supervised temporal action localization is a practical task that localizes different action instances in untrimmed videos without frame-level annotations. Current approaches either enhance the discriminative features of action snippets to reduce confusion with the background, or focus on less informative snippets to guide the model to explore non-salient regions. However, they seldom explicitly consider similar sub-processes across different actions, known as cross-category consensus relationships, which can provide complementary information for exploring more comprehensive localization results. Moreover, previous methods mostly overlook class-level higher-order dynamics, which can provide finer-grained motion relationships that help the model capture subtle discriminative features. To alleviate these problems, we investigate a simple yet effective method termed the STCD network, which leverages superclass-level semantics and higher-order dynamics for spatiotemporal consensus and discriminative learning. Specifically, we leverage a high-order encoding module based on Koopman theory to explicitly explore discriminative class-wise dynamics. Meanwhile, we adopt superclass-level semantics to capture the consensus relationships among various actions, since the similar sub-actions shared by diverse categories are essential for mining more comprehensive action snippets. Finally, we argue that snippets with high entropy in their category distribution typically exhibit significant uncertainty and possess ambiguous representations in the feature space. From the perspective of information theory, we further propose an effective loss function to enhance the discriminative features of each action snippet, i.e., selecting the top-k categories with the highest predicted probability for each snippet and reducing the uncertainty by minimizing their information entropy. Experimental results on three datasets, i.e., THUMOS14, ActivityNet v1.2 and ActivityNet v1.3, demonstrate that our method is superior to the state of the art.
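The proposed entropy-based loss is concrete enough to sketch directly: for each snippet, keep the top-k most probable categories, renormalize, and minimize the entropy of the truncated distribution. The choice of k and the renormalization step below are assumptions.

```python
# Sketch of a top-k entropy-minimization loss for snippet predictions.
import torch
import torch.nn.functional as F

def topk_entropy_loss(logits, k=3, eps=1e-8):
    probs = F.softmax(logits, dim=-1)
    topk, _ = probs.topk(k, dim=-1)                 # top-k class probabilities
    topk = topk / topk.sum(dim=-1, keepdim=True)    # renormalize over the k classes
    entropy = -(topk * (topk + eps).log()).sum(dim=-1)
    return entropy.mean()                           # minimizing sharpens predictions

snippet_logits = torch.randn(64, 20)                # 64 snippets, 20 action classes
print(topk_entropy_loss(snippet_logits).item())
```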
Citations: 0
CBAM-ST-GCN: An enhanced DRL-based end-to-end visual navigation framework for mobile robot
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1016/j.neunet.2026.108622
Mingyang Xie, Wei Yu, Huanyu Jin, Wei Li, Xin Chen
Visual navigation for mobile robots poses significant challenges due to limited visual perception and the presence of unforeseen dynamic obstacles. Deep reinforcement learning (DRL) provides an end-to-end solution by directly mapping raw sensor data to control commands, offering high adaptability and reduced reliance on handcrafted rules. However, high-dimensional visual inputs and the non-stationarity introduced by dynamic obstacles easily make DRL policy learning unstable and slow to converge. In this paper, an enhanced end-to-end visual navigation framework, denoted CBAM-ST-GCN, is proposed for mobile robots operating in dynamic environments. A convolutional block attention module (CBAM) is introduced into the framework to enhance visual perception by assigning attention weights across spatial and temporal dimensions. Furthermore, a spatio-temporal graph convolutional network (ST-GCN) is designed to capture the behavioral features of moving obstacles. In addition, a velocity obstacle (VO) penalty term is incorporated into the reward function to enhance collision avoidance. Extensive simulation results demonstrate that the proposed method achieves superior success rates and significantly faster convergence. Real-world experiments further validate the effectiveness and adaptability of the proposed approach in practical scenarios.
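CBAM itself is a published attention module (Woo et al., 2018): channel attention followed by spatial attention. A minimal sketch of its standard form is given below to show the kind of block the framework plugs into its visual encoder; the integration with the ST-GCN and the VO penalty is not reproduced.

```python
# Minimal sketch of standard CBAM: channel attention from avg/max-pooled
# statistics, then a 7x7-conv spatial attention map.
import torch
import torch.nn as nn

class CBAMSketch(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                # channel attention from
        mx = self.mlp(x.amax(dim=(2, 3)))                 # avg- and max-pooled stats
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))         # spatial attention map

m = CBAMSketch(64)
print(m(torch.randn(2, 64, 32, 32)).shape)                # torch.Size([2, 64, 32, 32])
```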
Citations: 0
Adversarially robust neural network decision boundaries via tropical geometry
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1016/j.neunet.2026.108624
Kurt Pasque, Christopher Teska, Ruriko Yoshida, Keiji Miura, Jefferson Huang
We introduce a simple, easy to implement, and computationally efficient tropical convolutional neural network architecture that is robust against adversarial attacks. We exploit the tropical nature of piece-wise linear neural networks by embedding the data in the tropical projective torus. This can be accomplished with a single additional hidden layer called a tropical embedding layer, and can in principle be added to any neural network architecture. We study the geometry of the resulting decision boundary, and find that like adversarial training and various regularization techniques that have been proposed, adding the tropical embedding layer tends to increase the number of linear regions associated with the decision boundaries. Our numerical experiments show that our approach achieves state-of-the-art levels of adversarial robustness, while requiring much less computational time than adversarial training.
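One natural realization of such a tropical embedding layer, sketched below under our own assumptions, replaces the usual affine map with a max-plus product y_i = max_j (x_j + w_ij) and then identifies outputs that differ by a constant shift, which is the equivalence defining the tropical projective torus. The paper's exact layer may differ.

```python
# Hypothetical max-plus tropical embedding layer. Note the map is
# piecewise-linear and 1-Lipschitz in the sup norm, one intuition for why
# such layers can damp small adversarial perturbations.
import torch
import torch.nn as nn

class TropicalEmbeddingSketch(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim))

    def forward(self, x):                              # x: (B, in_dim)
        # max-plus "matrix product": tropical analogue of a linear layer
        y = (x.unsqueeze(1) + self.weight).amax(dim=-1)   # (B, out_dim)
        return y - y.amax(dim=-1, keepdim=True)        # quotient by constant shifts

layer = TropicalEmbeddingSketch(8, 16)
print(layer(torch.randn(4, 8)).shape)                  # torch.Size([4, 16])
```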
Citations: 0
Multi-modal feature alignment networks for multi-label image classification
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1016/j.neunet.2026.108629
Wenlan Kuang, Zhixin Li
Multi-label image classification assigns labels to the multiple objects present in an input image. Recent research mainly focuses on enforcing semantic consistency between visual features and label features. However, since images contain complex scene content, the features captured by visual feature extraction networks based on grid or sequence representations may introduce redundant information or lack continuity when identifying irregular objects. To fully mine the visual information of complex objects in images and enhance the inter-modal interaction between images and labels, we introduce a flexible graph structure to explore the internal information of objects and design a multi-modal feature alignment (MMFA) network for multi-label image classification. To enhance the context awareness and semantic association of different patch regions, we propose a semantic-augmented interaction module that combines two kinds of visual semantic information with label embeddings for interactive learning. Finally, we refine the dependence between local intrinsic information and overall semantics by redefining semantic queries through semantically enhanced visual spatial features and graph aggregation features. Experiments on three large-scale public datasets (Microsoft COCO, Pascal VOC 2007 and NUS-WIDE) demonstrate the effectiveness of our proposed MMFA, which achieves state-of-the-art performance.
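One plausible shape for a module that lets label embeddings interact with visual features is cross-attention in which labels act as queries over patch features; the sketch below illustrates that pattern. The module structure and all dimensions are assumptions, not the authors' design.

```python
# Hypothetical label-visual interaction via cross-attention: each label
# embedding gathers the image regions relevant to it, then emits a logit.
import torch
import torch.nn as nn

class LabelVisualInteractionSketch(nn.Module):
    def __init__(self, dim, num_labels):
        super().__init__()
        self.label_emb = nn.Parameter(torch.randn(num_labels, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, patches):                   # patches: (B, N, dim)
        q = self.label_emb.unsqueeze(0).expand(patches.size(0), -1, -1)
        attended, _ = self.cross_attn(q, patches, patches)   # (B, L, dim)
        return self.classifier(attended).squeeze(-1)         # per-label logits

m = LabelVisualInteractionSketch(dim=64, num_labels=20)
print(m(torch.randn(2, 49, 64)).shape)            # torch.Size([2, 20])
```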
Citations: 0
Rotation equivariant quantum graph neural networks with trainable compression encoder and entanglement-enhanced aggregation
IF 6.3 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1016/j.neunet.2026.108625
Wenjie Liu, Bohan Du, Weiwei Liu, Yifan Zhu
The integration of symmetry, such as permutation equivariance, into Quantum Graph Neural Networks (QGNNs), yielding Equivariant Quantum Graph Neural Networks (EQGNNs), markedly improves generalization on graph-structured data. Despite this advancement, current research has not yet extended rotational equivariance to QGNN frameworks. Furthermore, processing large-scale graph data increases computational complexity due to numerous inter-node connections, significantly raising the required number of qubits. To address these challenges, a novel Rotationally Equivariant Quantum Graph Neural Network (REQGNN) with a trainable compression encoder and an entanglement-enhanced aggregation mechanism is proposed. Adopting quantum fidelity as the evaluation metric, we design a quantum autoencoder that effectively compresses feature dimensionality, substantially lowering the qubit requirements of the model while preserving essential global structural details. To achieve rotational equivariance, we propose an entanglement-enhanced layer that incorporates distance and angle information between nodes. This layer performs entanglement by extracting diverse edge information, thereby further refining edge feature extraction. Additionally, an auxiliary entanglement layer is introduced to mitigate the over-smoothing issue. Experimental results demonstrate that REQGNN significantly outperforms GIN, Gra+QSVM, and Gra+QCNN on graph classification across four datasets in all metrics, achieves higher accuracy than egoGQNN on the PTC dataset, and surpasses classical models, including EGNN and EquiformerV2, on graph regression tasks, reducing the MAE on the Cv task by 20% on average compared with the previous quantum model QGCNN. Our approach offers an effective solution for achieving rotational equivariance while providing a novel perspective for exploring symmetry in graph neural networks (GNNs).
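The rotational equivariance rests on feeding the model only rotation-invariant geometry, namely distances and angles between nodes. The classical front end for computing such edge features can be sketched as follows; the quantum circuitry is omitted, and this exact feature set is an assumption, not the paper's definition.

```python
# Sketch: rotation-invariant edge features (distance, angle) for node pairs.
import torch

def invariant_edge_features(pos, edges):
    """pos: (N, 2) node coordinates; edges: (E, 2) index pairs (i, j)."""
    src, dst = pos[edges[:, 0]], pos[edges[:, 1]]
    dist = (dst - src).norm(dim=-1)                   # rotation-invariant distance
    # angle between the two position vectors; also unchanged by a global rotation
    cos_angle = (src * dst).sum(-1) / (src.norm(dim=-1) * dst.norm(dim=-1) + 1e-8)
    return torch.stack([dist, cos_angle], dim=-1)     # (E, 2) edge features

pos = torch.randn(5, 2)
edges = torch.tensor([[0, 1], [1, 2], [3, 4]])
feats = invariant_edge_features(pos, edges)

# Sanity check: a global rotation leaves the features unchanged.
theta = torch.tensor(0.7)
R = torch.tensor([[torch.cos(theta), -torch.sin(theta)],
                  [torch.sin(theta),  torch.cos(theta)]])
print(torch.allclose(feats, invariant_edge_features(pos @ R.t(), edges), atol=1e-5))
```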
Citations: 0