
Latest Publications in Neurocomputing

Learning high-order user-item relation via hyperedge for recommender system
IF 6.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-05-01 | Epub Date: 2026-02-07 | DOI: 10.1016/j.neucom.2026.133004
Jingwen Wang , Yuguang Yan , Ruichu Cai , Michael K. Ng , Zhifeng Hao
Recommender systems aim to find candidate items that are likely to interest users based on their potential preferences. Existing methods mainly leverage user-item interaction data to learn a pairwise relation between a user and an item, or incorporate the social relations of users from a social network to model high-order relations among multiple users. However, complex relations exist not only among users but also among items, and high-order relations among users and items are vital for recommendation. For example, people buy products for different latent reasons, which can be captured by relations involving multiple items. Such high-order user-item relations have barely been studied in existing research. In this paper, we seek to extract high-order relations involving both users and items from interaction data and construct hyperedges to represent these relations. Specifically, we identify latent factors between users and items as the hyperedges, a step performed by matrix factorization on the interaction matrix. After that, we develop a hypergraph convolutional network based on hypergraph expansion to learn embeddings for users, items, and high-order relations in a joint representation space. In this way, high-order relations involving multiple users and items are exploited to learn comprehensive representations of users and items for recommendation. Experimental results on several real-world datasets demonstrate the effectiveness of the proposed method.
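The hyperedge-construction step described above can be sketched in a toy form: factorize the interaction matrix and treat each latent factor as a hyperedge linking its most strongly loaded users and items. This is an illustrative sketch only (the SVD-based factorization and the function name are our assumptions, not the paper's exact procedure):

```python
import numpy as np

def latent_hyperedges(R, k=2, top=2):
    """Treat each of the k leading latent factors of the interaction
    matrix R as a hyperedge connecting its most strongly loaded users
    and items (toy stand-in for the paper's matrix-factorization step)."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    hyperedges = []
    for f in range(k):
        users = np.argsort(-np.abs(U[:, f]))[:top]   # top users on factor f
        items = np.argsort(-np.abs(Vt[f, :]))[:top]  # top items on factor f
        hyperedges.append({"users": set(users), "items": set(items)})
    return hyperedges

# toy interaction matrix: 4 users x 3 items, two obvious co-purchase groups
R = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]], dtype=float)
edges = latent_hyperedges(R, k=2, top=2)
```

With this toy matrix, the first factor groups users 0-1 with items 0-1 into one hyperedge, mirroring how a latent purchase reason ties multiple users and items together.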
Citations: 0
FMFNet: A Faster Multimodal Fusion Network for action recognition via efficient modality compensation
IF 6.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-05-01 | Epub Date: 2026-02-16 | DOI: 10.1016/j.neucom.2026.133091
Shaocan Liu , Xingtao Wang , Penghong Wang , Ruiqin Xiong , Xiaopeng Fan
Skeleton-based methods have achieved remarkable success in human action recognition. However, their performance remains limited due to insufficient diversity of visual features inherently present in skeleton data. While existing multimodal approaches have demonstrated improved recognition accuracy by integrating RGB and skeleton modalities, these methods often rely on multi-stream architectures that involve numerous parameters, resulting in low inference speed. Therefore, efficient fusion strategies for RGB and skeleton modalities in multimodal frameworks deserve further exploration. In this paper, we propose a Faster Multimodal Fusion Network (FMFNet) for action recognition, which effectively integrates RGB and skeleton modalities using a modality compensation strategy. Compared to existing multi-stream multimodal methods, our approach significantly reduces model parameters while maintaining competitive recognition performance. Specifically, we first construct two types of mid-level features based on skeleton and RGB data. These mid-level features are then fused within the proposed modality compensation module to produce comprehensive representations, which are subsequently processed by a lightweight main stream to generate final predictions. Extensive experiments demonstrate that our FMFNet achieves a superior trade-off between recognition accuracy and computational efficiency on two large-scale data sets (NTU RGB+D 60 and 120). For instance, FMFNet obtains an accuracy of 95.5% on the cross-subject benchmark of NTU RGB+D 60, while being approximately 7.4× smaller and 2.3× faster than HM-CTNet, one of the current state-of-the-art methods.
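The modality compensation idea, where RGB appearance compensates the skeleton stream, might be sketched as a per-channel gated fusion. This is a hypothetical toy (the sigmoid gate form, the name `modality_compensation`, and the dimensions are assumptions, not the paper's module):

```python
import numpy as np

rng = np.random.default_rng(0)

def modality_compensation(f_skel, f_rgb, W):
    """Per-channel sigmoid gate, computed from the skeleton features,
    decides how much RGB appearance should compensate the skeleton
    stream; the fused vector keeps the skeleton stream as its base."""
    gate = 1.0 / (1.0 + np.exp(-(f_skel @ W)))  # gate values in (0, 1)
    return f_skel + gate * f_rgb

C = 8                                 # channel dimension (assumed)
f_skel = rng.standard_normal(C)       # mid-level skeleton feature
f_rgb = rng.standard_normal(C)        # mid-level RGB feature
W = 0.1 * rng.standard_normal((C, C))
fused = modality_compensation(f_skel, f_rgb, W)
```

Because the gate is bounded in (0, 1), the RGB stream can only nudge, never overwrite, the skeleton representation, which is one plausible way a single lightweight main stream stays cheap.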
Citations: 0
CC-mamba: Mamba-based color constancy with illumination prior-guided dynamic feature modulation and wavelet-domain attention mechanism
IF 6.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-05-01 | Epub Date: 2026-02-16 | DOI: 10.1016/j.neucom.2026.133068
Junyi Liu , Li Zhuo , Hui Zhang , Haokui Xu , Xiaoguang Li
Computational Color Constancy (CC) aims to correct color deviations in images caused by varying illumination conditions. Existing methods based on Convolutional Neural Networks (CNNs) and Transformers exhibit limitations in extracting global contextual information or achieving computational efficiency, leading to inaccurate illumination estimation. Therefore, this paper proposes a Mamba-based CC method named CC-Mamba. It employs VMamba as the backbone to construct an end-to-end illumination estimation model. Specifically, a Wavelet-domain Attention (WA) mechanism is first proposed, which captures local details (e.g., edges and textures) and global illumination information while suppressing noise, thereby improving illumination estimation accuracy. Next, an Illumination Prior-guided Dynamic Feature Modulation (IP-DFM) module is proposed, which utilizes illumination prior information obtained from high-level features to dynamically modulate the mid-level features, thus enhancing the model's robustness against complex illumination variations and challenging scenarios. Finally, a Multi-Scale Gated Attentional Feature Fusion (MS-GAFF) module is designed, which employs a gating mechanism for illumination-related feature selection, coupled with a channel-spatial dual-dimensional dynamic weighting strategy that adaptively weights features at different scales to strengthen their representation ability. Experimental results on the NUS 8-Camera and CCD (Color Checker Dataset) benchmark datasets demonstrate that CC-Mamba achieves superior illumination estimation accuracy compared to state-of-the-art methods. Furthermore, the corrected images have better perceptual quality.
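The wavelet-domain attention idea, treating global illumination and local detail as separate subbands, can be illustrated with a one-level Haar split. A minimal sketch under our own assumptions (scalar subband weights rather than the paper's learned attention):

```python
import numpy as np

def haar_subbands(x):
    """One-level 1-D Haar split: a low-pass subband carrying global
    illumination and a high-pass subband carrying edges and texture."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2.0), (even - odd) / np.sqrt(2.0)

def wavelet_attention(x, w_low, w_high):
    """Reweight the two subbands, then invert the Haar transform --
    a scalar-weight caricature of attending to global illumination
    and local detail separately."""
    low, high = haar_subbands(x)
    low, high = w_low * low, w_high * high
    out = np.empty_like(x)
    out[0::2] = (low + high) / np.sqrt(2.0)   # inverse Haar, even samples
    out[1::2] = (low - high) / np.sqrt(2.0)   # inverse Haar, odd samples
    return out

x = np.array([1.0, 1.0, 4.0, 2.0])
smoothed = wavelet_attention(x, w_low=1.0, w_high=0.0)  # suppress detail
```

Setting both weights to 1 reconstructs the input exactly; zeroing the high-pass weight keeps only the pairwise averages, i.e. the global illumination component.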
Citations: 0
An optimal control method for uncertain process industry based on working condition relevance and policy transfer
IF 6.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-05-01 | Epub Date: 2026-02-11 | DOI: 10.1016/j.neucom.2026.133028
Can Zhou, Xuan Ouyang, Hongqiu Zhu, Tianhao Liu
Energy consumption optimal control is a key challenge in process industries and is essential for enhancing production efficiency and reducing costs. However, the intricate physical and chemical reaction mechanisms in process industries make it difficult to establish precise process models. Moreover, unpredictable fluctuations in raw materials introduce uncertain variations in working conditions, causing unknown working conditions to emerge. Under these circumstances, the mismatch between the trained controller and the unknown working condition leads to deviations in control results. To address these issues, a control method based on transfer reinforcement learning is proposed, which comprises two components. First, multiple RL controllers are trained under different working conditions to learn their distinct characteristics independently. By embedding domain knowledge into the reward function, the learning of control policies is guided toward optimizing energy consumption. Second, since retraining RL controllers for every unknown working condition is costly, the concept of working condition relevance is proposed to quantify the degree of correlation between known and unknown working conditions. By utilizing a small amount of interaction experience under both known and unknown working conditions, the control policies for known working conditions are transferred to unknown ones. As a result, the learning cost of the control policy is reduced while negative transfer is mitigated to a certain extent. Simulation experiments on the zinc electrowinning process demonstrate that our method transfers control policies effectively when unknown working conditions arise, noticeably reducing learning costs.
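The relevance-and-transfer steps might look as follows in a toy form: relevance as softmax-like weights over feature distances between conditions, and the warm-start policy as a relevance-weighted blend of known-condition policies. All names and the distance metric are our assumptions, not the paper's definitions:

```python
import numpy as np

def relevance_weights(known_feats, new_feat, tau=1.0):
    """Relevance of each known working condition to an unknown one:
    normalized exponential weights over negative feature distances."""
    d = np.linalg.norm(known_feats - new_feat, axis=1)
    w = np.exp(-d / tau)
    return w / w.sum()

def transfer_policy(known_policies, weights):
    """Warm-start policy for the unknown condition as a relevance-weighted
    blend of known-condition policies (action-preference tables here)."""
    return np.tensordot(weights, known_policies, axes=1)

feats = np.array([[0.0, 0.0], [10.0, 10.0]])        # two known conditions
policies = np.array([[[0.9, 0.1]],                   # 1 state x 2 actions
                     [[0.1, 0.9]]])
w = relevance_weights(feats, np.array([0.5, 0.5]))   # new condition near #0
pi = transfer_policy(policies, w)
```

Because the new condition sits close to the first known one, its transferred policy inherits mostly from that controller, which is the intuition behind reduced learning cost and mitigated negative transfer.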
Citations: 0
MetaGT: A lightweight graph transformer via meta learning for large-scale graphs
IF 6.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-05-01 | Epub Date: 2026-02-12 | DOI: 10.1016/j.neucom.2026.132992
Wenting Wang , Yunfei Tian , Hailong Zhang
Graph Transformers (GTs) excel at modeling long-distance dependencies and global information in graphs, but their scalability on large-scale graphs is limited by the quadratic growth of memory and computation costs introduced by self-attention. To address these challenges, methods such as linear attention mechanisms and graph partitioning have been proposed, but both have inherent limitations: linear attention mechanisms still incur significant computational overhead, while subgraph-based training often reduces accuracy due to the lack of global context and the structural differences among subgraphs. In this paper, we propose MetaGT, a lightweight Graph Transformer model for large-scale graphs. MetaGT first partitions the global graph into structurally compact subgraphs to reduce memory requirements. To further ensure both efficiency and accuracy, we introduce a “pre- and post-training” paradigm into subgraph-based GT models. In pre-training, meta-learning is employed on a subset of subgraphs to obtain generalizable parameters, which are then refined by minimal fine-tuning during post-training. MetaGT is plug-and-play and can be seamlessly integrated into mainstream GT models. Extensive experiments demonstrate that MetaGT consistently delivers strong efficiency, low memory consumption, and competitive accuracy across various backbone GT models on large-scale graph datasets. Our code is available at https://anonymous.4open.science/r/MetaGT-D2B0.
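The pre-training stage, meta-learning a generalizable initialization over a subset of subgraph tasks, can be illustrated with a Reptile-style update on toy linear-regression tasks standing in for subgraphs. A hedged sketch, not MetaGT's actual training loop:

```python
import numpy as np

def inner_sgd(theta, X, y, steps=20, lr=0.1):
    """Task adaptation: a few gradient steps of linear regression,
    standing in for fine-tuning on one subgraph."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ theta - y) / len(y)
        theta = theta - lr * grad
    return theta

def reptile(theta, tasks, meta_lr=0.5, rounds=30):
    """Reptile-style meta pre-training: repeatedly nudge the shared
    initialization toward each task-adapted solution."""
    for _ in range(rounds):
        for X, y in tasks:
            adapted = inner_sgd(theta.copy(), X, y)
            theta = theta + meta_lr * (adapted - theta)
    return theta

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])       # structure shared by all "subgraphs"
tasks = [(X, X @ true_w)
         for X in (rng.standard_normal((16, 2)) for _ in range(3))]
theta = reptile(np.zeros(2), tasks)  # init that adapts quickly to every task
```

The meta-learned parameters converge toward the shared structure of all tasks, so post-training on a new subgraph needs only minimal fine-tuning.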
Citations: 0
Boosting representation diversity in video transformers via segmented contrastive masked autoencoders
IF 6.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-05-01 | Epub Date: 2026-02-13 | DOI: 10.1016/j.neucom.2026.133058
Yawei Feng , Lijun Guo , Guitao Yu , Rong Zhang , Jiangbo Qian , Chong Wang , Shangce Gao
In self-supervised representation learning, contrastive learning leverages data invariance to identify global patterns; however, its lack of local constraints often results in video representations that are overly focused on global content, thereby limiting feature diversity. To address this, we propose Segmented Contrastive Masked Autoencoders (SCMA). By extending the masked distillation paradigm to the spatiotemporal domain, our framework integrates contrastive learning with masked autoencoders via teacher-student self-distillation. SCMA enhances the capability of contrastive learning to capture local details in videos, effectively improving the diversity of video representations learned by ViT models. Furthermore, we introduce a Segmented Mask Sampling strategy to partition videos uniformly and apply adaptive masking, enhancing both training efficiency and robustness to input variations. Experimental results demonstrate that SCMA achieves superior top-1 accuracy on a range of video action recognition benchmarks under both fine-tuning and linear probing settings, while also demonstrating strong performance in video object segmentation.
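The Segmented Mask Sampling strategy, partitioning tokens uniformly and masking the same ratio inside each segment, can be sketched as follows (segment and ratio values are illustrative, not the paper's settings):

```python
import random

def segmented_mask(num_tokens, num_segments, mask_ratio, seed=0):
    """Partition token indices into equal segments and mask the same
    ratio inside each, so masked positions spread over the whole clip
    rather than clustering in one region."""
    rng = random.Random(seed)
    seg_len = num_tokens // num_segments
    masked = set()
    for s in range(num_segments):
        segment = list(range(s * seg_len, (s + 1) * seg_len))
        masked.update(rng.sample(segment, int(seg_len * mask_ratio)))
    return masked

# 16 spatiotemporal tokens, 4 segments, 75% masking
m = segmented_mask(num_tokens=16, num_segments=4, mask_ratio=0.75)
```

Every segment contributes the same number of masked tokens, which is what makes the sampling robust to input variations compared with fully random masking.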
Citations: 0
Dynamic denoising track: Towards end-to-end multiple object tracking against attention trivialization
IF 6.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-05-01 | Epub Date: 2026-02-11 | DOI: 10.1016/j.neucom.2026.132994
Ruonan Wei, Yuehuan Wang
Transformer-based architectures have emerged as a powerful paradigm for end-to-end multi-object tracking (MOT). However, existing frameworks typically employ static architectures with a fixed number of decoding layers, failing to adapt feature interaction levels to the varying complexities of tracking scenarios. This rigidity introduces a critical issue, which we define as attention trivialization, where excessive global interaction in deeper Transformer layers accumulates redundant noise and diminishes the discriminative power of attention. To mitigate this, we propose the Dynamic Denoising Tracking framework (DDTrack), which adaptively regulates interaction depth based on target-specific requirements. DDTrack incorporates two core components: Spatial-Temporal Early-Exit (STE2) module and Self-Guided Denoising (SGD) module. The STE2 module mitigates noise propagation by terminating feature interactions early in the decoding process. It leverages spatio-temporal clues to perceive the complexity of foreground-background queries, enabling low-complexity samples to exit early and reduce the propagation of redundant information, thereby mitigating the attention trivialization of simple samples. The SGD module is designed for high-complexity targets. It utilizes cross-layer guidance to rectify attention trivialization. Specifically, shallow-layer queries serve as structural anchors to help deeper queries filter irrelevant noise and preserve target-specific information, thereby obtaining refined embeddings and highly discriminative attention. Finally, we jointly optimize inference depth and tracking performance through an end-to-end learning framework. Extensive experiments are conducted on DanceTrack, MOT17, and MOT20 benchmarks, achieving state-of-the-art results on several metrics and verifying the effectiveness of our proposed method.
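The early-exit idea, letting low-complexity queries stop interacting before the full decoder depth, reduces to a per-query threshold test in its simplest form. A toy sketch with assumed complexity-score trajectories, not the paper's STE2 module:

```python
def early_exit_depth(scores, threshold, max_depth):
    """Per-query exit depth: a query stops interacting at the first layer
    whose complexity score drops below the threshold; otherwise it runs
    the full decoder stack."""
    depths = []
    for per_layer in scores:                 # one score trajectory per query
        depth = max_depth
        for layer, s in enumerate(per_layer, start=1):
            if s < threshold:                # simple enough -> exit here
                depth = layer
                break
        depths.append(depth)
    return depths

# query 0 becomes easy at layer 2; query 1 stays complex throughout
scores = [[0.9, 0.2, 0.1],
          [0.8, 0.7, 0.6]]
depths = early_exit_depth(scores, threshold=0.3, max_depth=3)
```

Exiting simple queries early is exactly what limits the redundant global interaction that the abstract calls attention trivialization.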
Citations: 0
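The early-exit mechanism described in the DDTrack abstract (low-complexity queries leave the decoder stack early, hard ones use full depth) can be illustrated with a toy loop. The halting score, threshold, and layer update below are hypothetical stand-ins for illustration only, not the paper's actual STE2 module:

```python
# Toy sketch of per-query early exit in a multi-layer decoder.
# The halting heuristic and layer update are made up for illustration;
# they are not DDTrack's actual spatio-temporal complexity estimate.

def decode_with_early_exit(queries, layers, halt_threshold=0.9):
    """Run each query through the decoder layers, stopping a query as soon
    as its halting score exceeds the threshold. Returns, per query id,
    the final value and the number of layers actually used."""
    results = {}
    for qid, q in queries.items():
        depth_used = 0
        for layer in layers:
            q, halt_score = layer(q)
            depth_used += 1
            if halt_score >= halt_threshold:  # low-complexity query exits early
                break
        results[qid] = (q, depth_used)
    return results

def make_layer(gain):
    # Hypothetical "layer": refines the scalar query and reports a
    # confidence that grows as the query value becomes more pronounced.
    def layer(q):
        q = q * gain
        halt_score = min(1.0, abs(q) / 10.0)
        return q, halt_score
    return layer

layers = [make_layer(1.5) for _ in range(6)]
out = decode_with_early_exit({"easy": 8.0, "hard": 0.5}, layers)
# "easy" exits after 1 layer, "hard" runs through all 6.
```

The payoff is the same as in the abstract: per-sample depth adapts to complexity, so simple samples avoid the deeper layers where (per the paper) redundant interaction accumulates.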
A two-stage line graph reasoning framework for multi-intent spoken language understanding
IF 6.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-05-01 Epub Date : 2026-02-13 DOI: 10.1016/j.neucom.2026.133053
Shijie Duan , Long Yu , Shengwei Tian
Spoken language understanding (SLU) is a core component of dialogue systems, responsible for intent detection and slot filling to interpret user utterances. We propose a two-stage line-graph-based framework for joint multi-intent SLU. In the first stage, our model uses cross-attention to encode token-level and intent-level features and leverages domain information to perform preliminary slot tagging and intent prediction. In the second stage, we construct an intent-slot graph from the first-stage outputs and then apply a line graph neural network to explicitly model interactions between each intent and its associated slots. This line graph reasoning mechanism reduces interference between multiple intents and enables effective bidirectional information flow, thereby enhancing semantic fusion of intent and slot cues. Experiments on the MixATIS and MixSNIPS benchmarks, as well as the challenging BlendATIS and BlendSNIPS datasets of blended utterances, demonstrate that the proposed approach outperforms state-of-the-art models, achieving higher slot-filling F1, intent classification accuracy, and overall sentence-level understanding accuracy.
{"title":"A two-stage line graph reasoning framework for multi-intent spoken language understanding","authors":"Shijie Duan ,&nbsp;Long Yu ,&nbsp;Shengwei Tian","doi":"10.1016/j.neucom.2026.133053","DOIUrl":"10.1016/j.neucom.2026.133053","url":null,"abstract":"<div><div>Spoken language understanding (SLU) is a core component of dialogue systems, responsible for intent detection and slot filling to interpret user utterances. We propose a two-stage line-graph-based framework for joint <em>multi-intent</em> SLU. In the first stage, our model uses cross-attention to encode token-level and intent-level features and leverages domain information to perform preliminary slot tagging and intent prediction. In the second stage, we construct an intent-slot graph from the first-stage outputs and then apply a line graph neural network to explicitly model interactions between each intent and its associated slots. This line graph reasoning mechanism reduces interference between multiple intents and enables effective bidirectional information flow, thereby enhancing semantic fusion of intent and slot cues. 
Experiments on the MixATIS and MixSNIPS benchmarks, as well as the challenging BlendATIS and BlendSNIPS datasets of blended utterances, demonstrate that the proposed approach outperforms state-of-the-art models, achieving higher slot-filling F<sub>1</sub>, intent classification accuracy, and overall sentence-level understanding accuracy.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"676 ","pages":"Article 133053"},"PeriodicalIF":6.5,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147386785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
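The "line graph" the abstract builds over the intent-slot graph is a standard construction: the line graph's nodes are the original graph's edges, and two such nodes are adjacent when the underlying edges share an endpoint. A minimal pure-Python sketch (intent and slot names here are invented, not from the paper's datasets):

```python
# Build the line graph of an intent-slot graph.
# Nodes of the line graph are the intent-slot edges of the original graph;
# two line-graph nodes are connected when the underlying edges share an
# intent or a slot.

def line_graph(edges):
    """edges: list of (intent, slot) pairs, assumed to have distinct names
    across the two roles. Returns adjacency as a dict mapping each edge to
    the set of edges that share an endpoint with it."""
    adj = {e: set() for e in edges}
    for i, e1 in enumerate(edges):
        for e2 in edges[i + 1:]:
            if set(e1) & set(e2):  # shared intent or shared slot
                adj[e1].add(e2)
                adj[e2].add(e1)
    return adj

# Hypothetical utterance with two intents sharing the slot "city":
edges = [("BookFlight", "city"), ("BookFlight", "date"), ("GetWeather", "city")]
lg = line_graph(edges)
```

Message passing on `lg` then lets each intent-slot pair exchange information only with pairs it actually shares an intent or slot with, which matches the abstract's goal of reducing interference between unrelated intents.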
Synchronization of fractional-order delayed fuzzy memristive neural networks with unknown parameters and reaction-diffusion terms
IF 6.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-05-01 Epub Date : 2026-02-14 DOI: 10.1016/j.neucom.2026.133037
Haining Li , Hong-Li Li , Long Zhang , Hai Zhang , Yonggui Kao
This paper investigates the global asymptotic synchronization issue of fractional-order delayed fuzzy memristive neural networks with unknown parameters and reaction-diffusion terms. Firstly, to make the model established in this paper more general, activation functions are extended to the discontinuous case. Next, two novel controllers, namely a delayed state-feedback controller and an adaptive controller, are designed. By employing some useful analytical techniques and the Gronwall-Bellman inequality, some easily validated criteria are derived to guarantee global asymptotic synchronization. Eventually, the effectiveness of the developed theoretical results is validated through an illustrative example.
{"title":"Synchronization of fractional-order delayed fuzzy memristive neural networks with unknown parameters and reaction-diffusion terms","authors":"Haining Li ,&nbsp;Hong-Li Li ,&nbsp;Long Zhang ,&nbsp;Hai Zhang ,&nbsp;Yonggui Kao","doi":"10.1016/j.neucom.2026.133037","DOIUrl":"10.1016/j.neucom.2026.133037","url":null,"abstract":"<div><div>This paper investigates the global asymptotic synchronization issue of fractional-order delayed fuzzy memristive neural networks with unknown parameters and reaction-diffusion terms. Firstly, to make the model established in this paper more general, activation functions are extended to the discontinuous case. Next, two novel controllers, namely a delayed state-feedback controller and an adaptive controller, are designed. By employing some useful analytical techniques and the Gronwall-Bellman inequality, some easily validated criteria are derived to guarantee global asymptotic synchronization. Eventually, the effectiveness of the developed theoretical results is validated through an illustrative example.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"676 ","pages":"Article 133037"},"PeriodicalIF":6.5,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147386367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
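For readers outside fractional calculus, the standard background behind this line of work (generic definitions, not the paper's exact model) is the Caputo derivative of order 0 &lt; α &lt; 1, under which the networks evolve, and the drive-response error whose decay defines global asymptotic synchronization:

```latex
% Generic background only; the paper's specific delayed reaction-diffusion
% model is not reproduced here.
{}^{C}\!D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{f'(s)}{(t-s)^{\alpha}}\,\mathrm{d}s,
\qquad 0 < \alpha < 1,
```

```latex
e(t) \;=\; y(t) - x(t),
\qquad
\lim_{t \to \infty} \lVert e(t) \rVert \;=\; 0,
```

where \(x\) and \(y\) denote the states of the drive and response networks; the paper's controllers are designed so that the second condition holds despite the unknown parameters.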
GeomFlow: Geometry-aware adaptive diffusion model via Hessian information
IF 6.5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-05-01 Epub Date : 2026-02-06 DOI: 10.1016/j.neucom.2026.132934
Haozhuo Cao , Xiaorui Wang , Liyang Yu , Wangcai Ding
Score-based generative models have achieved remarkable fidelity by employing stochastic differential equations (SDEs) to progressively transform data into noise. However, standard approaches suffer from a fundamental geometric mismatch: they apply spatially uniform diffusion dynamics across data manifolds that exhibit highly heterogeneous curvature. This “one-size-fits-all” strategy leads to inefficient sampling—over-computing in flat regions while under-resolving complex high-frequency details. To address this, we propose GeomFlow, a novel Geometry-Aware Adaptive Diffusion Model. GeomFlow synergizes two key mechanisms: a global learnable noise schedule that optimizes the macroscopic noise progression, and a geometric complexity estimator that utilizes a robust stochastic approximation of the Hessian trace to actively modulate local diffusion strength. Theoretically, we prove that our geometry-aware reverse process is equivalent to Riemannian preconditioned Langevin dynamics, enabling accelerated convergence and better escape from saddle points. Extensive experiments demonstrate that it achieves highly competitive performance on CIFAR-10 (FID 2.14) and CelebA-HQ, demonstrating superior structural understanding in conditional generation and image inpainting tasks with significant improvements in preserving semantic consistency and recovering missing texture details.
{"title":"GeomFlow: Geometry-aware adaptive diffusion model via Hessian information","authors":"Haozhuo Cao ,&nbsp;Xiaorui Wang ,&nbsp;Liyang Yu ,&nbsp;Wangcai Ding","doi":"10.1016/j.neucom.2026.132934","DOIUrl":"10.1016/j.neucom.2026.132934","url":null,"abstract":"<div><div>Score-based generative models have achieved remarkable fidelity by employing stochastic differential equations (SDEs) to progressively transform data into noise. However, standard approaches suffer from a fundamental geometric mismatch: they apply spatially uniform diffusion dynamics across data manifolds that exhibit highly heterogeneous curvature. This “one-size-fits-all” strategy leads to inefficient sampling—over-computing in flat regions while under-resolving complex high-frequency details. To address this, we propose GeomFlow, a novel Geometry-Aware Adaptive Diffusion Model. GeomFlow synergizes two key mechanisms: a global learnable noise schedule that optimizes the macroscopic noise progression, and a geometric complexity estimator that utilizes a robust stochastic approximation of the Hessian trace to actively modulate local diffusion strength. Theoretically, we prove that our geometry-aware reverse process is equivalent to Riemannian preconditioned Langevin dynamics, enabling accelerated convergence and better escape from saddle points. 
Extensive experiments demonstrate that it achieves highly competitive performance on CIFAR-10 (FID 2.14) and CelebA-HQ, demonstrating superior structural understanding in conditional generation and image inpainting tasks with significant improvements in preserving semantic consistency and recovering missing texture details.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"676 ","pages":"Article 132934"},"PeriodicalIF":6.5,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146172692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
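The "robust stochastic approximation of the Hessian trace" in the GeomFlow abstract is most commonly realized as a Hutchinson-style estimator: for Rademacher probe vectors v, E[vᵀHv] = tr(H), so averaging vᵀHv over samples approximates the trace without forming H's diagonal. Whether GeomFlow uses exactly this form is an assumption; the matrix and sample count below are made up for illustration:

```python
import random

# Hutchinson-style stochastic trace estimator (a common technique for
# Hessian traces; the specific estimator used by GeomFlow is an assumption).

def matvec(H, v):
    """Dense matrix-vector product for a list-of-lists matrix."""
    return [sum(h * x for h, x in zip(row, v)) for row in H]

def hutchinson_trace(H, num_samples=2000, seed=0):
    """Estimate tr(H) as the sample mean of v^T H v over Rademacher v.
    In practice H is never materialized: only Hessian-vector products
    (here stood in by matvec) are needed."""
    rng = random.Random(seed)
    n = len(H)
    total = 0.0
    for _ in range(num_samples):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]  # Rademacher probe
        Hv = matvec(H, v)
        total += sum(a * b for a, b in zip(v, Hv))        # v^T H v
    return total / num_samples

# Hypothetical 3x3 "Hessian" with exact trace 1 + 4 + 7 = 12:
H = [[1.0, 0.5, 0.2],
     [0.5, 4.0, 0.1],
     [0.2, 0.1, 7.0]]
est = hutchinson_trace(H)
```

In a diffusion model the matvec would be a Hessian-vector product obtained by automatic differentiation of the score network, so the local curvature proxy costs only a few extra backward passes per point.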