
Latest Publications in Neural Networks

I²HGNN: Iterative Interpretable HyperGraph Neural Network for semi-supervised classification.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-11-22 | DOI: 10.1016/j.neunet.2024.106929
Hongwei Zhang, Saizhuo Wang, Zixin Hu, Yuan Qi, Zengfeng Huang, Jian Guo

Learning on hypergraphs has garnered significant attention recently due to their ability to effectively represent complex higher-order interactions among multiple entities compared to conventional graphs. Nevertheless, the majority of existing methods are direct extensions of graph neural networks, and they exhibit noteworthy limitations. Specifically, most of these approaches primarily rely on either the Laplacian matrix with information distortion or heuristic message passing techniques. The former tends to escalate algorithmic complexity, while the latter lacks a solid theoretical foundation. To address these limitations, we propose a novel hypergraph neural network named I²HGNN, which is grounded in an energy minimization function formulated for hypergraphs. Our analysis reveals that propagation layers align well with the message-passing paradigm in the context of hypergraphs. I²HGNN achieves a favorable trade-off between performance and interpretability. Furthermore, it effectively balances the significance of node features and hypergraph topology across a diverse range of datasets. We conducted extensive experiments on 15 datasets, and the results highlight the superior performance of I²HGNN in the task of hypergraph node classification across nearly all benchmarking datasets.
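
To make the energy-minimization view concrete, here is a minimal sketch of hypergraph propagation derived from a generic energy E(F) = ||F - X||²_F + λ·tr(FᵀLF), where L = I - S is the normalized hypergraph Laplacian of Zhou et al. The uniform hyperedge weights and the closed-form balance α = λ/(1+λ) are assumptions for illustration; this is a reading of the message-passing connection, not the authors' exact I²HGNN formulation.

```python
import numpy as np

# Illustrative sketch: fixed-point iteration minimizing
#   E(F) = ||F - X||_F^2 + lam * tr(F^T (I - S) F),
# which gives F <- (1 - alpha) X + alpha S F with alpha = lam / (1 + lam).
def hypergraph_propagation(H, X, lam=1.0, n_iters=10):
    """H: (n_nodes, n_edges) incidence matrix; X: (n_nodes, d) node features."""
    w = np.ones(H.shape[1])                               # hyperedge weights (uniform here)
    Dv = H @ w                                            # node degrees
    De = H.sum(axis=0)                                    # hyperedge degrees
    Dv_isqrt = np.diag(1.0 / np.sqrt(Dv))
    S = Dv_isqrt @ H @ np.diag(w / De) @ H.T @ Dv_isqrt   # normalized propagation operator
    alpha = lam / (1.0 + lam)                             # balances node features vs. topology
    F = X.copy()
    for _ in range(n_iters):                              # unrolled iterations play the role of propagation layers
        F = (1.0 - alpha) * X + alpha * (S @ F)
    return F

# toy usage: 5 nodes, 3 hyperedges
H = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1], [1, 0, 1]], dtype=float)
X = np.random.randn(5, 4)
print(hypergraph_propagation(H, X).shape)                 # (5, 4)
```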

Citations: 0
Stabilizing sequence learning in stochastic spiking networks with GABA-Modulated STDP.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-12-07 | DOI: 10.1016/j.neunet.2024.106985
Marius Vieth, Jochen Triesch

Cortical networks are capable of unsupervised learning and spontaneous replay of complex temporal sequences. Endowing artificial spiking neural networks with similar learning abilities remains a challenge. In particular, it is unresolved how different plasticity rules can contribute to both learning and the maintenance of network stability during learning. Here we introduce a biologically inspired form of GABA-Modulated Spike Timing-Dependent Plasticity (GMS) and demonstrate its ability to permit stable learning of complex temporal sequences including natural language in recurrent spiking neural networks. Motivated by biological findings, GMS utilizes the momentary level of inhibition onto excitatory cells to adjust both the magnitude and sign of Spike Timing-Dependent Plasticity (STDP) of connections between excitatory cells. In particular, high levels of inhibition in the network cause depression of excitatory-to-excitatory connections. We demonstrate the effectiveness of this mechanism during several sequence learning experiments with character- and token-based text inputs as well as visual input sequences. We show that GMS maintains stability during learning and spontaneous replay and permits the network to form a clustered hierarchical representation of its input sequences. Overall, we provide a biologically inspired model of unsupervised learning of complex sequences in recurrent spiking neural networks.
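
A minimal sketch of the modulation idea follows, assuming a standard pair-based STDP with exponential eligibility traces; the gating function g(inh) that scales the update and flips its sign under strong inhibition is an illustrative assumption, not the paper's exact GMS rule.

```python
import numpy as np

def gms_update(w, pre_spike, post_spike, pre_trace, post_trace, inh_level,
               lr=1e-3, tau=20.0, dt=1.0, inh_threshold=0.5):
    """One GABA-modulated STDP step for a single E->E synapse (illustrative)."""
    # exponentially decaying eligibility traces
    pre_trace = pre_trace * np.exp(-dt / tau) + pre_spike
    post_trace = post_trace * np.exp(-dt / tau) + post_spike
    # classic pair-based STDP terms
    potentiation = post_spike * pre_trace                  # pre-before-post pairings
    depression = pre_spike * post_trace                    # post-before-pre pairings
    # GABA modulation: weak inhibition -> g > 0 (Hebbian), strong inhibition -> g < 0 (depressing)
    g = 1.0 - inh_level / inh_threshold
    dw = lr * g * (potentiation - depression)
    return np.clip(w + dw, 0.0, 1.0), pre_trace, post_trace

# toy usage: pre fires, then post fires one step later, under weak vs. strong inhibition
w, pre_tr, post_tr = 0.5, 0.0, 0.0
_, pre_tr, post_tr = gms_update(w, 1, 0, pre_tr, post_tr, inh_level=0.1)
w_weak, *_ = gms_update(w, 0, 1, pre_tr, post_tr, inh_level=0.1)
w_strong, *_ = gms_update(w, 0, 1, pre_tr, post_tr, inh_level=1.0)
print(w_weak > w, w_strong < w)                            # True True: same timing, opposite sign
```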

Citations: 0
FedMEKT: Distillation-based embedding knowledge transfer for multimodal federated learning.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-12-09 | DOI: 10.1016/j.neunet.2024.107017
Huy Q Le, Minh N H Nguyen, Chu Myaet Thwal, Yu Qiao, Chaoning Zhang, Choong Seon Hong

Federated learning (FL) enables a decentralized machine learning paradigm for multiple clients to collaboratively train a generalized global model without sharing their private data. Most existing works have focused on designing FL systems for unimodal data, limiting their potential to exploit valuable multimodal data for future personalized applications. Moreover, the majority of FL approaches still rely on labeled data at the client side, which is often constrained by the inability of users to self-annotate their data in real-world applications. In light of these limitations, we propose a novel multimodal FL framework, namely FedMEKT, based on a semi-supervised learning approach to leverage representations from different modalities. To address the challenges of modality discrepancy and labeled data constraints in existing FL systems, our proposed FedMEKT framework comprises local multimodal autoencoder learning, generalized multimodal autoencoder construction, and generalized classifier learning. Bringing this concept into the proposed framework, we develop a distillation-based multimodal embedding knowledge transfer mechanism which allows the server and clients to exchange joint multimodal embedding knowledge extracted from a multimodal proxy dataset. Specifically, our FedMEKT iteratively updates the generalized global encoders with joint multimodal embedding knowledge from participating clients through upstream and downstream multimodal embedding knowledge transfer for local learning. Through extensive experiments on four multimodal datasets, we demonstrate that FedMEKT not only achieves superior global encoder performance in linear evaluation but also guarantees user privacy for personal data and model parameters while demanding less communication cost than other baselines.
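
The sketch below shows the distillation-on-a-proxy-dataset idea in its simplest form: client encoders embed a shared proxy batch, the server averages the embeddings into joint knowledge, and the generalized global encoder is distilled toward it. The encoder architecture, mean aggregation, and MSE distillation loss are assumptions for illustration, not the exact FedMEKT objective.

```python
import torch
import torch.nn as nn

def make_encoder(d_in=32, d_emb=16):
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_emb))

proxy_batch = torch.randn(128, 32)              # shared proxy data (one modality shown for brevity)
clients = [make_encoder() for _ in range(5)]    # stand-ins for locally trained client encoders
global_encoder = make_encoder()

# upstream transfer: clients embed the proxy batch; the server aggregates into joint knowledge
with torch.no_grad():
    joint_embedding = torch.stack([c(proxy_batch) for c in clients]).mean(dim=0)

# server-side distillation of the generalized global encoder toward the joint embedding
opt = torch.optim.Adam(global_encoder.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(global_encoder(proxy_batch), joint_embedding)
    loss.backward()
    opt.step()

# downstream transfer would distill each client encoder toward the updated global embedding
# in the same fashion, closing the iterative loop described above.
```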

Citations: 0
Revisiting the problem of learning long-term dependencies in recurrent neural networks.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-11-26 | DOI: 10.1016/j.neunet.2024.106887
Liam Johnston, Vivak Patel, Yumian Cui, Prasanna Balaprakash

Recurrent neural networks (RNNs) are an important class of models for learning sequential behavior. However, training RNNs to learn long-term dependencies is a tremendously difficult task, and this difficulty is widely attributed to the vanishing and exploding gradient (VEG) problem. Since it was first characterized 30 years ago, the belief that if VEG occurs during optimization then RNNs learn long-term dependencies poorly has become a central tenet in the RNN literature and has been steadily cited as motivation for a wide variety of research advancements. In this work, we revisit and interrogate this belief using a large factorial experiment where more than 40,000 RNNs were trained, and provide evidence contradicting this belief. Motivated by these findings, we re-examine the original discussion that analyzed latching behavior in RNNs by way of hyperbolic attractors, and ultimately demonstrate that these dynamics do not fully capture the learned characteristics of RNNs. Our findings suggest that these models are fully capable of learning dynamics that do not correspond to hyperbolic attractors, and that the choice of hyper-parameters, namely learning rate, has a substantial impact on whether an RNN will be able to learn long-term dependencies.
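
As a concrete illustration of the quantity at the center of this debate, the sketch below measures the norm of the loss gradient with respect to each hidden state of a vanilla RNN; rapid decay toward early timesteps signals vanishing gradients, while growth signals explosion. This is a generic VEG diagnostic, not the paper's factorial experimental protocol.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn_cell = nn.RNNCell(input_size=8, hidden_size=32)
readout = nn.Linear(32, 1)

T, batch = 50, 16
x = torch.randn(T, batch, 8)
h = torch.zeros(batch, 32)
hidden_states = []
for t in range(T):
    h = rnn_cell(x[t], h)
    h.retain_grad()                  # keep gradients for non-leaf hidden states
    hidden_states.append(h)

loss = readout(hidden_states[-1]).pow(2).mean()   # loss depends only on the final state
loss.backward()

grad_norms = [hs.grad.norm().item() for hs in hidden_states]
print(f"||dL/dh_1|| = {grad_norms[0]:.2e}, ||dL/dh_T|| = {grad_norms[-1]:.2e}")
```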

Citations: 0
Lie group convolution neural networks with scale-rotation equivariance.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-11-28 | DOI: 10.1016/j.neunet.2024.106980
Weidong Qiao, Yang Xu, Hui Li

The weight-sharing mechanism of convolutional kernels ensures the translation equivariance of convolutional neural networks (CNNs) but not scale and rotation equivariance. This study proposes a SIM(2) Lie group-CNN, which can simultaneously keep scale, rotation, and translation equivariance for image classification tasks. The SIM(2) Lie group-CNN includes a lifting module, a series of group convolution modules, a global pooling layer, and a classification layer. The lifting module transfers the input image from Euclidean space to Lie group space, and the group convolution is parameterized through a fully connected network using the Lie Algebra coefficients of Lie group elements as inputs to achieve scale and rotation equivariance. It is worth noting that the mapping relationship between SIM(2) and its Lie Algebra and the distance measure of SIM(2) are defined explicitly in this paper, thus solving the problem of the metric of features on the space of SIM(2) Lie group, which contrasts with other Lie groups characterized by a single element, such as SO(2). The scale-rotation equivariance of Lie group-CNN is verified, and the best recognition accuracy is achieved on three categories of image datasets. Consequently, the SIM(2) Lie group-CNN can successfully extract geometric features and perform equivariant recognition on images with rotation and scale transformations.
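
The parameterization idea, a group-convolution kernel generated by a fully connected network from the Lie-algebra coefficients of group elements, can be sketched as follows. The four sim(2) coordinates (rotation angle, log-scale, two translations), the MLP width, and the sampling grid are illustrative assumptions rather than the paper's architecture.

```python
import math
import torch
import torch.nn as nn

class LieAlgebraKernel(nn.Module):
    """Kernel values produced by an MLP evaluated on sim(2) Lie-algebra coordinates."""
    def __init__(self, c_in, c_out, hidden=32):
        super().__init__()
        self.c_in, self.c_out = c_in, c_out
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),       # 4 = dimension of the sim(2) Lie algebra
            nn.Linear(hidden, c_in * c_out),
        )

    def forward(self, algebra_coords):
        # algebra_coords: (n_samples, 4) -> kernel values (n_samples, c_out, c_in)
        w = self.mlp(algebra_coords)
        return w.view(-1, self.c_out, self.c_in)

# sample a small grid of group elements in algebra coordinates (pure rotations here)
thetas = torch.linspace(0, 2 * math.pi, 8)
coords = torch.stack([thetas, torch.zeros(8), torch.zeros(8), torch.zeros(8)], dim=1)

kernel = LieAlgebraKernel(c_in=3, c_out=16)
weights = kernel(coords)                           # (8, 16, 3): one filter per sampled group element
print(weights.shape)
```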

Citations: 0
An extrapolation-driven network architecture for physics-informed deep learning.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-12-05 | DOI: 10.1016/j.neunet.2024.106998
Yong Wang, Yanzhong Yao, Zhiming Gao

Current physics-informed neural network (PINN) implementations with sequential learning strategies often experience some weaknesses, such as the failure to reproduce the previous training results when using a single network, the difficulty to strictly ensure continuity and smoothness at the time interval nodes when using multiple networks, and the increase in complexity and computational overhead. To overcome these shortcomings, we first investigate the extrapolation capability of the PINN method for time-dependent PDEs. Taking advantage of this extrapolation property, we generalize the training result obtained in a specific time subinterval to larger intervals by adding a correction term to the network parameters of the subinterval. The correction term is determined by further training with the sample points in the added subinterval. Secondly, by designing an extrapolation control function with special characteristics and combining it with a correction term, we construct a new neural network architecture whose network parameters are coupled with the time variable, which we call the extrapolation-driven network architecture. Based on this architecture, using a single neural network, we can obtain the overall PINN solution of the whole domain with the following two characteristics: (1) it completely inherits the local solution of the interval obtained from the previous training, (2) at the interval node, it strictly maintains the continuity and smoothness that the true solution has. The extrapolation-driven network architecture allows us to divide a large time domain into multiple subintervals and solve the time-dependent PDEs one by one in a chronological order. This training scheme respects the causality principle and effectively overcomes the difficulties of the conventional PINN method in solving the evolution equation on a large time domain. Numerical experiments verify the performance of our method. The data and code accompanying this paper are available at https://github.com/wangyong1301108/E-DNN.
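
A minimal sketch of the "inherited solution plus gated correction" idea is given below. Note that the paper couples the correction with the network parameters themselves; for brevity this sketch applies a smooth extrapolation-control gate at the output level, an illustrative simplification that still keeps the previously trained solution and first-order smoothness at the interval node.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

class ExtrapolationPINN(nn.Module):
    """u(x, t) = u_base(x, t) + g(t) * u_corr(x, t), with g vanishing smoothly on [0, t1]."""
    def __init__(self, base, t1):
        super().__init__()
        self.base = base                        # frozen network already trained on [0, t1]
        self.correction = MLP()                 # trained only on the appended subinterval
        self.t1 = t1
        for p in self.base.parameters():
            p.requires_grad_(False)
    def gate(self, t):
        return torch.relu(t - self.t1) ** 2     # g(t1) = 0 and g'(t1) = 0: continuity and smoothness
    def forward(self, x, t):
        return self.base(x, t) + self.gate(t) * self.correction(x, t)

model = ExtrapolationPINN(MLP(), t1=1.0)
x, t = torch.rand(10, 1), torch.rand(10, 1) * 2.0
print(model(x, t).shape)                        # (10, 1); only the correction network is trainable
```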

Citations: 0
DFA-mode-dependent stability of impulsive switched memristive neural networks under channel-covert aperiodic asynchronous attacks.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-12-01 | DOI: 10.1016/j.neunet.2024.106962
Xinyi Han, Yongbin Yu, Xiangxiang Wang, Xiao Feng, Jingya Wang, Jingye Cai, Kaibo Shi, Shouming Zhong

This article is concerned with the deterministic finite automaton-mode-dependent (DFAMD) exponential stability problem of impulsive switched memristive neural networks (SMNNs) with aperiodic asynchronous attacks and the network covert channel. First, unlike the existing literature on SMNNs, this article focuses on DFA to drive mode switching, which facilitates precise system behavior modeling based on deterministic rules and input characters. To eliminate the periodicity and consistency constraints of traditional attacks, this article presents the multichannel aperiodic asynchronous denial-of-service (DoS) attacks, allowing for the diversity of attack sequences. Meanwhile, the network covert channel with a security layer is exploited and its dynamic adjustment is realized jointly through the dynamic weighted try-once-discard (DWTOD) protocol and selector, which can reduce network congestion, improve data security, and enhance system defense capability. In addition, this article proposes a novel mode-dependent hybrid controller composed of output feedback control and mode-dependent impulsive control, with the goal of increasing system flexibility and efficiency. Inspired by the semi-tensor product (STP) technique, Lyapunov-Krasovskii functions, and inequality technology, the novel exponential stability conditions are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the developed approach.
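
To illustrate the DFA-driven switching ingredient in isolation: a deterministic finite automaton reads input characters, and its current state selects the active controller mode. The states, alphabet, and feedback gains below are toy values, not the paper's memristive system or its stability conditions.

```python
class DFA:
    """Deterministic finite automaton whose state selects the active controller mode."""
    def __init__(self, transitions, start):
        self.transitions = transitions          # dict: (state, input character) -> next state
        self.state = start
    def step(self, char):
        self.state = self.transitions[(self.state, char)]
        return self.state

transitions = {
    ("s0", "a"): "s1", ("s0", "b"): "s0",
    ("s1", "a"): "s1", ("s1", "b"): "s2",
    ("s2", "a"): "s0", ("s2", "b"): "s2",
}
mode_gains = {"s0": 0.5, "s1": 1.0, "s2": 2.0}  # output-feedback gain associated with each mode

dfa = DFA(transitions, start="s0")
for char in "abba":
    mode = dfa.step(char)
    print(f"input '{char}' -> mode {mode}, feedback gain {mode_gains[mode]}")
```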

Citations: 0
Tensor dictionary-based heterogeneous transfer learning to study emotion-related gender differences in brain.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-12-03 | DOI: 10.1016/j.neunet.2024.106974
Lan Yang, Chen Qiao, Takafumi Kanamori, Vince D Calhoun, Julia M Stephen, Tony W Wilson, Yu-Ping Wang

In practice, collecting auxiliary labeled data with the same feature space from multiple domains is difficult. Thus, we focus on heterogeneous transfer learning to address the problem of insufficient sample sizes in neuroimaging. Viewing subjects, time, and features as dimensions, brain activation and dynamic functional connectivity data can be treated as high-order heterogeneous data whose heterogeneity arises from distinct feature spaces. To use the heterogeneous prior knowledge from the low-dimensional brain activation data to improve the classification performance of high-dimensional dynamic functional connectivity data, we propose a tensor dictionary-based heterogeneous transfer learning framework. It combines supervised tensor dictionary learning with heterogeneous transfer learning to enhance high-order heterogeneous knowledge sharing. The former can encode the underlying discriminative features in high-order data into dictionaries, while the latter can transfer heterogeneous knowledge encoded in dictionaries through a feature transformation derived from the mathematical relationship between domains. The primary focus of this paper is gender classification using fMRI data to identify emotion-related brain gender differences during adolescence. Additionally, experiments on simulated data and EEG data are included to demonstrate the generalizability of the proposed method. Experimental results indicate that incorporating prior knowledge significantly enhances classification performance. Further analysis of brain gender differences suggests that temporal variability in brain activity explains differences in emotion regulation strategies between genders. By adopting the heterogeneous knowledge sharing strategy, the proposed framework can capture the multifaceted characteristics of the brain, improve the generalization of the model, and reduce training costs. Understanding the gender-specific neural mechanisms of emotional cognition helps to develop gender-specific treatments for neurological diseases.
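
A stripped-down sketch of the dictionary-coding transfer idea follows, using vector dictionaries instead of tensor dictionaries for brevity: each domain's features are encoded as coefficients over its own dictionary, and a linear map estimated on subjects observed in both domains carries knowledge from the low-dimensional activation domain toward the high-dimensional connectivity domain. All shapes, the ridge coding, and the least-squares map are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def ridge_code(X, D, lam=0.1):
    """Encode rows of X (n, d) over dictionary D (k, d); returns codes (n, k)."""
    k = D.shape[0]
    return X @ D.T @ np.linalg.inv(D @ D.T + lam * np.eye(k))

rng = np.random.default_rng(0)
n_shared = 50                                     # subjects observed in both domains
X_src = rng.standard_normal((n_shared, 20))       # low-dimensional brain activation features
X_tgt = rng.standard_normal((n_shared, 200))      # high-dimensional dynamic-connectivity features

D_src = rng.standard_normal((8, 20))              # source dictionary (e.g., pre-learned)
D_tgt = rng.standard_normal((8, 200))             # target dictionary

A_src = ridge_code(X_src, D_src)                  # (n_shared, 8) source-domain codes
A_tgt = ridge_code(X_tgt, D_tgt)                  # (n_shared, 8) target-domain codes
# cross-domain feature transformation: least-squares map from source codes to target codes
M, *_ = np.linalg.lstsq(A_src, A_tgt, rcond=None)

X_new_src = rng.standard_normal((5, 20))          # new low-dimensional samples
A_transferred = ridge_code(X_new_src, D_src) @ M  # knowledge carried into the target code space
print(A_transferred.shape)                        # (5, 8)
```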

Citations: 0
Counterfactual learning for higher-order relation prediction in heterogeneous information networks.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-12-10 | DOI: 10.1016/j.neunet.2024.107024
Xuan Guo, Jie Li, Pengfei Jiao, Wang Zhang, Tianpeng Li, Wenjun Wang

Heterogeneous Information Networks (HINs) play a crucial role in modeling complex social systems, where predicting missing links/relations is a significant task. Existing methods primarily focus on pairwise relations, but real-world scenarios often involve multi-entity interactions. For example, in academic collaboration networks, an interaction occurs between a paper, a conference, and multiple authors. These higher-order relations are prevalent but have been underexplored. Moreover, existing methods often neglect the causal relationship between the global graph structure and the state of relations, limiting their ability to capture the fundamental factors driving relation prediction. In this paper, we propose HINCHOR, an end-to-end model for higher-order relation prediction in HINs. HINCHOR introduces a higher-order structure encoder to capture multi-entity proximity information. Then, it focuses on a counterfactual question: "If the global graph structure were different, would the higher-order relation change?" By presenting a counterfactual data augmentation module, HINCHOR utilizes global structure information to generate counterfactual relations. Through counterfactual learning, HINCHOR estimates causal effects while predicting higher-order relations. The experimental results on four constructed benchmark datasets show that HINCHOR outperforms existing state-of-the-art methods.
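
The sketch below renders the abstract's two ingredients in toy form: scoring a higher-order relation (a set of nodes) from structure-aware embeddings, and producing a counterfactual sample by perturbing the global graph structure before re-scoring the same relation. The random-walk embedding and the coherence score are illustrative assumptions, not HINCHOR's encoder or its causal-effect estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def structure_embeddings(A, hops=3):
    """Stack row-normalized k-hop transition probabilities as structure-aware node features."""
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-9)
    feats, Pk = [], np.eye(A.shape[0])
    for _ in range(hops):
        Pk = Pk @ P
        feats.append(Pk)
    return np.concatenate(feats, axis=1)

def relation_score(emb, nodes):
    """Score a multi-entity relation by how tightly its members cluster in embedding space."""
    sub = emb[list(nodes)]
    centroid = sub.mean(axis=0)
    return float(-np.linalg.norm(sub - centroid, axis=1).mean())

A = (rng.random((10, 10)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                                          # undirected toy graph, no self-loops
relation = {0, 2, 5}                                 # a candidate higher-order relation

factual = relation_score(structure_embeddings(A), relation)
A_cf = A * (rng.random(A.shape) > 0.3)               # counterfactual: perturb the global structure
A_cf = np.triu(A_cf, 1)
A_cf = A_cf + A_cf.T
counterfactual = relation_score(structure_embeddings(A_cf), relation)
print(f"factual score {factual:.3f} vs. counterfactual score {counterfactual:.3f}")
```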

Citations: 0
P²ED: A four-quadrant framework for progressive prompt enhancement in 3D interactive medical imaging segmentation.
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2024-12-03 | DOI: 10.1016/j.neunet.2024.106973
Ao Chang, Xing Tao, Yuhao Huang, Xin Yang, Jiajun Zeng, Xinrui Zhou, Ruobing Huang, Dong Ni

Interactive segmentation allows active user participation to enhance output quality and resolve ambiguities. This may be especially indispensable to medical image segmentation to address complex anatomy and customization to varying user requirements. Existing approaches often encounter issues such as information dilution, limited adaptability to diverse user interactions, and insufficient response. To address these challenges, we present a novel 3D interactive framework, P²ED, that divides the task into four quadrants. It is equipped with a multi-granular prompt encrypter to extract prompt features from various hierarchical levels, along with a progressive hierarchical prompt decrypter to adaptively heighten the attention to the scarce prompt features along three spatial axes. Finally, a calibration module is appended to further align the prediction with user intentions. Extensive experiments demonstrate that the proposed P²ED achieves accurate results with fewer user interactions compared to state-of-the-art methods and is effective in raising the upper limit of segmentation performance. The code will be released at https://github.com/chuyhu/P2ED.
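
The "heighten attention along three spatial axes" idea can be sketched as follows: a sparse 3D prompt map (e.g., user clicks) is pooled along the D, H, and W axes into three 1-D profiles that gate the image features. The pooling and gating choices here are illustrative assumptions, not the P²ED decrypter itself.

```python
import torch
import torch.nn as nn

class AxisPromptAttention(nn.Module):
    """Re-weight 3D features using prompt-derived attention profiles along each spatial axis."""
    def __init__(self, channels):
        super().__init__()
        self.gate_d = nn.Conv1d(1, channels, kernel_size=1)
        self.gate_h = nn.Conv1d(1, channels, kernel_size=1)
        self.gate_w = nn.Conv1d(1, channels, kernel_size=1)

    def forward(self, feats, prompt):
        # feats: (B, C, D, H, W); prompt: (B, 1, D, H, W) sparse click/scribble mask
        pd = prompt.mean(dim=(3, 4))                                     # (B, 1, D)
        ph = prompt.mean(dim=(2, 4))                                     # (B, 1, H)
        pw = prompt.mean(dim=(2, 3))                                     # (B, 1, W)
        ad = torch.sigmoid(self.gate_d(pd)).unsqueeze(-1).unsqueeze(-1)  # (B, C, D, 1, 1)
        ah = torch.sigmoid(self.gate_h(ph)).unsqueeze(2).unsqueeze(-1)   # (B, C, 1, H, 1)
        aw = torch.sigmoid(self.gate_w(pw)).unsqueeze(2).unsqueeze(3)    # (B, C, 1, 1, W)
        return feats * ad * ah * aw                                      # axis-wise re-weighted features

feats = torch.randn(2, 16, 32, 32, 32)
prompt = torch.zeros(2, 1, 32, 32, 32)
prompt[:, :, 16, 16, 16] = 1.0                                           # a single user click
print(AxisPromptAttention(16)(feats, prompt).shape)                      # (2, 16, 32, 32, 32)
```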

Citations: 0