
Latest publications in Neural Networks

Adaptive indefinite kernels in hyperbolic spaces
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-16 | DOI: 10.1016/j.neunet.2024.106803
Pengfei Fang
Learning embeddings in hyperbolic space has gained increasing interest in the community because its negative curvature provides a natural way of encoding data hierarchy. Recent works investigate how kernelization can improve the representation power of hyperbolic embeddings. However, existing developments focus on defining positive definite (pd) kernels, which may not preserve the intriguing properties of hyperbolic spaces, since the structures of hyperbolic spaces are modeled in indefinite spaces (e.g., the Kreĭn space). This paper addresses this issue by developing adaptive indefinite kernels, which better utilize the structures of the Kreĭn space. To this end, we first propose an adaptive embedding function in the Lorentz model and define indefinite Lorentz kernels (iLks) via this embedding function. Owing to the isometry between the Lorentz model and the Poincaré ball, the iLks are further extended to the Poincaré ball, yielding what are termed indefinite Poincaré kernels (iPKs). We evaluate the proposed indefinite kernels across a range of learning scenarios, including image classification, few-shot learning, zero-shot learning, person re-identification, and knowledge distillation. We show that the proposed indefinite kernels bring significant performance gains over the baselines and enjoy better representation power from RKKSs than pd kernels.
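The abstract contrasts pd kernels with indefinite ones rooted in hyperbolic geometry. As a hedged illustration (a classic indefinite hyperbolic kernel, not the paper's iLk/iPK construction), the negative-geodesic-distance kernel on the Poincaré ball necessarily mixes positive and negative eigenvalues: its Gram matrix has zero trace, so it cannot be pd for distinct points.

```python
import numpy as np

def poincare_distance(x, y, eps=1e-12):
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq_diff = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x * x)) * (1.0 - np.sum(y * y))
    return np.arccosh(1.0 + 2.0 * sq_diff / (denom + eps))

def negative_distance_gram(points):
    """Gram matrix K[i, j] = -d_H(x_i, x_j); indefinite for distinct points."""
    n = len(points)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            K[i, j] = K[j, i] = -poincare_distance(points[i], points[j])
    return K

rng = np.random.default_rng(0)
pts = rng.uniform(-0.4, 0.4, size=(6, 2))  # points safely inside the unit ball
K = negative_distance_gram(pts)
eigs = np.linalg.eigvalsh(K)               # mixed signs -> Krein-space kernel
```

Since trace(K) = 0 and K is nonzero, eigenvalues of both signs appear, which is why such kernels live in Kreĭn spaces (RKKSs) rather than RKHSs.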
Citations: 0
Tipping prediction of a class of large-scale radial-ring neural networks
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-16 | DOI: 10.1016/j.neunet.2024.106820
Yunxiang Lu , Min Xiao , Xiaoqun Wu , Hamid Reza Karimi , Xiangpeng Xie , Jinde Cao , Wei Xing Zheng
Understanding the emergence and evolution of collective dynamics in large-scale neural networks remains a complex challenge. This paper seeks to address this gap by applying dynamical systems theory, with a particular focus on tipping mechanisms. First, we introduce a novel (n+mn)-scale radial-ring neural network and employ Coates’ flow graph topological approach to derive the characteristic equation of the linearized network. Second, through deriving stability conditions and predicting the tipping point using an algebraic approach based on the integral element concept, we identify critical factors such as the synaptic transmission delay, the self-feedback coefficient, and the network topology. Finally, we validate the methodology’s effectiveness in predicting the tipping point. The findings reveal that increased synaptic transmission delay can induce and amplify periodic oscillations. Additionally, the self-feedback coefficient and the network topology influence the onset of tipping points. Moreover, the selection of activation function impacts both the number of equilibrium solutions and the convergence speed of the neural network. Lastly, we demonstrate that the proposed large-scale radial-ring neural network exhibits stronger robustness compared to lower-scale networks with a single topology. The results provide a comprehensive depiction of the dynamics observed in large-scale neural networks under the influence of various factor combinations.
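The tipping analysis above hinges on locating where roots of the characteristic equation of the delayed, linearized network cross the imaginary axis. As a hedged, drastically reduced sketch (a single linearized node x'(t) = -a x(t) + b x(t - tau), not the paper's (n+mn)-scale radial-ring network), the first Hopf-type crossing has a closed form: for b < -a < 0, the crossing frequency is omega = sqrt(b^2 - a^2) and the critical delay is tau* = arccos(a/b) / omega.

```python
import numpy as np

def critical_delay(a, b):
    """Smallest delay at which lambda = i*omega solves the characteristic
    equation lambda + a - b*exp(-lambda*tau) = 0, assuming b < -a < 0."""
    assert b < -a < 0
    omega = np.sqrt(b * b - a * a)   # crossing frequency on the imaginary axis
    tau = np.arccos(a / b) / omega   # first root of the phase condition
    return tau, omega

a, b = 1.0, -2.0
tau_c, omega_c = critical_delay(a, b)

# Verify: the characteristic equation should vanish at lambda = i*omega_c.
lam = 1j * omega_c
residual = lam + a - b * np.exp(-lam * tau_c)
```

Beyond tau_c, a pair of roots sits in the right half-plane and the equilibrium tips into periodic oscillation, mirroring the abstract's observation that larger synaptic delays induce and amplify oscillations.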
Citations: 0
Deep fuzzy physics-informed neural networks for forward and inverse PDE problems
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106750
Wenyuan Wu , Siyuan Duan , Yuan Sun , Yang Yu , Dong Liu , Dezhong Peng
As a grid-independent approach to solving partial differential equations (PDEs), Physics-Informed Neural Networks (PINNs) have garnered significant attention due to their unique capability to learn simultaneously from data and from the governing physical equations. Existing PINN methods assume that the data are stable and reliable, but data obtained from commercial simulation software are often inevitably ambiguous and inaccurate. This has a negative impact on the use of PINNs for forward and inverse PDE problems. To overcome these problems, this paper proposes Deep Fuzzy Physics-Informed Neural Networks (FPINNs), which model the uncertainty in the data. Specifically, to capture the uncertainty behind the data, FPINNs learn a fuzzy representation through a fuzzy membership function layer and a fuzzy rule layer. Deep neural networks then learn a neural representation, with which the fuzzy representation is integrated. Finally, the residual of the physical equation and the data error form the two components of the loss function, guiding the network to optimize toward adherence to the physical laws for accurate prediction of the physical field. Extensive experimental results show that FPINNs outperform the comparative methods on forward and inverse PDE problems across four widely used datasets. The demo code will be released at https://github.com/siyuancncd/FPINNs.
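The two-component loss described above (physics residual plus data error) can be sketched on a toy problem. This is a generic PINN-style loss, not the fuzzy-layer FPINN architecture: we fit the ansatz u(t) = exp(-k*t) to the ODE u' + u = 0 together with a few observations of the true (k = 1) solution, so the combined loss should prefer the true parameter. All names below are illustrative.

```python
import numpy as np

def combined_loss(k, t_phys, t_data, u_data, w_phys=1.0, w_data=1.0):
    """PINN-style loss for the ansatz u(t) = exp(-k t) on the ODE u' + u = 0."""
    u_p = np.exp(-k * t_phys)
    residual = -k * u_p + u_p            # analytic u' + u for the ansatz
    phys = np.mean(residual ** 2)        # physics (equation-residual) term
    data = np.mean((np.exp(-k * t_data) - u_data) ** 2)  # data-error term
    return w_phys * phys + w_data * data

t_phys = np.linspace(0.0, 2.0, 32)       # collocation points for the residual
t_data = np.array([0.25, 0.75, 1.5])
u_data = np.exp(-t_data)                 # clean observations of the k=1 solution

loss_true = combined_loss(1.0, t_phys, t_data, u_data)  # zero at the truth
loss_off = combined_loss(0.5, t_phys, t_data, u_data)   # penalized elsewhere
```

In a full PINN the residual is computed by automatic differentiation of the network output rather than analytically, but the loss structure is the same.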
Citations: 0
Local contour features contribute to figure-ground segregation in monkey V4 neural populations and human perception
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106821
Motofumi Shishikura , Itsuki Machida , Hiroshi Tamura , Ko Sakai
Figure-ground (FG) segregation is a crucial step towards the recognition of objects in natural scenes. Gestalt psychologists have emphasized the importance of contour features in the perception of FG. Recent electrophysiological studies have identified a neural population in V4 that shows FG-dependent modulation (FG neurons). However, whether contour features contribute to the modulation of the response patterns of this neural population remains unclear. In the present study, we quantified the contour features associated with Gestalt factors in local natural stimuli and examined whether salient contour features evoked reliable perceptual and neural responses by analyzing response consistency (stability) across trials. The results showed a tendency for more salient contour features to evoke greater consistency in the perceptual FG judgments and in the population-based neural responses during FG determination, with a greater partial correlation for curvature and weaker correlations for closure and parallelism. Multiple linear regression analyses demonstrated that perceptual consistency depended similarly on curvature and closure, whereas neural consistency depended significantly on curvature but only weakly on closure. We further observed a strong correlation between the consistencies of the perceptual and neural responses, i.e., stimuli that evoked more stable percepts tended to evoke more stable neural responses. These results indicate that local contour features modulate the responses of the V4 neural population and contribute to the perception of FG organization.
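The statistical workhorse of this analysis, partial correlation (correlating two variables after linearly regressing out a third), is easy to sketch. This is a generic implementation, not the authors' analysis pipeline; the toy data are constructed so that controlling for z recovers a perfect x-y correlation.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z (with an intercept)."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of x on z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of y on z
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# x is orthogonal to z in-sample and y = x + z, so once z is partialled
# out, the remaining variation in y is exactly x.
z = np.array([1.0, 1.0, -1.0, -1.0])
x = np.array([1.0, -1.0, 1.0, -1.0])
y = x + z
pc = partial_corr(x, y, z)
```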
Citations: 0
Wasserstein task embedding for measuring task similarities
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106796
Xinran Liu , Yikun Bai , Yuzhe Lu , Andrea Soltoggio , Soheil Kolouri
Measuring similarities between different tasks is critical in a broad spectrum of machine learning problems, including transfer, multi-task, continual, and meta-learning. Most current approaches to measuring task similarities are architecture-dependent: (1) relying on pre-trained models, or (2) training networks on tasks and using forward transfer as a proxy for task similarity. In this paper, we leverage the optimal transport theory and define a novel task embedding for supervised classification that is model-agnostic, training-free, and capable of handling (partially) disjoint label sets. In short, given a dataset with ground-truth labels, we perform a label embedding through multi-dimensional scaling and concatenate dataset samples with their corresponding label embeddings. Then, we define the distance between two datasets as the 2-Wasserstein distance between their updated samples. Lastly, we leverage the 2-Wasserstein embedding framework to embed tasks into a vector space in which the Euclidean distance between the embedded points approximates the proposed 2-Wasserstein distance between tasks. We show that the proposed embedding leads to a significantly faster comparison of tasks compared to related approaches like the Optimal Transport Dataset Distance (OTDD). Furthermore, we demonstrate the effectiveness of our embedding through various numerical experiments and show statistically significant correlations between our proposed distance and the forward and backward transfer among tasks on a wide variety of image recognition datasets.
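For two equal-size empirical point clouds with uniform weights, the 2-Wasserstein distance reduces to an optimal assignment problem, which makes the dataset distance described above easy to sketch. The label-embedding step is simplified to a fixed per-class vector here (the paper derives it via multi-dimensional scaling); this is a hedged illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2(A, B):
    """Exact 2-Wasserstein distance between equal-size uniform point clouds."""
    cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # squared Euclidean
    rows, cols = linear_sum_assignment(cost)               # optimal matching
    return np.sqrt(cost[rows, cols].mean())

def embed(features, labels, label_vecs):
    """Concatenate each sample with its class's label-embedding vector."""
    return np.hstack([features, label_vecs[labels]])

rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 3))
labels = rng.integers(0, 2, size=8)
label_vecs = np.array([[0.0, 1.0], [1.0, 0.0]])  # stand-in for an MDS embedding

X = embed(feats, labels, label_vecs)
Y = X + np.array([2.0, 0.0, 0.0, 0.0, 0.0])  # translate every sample by 2 units

d_self = wasserstein2(X, X)    # a dataset is at distance 0 from itself
d_shift = wasserstein2(X, Y)   # a rigid translation by v gives exactly ||v||
```

The translation check works because W2 between a point cloud and its rigid translate equals the norm of the translation vector.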
Citations: 0
Graph neural networks for electroencephalogram analysis: Alzheimer’s disease and epilepsy use cases
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106792
Sergi Abadal , Pablo Galván , Alberto Mármol , Nadia Mammone , Cosimo Ieracitano , Michele Lo Giudice , Alessandro Salvini , Francesco Carlo Morabito
Electroencephalography (EEG) is widely used as a non-invasive technique for the diagnosis of several brain disorders, including Alzheimer’s disease and epilepsy. Until recently, diseases were identified from EEG readings by human experts; the relevant markers may be subtle and difficult to find, and their interpretation is subject to human error. Despite the recent emergence of machine learning methods for interpreting EEGs, most approaches cannot capture the underlying, arbitrary non-Euclidean relations between signals from different regions of the human brain. In this context, Graph Neural Networks (GNNs) have gained attention for their ability to effectively analyze complex relationships within many types of graph-structured data, including EEGs, a use case still relatively unexplored. In this paper, we aim to bridge this gap with a study that applies GNNs to the EEG-based detection of Alzheimer’s disease and the discrimination of two different types of seizures. We demonstrate the value of GNNs by showing that a single GNN architecture can achieve state-of-the-art performance in both use cases. Through design-space exploration and explainability analysis, we develop a graph-based transformer that achieves cross-validated accuracies over 89% and 96% in the ternary classification variants of the Alzheimer’s disease and epilepsy use cases, respectively, matching the intuitions of expert neurologists. We also discuss the computational efficiency, generalizability, and potential for real-time operation of GNNs for EEGs, positioning them as a valuable tool for classifying various neurological pathologies and opening new prospects for research and clinical practice.
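A hedged sketch of the first modeling steps such a pipeline needs: build a channel graph from inter-channel correlations of the EEG signals, then apply one symmetric-normalized graph-convolution propagation (the standard GCN rule H' = relu(D^-1/2 (A+I) D^-1/2 H W)). This is a generic construction, not the authors' graph-based transformer; all shapes and thresholds are illustrative.

```python
import numpy as np

def correlation_adjacency(signals, threshold=0.3):
    """Binary adjacency from absolute Pearson correlation between channels.
    signals: array of shape (channels, timesteps)."""
    A = np.abs(np.corrcoef(signals))
    A = (A > threshold).astype(float)
    np.fill_diagonal(A, 0.0)        # no self-loops here; added back below
    return A

def gcn_layer(A, H, W):
    """One GCN propagation: H' = relu(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(2)
eeg = rng.normal(size=(19, 256))    # 19 channels (10-20 montage), toy samples
A = correlation_adjacency(eeg)
H = rng.normal(size=(19, 8))        # initial per-channel features
W = rng.normal(size=(8, 4))         # learnable weights (random stand-in)
H_out = gcn_layer(A, H, W)
```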
Citations: 0
ChatDiff: A ChatGPT-based diffusion model for long-tailed classification
IF 6.0 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106794
Chenxun Deng , Dafang Li , Lin Ji , Chengyang Zhang , Baican Li , Hongying Yan , Jiyuan Zheng , Lifeng Wang , Junguo Zhang
Long-tailed data distributions have been a major challenge for the practical application of deep learning. Information augmentation aims to expand long-tailed data toward a uniform distribution, which provides a feasible way to mitigate the data starvation of underrepresented classes. However, most existing augmentation methods face two significant challenges: (1) limited diversity in generated samples, and (2) the adverse effect of generated negative samples on downstream classification performance. In this paper, we propose a novel information augmentation method, named ChatDiff, to provide diverse positive samples for underrepresented classes and eliminate generated negative samples. Specifically, we start with a prompt template to extract textual prior knowledge from the ChatGPT-3.5 model, enhancing the feature space for underrepresented classes. Using this prior knowledge, a conditional diffusion model then generates semantically rich image samples for tail classes. Moreover, ChatDiff leverages a CLIP-based discriminator to screen out generated negative samples. This process prevents the neural network from learning invalid or erroneous features and further improves long-tailed classification performance. Comprehensive experiments on long-tailed benchmarks such as CIFAR10-LT, CIFAR100-LT, ImageNet-LT, and iNaturalist 2018 validate the effectiveness of our ChatDiff method.
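The CLIP-based screening step reduces to thresholding the cosine similarity between each generated image's embedding and the target class's text embedding, keeping only aligned samples. The embeddings below are synthetic stand-ins (a real pipeline would obtain them from a CLIP image/text encoder pair); this is a hedged sketch, not the ChatDiff implementation, and the threshold is illustrative.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_generated(image_embs, class_emb, threshold=0.5):
    """Indices of generated samples whose embedding aligns with the class."""
    return [i for i, e in enumerate(image_embs)
            if cosine_sim(e, class_emb) >= threshold]

class_emb = np.array([1.0, 0.0, 0.0])     # stand-in text embedding for a class
image_embs = [
    np.array([0.9, 0.1, 0.0]),            # on-class: high similarity, kept
    np.array([0.0, 1.0, 0.0]),            # off-class: similarity 0, dropped
    np.array([0.7, 0.7, 0.0]),            # borderline: cos ~ 0.71, kept at 0.5
]
kept = screen_generated(image_embs, class_emb)
```

Only the surviving indices feed the downstream long-tailed classifier, which is how the screening avoids training on negative samples.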
Citations: 0
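The screening step described in the abstract — a CLIP-based discriminator that removes generated negative samples — can be sketched as a similarity threshold against the target class. Everything below (the toy embeddings, the `screen_generated_samples` name, the 0.5 threshold) is an illustrative stand-in, not the paper's implementation; a real pipeline would obtain the vectors from actual CLIP image and text encoders.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def screen_generated_samples(samples, class_text_embedding, threshold=0.5):
    """Keep generated samples whose (CLIP-like) image embedding aligns with
    the target class's text embedding; drop the rest as negatives.
    `samples` is a list of (sample_id, image_embedding) pairs."""
    kept, removed = [], []
    for sample_id, emb in samples:
        if cosine(emb, class_text_embedding) >= threshold:
            kept.append(sample_id)
        else:
            removed.append(sample_id)
    return kept, removed

# Toy vectors standing in for CLIP outputs.
class_emb = [1.0, 0.0, 0.0]
generated = [
    ("img_a", [0.9, 0.1, 0.0]),   # on-class: high similarity, kept
    ("img_b", [0.0, 1.0, 0.2]),   # off-class: filtered out as a negative
    ("img_c", [0.8, 0.3, 0.1]),   # on-class: kept
]
kept, removed = screen_generated_samples(generated, class_emb)
print(kept, removed)  # → ['img_a', 'img_c'] ['img_b']
```

Filtering before training (rather than down-weighting at loss time) matches the abstract's goal of keeping invalid features out of the classifier entirely.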
A neurodynamic optimization approach to distributed nonconvex optimization based on an HP augmented Lagrangian function
IF 6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1016/j.neunet.2024.106791
Huimin Guan, Yang Liu, Kit Ian Kou, Weihua Gui
This paper develops a neurodynamic model for distributed nonconvex-constrained optimization. In the distributed constrained optimization model, the objective function and inequality constraints do not need to be convex, and equality constraints do not need to be affine. A Hestenes–Powell augmented Lagrangian function for handling the nonconvexity is established, and a neurodynamic system is developed based on it. The system is proved to be stable at a local optimal solution of the optimization model. Two illustrative examples are provided to evaluate the stability and optimality of the developed neurodynamic system.
Neural Networks, Volume 181, Article 106791.
Citations: 0
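A minimal illustration of the idea behind such neurodynamic systems — gradient flow on an augmented Lagrangian, here discretized with forward Euler — is a toy equality-constrained convex problem. The classical Hestenes–Powell augmented Lagrangian for an equality constraint h(x) = 0 is L_c(x, λ) = f(x) + λh(x) + (c/2)h(x)². This sketch is only the single-agent, convex special case; the paper's actual system handles nonconvex objectives, inequality constraints, and distributed agents.

```python
# Gradient flow on an augmented Lagrangian for:
#   minimize f(x) = x1^2 + x2^2   subject to   h(x) = x1 + x2 - 1 = 0
# Dynamics: dx/dt = -grad_x L_c(x, lam),  dlam/dt = h(x).
# The unique optimum is x = (0.5, 0.5) with multiplier lam = -1.
c, dt = 10.0, 0.01           # penalty parameter and Euler step size
x1, x2, lam = 0.0, 0.0, 0.0  # initial state of the neurodynamic system
for _ in range(20000):
    h = x1 + x2 - 1.0
    g1 = 2.0 * x1 + lam + c * h   # dL_c/dx1
    g2 = 2.0 * x2 + lam + c * h   # dL_c/dx2
    x1 -= dt * g1
    x2 -= dt * g2
    lam += dt * h                 # multiplier ascent on the constraint
print(round(x1, 3), round(x2, 3), round(lam, 3))  # → 0.5 0.5 -1.0
```

The state converges to the constrained optimum without ever solving an inner minimization, which is the appeal of the neurodynamic (continuous-time) viewpoint.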
Dictionary trained attention constrained low rank and sparse autoencoder for hyperspectral anomaly detection
IF 6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1016/j.neunet.2024.106797
Xing Hu, Zhixuan Li, Lingkun Luo, Hamid Reza Karimi, Dawei Zhang
Dictionary representations and deep learning Autoencoder (AE) models have proven effective in hyperspectral anomaly detection. Dictionary representations offer self-explanation but struggle with complex scenarios. Conversely, autoencoders can capture details in complex scenes but lack self-explanation. Complex scenarios often involve extensive spatial information, making its utilization crucial in hyperspectral anomaly detection. To effectively combine the advantages of both methods and address the insufficient use of spatial information, we propose an attention constrained low-rank and sparse autoencoder for hyperspectral anomaly detection. This model includes two encoders: an attention constrained low-rank autoencoder (AClrAE) trained with a background dictionary and incorporating a Global Self-Attention Module (GAM) to focus on global spatial information, resulting in improved background reconstruction; and an attention constrained sparse autoencoder (ACsAE) trained with an anomaly dictionary and incorporating a Local Self-Attention Module (LAM) to focus on local spatial information, resulting in enhanced anomaly reconstruction. Finally, to merge the detection results from both encoders, a nonlinear fusion scheme is employed. Experiments on multiple real and synthetic datasets demonstrate the effectiveness and feasibility of the proposed method.
Neural Networks, Volume 181, Article 106797.
Citations: 0
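The nonlinear fusion of the two branches' detection maps can be illustrated on per-pixel reconstruction errors. The fusion form below, `err_bg * exp(-k * err_anom)`, is an assumed example rather than the paper's scheme: a pixel scores high exactly when the background branch (AClrAE) reconstructs it poorly while the anomaly branch (ACsAE) reconstructs it well. All pixel values and reconstructions are toy data.

```python
import math

def reconstruction_error(pixel, recon):
    # Per-pixel squared reconstruction error summed across spectral bands.
    return sum((p - r) ** 2 for p, r in zip(pixel, recon))

def fuse_scores(err_background, err_anomaly, k=1.0):
    """Nonlinear fusion (an assumed illustrative form, not the paper's
    exact scheme): large when the background branch fails on the pixel
    AND the anomaly branch succeeds on it."""
    return err_background * math.exp(-k * err_anomaly)

# Toy 3-band pixels as (pixel, background-AE recon, anomaly-AE recon).
background_pixel = ([0.2, 0.5, 0.3], [0.21, 0.49, 0.31], [0.6, 0.5, 0.4])
anomalous_pixel  = ([0.9, 0.1, 0.8], [0.3, 0.5, 0.4], [0.88, 0.12, 0.79])

def score(entry):
    pixel, recon_bg, recon_anom = entry
    return fuse_scores(reconstruction_error(pixel, recon_bg),
                       reconstruction_error(pixel, recon_anom))

print(score(background_pixel) < score(anomalous_pixel))  # → True
```

Multiplicative fusion is one common choice because either branch can veto a detection; additive fusion would let a single noisy branch dominate the anomaly map.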
Open-set long-tailed recognition via orthogonal prototype learning and false rejection correction
IF 6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1016/j.neunet.2024.106789
Binquan Deng, Aouaidjia Kamel, Chongsheng Zhang
Learning from data with long-tailed and open-ended distributions is highly challenging. In this work, we propose OLPR, which is a new dual-stream Open-set Long-tailed recognition framework based on orthogonal Prototype learning and false Rejection correction. It consists of a Probabilistic Prediction Learning (PPL) branch and a Distance Metric Learning (DML) branch. The former is used to generate prediction probability for image classification. The latter learns orthogonal prototypes for each class by computing three distance losses, which are the orthogonal prototype loss among all the prototypes, the balanced Softmin distance based cross-entropy loss between each prototype and its corresponding input sample, and the adversarial loss for making the open-set space more compact. Furthermore, for open-set learning, instead of merely relying on binary decisions, we propose an Iterative Clustering Module (ICM) to categorize similar open-set samples and correct the false rejected closed-set samples simultaneously. If a sample is detected as a false rejection, i.e., a sample of the known classes is incorrectly identified as belonging to the unknown classes, we will re-classify the sample to the closest known/closed-set class. We conduct extensive experiments on ImageNet-LT, Places-LT, CIFAR-10/100-LT benchmark datasets, as well as a new long-tailed open-ended dataset that we build. Experimental results demonstrate that OLPR improves over the best competitors by up to 2.2% in terms of overall classification accuracy in closed-set settings, and up to 4% in terms of F-measure in open-set settings, which are very remarkable.
Neural Networks, Volume 181, Article 106789.
Citations: 0
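Two ingredients of OLPR's DML branch — the orthogonal prototype loss and Softmin-distance classification — can be sketched in a few lines. Both formulas below are generic illustrations inferred from the abstract: the orthogonality penalty is a standard sum of squared pairwise inner products, and the plain softmin shown here omits the class-frequency reweighting of the paper's balanced variant.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonal_prototype_loss(prototypes):
    """Penalize pairwise alignment: the loss is zero exactly when all
    (unit-norm) class prototypes are mutually orthogonal."""
    loss = 0.0
    for i in range(len(prototypes)):
        for j in range(i + 1, len(prototypes)):
            loss += dot(prototypes[i], prototypes[j]) ** 2
    return loss

def softmin_probs(x, prototypes, tau=1.0):
    """Class probabilities from Euclidean distances to prototypes via a
    softmin: a closer prototype gets a larger probability."""
    d = [math.sqrt(sum((a - b) ** 2 for a, b in zip(x, p)))
         for p in prototypes]
    w = [math.exp(-di / tau) for di in d]
    z = sum(w)
    return [wi / z for wi in w]

protos = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(orthogonal_prototype_loss(protos))       # → 0.0
probs = softmin_probs([0.9, 0.1, 0.0], protos)
print(max(range(3), key=lambda i: probs[i]))   # → 0
```

Orthogonal prototypes keep tail classes from being crowded toward head-class directions, which is why the abstract pairs this loss with the balanced distance-based classifier.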