
Neural Networks: Latest Publications

Deep fuzzy physics-informed neural networks for forward and inverse PDE problems
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106750
As a grid-independent approach for solving partial differential equations (PDEs), Physics-Informed Neural Networks (PINNs) have garnered significant attention due to their unique capability to learn simultaneously from both data and the governing physical equations. Existing PINN methods typically assume that the data are stable and reliable, but data obtained from commercial simulation software are often ambiguous and inaccurate, which degrades the performance of PINNs on forward and inverse PDE problems. To overcome this, this paper proposes Deep Fuzzy Physics-Informed Neural Networks (FPINNs), which model the uncertainty in the data. Specifically, to capture the uncertainty behind the data, FPINNs learn a fuzzy representation through a fuzzy membership function layer and a fuzzy rule layer. A deep neural network then learns a neural representation, and the two representations are integrated. Finally, the residual of the physical equation and the data error form the two components of the loss function, guiding the network to optimize towards adherence to the physical laws for accurate prediction of the physical field. Extensive experimental results show that FPINNs outperform comparative methods in solving forward and inverse PDE problems on four widely used datasets. The demo code will be released at https://github.com/siyuancncd/FPINNs.
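As a rough illustration of the loss structure described above, here is a minimal PyTorch sketch of a plain PINN objective for a 1D Poisson problem, combining the physics residual with the data error; the fuzzy membership and rule layers of FPINNs are omitted, and the network, PDE, and data below are illustrative placeholders, not the authors' code.

```python
import torch

# Minimal sketch (not the authors' code): a composite PINN-style loss
# combining the PDE residual with a data-fitting term, as the abstract
# describes. Example PDE: 1D Poisson, u''(x) = f(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def pinn_loss(x_colloc, x_data, u_data, f):
    x = x_colloc.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    physics_residual = ((d2u - f(x)) ** 2).mean()      # PDE residual term
    data_error = ((net(x_data) - u_data) ** 2).mean()  # data-fitting term
    return physics_residual + data_error

x_c = torch.linspace(0, 1, 64).unsqueeze(1)            # collocation points
x_d = torch.rand(16, 1)                                # toy "measurement" locations
u_d = torch.sin(torch.pi * x_d)                        # toy "measurements"
loss = pinn_loss(x_c, x_d, u_d, lambda x: -torch.pi**2 * torch.sin(torch.pi * x))
loss.backward()
```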
Citations: 0
Local contour features contribute to figure-ground segregation in monkey V4 neural populations and human perception
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106821
Figure-ground (FG) segregation is a crucial step towards the recognition of objects in natural scenes. Gestalt psychologists have emphasized the importance of contour features in the perception of FG. Recent electrophysiological studies have identified a neural population in V4 that shows FG-dependent modulation (FG neurons), but whether contour features contribute to the modulation of this population's response patterns remains unclear. In the present study, we quantified the contour features associated with Gestalt factors in local natural stimuli and examined whether salient contour features evoked reliable perceptual and neural responses by analyzing response consistency (stability) across trials. The results showed a tendency for more salient contour features to evoke greater consistency in perceptual FG judgments and in the population-based neural responses underlying FG determination, with a greater partial correlation for curvature and weaker correlations for closure and parallelism. Multiple linear regression analyses demonstrated that perceptual consistency depended similarly on curvature and closure, whereas neural consistency depended significantly on curvature but only weakly on closure. We further observed a strong correlation between the consistencies of the perceptual and neural responses, i.e., stimuli that evoked more stable percepts tended to evoke more stable neural responses. These results indicate that local contour features modulate the responses of the neural population in V4 and contribute to the perception of FG organization.
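The regression analysis mentioned above can be illustrated with a short NumPy sketch that regresses per-stimulus response consistency on contour-feature saliency; the data and coefficients below are synthetic placeholders, not the study's measurements.

```python
import numpy as np

# Illustrative only: regress per-stimulus consistency on contour-feature
# saliency (curvature, closure, parallelism), mirroring the multiple
# linear regression analysis described in the abstract. Data are synthetic.
rng = np.random.default_rng(0)
n = 200
curvature, closure, parallelism = rng.random((3, n))
consistency = (0.6 * curvature + 0.3 * closure + 0.05 * parallelism
               + 0.1 * rng.standard_normal(n))

X = np.column_stack([curvature, closure, parallelism, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, consistency, rcond=None)
print(dict(zip(["curvature", "closure", "parallelism", "intercept"], coef)))
```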
Citations: 0
Wasserstein task embedding for measuring task similarities
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106796
Measuring similarities between different tasks is critical in a broad spectrum of machine learning problems, including transfer, multi-task, continual, and meta-learning. Most current approaches to measuring task similarities are architecture-dependent: (1) relying on pre-trained models, or (2) training networks on tasks and using forward transfer as a proxy for task similarity. In this paper, we leverage optimal transport theory and define a novel task embedding for supervised classification that is model-agnostic, training-free, and capable of handling (partially) disjoint label sets. In short, given a dataset with ground-truth labels, we perform a label embedding through multi-dimensional scaling and concatenate dataset samples with their corresponding label embeddings. Then, we define the distance between two datasets as the 2-Wasserstein distance between their updated samples. Lastly, we leverage the 2-Wasserstein embedding framework to embed tasks into a vector space in which the Euclidean distance between the embedded points approximates the proposed 2-Wasserstein distance between tasks. We show that the proposed embedding leads to a significantly faster comparison of tasks compared to related approaches like the Optimal Transport Dataset Distance (OTDD). Furthermore, we demonstrate the effectiveness of our embedding through various numerical experiments and show statistically significant correlations between our proposed distance and the forward and backward transfer among tasks on a wide variety of image recognition datasets.
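A minimal sketch of the described pipeline, assuming the POT library (`ot`) and scikit-learn: labels are embedded by MDS (here from distances between class-mean features, an illustrative choice), concatenated with the samples, and two tasks are compared via the 2-Wasserstein distance between the augmented point clouds. The final vector-space embedding step is omitted.

```python
import numpy as np
import ot                       # POT: Python Optimal Transport
from sklearn.manifold import MDS

# Sketch of the pipeline in the abstract (label dissimilarities and
# dimensions are our illustrative choices, not the paper's exact setup).
def augment(X, y, dim=2):
    classes = np.unique(y)
    # MDS label embedding from pairwise distances between class-mean features
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    D = np.linalg.norm(means[:, None] - means[None, :], axis=-1)
    lab = MDS(n_components=dim, dissimilarity="precomputed",
              random_state=0).fit_transform(D)
    # concatenate each sample with its class's label embedding
    return np.hstack([X, lab[np.searchsorted(classes, y)]])

def task_distance(X1, y1, X2, y2):
    Z1, Z2 = augment(X1, y1), augment(X2, y2)
    M = ot.dist(Z1, Z2)         # squared-Euclidean cost matrix
    a = np.full(len(Z1), 1 / len(Z1))
    b = np.full(len(Z2), 1 / len(Z2))
    return np.sqrt(ot.emd2(a, b, M))   # 2-Wasserstein distance
```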
Citations: 0
ChatDiff: A ChatGPT-based diffusion model for long-tailed classification
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106794
Long-tailed data distributions have been a major challenge for the practical application of deep learning. Information augmentation aims to expand long-tailed data into a uniform distribution, providing a feasible way to mitigate the data starvation of underrepresented classes. However, most existing augmentation methods face two significant challenges: (1) limited diversity in generated samples, and (2) the adverse effect of generated negative samples on downstream classification performance. In this paper, we propose a novel information augmentation method, named ChatDiff, that provides diverse positive samples for underrepresented classes and eliminates generated negative samples. Specifically, we start with a prompt template to extract textual prior knowledge from the ChatGPT-3.5 model, enhancing the feature space for underrepresented classes. Using this prior knowledge, a conditional diffusion model then generates semantic-rich image samples for tail classes. Moreover, ChatDiff leverages a CLIP-based discriminator to screen out generated negative samples. This prevents the neural network from learning invalid or erroneous features and further improves long-tailed classification performance. Comprehensive experiments on long-tailed benchmarks such as CIFAR10-LT, CIFAR100-LT, ImageNet-LT, and iNaturalist 2018 validate the effectiveness of our ChatDiff method.
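The CLIP-based screening step might look like the following sketch built on Hugging Face's `transformers`; the prompt template, similarity measure, and threshold are our illustrative assumptions, not the paper's specification.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Sketch (ours, not the paper's code) of CLIP-based screening: keep a
# generated image only if its CLIP embedding matches the target tail-class
# prompt above a similarity threshold (0.25 here is a made-up value).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def screen(images: list, class_name: str, threshold: float = 0.25):
    inputs = processor(text=[f"a photo of a {class_name}"],
                       images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # cosine similarity between each image embedding and the text embedding
    sims = torch.nn.functional.cosine_similarity(
        out.image_embeds, out.text_embeds)
    return [img for img, s in zip(images, sims) if s.item() >= threshold]
```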
Citations: 0
Graph neural networks for electroencephalogram analysis: Alzheimer’s disease and epilepsy use cases
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neunet.2024.106792
Electroencephalography (EEG) is widely used as a non-invasive technique for the diagnosis of several brain disorders, including Alzheimer’s disease and epilepsy. Until recently, disorders have been identified from EEG readings by human experts, a process that depends on subtle, hard-to-find markers and is subject to human error. Despite the recent emergence of machine learning methods for the interpretation of EEGs, most approaches cannot capture the underlying arbitrary non-Euclidean relations between signals from the different regions of the human brain. In this context, Graph Neural Networks (GNNs) have gained attention for their ability to effectively analyze complex relationships within different types of graph-structured data, including EEGs, a use case still relatively unexplored. In this paper, we aim to bridge this gap with a study that applies GNNs to the EEG-based detection of Alzheimer’s disease and the discrimination of two different types of seizures. We demonstrate the value of GNNs by showing that a single GNN architecture can achieve state-of-the-art performance in both use cases. Through design-space exploration and explainability analysis, we develop a graph-based transformer that achieves cross-validated accuracies over 89% and 96% in the ternary classification variants of the Alzheimer’s disease and epilepsy use cases, respectively, matching the intuitions of expert neurologists. We also discuss the computational efficiency, generalizability, and potential for real-time operation of GNNs for EEGs, positioning them as a valuable tool for classifying various neurological pathologies and opening up new prospects for research and clinical practice.
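A minimal sketch of the EEG-as-graph setup, assuming PyTorch Geometric: electrodes become nodes carrying per-channel features, and a small GNN classifies the whole recording. A two-layer GCN stands in for the paper's graph-based transformer; the channel count, features, and fully connected wiring are placeholders.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

# Illustrative only: EEG channels as graph nodes with per-channel band-power
# features; a two-layer GCN plus mean pooling classifies the recording.
class EEGGraphNet(torch.nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, data):
        x = self.conv1(data.x, data.edge_index).relu()
        x = self.conv2(x, data.edge_index).relu()
        x = global_mean_pool(x, data.batch)   # one vector per recording
        return self.head(x)

n_channels, n_feats = 19, 5                   # e.g. 19 electrodes, 5 bands
x = torch.randn(n_channels, n_feats)
idx = torch.combinations(torch.arange(n_channels)).t()
edge_index = torch.cat([idx, idx.flip(0)], dim=1)   # undirected, fully connected
data = Data(x=x, edge_index=edge_index,
            batch=torch.zeros(n_channels, dtype=torch.long))
logits = EEGGraphNet(n_feats, 32, 3)(data)    # ternary classification
```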
Citations: 0
A neurodynamic optimization approach to distributed nonconvex optimization based on an HP augmented Lagrangian function
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1016/j.neunet.2024.106791
This paper develops a neurodynamic model for distributed nonconvex-constrained optimization. In the distributed constrained optimization model, the objective function and inequality constraints need not be convex, and the equality constraints need not be affine. A Hestenes–Powell (HP) augmented Lagrangian function for handling the nonconvexity is established, and a neurodynamic system is developed on this basis. The system is proved to be stable at a local optimal solution of the optimization model. Two illustrative examples evaluate the stability and optimality of the developed neurodynamic systems.
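For reference, the standard Hestenes–Powell augmented Lagrangian for an equality-constrained problem, together with the associated gradient-flow neurodynamics; the paper's construction, which also handles nonconvex inequality constraints, may differ in detail.

```latex
% Reference form for min f(x) subject to h(x) = 0 (the paper's variant
% additionally treats nonconvex inequality constraints):
\mathcal{L}_{\rho}(x,\lambda)
  = f(x) + \lambda^{\top} h(x) + \frac{\rho}{2}\,\lVert h(x)\rVert^{2},
\qquad
\dot{x} = -\nabla_{x}\,\mathcal{L}_{\rho}(x,\lambda),
\qquad
\dot{\lambda} = h(x).
```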
Citations: 0
Dictionary trained attention constrained low rank and sparse autoencoder for hyperspectral anomaly detection
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1016/j.neunet.2024.106797
Dictionary representations and deep autoencoder (AE) models have proven effective in hyperspectral anomaly detection. Dictionary representations offer self-explanation but struggle with complex scenarios; conversely, autoencoders can capture details in complex scenes but lack self-explanation. Complex scenarios often involve extensive spatial information, making its utilization crucial in hyperspectral anomaly detection. To combine the advantages of both methods and address the insufficient use of spatial information, we propose an attention-constrained low-rank and sparse autoencoder for hyperspectral anomaly detection. This model includes two encoders: an attention-constrained low-rank autoencoder (AClrAE), trained with a background dictionary and incorporating a Global Self-Attention Module (GAM) to focus on global spatial information, resulting in improved background reconstruction; and an attention-constrained sparse autoencoder (ACsAE), trained with an anomaly dictionary and incorporating a Local Self-Attention Module (LAM) to focus on local spatial information, resulting in enhanced anomaly reconstruction. Finally, a nonlinear fusion scheme merges the detection results from both encoders. Experiments on multiple real and synthetic datasets demonstrate the effectiveness and feasibility of the proposed method.
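Since the abstract does not specify the fusion scheme, the following is only a hypothetical NumPy sketch of how per-pixel scores from the two branches could be fused nonlinearly: pixels that the background branch reconstructs poorly and the anomaly branch reconstructs well are flagged as anomalous.

```python
import numpy as np

# Hypothetical scoring only (the paper's fusion is not given in the
# abstract): compare each pixel's reconstruction error under the background
# branch and the anomaly branch, then fuse the two maps nonlinearly.
def anomaly_map(pixels, recon_bg, recon_anom):
    e_bg = np.linalg.norm(pixels - recon_bg, axis=-1)    # background error
    e_an = np.linalg.norm(pixels - recon_anom, axis=-1)  # anomaly error
    # high background error AND low anomaly error -> likely anomaly
    return e_bg * np.exp(-e_an)

H, W, B = 100, 100, 189                       # illustrative cube dimensions
cube = np.random.rand(H, W, B)
score = anomaly_map(cube, cube * 0.9, cube * 1.1)   # placeholder reconstructions
```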
Citations: 0
Open-set long-tailed recognition via orthogonal prototype learning and false rejection correction
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1016/j.neunet.2024.106789
Learning from data with long-tailed and open-ended distributions is highly challenging. In this work, we propose OLPR, a new dual-stream Open-set Long-tailed recognition framework based on orthogonal Prototype learning and false Rejection correction. It consists of a Probabilistic Prediction Learning (PPL) branch and a Distance Metric Learning (DML) branch. The former generates prediction probabilities for image classification. The latter learns orthogonal prototypes for each class by computing three distance losses: the orthogonal prototype loss among all prototypes, the balanced Softmin-distance-based cross-entropy loss between each prototype and its corresponding input sample, and an adversarial loss that makes the open-set space more compact. Furthermore, for open-set learning, instead of relying merely on binary decisions, we propose an Iterative Clustering Module (ICM) to categorize similar open-set samples and simultaneously correct falsely rejected closed-set samples. If a sample is detected as a false rejection, i.e., a sample of a known class is incorrectly identified as belonging to the unknown classes, we re-classify it to the closest known (closed-set) class. We conduct extensive experiments on the ImageNet-LT, Places-LT, and CIFAR-10/100-LT benchmark datasets, as well as a new long-tailed open-ended dataset that we build. Experimental results demonstrate that OLPR improves over the best competitors by up to 2.2% in overall classification accuracy in closed-set settings, and by up to 4% in F-measure in open-set settings.
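The first of the three losses can be sketched directly: a PyTorch penalty on the deviation of the normalized prototype Gram matrix from the identity, which drives the class prototypes toward mutual orthogonality. The weighting and the other two loss terms are omitted; the shapes are illustrative.

```python
import torch

# Sketch of the orthogonal-prototype idea (the abstract's first loss term):
# penalize the Gram matrix of L2-normalized prototypes for deviating from
# the identity, so prototypes become mutually orthogonal unit vectors.
def orthogonal_prototype_loss(prototypes: torch.Tensor) -> torch.Tensor:
    P = torch.nn.functional.normalize(prototypes, dim=1)  # (C, d)
    gram = P @ P.t()
    eye = torch.eye(P.size(0), device=P.device)
    return ((gram - eye) ** 2).sum()

protos = torch.randn(10, 128, requires_grad=True)  # 10 classes, 128-d (illustrative)
loss = orthogonal_prototype_loss(protos)
loss.backward()
```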
Citations: 0
Graph explicit pooling for graph-level representation learning
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1016/j.neunet.2024.106790
Graph pooling has been increasingly recognized as crucial for Graph Neural Networks (GNNs) to facilitate hierarchical graph representation learning. Existing graph pooling methods commonly consist of two stages: selecting top-ranked nodes and discarding the rest to construct coarsened graph representations. However, this paper highlights two key issues with these methods: (1) the process of selecting nodes to discard frequently employs additional Graph Convolutional Networks or Multilayer Perceptrons, without a thorough evaluation of each node’s impact on the final graph representation and subsequent prediction tasks; (2) current graph pooling methods tend to directly discard the noise segment (dropped nodes) of the graph without accounting for the latent information they contain. To address the first issue, we introduce a novel Graph explicit Pooling (GrePool) method, which selects nodes by explicitly leveraging the relationships between the nodes and the final representation vectors crucial for classification. The second issue is addressed using an extended version of GrePool (GrePool+), which applies a uniform loss to the discarded nodes. This addition augments the training process and improves classification accuracy. Furthermore, we conduct comprehensive experiments across 12 widely used datasets, including the Open Graph Benchmark datasets, to validate the proposed method’s effectiveness. Our experimental results uniformly demonstrate that GrePool outperforms 14 baseline methods on most datasets. Likewise, implementing GrePool+ enhances GrePool’s performance without incurring additional computational costs. The code is available at https://github.com/LiuChuang0059/GrePool.
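A toy PyTorch sketch of the selection idea (our simplification, not the released code): nodes are scored by their alignment with a graph-level readout, the top-k are kept, and the dropped nodes are retained so that a GrePool+-style uniform loss can be applied to them rather than throwing them away.

```python
import torch

# Illustrative only: score nodes against a graph-level readout, keep top-k,
# and return the dropped nodes for an auxiliary uniform loss (GrePool+).
def explicit_pool(x: torch.Tensor, ratio: float = 0.5):
    g = x.mean(dim=0)                      # simple graph-level readout
    scores = x @ g                         # node-to-readout alignment
    k = max(1, int(ratio * x.size(0)))
    mask = torch.zeros(x.size(0), dtype=torch.bool)
    mask[scores.topk(k).indices] = True
    kept = x[mask] * scores[mask].sigmoid().unsqueeze(1)  # gate kept nodes
    return kept, x[~mask]

x = torch.randn(8, 16)                     # 8 nodes, 16-d features
kept, dropped = explicit_pool(x)
logits = dropped @ torch.randn(16, 5)      # hypothetical 5-class head
# uniform loss: cross-entropy of dropped-node posteriors against a uniform
# target, discouraging confident predictions on noise nodes
uniform_loss = -logits.log_softmax(dim=1).mean()
```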
Citations: 0
Generalization limits of Graph Neural Networks in identity effects learning
IF 6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-10 | DOI: 10.1016/j.neunet.2024.106793
Graph Neural Networks (GNNs) have emerged as a powerful tool for data-driven learning on various graph domains. They are usually based on a message-passing mechanism and have gained increasing popularity for their intuitive formulation, which is closely linked to the Weisfeiler–Lehman (WL) test for graph isomorphism, to which they have been proven equivalent in terms of expressive power. In this work, we establish new generalization properties and fundamental limits of GNNs in the context of learning so-called identity effects, i.e., the task of determining whether an object is composed of two identical components or not. Our study is motivated by the need to understand the capabilities of GNNs when performing simple cognitive tasks, with potential applications in computational linguistics and chemistry. We analyze two case studies: (i) two-letter words, for which we show that GNNs trained via stochastic gradient descent are unable to generalize to unseen letters when utilizing orthogonal encodings like one-hot representations; (ii) dicyclic graphs, i.e., graphs composed of two cycles, for which we present positive existence results leveraging the connection between GNNs and the WL test. Our theoretical analysis is supported by an extensive numerical study.
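The two-letter identity-effect task with orthogonal one-hot encodings is easy to reproduce; the following NumPy sketch builds such a dataset, holding out two letters to probe generalization to unseen symbols (the split and sample sizes are our choices, not the paper's).

```python
import numpy as np

# Sketch of the two-letter identity-effect task: a "word" is a pair of
# letters in one-hot (orthogonal) encoding, labeled 1 iff the letters are
# identical. Letters Y and Z are held out to test unseen-letter behavior.
alphabet = [chr(ord("A") + i) for i in range(26)]
onehot = {c: np.eye(26)[i] for i, c in enumerate(alphabet)}

def make_word(a: str, b: str):
    return np.concatenate([onehot[a], onehot[b]]), int(a == b)

rng = np.random.default_rng(0)
train_letters, test_letters = alphabet[:24], alphabet[24:]   # Y, Z unseen
train = [make_word(*rng.choice(train_letters, 2)) for _ in range(1000)]
test = ([make_word(c, c) for c in test_letters]
        + [make_word(test_letters[0], test_letters[1])])
```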
Citations: 0