
Latest publications in Neural Networks

A multiscale distributed neural computing model database (NCMD) for neuromorphic architecture
IF 6.0 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-10 · DOI: 10.1016/j.neunet.2024.106727

Distributed neuromorphic architecture is a promising technique for on-chip processing of multiple tasks. Deploying a constructed model in a distributed neuromorphic system, however, remains time-consuming and challenging due to considerations such as network topology, connection rules, and compatibility with multiple programming languages. We propose a multiscale distributed neural computing model database (NCMD), a framework designed for ARM-based multi-core hardware. NCMD encompasses various neural computing components, including ion channels, synapses, and neurons. We demonstrate how NCMD constructs and deploys multi-compartmental detailed neuron models as well as spiking neural networks (SNNs) in BrainS, a distributed multi-ARM neuromorphic system. We show that the electrodiffusive Pinsky–Rinzel (edPR) model developed with NCMD is well-suited to BrainS: all dynamic properties, such as changes in membrane potential and ion concentrations, can be easily explored. In addition, SNNs constructed by NCMD achieve an accuracy of 86.67% on the test set of the Iris dataset. The proposed NCMD offers an innovative approach to applying BrainS in neuroscience, cognitive decision-making, and artificial intelligence research.
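NCMD's components are not publicly specified beyond this abstract; as a hedged illustration of the kind of neuron-level building block such a model database would hold, here is a minimal leaky integrate-and-fire neuron (all parameter values are illustrative defaults, not NCMD's):

```python
def simulate_lif(i_ext, dt=0.1, t_end=100.0, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Euler-integrate a leaky integrate-and-fire neuron driven by a
    constant current i_ext (nA) and return its spike count.  All
    parameter values are illustrative, not taken from NCMD."""
    v = v_rest
    spikes = 0
    for _ in range(int(t_end / dt)):
        # dV/dt = (-(V - V_rest) + R_m * I_ext) / tau
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau
        if v >= v_thresh:   # threshold crossing: emit a spike and reset
            spikes += 1
            v = v_reset
    return spikes
```

A multi-compartment model such as edPR would couple several state variables (ion concentrations, compartment potentials) per time step, but the integration loop has the same shape.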

Multistability and fixed-time multisynchronization of switched neural networks with state-dependent switching rules
IF 6.0 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106713

This paper presents theoretical results on the multistability and fixed-time synchronization of switched neural networks with multiple almost-periodic solutions and state-dependent switching rules. It is shown herein that the number, location, and stability of the almost-periodic solutions of the switched neural networks can be characterized by making use of the state-space partition. Two sets of sufficient conditions are derived to ascertain the existence of 3n exponentially stable almost-periodic solutions. Subsequently, this paper introduces the novel concept of fixed-time multisynchronization in switched neural networks associated with a range of almost-periodic parameters within multiple stable equilibrium states for the first time. Based on the multistability results, it is demonstrated that there are 3n synchronization manifolds, wherein n is the number of neurons. Additionally, an estimation for the settling time required for drive–response switched neural networks to achieve synchronization is provided. It should be noted that this paper considers stable equilibrium points (static multisynchronization), stable almost-periodic orbits (dynamical multisynchronization), and hybrid stable equilibrium states (hybrid multisynchronization) as special cases of multistability (multisynchronization). Two numerical examples are elaborated to substantiate the theoretical results.
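The 3^n count comes from partitioning the state space so that each neuron can settle into one of several basins of attraction. A toy scalar sketch (not the paper's switched system) shows how a single unit with a saturating activation already has two coexisting stable equilibria; n independent such units would give exponentially many stable patterns, and the paper's finer partition yields three stable sets per neuron:

```python
import math

def settle(x0, w=2.0, b=2.0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = -x + w*tanh(b*x) from x0 and return the
    state reached.  w and b are illustrative values chosen so that the
    system is bistable (w*b > 1)."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * math.tanh(b * x))
    return x
```

Initial conditions on either side of the unstable origin converge to distinct stable equilibria, which is the qualitative mechanism behind the multistability result.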

Intermediate-grained kernel elements pruning with structured sparsity
IF 6.0 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106708

Neural network pruning offers a promising route to deploying neural networks on embedded or mobile devices with limited resources. Although current structured strategies are unconstrained by specific hardware architecture during forward inference, the decline in classification accuracy of structured methods is beyond tolerance at typical pruning rates. This inspires us to develop a technique that achieves a high pruning rate with only a small decline in accuracy while retaining the general nature of structured pruning. In this paper, we propose a new pruning method, KEP (Kernel Elements Pruning), which compresses deep convolutional neural networks by assessing the significance of the elements in each kernel plane and removing the unimportant ones. In this method, we apply a controllable regularization penalty to constrain unimportant elements by adding a prior knowledge mask, yielding a compact model. In the forward-inference computation, we introduce a sparse convolution operation, different from the sliding window, to eliminate invalid zero calculations, and we verify the operation's effectiveness for further deployment on FPGA. Extensive experiments demonstrate the effectiveness of KEP on two datasets: CIFAR-10 and ImageNet. Notably, with only a few indexes for non-zero weights introduced, KEP improves significantly over the latest structured methods in parameter and floating-point operation (FLOPs) reduction, and it performs well on large datasets.
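As a hedged sketch of the intermediate granularity involved (individual elements within a kernel plane, rather than whole filters), the following zeroes out the smallest-magnitude elements of one 2-D kernel plane. KEP's actual criterion is a learned prior-knowledge mask under a regularization penalty, not this simple magnitude rule:

```python
def prune_kernel(kernel, keep_ratio=0.5):
    """Zero out the smallest-magnitude elements of one 2-D kernel plane,
    keeping roughly a keep_ratio fraction of them."""
    flat = sorted((abs(v) for row in kernel for v in row), reverse=True)
    k = max(1, int(len(flat) * keep_ratio))
    thresh = flat[k - 1]                      # k-th largest magnitude
    return [[v if abs(v) >= thresh else 0.0 for v in row] for row in kernel]
```

The surviving zeros are what a sparse convolution can then skip at inference time instead of multiplying through a dense sliding window.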

BGAT-CCRF: A novel end-to-end model for knowledge graph noise correction
IF 6.0 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106715

Knowledge graph (KG) noise correction aims to select suitable candidates to correct the noise in KGs. Most existing studies perform poorly when repairing noisy triples that contain more than one incorrect entity or relation, which significantly constrains their application to real-world KGs. To overcome this challenge, we propose a novel end-to-end model (BGAT-CCRF) that achieves better noise-correction results. Specifically, we construct a balanced-based graph attention model (BGAT) to learn the features of nodes in triples' neighborhoods and capture the correlation between nodes based on their position and frequency. Additionally, we design a constrained conditional random field model (CCRF) to select suitable candidates, guided by three constraints, for correcting one or more noises in a triple. In this way, BGAT-CCRF can select multiple candidates from a smaller domain to repair multiple noises in a triple simultaneously, rather than selecting candidates from the whole KG to repair one noise at a time, as traditional methods do. The effectiveness of BGAT-CCRF is validated by KG noise-correction experiments. Compared with state-of-the-art models, BGAT-CCRF improves the fMRR metric by 3.58% on the FB15K dataset. Hence, it has the potential to facilitate the application of KGs in the real world.
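BGAT-CCRF's candidate selection cannot be reproduced from the abstract alone. As a toy stand-in for the general idea of scoring replacement candidates for a noisy triple, here is a TransE-style plausibility score (h + r ≈ t); the embeddings and entity names below are entirely hypothetical:

```python
def transe_score(h, r, t):
    """TransE plausibility: a smaller ||h + r - t||_1 is more plausible."""
    return sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

def best_tail(h, r, candidates):
    """Return the name of the candidate tail entity whose embedding best
    completes the triple (h, r, ?).  candidates: list of (name, vector)."""
    return min(candidates, key=lambda c: transe_score(h, r, c[1]))[0]
```

Restricting `candidates` to a small, constraint-filtered set (rather than every entity in the KG) mirrors the "smaller domain" the paper argues for.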

Exploring refined dual visual features cross-combination for image captioning
IF 6.0 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106710

Transformer-based encoders have become commonplace in current image captioning tasks for encoding region features and grid features: thanks to the multi-head self-attention mechanism, the encoder can better capture the relationships between different regions of the image and contextual information. However, stacking Transformer blocks requires quadratic self-attention computation over the visual features, which not only computes numerous redundant features but also significantly increases computational overhead. This paper presents a novel Distilled Cross-Combination Transformer (DCCT) network. Technically, we first introduce a distillation cascade fusion encoder (DCFE), in which a probabilistic sparse self-attention layer filters out redundant and distracting features that disturb attention focus, aiming to obtain more refined visual features and enhance encoding efficiency. Next, we develop a parallel cross-fusion attention module (PCFA) that fully exploits the complementarity and correlation between grid and region features to better fuse the encoded dual visual features. Extensive experiments on the MSCOCO dataset demonstrate that the proposed DCCT achieves outstanding performance, rivaling current state-of-the-art approaches.
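As a hedged sketch of the sparsity idea (the paper's probabilistic sparse self-attention is more involved), the following single-query attention keeps only the `top_k` highest-scoring keys and renormalizes the softmax over them, so the quadratic cost and the contribution of low-scoring (likely redundant) positions are both cut:

```python
import math

def sparse_attention(q, keys, values, top_k=2):
    """Single-query scaled dot-product attention restricted to the top_k
    highest-scoring keys; the remaining keys are dropped entirely."""
    d = math.sqrt(len(q))
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / d for k in keys]
    keep = sorted(range(len(scores)), key=scores.__getitem__,
                  reverse=True)[:top_k]
    z = sum(math.exp(scores[i]) for i in keep)
    out = [0.0] * len(values[0])
    for i in keep:
        w = math.exp(scores[i]) / z           # softmax over kept keys only
        for j, vj in enumerate(values[i]):
            out[j] += w * vj
    return out
```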

Bidirectional consistency with temporal-aware for semi-supervised time series classification
IF 6.0 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106709

Semi-supervised learning (SSL) has achieved significant success due to its capacity to alleviate annotation dependencies. Most existing SSL methods utilize pseudo-labeling to propagate useful supervised information for training unlabeled data. However, these methods ignore learning temporal representations, making it challenging to obtain a well-separable feature space for modeling explicit class boundaries. In this work, we propose a semi-supervised Time Series classification framework via Bidirectional Consistency with Temporal-aware (TS-BCT), which regularizes the feature space distribution by learning temporal representations through pseudo-label-guided contrastive learning. Specifically, TS-BCT utilizes time-specific augmentation to transform the entire raw time series into two distinct views, avoiding sampling bias. The pseudo-labels for each view, generated through confidence estimation in the feature space, are then employed to propagate class-related information into unlabeled samples. Subsequently, we introduce a temporal-aware contrastive learning module that learns discriminative temporal-invariant representations. Finally, we design a bidirectional consistency strategy by incorporating pseudo-labels from two distinct views into temporal-aware contrastive learning to construct a class-related contrastive pattern. This strategy enables the model to learn well-separated feature spaces, making class boundaries more discriminative. Extensive experimental results on real-world datasets demonstrate the effectiveness of TS-BCT compared to baselines.
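The confidence-estimation step that gates pseudo-labels can be sketched generically; this threshold rule illustrates the pseudo-labeling idea, not TS-BCT's exact scheme:

```python
def pseudo_labels(probs, threshold=0.9):
    """For each unlabeled sample's class-probability vector, return the
    argmax class if its confidence clears the threshold, else None
    (meaning the sample contributes no supervised signal this round)."""
    labels = []
    for p in probs:
        top = max(range(len(p)), key=p.__getitem__)
        labels.append(top if p[top] >= threshold else None)
    return labels
```

In a two-view scheme such as TS-BCT's, each augmented view would produce its own label list, and only confident, consistent labels would feed the contrastive objective.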

Is artificial consciousness achievable? Lessons from the human brain
IF 6.0 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106714

We here analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation with consciousness as a reference model or as a benchmark. This kind of analysis reveals several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (i.e., structural and architectural) and extrinsic (i.e., related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible and/or modulate it, is a potentially promising strategy towards developing conscious AI.

Also, it cannot be theoretically excluded that AI research can develop partial or potentially alternative forms of consciousness that are qualitatively different from the human form, and that may be either more or less sophisticated depending on the perspectives. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since the use of the same word “consciousness” for humans and AI becomes ambiguous and potentially misleading, we propose to clearly specify which level and/or type of consciousness AI research aims to develop, as well as what would be common versus differ in AI conscious processing compared to human conscious experience.

Near-optimal deep neural network approximation for Korobov functions with respect to Lp and H1 norms
IF 6.0 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-06 · DOI: 10.1016/j.neunet.2024.106702

This paper derives the optimal rate of approximation for Korobov functions with deep neural networks in the high dimensional hypercube with respect to Lp-norms and H1-norm. Our approximation bounds are non-asymptotic in both the width and depth of the networks. The obtained approximation rates demonstrate a remarkable super-convergence feature, improving the existing convergence rates of neural networks that are continuous function approximators. Finally, using a VC-dimension argument, we show that the established rates are near-optimal.
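The abstract does not restate the function class. For context, the sparse-grid literature this line of work builds on commonly defines the Korobov space of smoothness two on the unit cube as follows; this is a standard definition quoted for orientation, not taken from the paper itself:

```latex
X^{2,p}\bigl([0,1]^d\bigr) \;=\;
\Bigl\{\, f \in L^{p}\bigl([0,1]^d\bigr) \;:\;
f\big|_{\partial [0,1]^d} = 0,\;
\frac{\partial^{|\boldsymbol{\alpha}|} f}
     {\partial x_1^{\alpha_1}\cdots \partial x_d^{\alpha_d}}
\in L^{p}\bigl([0,1]^d\bigr)
\ \text{for all}\ \|\boldsymbol{\alpha}\|_{\infty} \le 2 \,\Bigr\}
```

The bounded mixed derivatives are what allow approximation rates whose dependence on the dimension d is far milder than for generic Sobolev functions.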

本文推导了高维超立方体中深度神经网络对 Korobov 函数的最佳逼近率,涉及 Lp 值和 H1 值。我们的逼近边界在网络的宽度和深度上都是非渐近的。所获得的逼近率表现出显著的超收敛特性,改善了作为连续函数逼近器的神经网络的现有收敛率。最后,利用 VC 维度论证,我们证明所建立的速率接近最优。
{"title":"Near-optimal deep neural network approximation for Korobov functions with respect to Lp and H1 norms","authors":"","doi":"10.1016/j.neunet.2024.106702","DOIUrl":"10.1016/j.neunet.2024.106702","url":null,"abstract":"<div><p>This paper derives the optimal rate of approximation for Korobov functions with deep neural networks in the high dimensional hypercube with respect to <span><math><msup><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msup></math></span>-norms and <span><math><msup><mrow><mi>H</mi></mrow><mrow><mn>1</mn></mrow></msup></math></span>-norm. Our approximation bounds are non-asymptotic in both the width and depth of the networks. The obtained approximation rates demonstrate a remarkable <em>super-convergence</em> feature, improving the existing convergence rates of neural networks that are continuous function approximators. Finally, using a VC-dimension argument, we show that the established rates are near-optimal.</p></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":null,"pages":null},"PeriodicalIF":6.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142157635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Complementary information mutual learning for multimodality medical image segmentation
IF 6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-06 | DOI: 10.1016/j.neunet.2024.106670

Radiologists must utilize medical images of multiple modalities for tumor segmentation and diagnosis due to the limitations of medical imaging technology and the diversity of tumor signals. This has led to the development of multimodal learning in medical image segmentation. However, the redundancy among modalities creates challenges for existing subtraction-based joint learning methods, such as misjudging the importance of modalities, ignoring specific modal information, and increasing cognitive load. These thorny issues ultimately decrease segmentation accuracy and increase the risk of overfitting. This paper presents the complementary information mutual learning (CIML) framework, which can mathematically model and address the negative impact of inter-modal redundant information. CIML adopts the idea of addition and removes inter-modal redundant information through inductive bias-driven task decomposition and message passing-based redundancy filtering. CIML first decomposes the multimodal segmentation task into multiple subtasks based on expert prior knowledge, minimizing the information dependence between modalities. Furthermore, CIML introduces a scheme in which each modality can extract information from other modalities additively through message passing. To achieve non-redundancy of extracted information, the redundant filtering is transformed into complementary information learning inspired by the variational information bottleneck. The complementary information learning procedure can be efficiently solved by variational inference and cross-modal spatial attention. Numerical results from the verification task and standard benchmarks indicate that CIML efficiently removes redundant information between modalities, outperforming SOTA methods regarding validation accuracy and segmentation effect. 
Notably, message-passing-based redundancy filtering allows neural-network visualization techniques to reveal the knowledge relationships among different modalities, which reflects the framework's interpretability.
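The message-passing step described above — each modality additively extracting information from another via cross-modal attention — can be sketched in plain Python. The scaled dot-product form below is a standard attention mechanism assumed for illustration, not CIML's published operator.

```python
import math

# Illustrative cross-modal attention: queries from modality A attend over
# features of modality B, and the attended message is ADDED to A's features
# (the "addition" idea from the abstract). Scaled dot-product attention is
# assumed here; CIML's actual operator may differ.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_modal_message(queries_a, feats_b, dim):
    """For each query vector of modality A, return an attention-weighted
    combination of modality B's feature vectors (the additive message)."""
    messages = []
    for q in queries_a:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in feats_b]
        w = softmax(scores)
        messages.append([sum(wj * k[i] for wj, k in zip(w, feats_b))
                         for i in range(dim)])
    return messages

feats_a = [[1.0, 0.0], [0.0, 1.0]]
feats_b = [[2.0, 0.0], [0.0, 2.0]]
msg = cross_modal_message(feats_a, feats_b, dim=2)
# Additive fusion: each modality-A feature plus its cross-modal message
fused = [[a + m for a, m in zip(fa, mv)] for fa, mv in zip(feats_a, msg)]
```

Each message is a convex combination of modality-B features, so a query aligned with one B-feature pulls mostly that feature into the fusion.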

{"title":"Complementary information mutual learning for multimodality medical image segmentation","authors":"","doi":"10.1016/j.neunet.2024.106670","DOIUrl":"10.1016/j.neunet.2024.106670","url":null,"abstract":"<div><p>Radiologists must utilize medical images of multiple modalities for tumor segmentation and diagnosis due to the limitations of medical imaging technology and the diversity of tumor signals. This has led to the development of multimodal learning in medical image segmentation. However, the redundancy among modalities creates challenges for existing <em>subtraction</em>-based joint learning methods, such as misjudging the importance of modalities, ignoring specific modal information, and increasing cognitive load. These thorny issues ultimately decrease segmentation accuracy and increase the risk of overfitting. This paper presents the <strong>complementary information mutual learning (CIML)</strong> framework, which can mathematically model and address the negative impact of inter-modal redundant information. CIML adopts the idea of <em>addition</em> and removes inter-modal redundant information through inductive bias-driven task decomposition and message passing-based redundancy filtering. CIML first decomposes the multimodal segmentation task into multiple subtasks based on expert prior knowledge, minimizing the information dependence between modalities. Furthermore, CIML introduces a scheme in which each modality can extract information from other modalities additively through message passing. To achieve non-redundancy of extracted information, the redundant filtering is transformed into complementary information learning inspired by the variational information bottleneck. The complementary information learning procedure can be efficiently solved by variational inference and cross-modal spatial attention. 
Numerical results from the verification task and standard benchmarks indicate that CIML efficiently removes redundant information between modalities, outperforming SOTA methods regarding validation accuracy and segmentation effect. To emphasize, message-passing-based redundancy filtering allows neural network visualization techniques to visualize the knowledge relationship among different modalities, which reflects interpretability.</p></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":null,"pages":null},"PeriodicalIF":6.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142243883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sampled-data synchronization for fuzzy inertial cellular neural networks and its application in secure communication
IF 6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-06 | DOI: 10.1016/j.neunet.2024.106671

This paper designs a sampled-data control (SDC) scheme to study the synchronization problem of fuzzy inertial cellular neural networks (FICNNs). Technically, the rate at which information or activation is transmitted between cells can be described by a first-order differential model, but the network's response to the received information may depend on time, which can be modeled as a second-order (inertial) cellular neural network (ICNN). Generally, a fuzzy cellular neural network (FCNN) combines fuzzy logic with a cellular neural network: the fuzzy component consists of input and output templates, expressed as sums of product operations, that evaluate information transmission on a rule basis. Hence, this study proposes a user-controlled FICNN model with the same dynamic properties as the original FICNN model. In this regard, the synchronization approach is effective in matching the dynamical properties of the drive system (without control input) and the response system (with external control input). Theoretically, synchronization between drive and response can be ensured by analyzing the error model derived from them; because of the nonlinearities, Lyapunov stability theory is used to derive sufficient stability conditions, in the form of linear matrix inequalities (LMIs), that guarantee convergence of the error model to the origin. Distinct from existing stability conditions, this paper incorporates the delay information as a quadratic function with lower and upper bounds, evaluated through the negative-determination lemma (NDL).
Numerical simulations supporting the proposed theoretical framework are also discussed. As a direct application, the FICNN model serves as the cryptosystem in an image encryption and decryption algorithm, and the corresponding outcomes are illustrated together with security measures.

{"title":"Sampled-data synchronization for fuzzy inertial cellular neural networks and its application in secure communication","authors":"","doi":"10.1016/j.neunet.2024.106671","DOIUrl":"10.1016/j.neunet.2024.106671","url":null,"abstract":"<div><p>This paper designs the sampled-data control (SDC) scheme to delve into the synchronization problem of fuzzy inertial cellular neural networks (FICNNs). Technically, the rate at which the information or activation of cellular neuronal transmission made can be described in a first-order differential model, but the network response concerning the received information may be dependent on time that can be modeled as a second-order (inertial) cellular neural network (ICNN) model. Generally, a fuzzy cellular neural network (FCNN) is a combination of fuzzy logic and a cellular neural network. Fuzzy logic models are composed of input and output templates which are in the form of a sum of product operations that help to evaluate the information transmission on a rule-basis. Hence, this study proposes a user-controlled FICNNs model with the same dynamic properties as FICNN model. In this regard, the synchronization approach is considerably effective in ensuring the dynamical properties of the drive (without control input) and response (with external control input). Theoretically, the synchronization between the drive-response can be ensured by analyzing the error model derived from the drive-response but due to nonlinearities, the Lyapunov stability theory can be utilized to derive sufficient stability conditions in terms of linear matrix inequalities (LMIs) that will guarantee the convergence of the error model onto the origin. Distinct from the existing stability conditions, this paper derives the stability conditions by involving the delay information in the form of a quadratic function with lower and upper bounds, which are evaluated through the negative determination lemma (NDL). 
Besides, numerical simulations that support the validation of proposed theoretical frameworks are discussed. As a direct application, the FICNN model is considered as a cryptosystem in image encryption and decryption algorithm, and the corresponding outcomes are illustrated along with security measures.</p></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":null,"pages":null},"PeriodicalIF":6.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142162056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0