
Latest publications in Neurocomputing

Multi-agent, human–agent and beyond: A survey on cooperation in social dilemmas
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-06 | DOI: 10.1016/j.neucom.2024.128514

The study of cooperation within social dilemmas has long been a fundamental topic across various disciplines, including computer science and social science. Recent advancements in Artificial Intelligence (AI) have significantly reshaped this field, offering fresh insights into understanding and enhancing cooperation. This survey examines three key areas at the intersection of AI and cooperation in social dilemmas. First, focusing on multi-agent cooperation, we review the intrinsic and external motivations that support cooperation among rational agents, and the methods employed to develop effective strategies against diverse opponents. Second, looking into human–agent cooperation, we discuss the current AI algorithms for cooperating with humans and the human biases towards AI agents. Third, we review the emergent field of leveraging AI agents to enhance cooperation among humans. We conclude by discussing future research avenues, such as using large language models, establishing unified theoretical frameworks, revisiting existing theories of human cooperation, and exploring multiple real-world applications.
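To make the notion of a social dilemma concrete, the sketch below sets up a two-player Prisoner's Dilemma, the canonical example studied in this literature. The payoff values are illustrative only (any values with T > R > P > S would do) and are not taken from the survey.

```python
# Minimal illustration of a social dilemma: the two-player Prisoner's Dilemma.
# Payoff values are illustrative (T > R > P > S); they are not taken from the survey.
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R
    ("C", "D"): (0, 5),  # cooperator is exploited: sucker S vs temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P
}

def best_response(opponent_action):
    """Return the action that maximizes the row player's payoff."""
    return max("CD", key=lambda a: payoffs[(a, opponent_action)][0])

# Defection is the dominant strategy for a rational agent...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3),
# which is exactly the tension that cooperation mechanisms try to resolve.
print(payoffs[("D", "D")], "<", payoffs[("C", "C")])
```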

Citations: 0
Text-to-text generative approach for enhanced complex word identification
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-06 | DOI: 10.1016/j.neucom.2024.128501

This paper presents a novel approach to solving the Complex Word Identification (CWI) task using a text-to-text generative model. The CWI task involves identifying complex words in text, which is a challenging Natural Language Processing task. To our knowledge, this is the first attempt to cast the CWI problem in a text-to-text setting. In this work, we propose a new methodology that leverages the power of the Transformer model to evaluate word complexity in both binary and probabilistic settings. We also propose a novel CWI dataset, which consists of 62,200 phrases, both complex and simple. We train and fine-tune our proposed model on our CWI dataset. We also evaluate its performance on separate test sets across three different domains. Our experimental results demonstrate the effectiveness of our proposed approach compared to state-of-the-art methods.
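A minimal sketch of what casting CWI as text-to-text generation can look like with a T5-style model from the Hugging Face transformers library. The prompt format, the "t5-small" checkpoint, and the intended "complex"/"simple" target labels are assumptions for illustration; the paper's actual prompts, model size, and fine-tuning setup may differ.

```python
# A minimal sketch of CWI as text-to-text generation with a T5-style model.
# Prompt wording and target labels are assumptions, not the paper's exact setup.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def word_complexity(sentence: str, word: str) -> str:
    prompt = f"classify word complexity: word: {word} context: {sentence}"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=3)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Before fine-tuning on a CWI dataset the output is meaningless; after fine-tuning,
# the model would be trained to emit a label such as "complex" or "simple".
print(word_complexity("The sedimentary strata were clearly visible.", "strata"))
```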

Citations: 0
A slimmable framework for practical neural video compression
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-05 | DOI: 10.1016/j.neucom.2024.128525

Deep learning is being increasingly applied to image and video compression in a new paradigm known as neural video compression. While achieving impressive rate–distortion (RD) performance, neural video codecs (NVC) require heavy neural networks, which in turn have large memory and computational costs and often lack important functionalities such as variable rate. These are significant limitations to their practical application. Addressing these problems, recent slimmable image codecs can dynamically adjust their model capacity to elegantly reduce the memory and computation requirements, without harming RD performance. However, the extension to video is not straightforward due to the non-trivial interplay with complex motion estimation and compensation modules in most NVC architectures. In this paper we propose the slimmable video codec framework (SlimVC), which integrates a slimmable autoencoder and a motion-free conditional entropy model. We show that the slimming mechanism is also applicable to the more complex case of video architectures, providing SlimVC with simultaneous control of the computational cost, memory and rate, which are all important requirements in practice. We further provide detailed experimental analysis, and describe application scenarios that can benefit from slimmable video codecs.
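A minimal sketch of the generic "slimmable" idea: a single convolution whose width can be switched at inference time by slicing its weight tensor, so the same parameters serve several capacity/cost operating points. This is a generic illustration of slimmable layers under assumed channel counts, not the SlimVC architecture itself.

```python
# A slimmable convolution: one weight tensor, multiple widths chosen at run time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x, width_mult=1.0):
        # Keep only the first fraction of output channels and match the input channels.
        out_ch = max(1, int(self.weight.shape[0] * width_mult))
        in_ch = x.shape[1]
        w = self.weight[:out_ch, :in_ch]
        b = self.bias[:out_ch]
        return F.conv2d(x, w, b, padding=1)

conv = SlimmableConv2d(16, 32)
x = torch.randn(1, 16, 64, 64)
full = conv(x, width_mult=1.0)         # 32 output channels, full cost
slim = conv(x[:, :8], width_mult=0.5)  # 8 input / 16 output channels, roughly 1/4 the MACs
print(full.shape, slim.shape)          # torch.Size([1, 32, 64, 64]) torch.Size([1, 16, 64, 64])
```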

Citations: 0
AFGN: Attention Feature Guided Network for object detection in optical remote sensing image
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-04 | DOI: 10.1016/j.neucom.2024.128527

Object detection in optical remote sensing (RS) images is crucial for both military and civilian applications. However, a major challenge in RS object detection lies in the complexity of texture details within the images, which makes it difficult to accurately identify the objects. Currently, many object detection methods based on deep learning focus primarily on network architecture and label assignment design. These methods often employ an end-to-end training approach, where the loss function only directly constrains the final output layer. However, this approach gives each module within the network a significant amount of freedom during the optimization process, which can hinder the network’s ability to effectively focus on the object and limit detection accuracy. To address these limitations, this paper proposes a novel approach called the Attention Feature Guided Network (AFGN). In this approach, an Attention Feature Guided Branch (AFGB) is introduced during the training phase of the CNN-based end-to-end detection network. The AFGB provides additional shallow supervision outside the detector’s output layer, guiding the backbone to effectively focus on the object amidst complex backgrounds. Additionally, a new operation called Background Blur Mask (BBM) is proposed, which is embedded in the AFGB to achieve image-level attention. Experiments conducted on the DIOR dataset demonstrate the effectiveness and efficiency of the proposed method. Our method achieves an mAP (mean average precision) of 0.777, surpassing many state-of-the-art object detection methods.
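A minimal sketch of a background-blur operation of the kind the Background Blur Mask (BBM) describes: keep object pixels sharp and blur everything else, so attention is drawn to the object. The box-blur kernel size and the source of the object mask are illustrative assumptions; the paper's exact BBM formulation may differ.

```python
# Blur the background of an image while keeping masked object regions sharp.
import torch
import torch.nn.functional as F

def background_blur(image, mask, kernel_size=11):
    """image: (B, C, H, W) float tensor; mask: (B, 1, H, W) with 1 on objects, 0 on background."""
    pad = kernel_size // 2
    # Simple box blur implemented with average pooling (stride 1 keeps the spatial size).
    blurred = F.avg_pool2d(image, kernel_size, stride=1, padding=pad)
    return mask * image + (1.0 - mask) * blurred

image = torch.rand(2, 3, 128, 128)
mask = torch.zeros(2, 1, 128, 128)
mask[:, :, 32:96, 32:96] = 1.0     # a hypothetical object box
out = background_blur(image, mask)
print(out.shape)                   # torch.Size([2, 3, 128, 128])
```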

Citations: 0
A novel capsule network based on Multi-Order Descartes Extension Transformation
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-04 | DOI: 10.1016/j.neucom.2024.128526

In recent years, the capsule network has significantly impacted deep learning with its unique structure that robustly handles spatial relationships and image deformations like rotation and scaling. While previous research has primarily focused on enhancing the structural network of capsule networks to process complex images, little attention has been given to the rich semantic information contained within the capsules themselves. We recognize this gap and propose the Multi-Order Descartes Expansion Capsule Network (MODE-CapsNet). By introducing the Multi-Order Descartes Expansion Transformation (MODET), this innovative architecture enhances the expressiveness of a single capsule by enabling its projection into a higher-dimensional space. As far as we know, this is the first significant enhancement at the single-capsule granularity level, providing a new perspective for improving capsule networks. Additionally, we propose a hierarchical routing algorithm designed specifically for the MODE capsules, significantly optimizing computational efficiency and performance. Experimental results on datasets (MNIST, Fashion-MNIST, SVHN, CIFAR-10, tiny-ImageNet) showed that MODE capsules exhibited improved separability and expressiveness, contributing to overall network accuracy, robustness, and computational efficiency.
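A minimal sketch of one plausible reading of a second-order Descartes (Cartesian-product) expansion of a single capsule: the capsule vector is lifted into a higher-dimensional space by concatenating products of its components. This is an illustrative interpretation of the idea, not the paper's exact MODET operator.

```python
# Lift a capsule vector into a higher-dimensional space via repeated outer products.
import torch

def descartes_expansion(capsule, order=2):
    """capsule: (D,) tensor -> concatenation of order-1 ... order-k product features."""
    features = [capsule]
    current = capsule
    for _ in range(order - 1):
        # Outer product with the original vector, flattened: dimensions D, D^2, D^3, ...
        current = torch.outer(current, capsule).reshape(-1)
        features.append(current)
    return torch.cat(features)

u = torch.randn(8)                  # an 8-dimensional capsule
v = descartes_expansion(u, order=2)
print(v.shape)                      # torch.Size([72]) = 8 + 8*8
```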

Citations: 0
Building efficient CNNs using Depthwise Convolutional Eigen-Filters (DeCEF)
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-03 | DOI: 10.1016/j.neucom.2024.128461

Deep Convolutional Neural Networks (CNNs) have been widely used in various domains due to their impressive capabilities. These models are typically composed of a large number of 2D convolutional (Conv2D) layers with numerous trainable parameters. To manage the complexity of such networks, compression techniques can be applied, which typically rely on the analysis of trained deep learning models. However, in certain situations, training a new CNN from scratch may be infeasible due to resource limitations. In this paper, we propose an alternative parameterization to Conv2D filters with significantly fewer parameters without relying on compressing a pre-trained CNN. Our analysis reveals that the effective rank of the vectorized Conv2D filters decreases with respect to the increasing depth in the network. This leads to the development of the Depthwise Convolutional Eigen-Filter (DeCEF) layer, which is a low rank version of the Conv2D layer with significantly fewer trainable parameters and floating point operations (FLOPs). The way we define the effective rank is different from previous work, and it is easy to implement and interpret. Applying this technique is straightforward – one can simply replace any standard convolutional layer with a DeCEF layer in a CNN. To evaluate the effectiveness of DeCEF layers, experiments are conducted on the benchmark datasets CIFAR-10 and ImageNet for various network architectures. The results have shown a similar or higher accuracy using about 2/3 of the original parameters and reducing the number of FLOPs to 2/3 of the base network. Additionally, analyzing the patterns in the effective rank provides insights into the inner workings of CNNs and highlights opportunities for future research.
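A minimal sketch of the kind of rank analysis that motivates DeCEF: vectorize each Conv2D filter, compute the singular values of the resulting matrix, and count how many are needed to retain most of the spectral energy. The 95% energy threshold is an assumption used only for illustration; the paper defines its own notion of effective rank.

```python
# Estimate how low-rank a layer's vectorized Conv2D filters are via SVD.
import torch

def effective_rank(conv_weight, energy=0.95):
    """conv_weight: (out_ch, in_ch, k, k). Count the singular values needed to
    retain the given fraction of the spectral energy of the vectorized filters."""
    out_ch = conv_weight.shape[0]
    mat = conv_weight.reshape(out_ch, -1)       # one row per vectorized filter
    s = torch.linalg.svdvals(mat)
    cumulative = torch.cumsum(s ** 2, dim=0) / torch.sum(s ** 2)
    idx = torch.searchsorted(cumulative, torch.tensor(energy))
    return int(idx) + 1

w = torch.randn(64, 32, 3, 3)                   # a hypothetical Conv2D weight tensor
print(effective_rank(w))                        # <= min(64, 32*3*3)
```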

Citations: 0
NCLWO: Newton’s cooling law-based weighted oversampling algorithm for imbalanced datasets with feature noise
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-03 | DOI: 10.1016/j.neucom.2024.128538

Imbalanced datasets pose challenges to standard classification algorithms. Although oversampling techniques can balance the number of samples across different classes, the difficulty of imbalanced classification does not stem solely from the imbalanced data itself but also from other factors, such as small disjuncts and overlapping regions, especially in the presence of noise. Traditional oversampling techniques do not effectively address these intricacies. To this end, we propose a novel oversampling method called Newton’s Cooling Law-Based Weighted Oversampling (NCLWO). The proposed method initially calculates the weight of the minority class based on density and closeness factors to identify hard-to-learn samples, assigning them higher heat. Subsequently, Newton’s Cooling Law is applied to each minority class sample by using it as the center and expanding the sampling region outward, gradually decreasing the heat until reaching a balanced state. Finally, majority class samples within the sampling region are translated to eliminate overlapping areas, and a weighted oversampling approach is employed to synthesize informative minority class samples. The experimental study, carried out on a set of benchmark datasets, confirms that the proposed method not only outperforms state-of-the-art oversampling approaches but also shows greater robustness in the presence of feature noise.
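A minimal sketch of Newton's cooling law used as a weighting schedule: a hard-to-learn minority sample starts with a high "heat" that decays exponentially as the sampling region expands, so most synthetic samples would be generated close to it. The constants are illustrative; the paper's weight and radius definitions are more involved.

```python
# Newton's cooling law as an exponential decay of a sample's "heat".
import numpy as np

def cooled_heat(initial_heat, ambient, cooling_rate, t):
    """Newton's cooling law: T(t) = T_env + (T_0 - T_env) * exp(-k * t)."""
    return ambient + (initial_heat - ambient) * np.exp(-cooling_rate * t)

steps = np.arange(0, 6)
heat = cooled_heat(initial_heat=1.0, ambient=0.0, cooling_rate=0.8, t=steps)
print(np.round(heat, 3))   # [1.    0.449 0.202 0.091 0.041 0.018]
```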

Citations: 0
Exploration on the spiking response of a single compartment neuron with multiple active properties under electrical stimuli
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-03 | DOI: 10.1016/j.neucom.2024.128537

The study of modulating neuronal signals with electrical stimuli is important for intervening in abnormal neuronal firing and restoring it to a normal state. Spike trains, among the highest-quality brain signals, remain insufficiently explored and analyzed owing to the difficulty of obtaining them in practice. This paper therefore investigates and analyzes the effect of electrical stimuli on the spiking response of neurons, carrying out the following work. The relationships between the spiking response and three parameters (namely, the amplitude of the electrode current (EC), the angular velocity of the electric field current (EFC), and the signal-to-noise ratio (SNR)) are examined on a neuronal model with spatial length and multiple active properties. When specific currents with different SNRs are imposed on the neurons, their influence on the spiking response is further explored. With regard to the spiking response, the main focus is on three characteristics, i.e., the spiking pattern, the spike count (SC), and the spiking arrangement. An algorithm, called the return map distance (RMD) algorithm, is proposed, which gives the classification of spiking patterns a quantitative criterion. Based on it, spiking patterns are classified as bursting spike trains, regular spike trains (RST), and meager spike trains (MST). Simulation results indicate that both the amplitude of the EC and the angular velocity of the EFC change the neuronal spiking patterns. As the amplitude (angular velocity) of the EC (EFC) increases, the spiking pattern of the Soldado-Magraner model (SMM) eventually tends to RST (MST). In addition, the SC increases with the amplitude of the EC, whereas this does not hold with respect to the angular velocity of the EFC. Furthermore, the spiking arrangement and the SC are severely degraded for the EC at low SNRs, while all three spiking features of the SMM under the EFC are robust to different SNRs, implying that the spiking responses of the SMM are more stable under the EFC than under the EC. The findings in this paper may provide theoretical guidance for fields related to neuronal firing, such as brain-computer interfaces and electrotherapy. The RMD algorithm proposed here can be applied to more individual neurons, and the spiking arrangement discussed here could be regarded as an effective encoding scheme for spike trains.
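A minimal sketch of the return-map view of a spike train that the RMD algorithm builds on: each inter-spike interval (ISI) is paired with the next one, and the tightness of the resulting point cloud reflects how regular the firing is. The dispersion statistic used here is an illustrative stand-in, not the paper's exact return map distance.

```python
# Build an ISI return map and measure how tightly its points cluster.
import numpy as np

def isi_return_map(spike_times):
    """spike_times: sorted 1D array of spike times -> array of (ISI_n, ISI_{n+1}) pairs."""
    isi = np.diff(spike_times)
    return np.column_stack([isi[:-1], isi[1:]])

def dispersion(points):
    """Mean distance of return-map points from their centroid (small => regular train)."""
    return float(np.mean(np.linalg.norm(points - points.mean(axis=0), axis=1)))

regular = np.arange(0, 1.0, 0.02)                     # perfectly periodic spikes
jittered = regular + np.random.default_rng(0).normal(0, 0.004, regular.shape)
print(dispersion(isi_return_map(regular)))            # ~0.0
print(dispersion(isi_return_map(np.sort(jittered))))  # larger: less regular firing
```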

Citations: 0
Leveraging discriminative data: A pathway to high-performance, stable One-shot Network Pruning at Initialization
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-03 | DOI: 10.1016/j.neucom.2024.128529

One-shot Network Pruning at Initialization (OPaI) is acknowledged as a highly cost-effective strategy for network pruning. However, it has been observed that OPaI models tend to suffer from reduced accuracy stability as target sparsity increases. This study introduces a novel approach by incorporating Discriminative Data (DD) into OPaI, significantly improving performance at higher sparsity levels while maintaining the “one-shot” nature. Our approach achieves state-of-the-art (SOTA) performance, challenging the previously held belief of OPaI’s data independence. Through detailed ablation studies, we thoroughly investigate the influence of data on OPaI, particularly focusing on how DD addresses a common failure in OPaI known as “layer collapse”. Furthermore, our experiments demonstrate that leveraging DD from various pre-trained models can markedly boost pruning performance across different models without requiring changes to the existing model architectures or pruning methodologies. These significant improvements highlight our method’s high generalizability and stability, paving new paths for advancing pruning strategies. Our code is publicly available at: https://github.com/Nonac/DDOPaI.
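A minimal sketch of one-shot pruning at initialization in the SNIP style: score every weight by |gradient × weight| computed from a single data batch, then keep only the top fraction. This is a generic OPaI baseline, not the paper's method; its point here is simply to show where the choice of batch (the "discriminative data" in this work) enters the pipeline. The network, batch, and sparsity level are illustrative assumptions.

```python
# SNIP-style one-shot pruning masks computed from a single batch at initialization.
import torch
import torch.nn as nn

def snip_style_masks(model, batch, targets, sparsity=0.9):
    loss = nn.functional.cross_entropy(model(batch), targets)
    weights = [p for p in model.parameters() if p.dim() > 1]          # skip biases
    grads = torch.autograd.grad(loss, weights)
    scores = torch.cat([(g * w).abs().reshape(-1) for g, w in zip(grads, weights)])
    threshold = torch.quantile(scores, sparsity)                      # global threshold
    return [((g * w).abs() > threshold).float() for g, w in zip(grads, weights)]

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
x, y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))       # stand-in for a chosen batch
masks = snip_style_masks(model, x, y, sparsity=0.9)
print([m.mean().item() for m in masks])   # fraction of weights kept per layer (~0.1 overall)
```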

Citations: 0
Double-kernel based Bayesian approximation broad learning system with dropout
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-03 | DOI: 10.1016/j.neucom.2024.128533

Broad learning system (BLS) is an efficient incremental learning machine algorithm. However, the algorithm has some disadvantages. For example, the number of hidden-layer nodes needs to be adjusted manually during training, and the two random mappings introduce large uncertainty. To solve these problems, a double-kernel broad learning system (DKBLS), built on the optimization ability of the kernel function, is proposed to eliminate the uncertainty of random mapping by using an additive kernel strategy. Meanwhile, to reduce the computing costs and training time of DKBLS, a double-kernel based Bayesian approximation broad learning system with dropout (Dropout-DKBLS) is further proposed. Ablation experiments show that the output accuracy of Dropout-DKBLS does not decrease even when nodes are dropped. In addition, function approximation experiments show that DKBLS and Dropout-DKBLS have good robustness and can accurately predict noisy data. In regression and classification experiments on multiple datasets, the proposed methods are compared with the latest kernel-based learning methods. The comparison results show that both DKBLS and Dropout-DKBLS have good regression and classification performance. A further comparison of the training time of these kernel-based learning methods shows that Dropout-DKBLS reduces the computational cost while ensuring output accuracy.
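A minimal sketch of the additive-kernel idea behind DKBLS: two base kernels are summed into a single kernel and used in a closed-form, ridge-regularized solution, so no random feature mapping is needed. The kernel choices, mixing weight, and regularization constant are illustrative; the actual DKBLS formulation inside the broad learning system is more elaborate.

```python
# Additive (RBF + linear) kernel used in a closed-form kernel ridge fit.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def double_kernel(A, B, alpha=0.7):
    return alpha * rbf_kernel(A, B) + (1 - alpha) * (A @ B.T)   # RBF + linear

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

K = double_kernel(X, X)
coef = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)            # ridge-regularized fit
X_test = rng.normal(size=(10, 5))
y_pred = double_kernel(X_test, X) @ coef
print(y_pred.shape)                                             # (10,)
```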

Citations: 0