
arXiv - CS - Neural and Evolutionary Computing: Latest Publications

Emulating Brain-like Rapid Learning in Neuromorphic Edge Computing
Pub Date: 2024-08-28 DOI: arxiv-2408.15800
Kenneth Stewart, Michael Neumeier, Sumit Bam Shrestha, Garrick Orchard, Emre Neftci
Achieving personalized intelligence at the edge with real-time learning capabilities holds enormous promise in enhancing our daily experiences and helping decision making, planning, and sensing. However, efficient and reliable edge learning remains difficult with current technology due to the lack of personalized data, insufficient hardware capabilities, and inherent challenges posed by online learning. Over time and across multiple developmental stages, the brain has evolved to efficiently incorporate new knowledge by gradually building on previous knowledge. In this work, we emulate the multiple stages of learning with digital neuromorphic technology that simulates the neural and synaptic processes of the brain using two stages of learning. First, a meta-training stage trains the hyperparameters of synaptic plasticity for one-shot learning using a differentiable simulation of the neuromorphic hardware. This meta-training process refines a hardware local three-factor synaptic plasticity rule and its associated hyperparameters to align with the trained task domain. In a subsequent deployment stage, these optimized hyperparameters enable fast, data-efficient, and accurate learning of new classes. We demonstrate our approach using event-driven vision sensor data and the Intel Loihi neuromorphic processor with its plasticity dynamics, achieving real-time one-shot learning of new classes that is vastly improved over transfer learning. Our methodology can be deployed with arbitrary plasticity models and can be applied to situations demanding quick learning and adaptation at the edge, such as navigating unfamiliar environments or learning unexpected categories of data through user engagement.
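The abstract describes the rule only as a hardware-local, three-factor plasticity rule whose hyperparameters are meta-trained. As a rough illustration of what such a rule looks like, here is a minimal NumPy sketch of a generic three-factor update, where the trace time constants and learning rate stand in for the meta-trained hyperparameters; all names and values are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical meta-trained hyperparameters (placeholders, not values from the paper).
TAU_PRE, TAU_POST, ETA = 20.0, 40.0, 0.05

def three_factor_update(w, pre_spikes, post_spikes, modulator, dt=1.0):
    """Generic three-factor plasticity: dw ~ eta * third_factor * post_trace x pre_trace."""
    pre_trace = np.zeros(pre_spikes.shape[1])
    post_trace = np.zeros(post_spikes.shape[1])
    for t in range(pre_spikes.shape[0]):
        # Low-pass filtered eligibility traces of pre- and postsynaptic spikes.
        pre_trace += (-pre_trace / TAU_PRE) * dt + pre_spikes[t]
        post_trace += (-post_trace / TAU_POST) * dt + post_spikes[t]
        # Weight update gated by the third (modulatory / error) factor.
        w += ETA * modulator * np.outer(post_trace, pre_trace) * dt
    return w

# Toy usage: 5 presynaptic and 3 postsynaptic neurons over 50 time steps.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 5))
pre = (rng.random((50, 5)) < 0.1).astype(float)
post = (rng.random((50, 3)) < 0.1).astype(float)
print(three_factor_update(w, pre, post, modulator=1.0).shape)
```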
Citations: 0
SpikingSSMs: Learning Long Sequences with Sparse and Parallel Spiking State Space Models
Pub Date: 2024-08-27 DOI: arxiv-2408.14909
Shuaijie Shen, Chao Wang, Renzhuo Huang, Yan Zhong, Qinghai Guo, Zhichao Lu, Jianguo Zhang, Luziwei Leng
Known as low energy consumption networks, spiking neural networks (SNNs) have gained a lot of attention within the past decades. While SNNs are increasingly competitive with artificial neural networks (ANNs) for vision tasks, they are rarely used for long sequence tasks, despite their intrinsic temporal dynamics. In this work, we develop spiking state space models (SpikingSSMs) for long sequence learning by leveraging the sequence learning abilities of state space models (SSMs). Inspired by dendritic neuron structure, we hierarchically integrate neuronal dynamics with the original SSM block, meanwhile realizing sparse synaptic computation. Furthermore, to solve the conflict of event-driven neuronal dynamics with parallel computing, we propose a light-weight surrogate dynamic network which accurately predicts the after-reset membrane potential and is compatible with learnable thresholds, enabling orders of magnitude acceleration in training speed compared with conventional iterative methods. On the long range arena benchmark task, SpikingSSM achieves competitive performance with state-of-the-art SSMs while realizing on average 90% network sparsity. On language modeling, our network significantly surpasses existing spiking large language models (spikingLLMs) on the WikiText-103 dataset with only a third of the model size, demonstrating its potential as a backbone architecture for low computation cost LLMs.
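For readers unfamiliar with the combination, a toy sketch of the sequential form of such a block may help: a diagonal linear state space recurrence whose readout is integrated by a spiking neuron with a threshold and reset. This only shows the iterative dynamics that the paper's surrogate dynamic network is designed to parallelize; the actual SpikingSSM block and its parallelization are not reproduced here, and all parameter choices below are assumptions.

```python
import numpy as np

def spiking_ssm_step(h, x, A, B, C, v, threshold=1.0):
    """One time step of a toy diagonal SSM followed by a spiking readout neuron.

    h : (d_state,) SSM hidden state      x : scalar input
    A : (d_state,) diagonal transition   B, C : (d_state,) input/output maps
    v : membrane potential of the readout neuron
    """
    h = A * h + B * x          # linear state space recurrence
    v = v + float(C @ h)       # integrate the SSM readout into the membrane
    spike = 1.0 if v >= threshold else 0.0
    v = v - threshold * spike  # soft reset after a spike
    return h, v, spike

# Toy usage over a random input sequence.
rng = np.random.default_rng(0)
d = 8
A = np.exp(-rng.random(d))                 # stable per-dimension decays
B, C = rng.normal(size=d), rng.normal(size=d)
h, v = np.zeros(d), 0.0
spikes = []
for x in rng.normal(size=100):
    h, v, s = spiking_ssm_step(h, x=x, A=A, B=B, C=C, v=v)
    spikes.append(s)
print(f"firing rate: {np.mean(spikes):.2f}")
```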
Citations: 0
Distance-Forward Learning: Enhancing the Forward-Forward Algorithm Towards High-Performance On-Chip Learning
Pub Date: 2024-08-27 DOI: arxiv-2408.14925
Yujie Wu, Siyuan Xu, Jibin Wu, Lei Deng, Mingkun Xu, Qinghao Wen, Guoqi Li
The Forward-Forward (FF) algorithm was recently proposed as a local learning method to address the limitations of backpropagation (BP), offering biological plausibility along with memory-efficient and highly parallelized computational benefits. However, it suffers from suboptimal performance and poor generalization, largely due to inadequate theoretical support and a lack of effective learning strategies. In this work, we reformulate FF using distance metric learning and propose a distance-forward algorithm (DF) to improve FF performance in supervised vision tasks while preserving its local computational properties, making it competitive for efficient on-chip learning. To achieve this, we reinterpret FF through the lens of centroid-based metric learning and develop a goodness-based N-pair margin loss to facilitate the learning of discriminative features. Furthermore, we integrate layer-collaboration local update strategies to reduce the information loss caused by greedy local parameter updates. Our method surpasses existing FF models and other advanced local learning approaches, with accuracies of 99.7% on MNIST, 88.2% on CIFAR-10, 59% on CIFAR-100, 95.9% on SVHN, and 82.5% on ImageNette. Moreover, it achieves comparable performance with less than 40% of the memory cost of BP training, while exhibiting stronger robustness to multiple types of hardware-related noise, demonstrating its potential for online learning and energy-efficient computation on neuromorphic chips.
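The abstract describes the objective only at a high level. The following is a hedged PyTorch sketch of a centroid-based, margin-style local loss in that spirit: each layer's features are pulled toward their class centroid and pushed away from other centroids by a margin. The batch-centroid construction and margin value are assumptions for illustration, not the paper's exact goodness-based N-pair loss.

```python
import torch
import torch.nn.functional as F

def centroid_margin_loss(features, labels, num_classes, margin=1.0):
    """Toy local objective: each sample should be closer to its own class
    centroid than to any other centroid by at least `margin` (hinge on the gap)."""
    # Batch-wise class centroids (an illustrative choice, not the paper's).
    centroids = torch.stack([features[labels == c].mean(dim=0)
                             for c in range(num_classes)])
    dists = torch.cdist(features, centroids)            # (batch, num_classes)
    pos = dists[torch.arange(len(labels)), labels]      # distance to own centroid
    own = F.one_hot(labels, num_classes).bool()
    neg = dists.masked_fill(own, float("inf"))          # exclude own class
    gap = (pos.unsqueeze(1) - neg + margin).clamp(min=0)
    return gap.sum(dim=1).mean()

# Toy usage: in FF-style training such a loss would be applied locally, per layer.
feats = torch.randn(32, 64, requires_grad=True)
labels = torch.arange(32) % 10                           # every class present
loss = centroid_margin_loss(feats, labels, num_classes=10)
loss.backward()
print(float(loss))
```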
Citations: 0
PMSN: A Parallel Multi-compartment Spiking Neuron for Multi-scale Temporal Processing
Pub Date: 2024-08-27 DOI: arxiv-2408.14917
Xinyi Chen, Jibin Wu, Chenxiang Ma, Yinsong Yan, Yujie Wu, Kay Chen Tan
Spiking Neural Networks (SNNs) hold great potential to realize brain-inspired, energy-efficient computational systems. However, current SNNs still fall short in terms of multi-scale temporal processing compared to their biological counterparts. This limitation has resulted in poor performance in many pattern recognition tasks with information that varies across different timescales. To address this issue, we put forward a novel spiking neuron model called Parallel Multi-compartment Spiking Neuron (PMSN). The PMSN emulates biological neurons by incorporating multiple interacting substructures and allows for flexible adjustment of the substructure counts to effectively represent temporal information across diverse timescales. Additionally, to address the computational burden associated with the increased complexity of the proposed model, we introduce two parallelization techniques that decouple the temporal dependencies of neuronal updates, enabling parallelized training across different time steps. Our experimental results on a wide range of pattern recognition tasks demonstrate the superiority of PMSN. It outperforms other state-of-the-art spiking neuron models in terms of its temporal processing capacity, training speed, and computation cost. Specifically, compared with the commonly used Leaky Integrate-and-Fire neuron, PMSN offers a simulation acceleration of over 10$\times$ and a 30% improvement in accuracy on the Sequential CIFAR10 dataset, while maintaining comparable computational cost.
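As a point of reference for the comparison above, here is a minimal sketch of the sequential Leaky Integrate-and-Fire dynamics that PMSN is measured against. PMSN replaces this single compartment with multiple interacting substructures and parallelizes the recurrence over time; those parts are not shown, and the constants below are illustrative.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, threshold=1.0):
    """Sequential Leaky Integrate-and-Fire dynamics: the membrane potential
    decays, integrates the input, and fires with a soft reset at threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * (1.0 - 1.0 / tau) + x        # leaky integration
        s = float(v >= threshold)            # spike generation
        v = v - threshold * s                # soft reset
        spikes.append(s)
    return np.array(spikes)

print(lif_forward(np.full(10, 0.6)))
```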
Citations: 0
Research Advances and New Paradigms for Biology-inspired Spiking Neural Networks
Pub Date: 2024-08-26 DOI: arxiv-2408.13996
Tianyu Zheng, Liyuan Han, Tielin Zhang
Spiking neural networks (SNNs) are gaining popularity in the computational simulation and artificial intelligence fields owing to their biological plausibility and computational efficiency. This paper explores the historical development of SNNs and concludes that these two fields are intersecting and merging rapidly. Following the successful application of Dynamic Vision Sensors (DVS) and Dynamic Audio Sensors (DAS), SNNs have found some proper paradigms, such as continuous visual signal tracking, automatic speech recognition, and reinforcement learning for continuous control, that have extensively supported their key features, including spike encoding, neuronal heterogeneity, specific functional circuits, and multiscale plasticity. Compared to these real-world paradigms, the brain contains a spiking version of the biology-world paradigm, which exhibits a similar level of complexity and is usually considered a mirror of the real world. Considering the projected rapid development of invasive and parallel Brain-Computer Interfaces (BCI), as well as the new BCI-based paradigms that include online pattern recognition and stimulus control of biological spike trains, SNNs naturally leverage their advantages in energy efficiency, robustness, and flexibility. The biological brain has inspired the present study of SNNs and effective SNN machine-learning algorithms, which can help enhance neuroscience discoveries in the brain by applying them to the new BCI paradigm. Such two-way interactions with positive feedback can accelerate brain science research and brain-inspired intelligence technology.
Citations: 0
Estimating Uncertainty with Implicit Quantile Network
Pub Date: 2024-08-26 DOI: arxiv-2408.14525
Yi Hung Lim
Uncertainty quantification is an important part of many performance-critical applications. This paper provides a simple alternative to existing approaches such as ensemble learning and Bayesian neural networks. By directly modeling the loss distribution with an Implicit Quantile Network, we get an estimate of how uncertain the model is of its predictions. For experiments with the MNIST and CIFAR datasets, the mean of the estimated loss distribution is 2x higher for incorrect predictions. When data with high estimated uncertainty is removed from the test dataset, the accuracy of the model goes up by as much as 10%. This method is simple to implement while offering important information to applications where the user has to know when the model could be wrong (e.g. deep learning for healthcare).
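A hedged sketch of the core idea: an implicit quantile head takes a feature vector and a sampled quantile level tau, and is trained with the pinball (quantile regression) loss, so that sampling many tau values traces out the distribution of the per-sample loss. The architecture details below (cosine tau embedding, layer sizes, target construction) follow common IQN practice rather than the paper.

```python
import math
import torch
import torch.nn as nn

class ImplicitQuantileHead(nn.Module):
    """Toy implicit quantile network: maps (features, tau) to the tau-quantile
    of a scalar target (here, the per-sample loss the paper models)."""
    def __init__(self, feat_dim, n_cos=64):
        super().__init__()
        self.n_cos = n_cos
        self.tau_embed = nn.Linear(n_cos, feat_dim)
        self.out = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, 1))

    def forward(self, feats, tau):
        # Cosine embedding of tau, as in the IQN literature.
        i = torch.arange(1, self.n_cos + 1, device=feats.device).float()
        phi = torch.relu(self.tau_embed(torch.cos(math.pi * i * tau.unsqueeze(-1))))
        return self.out(feats * phi).squeeze(-1)

def quantile_loss(pred, target, tau):
    """Pinball loss: asymmetric penalty that makes `pred` the tau-quantile."""
    err = target - pred
    return torch.max(tau * err, (tau - 1) * err).mean()

# Toy usage: predict quantiles of per-sample losses from 32-dim features.
feats = torch.randn(16, 32)
target = torch.rand(16)                 # stand-in for observed per-sample losses
tau = torch.rand(16)                    # one random quantile level per sample
head = ImplicitQuantileHead(32)
loss = quantile_loss(head(feats, tau), target, tau)
loss.backward()
```

At test time, one would query the trained head at several tau values and use the spread (or mean) of the predicted loss quantiles as the uncertainty score.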
Citations: 0
Discovering Long-Term Effects on Parameter Efficient Fine-tuning
Pub Date: 2024-08-24 DOI: arxiv-2409.06706
Gaole Dai, Yiming Tang, Chunkai Fan, Qizhe Zhang, Zhi Zhang, Yulu Gan, Chengqing Zeng, Shanghang Zhang, Tiejun Huang
Pre-trained Artificial Neural Networks (ANNs) exhibit robust pattern recognition capabilities and share extensive similarities with the human brain, specifically Biological Neural Networks (BNNs). We are particularly intrigued by these models' ability to acquire new knowledge through fine-tuning. In this regard, Parameter-efficient Fine-tuning (PEFT) has gained widespread adoption as a substitute for full fine-tuning due to its reduced training cost and its mitigation of over-fitting risks by limiting the number of trainable parameters during adaptation. Since both ANNs and BNNs propagate information layer-by-layer, a common analogy can be drawn: weights in ANNs represent synapses in BNNs, while features (also known as latent variables or logits) in ANNs represent neurotransmitters released by neurons in BNNs. Mainstream PEFT methods aim to adjust feature or parameter values using only a limited number of trainable parameters (usually less than 1% of the total parameters), yet achieve surprisingly good results. Building upon this clue, we delve deeper into exploring the connections between feature adjustment and parameter adjustment, resulting in our proposed method Synapses & Neurons (SAN), which learns scaling matrices for features and propagates their effects towards posterior weight matrices. Our approach draws strong inspiration from well-known neuroscience phenomena - Long-term Potentiation (LTP) and Long-term Depression (LTD) - which also reveal the relationship between synapse development and neurotransmitter release levels. We conducted extensive comparisons of PEFT on 26 datasets using attention-based networks as well as convolution-based networks, leading to significant improvements compared to other tuning methods (+8.5% over full fine-tuning, +7% over Visual Prompt Tuning, and +3.2% over LoRA). The code will be released.
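The mechanism, as described, amounts to learning a scaling of intermediate features and then absorbing that scaling into the following layer's weights. The PyTorch sketch below illustrates that general scale-and-fold idea on a pair of linear layers; it is an assumption-laden toy, not the exact SAN formulation. The identity it relies on is W_next(s * h) = (W_next diag(s)) h, so the learned scale costs nothing extra at deployment.

```python
import torch
import torch.nn as nn

class ScaledFrozenLinear(nn.Module):
    """Frozen pre-trained linear layer followed by a learnable per-feature
    scale: a toy stand-in for SAN-style feature scaling (only the scale
    vector is trainable)."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False
        self.scale = nn.Parameter(torch.ones(linear.out_features))

    def forward(self, x):
        return self.linear(x) * self.scale

def fold_scale_into_next(scale: torch.Tensor, next_linear: nn.Linear):
    """Propagate the learned scale into the following (posterior) weight
    matrix so the deployed model needs no extra multiply."""
    with torch.no_grad():
        next_linear.weight.mul_(scale)          # broadcast over the input dim
    return next_linear

# Toy usage: two pre-trained layers, only the scale of the first is tuned.
layer1, layer2 = nn.Linear(16, 32), nn.Linear(32, 8)
adapted = ScaledFrozenLinear(layer1)
x = torch.randn(4, 16)
out_train = layer2(adapted(x))
# After tuning, fold the scale into layer2 and drop the extra multiply.
layer2 = fold_scale_into_next(adapted.scale.detach(), layer2)
out_deploy = layer2(layer1(x))
print(torch.allclose(out_train, out_deploy, atol=1e-6))
```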
Citations: 0
VFM-Det: Towards High-Performance Vehicle Detection via Large Foundation Models
Pub Date: 2024-08-23 DOI: arxiv-2408.13031
Wentao Wu, Fanghua Hong, Xiao Wang, Chenglong Li, Jin Tang
Existing vehicle detectors are usually obtained by training a typical detector (e.g., YOLO, RCNN, DETR series) on vehicle images based on a pre-trained backbone (e.g., ResNet, ViT). Some researchers also exploit and enhance the detection performance using pre-trained large foundation models. However, we think these detectors may only get sub-optimal results because the large models they use are not specifically designed for vehicles. In addition, their results heavily rely on visual features, and they seldom consider the alignment between the vehicle's semantic information and visual representations. In this work, we propose a new vehicle detection paradigm based on a pre-trained foundation vehicle model (VehicleMAE) and a large language model (T5), termed VFM-Det. It follows the region proposal-based detection framework, and the features of each proposal can be enhanced using VehicleMAE. More importantly, we propose a new VAtt2Vec module that predicts the vehicle semantic attributes of these proposals and transforms them into feature vectors to enhance the vision features via contrastive learning. Extensive experiments on three vehicle detection benchmark datasets thoroughly prove the effectiveness of our vehicle detector. Specifically, our model improves the baseline approach by $+5.1\%$ and $+6.2\%$ on the $AP_{0.5}$ and $AP_{0.75}$ metrics, respectively, on the Cityscapes dataset. The source code of this work will be released at https://github.com/Event-AHU/VFM-Det.
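As a rough illustration of the attribute-to-vector idea only (not the paper's implementation), the sketch below predicts per-proposal attribute probabilities, maps them through an attribute embedding table into a feature vector, and adds it to the visual proposal feature. The contrastive alignment step and the T5/VehicleMAE components are omitted, and all names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class AttributeToVectorSketch(nn.Module):
    """Toy sketch of turning predicted semantic attributes of a region proposal
    into a feature vector that enhances the visual feature."""
    def __init__(self, feat_dim=256, n_attrs=12):
        super().__init__()
        self.attr_head = nn.Linear(feat_dim, n_attrs)                    # attribute prediction
        self.attr_embed = nn.Parameter(torch.randn(n_attrs, feat_dim))   # one vector per attribute

    def forward(self, proposal_feats):
        probs = torch.sigmoid(self.attr_head(proposal_feats))   # (N, n_attrs)
        attr_vec = probs @ self.attr_embed                       # (N, feat_dim)
        return proposal_feats + attr_vec, probs                  # enhanced features

feats = torch.randn(8, 256)          # 8 region-proposal features
enhanced, attr_probs = AttributeToVectorSketch()(feats)
print(enhanced.shape, attr_probs.shape)
```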
Citations: 0
Contrastive Representation Learning for Dynamic Link Prediction in Temporal Networks
Pub Date: 2024-08-22 DOI: arxiv-2408.12753
Amirhossein Nouranizadeh, Fatemeh Tabatabaei Far, Mohammad Rahmati
Evolving networks are complex data structures that emerge in a wide range of systems in science and engineering. Learning expressive representations for such networks that encode their structural connectivity and temporal evolution is essential for downstream data analytics and machine learning applications. In this study, we introduce a self-supervised method for learning representations of temporal networks and employ these representations in the dynamic link prediction task. While temporal networks are typically characterized as a sequence of interactions over the continuous time domain, our study focuses on their discrete-time versions. This enables us to balance the trade-off between computational complexity and precise modeling of the interactions. We propose a recurrent message-passing neural network architecture for modeling the information flow over time-respecting paths of temporal networks. The key feature of our method is the contrastive training objective of the model, which is a combination of three loss functions: link prediction, graph reconstruction, and contrastive predictive coding losses. The contrastive predictive coding objective is implemented using infoNCE losses at both the local and global scales of the input graphs. We empirically show that the additional self-supervised losses enhance training and improve the model's performance in the dynamic link prediction task. The proposed method is tested on the Enron, COLAB, and Facebook datasets and exhibits superior results compared to existing models.
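The contrastive predictive coding term is implemented with infoNCE. For concreteness, here is a standard InfoNCE loss in PyTorch of the kind that could be applied at the node (local) and graph (global) level; the two-view pairing in the usage example is an assumption for illustration, not the paper's exact choice of anchors and positives.

```python
import torch
import torch.nn.functional as F

def info_nce(queries, keys, temperature=0.1):
    """Standard InfoNCE: queries[i] and keys[i] form the positive pair,
    all other keys in the batch act as negatives."""
    q = F.normalize(queries, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature                 # (N, N) similarity matrix
    targets = torch.arange(len(q), device=q.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: node embeddings from two snapshots/views of the same temporal graph.
z_a, z_b = torch.randn(64, 128), torch.randn(64, 128)
print(info_nce(z_a, z_b).item())
```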
Citations: 0
Adaptive Spiking Neural Networks with Hybrid Coding
Pub Date: 2024-08-22 DOI: arxiv-2408.12407
Huaxu He
The Spiking Neural Network (SNN), due to its unique spiking-driven nature, is a more energy-efficient and effective neural network compared to Artificial Neural Networks (ANNs). The encoding method directly influences the overall performance of the network, and currently, direct encoding is primarily used for directly trained SNNs. When working with static image datasets, direct encoding inputs the same feature map at every time step, failing to fully exploit the spatiotemporal properties of SNNs. While temporal encoding converts input data into spike trains with spatiotemporal characteristics, traditional SNNs utilize the same neurons when processing input data across different time steps, limiting their ability to integrate and utilize spatiotemporal information effectively. To address this, this paper employs temporal encoding and proposes the Adaptive Spiking Neural Network (ASNN), enhancing the utilization of temporal encoding in conventional SNNs. Additionally, temporal encoding is less frequently used because short time steps can lead to significant loss of input data information, often necessitating a higher number of time steps in practical applications. However, training large SNNs with long time steps is challenging due to hardware constraints. To overcome this, this paper introduces a hybrid encoding approach that not only reduces the required time steps for training but also continues to improve the overall network performance. Notably, significant improvements in classification performance are observed on both the Spikformer and Spiking ResNet architectures. Our code is available at https://github.com/hhx0320/ASNN.
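To make the contrast between the two encoding families concrete, here is a small NumPy sketch of the two standard single-image-to-spike-train encoders they build on: rate coding (information in spike counts) and latency coding (information in spike timing). How ASNN's hybrid scheme actually combines or adapts them is not specified in the abstract, so this is background illustration only.

```python
import numpy as np

def rate_encode(image, n_steps):
    """Rate coding: at each time step a pixel spikes with probability equal
    to its normalized intensity, so the information lives in the spike count."""
    rng = np.random.default_rng(0)
    return (rng.random((n_steps,) + image.shape) < image).astype(np.float32)

def latency_encode(image, n_steps):
    """Temporal (latency) coding: brighter pixels spike earlier; each pixel
    emits a single spike whose timing carries the information."""
    t_spike = np.round((1.0 - image) * (n_steps - 1)).astype(int)
    spikes = np.zeros((n_steps,) + image.shape, dtype=np.float32)
    for idx, t in np.ndenumerate(t_spike):
        spikes[(t,) + idx] = 1.0
    return spikes

img = np.random.default_rng(1).random((4, 4))   # toy normalized "image"
print(rate_encode(img, 8).sum(), latency_encode(img, 8).sum())
```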
Citations: 0