
arXiv - CS - Neural and Evolutionary Computing: Latest Publications

Distance-Forward Learning: Enhancing the Forward-Forward Algorithm Towards High-Performance On-Chip Learning
Pub Date : 2024-08-27 DOI: arxiv-2408.14925
Yujie Wu, Siyuan Xu, Jibin Wu, Lei Deng, Mingkun Xu, Qinghao Wen, Guoqi Li
The Forward-Forward (FF) algorithm was recently proposed as a local learning method to address the limitations of backpropagation (BP), offering biological plausibility along with memory-efficient and highly parallelized computational benefits. However, it suffers from suboptimal performance and poor generalization, largely due to inadequate theoretical support and a lack of effective learning strategies. In this work, we reformulate FF using distance metric learning and propose a distance-forward algorithm (DF) to improve FF performance in supervised vision tasks while preserving its local computational properties, making it competitive for efficient on-chip learning. To achieve this, we reinterpret FF through the lens of centroid-based metric learning and develop a goodness-based N-pair margin loss to facilitate the learning of discriminative features. Furthermore, we integrate layer-collaboration local update strategies to reduce information loss caused by greedy local parameter updates. Our method surpasses existing FF models and other advanced local learning approaches, with accuracies of 99.7% on MNIST, 88.2% on CIFAR-10, 59% on CIFAR-100, 95.9% on SVHN, and 82.5% on ImageNette. Moreover, it achieves comparable performance with less than 40% memory cost compared to BP training, while exhibiting stronger robustness to multiple types of hardware-related noise, demonstrating its potential for online learning and energy-efficient computation on neuromorphic chips.
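The goodness-based N-pair margin loss described above can be illustrated with a minimal sketch. Here `goodness` is assumed to be the negative squared distance to a class centroid, and the function names and margin value are illustrative, not the authors' implementation:

```python
def goodness(features, centroid):
    """Negative squared Euclidean distance to a class centroid:
    features closer to the centroid get higher goodness (illustrative
    centroid-based definition, assumed for this sketch)."""
    return -sum((f - c) ** 2 for f, c in zip(features, centroid))

def n_pair_margin_loss(features, centroids, label, margin=1.0):
    """Hinge-style N-pair margin loss over class centroids: the goodness
    of the true class should exceed every other class's goodness by at
    least `margin`."""
    pos = goodness(features, centroids[label])
    loss = 0.0
    for k, centroid in enumerate(centroids):
        if k == label:
            continue  # only compare against negative (wrong-class) centroids
        loss += max(0.0, margin - (pos - goodness(features, centroid)))
    return loss
```

A sample lying far closer to its own class centroid than to any other incurs zero loss; a sample equidistant between centroids is penalized by the full margin for each violated pair.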
Citations: 0
PMSN: A Parallel Multi-compartment Spiking Neuron for Multi-scale Temporal Processing
Pub Date : 2024-08-27 DOI: arxiv-2408.14917
Xinyi Chen, Jibin Wu, Chenxiang Ma, Yinsong Yan, Yujie Wu, Kay Chen Tan
Spiking Neural Networks (SNNs) hold great potential to realize brain-inspired, energy-efficient computational systems. However, current SNNs still fall short in terms of multi-scale temporal processing compared to their biological counterparts. This limitation has resulted in poor performance in many pattern recognition tasks with information that varies across different timescales. To address this issue, we put forward a novel spiking neuron model called Parallel Multi-compartment Spiking Neuron (PMSN). The PMSN emulates biological neurons by incorporating multiple interacting substructures and allows for flexible adjustment of the substructure counts to effectively represent temporal information across diverse timescales. Additionally, to address the computational burden associated with the increased complexity of the proposed model, we introduce two parallelization techniques that decouple the temporal dependencies of neuronal updates, enabling parallelized training across different time steps. Our experimental results on a wide range of pattern recognition tasks demonstrate the superiority of PMSN. It outperforms other state-of-the-art spiking neuron models in terms of its temporal processing capacity, training speed, and computation cost. Specifically, compared with the commonly used Leaky Integrate-and-Fire neuron, PMSN offers a simulation acceleration of over 10$\times$ and a 30% improvement in accuracy on the Sequential CIFAR10 dataset, while maintaining comparable computational cost.
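For context on the sequential bottleneck that PMSN's parallelization techniques target, here is a minimal sketch of the Leaky Integrate-and-Fire baseline mentioned above; the decay rule and parameter values are illustrative assumptions, not the paper's exact model:

```python
def lif_simulate(inputs, tau=2.0, threshold=1.0):
    """Minimal Leaky Integrate-and-Fire neuron: the membrane potential v
    leaks by a factor (1 - 1/tau) each step, integrates the input, and
    emits a spike with a hard reset when it crosses `threshold`. Note that
    each step depends on the previous one; this step-by-step dependency is
    what parallel multi-compartment formulations aim to decouple."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * (1.0 - 1.0 / tau) + x   # leak, then integrate input
        if v >= threshold:
            spikes.append(1)
            v = 0.0                      # hard reset after a spike
        else:
            spikes.append(0)
    return spikes
```

With a constant sub-threshold input, the potential accumulates over several steps before the first spike, which shows why the update cannot naively be computed for all time steps at once.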
Citations: 0
Research Advances and New Paradigms for Biology-inspired Spiking Neural Networks
Pub Date : 2024-08-26 DOI: arxiv-2408.13996
Tianyu Zheng, Liyuan Han, Tielin Zhang
Spiking neural networks (SNNs) are gaining popularity in the computational simulation and artificial intelligence fields owing to their biological plausibility and computational efficiency. This paper explores the historical development of SNNs and concludes that these two fields are intersecting and merging rapidly. Following the successful application of Dynamic Vision Sensors (DVS) and Dynamic Audio Sensors (DAS), SNNs have found some proper paradigms, such as continuous visual signal tracking, automatic speech recognition, and reinforcement learning for continuous control, that have extensively supported their key features, including spike encoding, neuronal heterogeneity, specific functional circuits, and multiscale plasticity. Compared to these real-world paradigms, the brain contains a spiking version of the biology-world paradigm, which exhibits a similar level of complexity and is usually considered a mirror of the real world. Considering the projected rapid development of invasive and parallel Brain-Computer Interfaces (BCIs), as well as the new BCI-based paradigms that include online pattern recognition and stimulus control of biological spike trains, SNNs naturally leverage their advantages in energy efficiency, robustness, and flexibility. The biological brain has inspired the present study of SNNs and effective SNN machine-learning algorithms, which can help enhance neuroscience discoveries in the brain by applying them to the new BCI paradigm. Such two-way interactions with positive feedback can accelerate brain science research and brain-inspired intelligence technology.
Citations: 0
Estimating Uncertainty with Implicit Quantile Network
Pub Date : 2024-08-26 DOI: arxiv-2408.14525
Yi Hung Lim
Uncertainty quantification is an important part of many performance-critical applications. This paper provides a simple alternative to existing approaches such as ensemble learning and Bayesian neural networks. By directly modeling the loss distribution with an Implicit Quantile Network, we get an estimate of how uncertain the model is of its predictions. For experiments with the MNIST and CIFAR datasets, the mean of the estimated loss distribution is 2x higher for incorrect predictions. When data with high estimated uncertainty is removed from the test dataset, the accuracy of the model goes up by as much as 10%. This method is simple to implement while offering important information to applications where the user has to know when the model could be wrong (e.g., deep learning for healthcare).
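Implicit quantile networks are commonly trained with the quantile-regression (pinball) loss. The scalar sketch below is general background on that loss, not the paper's code; the function name is illustrative:

```python
def pinball_loss(prediction, target, quantile):
    """Quantile-regression (pinball) loss used to train quantile models:
    for a target quantile level q in (0, 1), under-predictions are
    weighted by q and over-predictions by (1 - q), so minimizing it
    pushes `prediction` toward the q-th quantile of the target
    distribution."""
    error = target - prediction
    return max(quantile * error, (quantile - 1.0) * error)
```

Training one network across many sampled quantile levels yields a full loss distribution per input, and the spread between high and low predicted quantiles can then serve as an uncertainty estimate.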
Citations: 0
Discovering Long-Term Effects on Parameter Efficient Fine-tuning
Pub Date : 2024-08-24 DOI: arxiv-2409.06706
Gaole Dai, Yiming Tang, Chunkai Fan, Qizhe Zhang, Zhi Zhang, Yulu Gan, Chengqing Zeng, Shanghang Zhang, Tiejun Huang
Pre-trained Artificial Neural Networks (ANNs) exhibit robust pattern recognition capabilities and share extensive similarities with the human brain, specifically Biological Neural Networks (BNNs). We are particularly intrigued by these models' ability to acquire new knowledge through fine-tuning. In this regard, Parameter-efficient Fine-tuning (PEFT) has gained widespread adoption as a substitute for full fine-tuning due to its cost reduction in training and mitigation of over-fitting risks by limiting the number of trainable parameters during adaptation. Since both ANNs and BNNs propagate information layer-by-layer, a common analogy can be drawn: weights in ANNs represent synapses in BNNs, while features (also known as latent variables or logits) in ANNs represent neurotransmitters released by neurons in BNNs. Mainstream PEFT methods aim to adjust feature or parameter values using only a limited number of trainable parameters (usually less than 1% of the total parameters), yet achieve surprisingly good results. Building upon this clue, we delve deeper into exploring the connections between feature adjustment and parameter adjustment, resulting in our proposed method Synapses & Neurons (SAN), which learns scaling matrices for features and propagates their effects towards posterior weight matrices. Our approach draws strong inspiration from well-known neuroscience phenomena: Long-term Potentiation (LTP) and Long-term Depression (LTD), which also reveal the relationship between synapse development and neurotransmitter release levels. We conducted extensive comparisons of PEFT on 26 datasets using attention-based networks as well as convolution-based networks, leading to significant improvements compared to other tuning methods (+8.5% over full fine-tuning, +7% over Visual Prompt Tuning, and +3.2% over LoRA). The code will be released.
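The core algebraic idea of learning feature scales and propagating their effect into posterior weight matrices can be sketched for a single linear layer: scaling input features is equivalent to scaling the corresponding weight columns. The function names are hypothetical and this illustrates only the identity, not SAN itself:

```python
def apply_feature_scale(weights, features, scale):
    """Scale the input features elementwise, then apply a linear layer
    (`weights` is a list of rows)."""
    scaled = [s * f for s, f in zip(scale, features)]
    return [sum(w, 0.0) if False else sum(wi * x for wi, x in zip(row, scaled))
            for row in weights for w in [row]][: len(weights)]

def fold_scale_into_weights(weights, scale):
    """Propagate a feature scaling into the posterior weight matrix:
    multiplying input feature j by scale[j] is the same as multiplying
    column j of the weight matrix by scale[j]."""
    return [[wi * s for wi, s in zip(row, scale)] for row in weights]
```

So a learned feature-scaling vector can be "folded" into the next layer's weights after training, changing the effective parameters without adding inference cost.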
Citations: 0
VFM-Det: Towards High-Performance Vehicle Detection via Large Foundation Models
Pub Date : 2024-08-23 DOI: arxiv-2408.13031
Wentao Wu, Fanghua Hong, Xiao Wang, Chenglong Li, Jin Tang
Existing vehicle detectors are usually obtained by training a typical detector (e.g., YOLO, RCNN, DETR series) on vehicle images based on a pre-trained backbone (e.g., ResNet, ViT). Some researchers also exploit and enhance the detection performance using pre-trained large foundation models. However, we think these detectors may only get sub-optimal results because the large models they use are not specifically designed for vehicles. In addition, their results heavily rely on visual features, and they seldom consider the alignment between the vehicle's semantic information and visual representations. In this work, we propose a new vehicle detection paradigm based on a pre-trained foundation vehicle model (VehicleMAE) and a large language model (T5), termed VFM-Det. It follows the region proposal-based detection framework, and the features of each proposal can be enhanced using VehicleMAE. More importantly, we propose a new VAtt2Vec module that predicts the vehicle semantic attributes of these proposals and transforms them into feature vectors to enhance the vision features via contrastive learning. Extensive experiments on three vehicle detection benchmark datasets thoroughly proved the effectiveness of our vehicle detector. Specifically, our model improves the baseline approach by +5.1% and +6.2% on the $AP_{0.5}$ and $AP_{0.75}$ metrics, respectively, on the Cityscapes dataset. The source code of this work will be released at https://github.com/Event-AHU/VFM-Det.
Citations: 0
Adaptive Spiking Neural Networks with Hybrid Coding
Pub Date : 2024-08-22 DOI: arxiv-2408.12407
Huaxu He
The Spiking Neural Network (SNN), due to its unique spiking-driven nature, is a more energy-efficient and effective neural network compared to Artificial Neural Networks (ANNs). The encoding method directly influences the overall performance of the network, and currently, direct encoding is primarily used for directly trained SNNs. When working with static image datasets, direct encoding inputs the same feature map at every time step, failing to fully exploit the spatiotemporal properties of SNNs. While temporal encoding converts input data into spike trains with spatiotemporal characteristics, traditional SNNs utilize the same neurons when processing input data across different time steps, limiting their ability to integrate and utilize spatiotemporal information effectively. To address this, this paper employs temporal encoding and proposes the Adaptive Spiking Neural Network (ASNN), enhancing the utilization of temporal encoding in conventional SNNs. Additionally, temporal encoding is less frequently used because short time steps can lead to significant loss of input data information, often necessitating a higher number of time steps in practical applications. However, training large SNNs with long time steps is challenging due to hardware constraints. To overcome this, this paper introduces a hybrid encoding approach that not only reduces the required time steps for training but also continues to improve the overall network performance. Notably, significant improvements in classification performance are observed on both Spikformer and Spiking ResNet architectures. Our code is available at https://github.com/hhx0320/ASNN.
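The contrast between direct and temporal encoding described above can be sketched with toy encoders. The latency scheme below, where stronger inputs fire earlier, is one common illustrative choice and not necessarily the paper's exact scheme:

```python
def direct_encode(value, time_steps):
    """Direct encoding: the same analog feature value is presented to the
    network at every time step, so no information varies across steps."""
    return [value] * time_steps

def latency_encode(value, time_steps):
    """Simple latency (temporal) encoding: a value in [0, 1] is turned
    into a single spike, with larger values firing earlier. With few time
    steps, distinct values can map to the same spike time, illustrating
    the information loss mentioned above."""
    if value <= 0.0:
        return [0] * time_steps          # no spike for zero input
    t = min(time_steps - 1, int((1.0 - value) * time_steps))
    return [1 if i == t else 0 for i in range(time_steps)]
```

Under direct encoding the spike train carries only the value; under latency encoding the timing of the spike itself carries information, which is the spatiotemporal structure temporal encoding exploits.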
Citations: 0
When In-memory Computing Meets Spiking Neural Networks -- A Perspective on Device-Circuit-System-and-Algorithm Co-design
Pub Date : 2024-08-22 DOI: arxiv-2408.12767
Abhishek Moitra, Abhiroop Bhattacharjee, Yuhang Li, Youngeun Kim, Priyadarshini Panda
This review explores the intersection of bio-plausible artificial intelligence in the form of Spiking Neural Networks (SNNs) with the analog In-Memory Computing (IMC) domain, highlighting their collective potential for low-power edge computing environments. Through detailed investigation at the device, circuit, and system levels, we highlight the pivotal synergies between SNNs and IMC architectures. Additionally, we emphasize the critical need for comprehensive system-level analyses, considering the inter-dependencies between algorithms, devices, circuit & system parameters, crucial for optimal performance. An in-depth analysis leads to identification of key system-level bottlenecks arising from device limitations, which can be addressed using SNN-specific algorithm-hardware co-design techniques. This review underscores the imperative for holistic device-to-system design space co-exploration, highlighting the critical aspects of hardware and algorithm research endeavors for low-power neuromorphic solutions.
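The analog crossbar dot-product that makes IMC attractive for spike-driven workloads can be sketched in a few lines. This is a generic illustration, not an interface from the review — the read voltage, the Gaussian conductance-noise model, and the function signature are assumptions:

```python
import numpy as np

def crossbar_mac(G, spikes, v_read=0.2, noise_sigma=0.0, rng=None):
    """One analog in-memory MAC: input spikes gate which wordlines receive
    the read voltage; bitline currents sum as I = G^T . V (Kirchhoff)."""
    rng = rng or np.random.default_rng()
    V = spikes.astype(float) * v_read  # spike -> read pulse on the row
    # Device non-ideality modeled as additive conductance noise.
    G_eff = G + (rng.normal(0.0, noise_sigma, G.shape) if noise_sigma else 0.0)
    return G_eff.T @ V
```

Because SNN inputs are binary spikes, the multiply reduces to selectively applying the read voltage — one source of the energy savings the review discusses.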
Citations: 0
Towards Efficient Formal Verification of Spiking Neural Network
Pub Date : 2024-08-20 DOI: arxiv-2408.10900
Baekryun Seong, Jieung Kim, Sang-Ki Ko
Recently, AI research has primarily focused on large language models (LLMs), and increasing accuracy often involves scaling up and consuming more power. The power consumption of AI has become a significant societal issue; in this context, spiking neural networks (SNNs) offer a promising solution. SNNs operate in an event-driven manner, like the human brain, and compress information temporally. These characteristics allow SNNs to significantly reduce power consumption compared to perceptron-based artificial neural networks (ANNs), highlighting them as a next-generation neural network technology. However, societal concerns regarding AI go beyond power consumption, with the reliability of AI models being a global issue. For instance, adversarial attacks on AI models are a well-studied problem in the context of traditional neural networks. Despite their importance, the stability and property verification of SNNs remain in the early stages of research. Most SNN verification methods are time-consuming and barely scalable, making practical applications challenging. In this paper, we introduce temporal encoding to achieve practical performance in verifying the adversarial robustness of SNNs. We conduct a theoretical analysis of this approach and demonstrate its success in verifying SNNs at previously unmanageable scales. Our contribution advances SNN verification to a practical level, facilitating the safer application of SNNs.
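Formal robustness verification typically propagates input intervals layer by layer and checks which outputs are provably fixed. The paper's temporal-encoding method is not spelled out in this listing, so the following is only a generic interval-bound sketch for a single integrate-and-fire layer; the signature, threshold semantics, and ternary status convention are assumptions:

```python
import numpy as np

def if_step_bounds(W, lo, hi, threshold=1.0):
    """Propagate an input interval [lo, hi] through one linear layer followed
    by an integrate-and-fire threshold. Returns per-neuron spike status:
    1 (provably fires), 0 (provably silent), -1 (undecided)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    # Standard interval arithmetic for y = W @ x with x in [lo, hi].
    y_lo = W_pos @ lo + W_neg @ hi
    y_hi = W_pos @ hi + W_neg @ lo
    status = np.full(W.shape[0], -1)
    status[y_lo >= threshold] = 1
    status[y_hi < threshold] = 0
    return status
```

A verifier would iterate such steps over time and layers; the scalability problem the abstract targets comes from the number of time steps this unrolling multiplies in.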
Citations: 0
Physics-Driven AI Correction in Laser Absorption Sensing Quantification
Pub Date : 2024-08-20 DOI: arxiv-2408.10714
Ruiyuan Kang, Panos Liatsis, Meixia Geng, Qingjie Yang
Laser absorption spectroscopy (LAS) quantification is a popular tool used in measuring the temperature and concentration of gases. It has low error tolerance, whereas current ML-based solutions cannot guarantee measurement reliability. In this work, we propose a new framework, SPEC, to address this issue. In addition to the conventional ML estimator-based estimation mode, SPEC also includes a Physics-driven Anomaly Detection module (PAD) to assess the error of the estimation, and a Correction mode designed to correct unreliable estimations. The Correction mode is a network-based optimization algorithm that uses the guidance of the error to iteratively correct the estimation. A hybrid surrogate error model is proposed to estimate the error distribution; it contains an ensemble of networks to simulate the reconstruction error, together with true feasible error computation. A greedy ensemble search is proposed to find the optimal correction robustly and efficiently from the gradient guidance of the surrogate model. The proposed SPEC is validated on test scenarios outside the training distribution. The results show that SPEC can significantly improve estimation quality, and the Correction mode outperforms current network-based optimization algorithms. In addition, SPEC is reconfigurable: it can be easily adapted to different quantification tasks by changing the PAD without retraining the ML estimator.
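The Correction mode described above — iteratively refining an unreliable estimate against a surrogate error model's gradient — can be sketched generically. This is not SPEC's actual design; the learning rate, stopping rule, and callable interface are assumptions for illustration:

```python
import numpy as np

def correct_estimate(x0, surrogate_err, grad_err, lr=0.05, n_iter=200, tol=1e-3):
    """Refine an initial estimate x0 by descending the surrogate error
    model's gradient until the predicted error falls below tol."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        if surrogate_err(x) < tol:  # predicted error small enough: accept
            break
        x -= lr * grad_err(x)       # gradient guidance from the surrogate
    return x
```

In the real framework the surrogate would be the network ensemble described in the abstract; here a simple quadratic stands in for it.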
Citations: 0
Journal
arXiv - CS - Neural and Evolutionary Computing