
arXiv - CS - Neural and Evolutionary Computing: Latest Publications

Connective Viewpoints of Signal-to-Noise Diffusion Models
Pub Date: 2024-08-08 DOI: arxiv-2408.04221
Khanh Doan, Long Tung Vuong, Tuan Nguyen, Anh Tuan Bui, Quyen Tran, Thanh-Toan Do, Dinh Phung, Trung Le
Diffusion models (DM) have become fundamental components of generative models, excelling across various domains such as image creation, audio generation, and complex data interpolation. Signal-to-Noise diffusion models constitute a diverse family covering most state-of-the-art diffusion models. While there have been several attempts to study Signal-to-Noise (S2N) diffusion models from various perspectives, there remains a need for a comprehensive study connecting different viewpoints and exploring new perspectives. In this study, we offer a comprehensive perspective on noise schedulers, examining their role through the lens of the signal-to-noise ratio (SNR) and its connections to information theory. Building upon this framework, we have developed a generalized backward equation to enhance the performance of the inference process.
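
The SNR viewpoint can be made concrete with a standard variance-preserving noise scheduler. The sketch below computes SNR(t) for the well-known cosine schedule; it illustrates only the SNR quantity itself, not the paper's generalized backward equation, which the abstract does not specify.

```python
import numpy as np

def cosine_alpha_bar(t, s=0.008):
    # Cumulative signal level of the cosine schedule, t normalized to [0, 1].
    f = lambda u: np.cos((u + s) / (1 + s) * np.pi / 2) ** 2
    return f(t) / f(0.0)

def snr(t):
    # For a variance-preserving forward process x_t = sqrt(abar)*x_0 + sqrt(1 - abar)*eps,
    # the signal-to-noise ratio is SNR(t) = abar / (1 - abar).
    abar = cosine_alpha_bar(t)
    return abar / (1.0 - abar)

for t in (0.1, 0.5, 0.9):
    print(f"t={t:.1f}  SNR={snr(t):.4f}")
```
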
Citations: 0
Theoretical Advantage of Multiobjective Evolutionary Algorithms for Problems with Different Degrees of Conflict
Pub Date: 2024-08-08 DOI: arxiv-2408.04207
Weijie Zheng
The field of multiobjective evolutionary algorithms (MOEAs) often emphasizes its popularity for optimization problems with conflicting objectives. However, it is still theoretically unknown how MOEAs perform for different degrees of conflict, even for no conflicts, compared with typical approaches outside this field. As the first step to tackle this question, we propose the OneMaxMin$_k$ benchmark class with degree of conflict $k \in [0..n]$, a generalized variant of COCZ and OneMinMax. Two typical non-MOEA approaches, scalarization (the weighted-sum approach) and the $\epsilon$-constraint approach, are considered. We prove that for any set of weights, the set of optima found by the scalarization approach cannot cover the full Pareto front. Although the set of optima of the constrained problems constructed via the $\epsilon$-constraint approach can cover the full Pareto front, the generally used ways (via exterior or nonparameter penalty functions) to solve such constrained problems encounter difficulties. The nonparameter penalty function way cannot construct the set of optima whose function values are the Pareto front, and the exterior way helps (with expected runtime of $O(n \ln n)$ for the randomized local search algorithm for reaching any Pareto front point) but with careful settings of $\epsilon$ and $r$ ($r > 1/(\epsilon + 1 - \lceil \epsilon \rceil)$). In contrast, the generally analyzed MOEAs can efficiently solve OneMaxMin$_k$ without the above careful designs. We prove that (G)SEMO, MOEA/D, NSGA-II, and SMS-EMOA can cover the full Pareto front in $O(\max\{k,1\} n \ln n)$ expected number of function evaluations, which is the same asymptotic runtime as the exterior way in the $\epsilon$-constraint approach with careful settings. As a side result, our results also give the performance analysis of solving a constrained problem via the multiobjective way.
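
The abstract does not spell out the formal definition of OneMaxMin$_k$, so as a hedged illustration the sketch below uses the classical OneMinMax pair (maximize the number of ones and the number of zeros simultaneously) to show the scalarization limitation the paper proves: any fixed weight collapses randomized local search onto at most one region of the Pareto front.

```python
import random

def one_min_max(x):
    # Classical bi-objective benchmark: f1 = number of ones, f2 = number of zeros.
    ones = sum(x)
    return ones, len(x) - ones

def rls(n, fitness, steps=5000):
    # Randomized local search: flip one uniformly chosen bit, keep if not worse.
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        y = x[:]
        y[random.randrange(n)] ^= 1
        if fitness(y) >= fitness(x):
            x = y
    return x

n = 20
# Any weight w != 0.5 drives the search to one extreme of the front; at w = 0.5
# every point scores equally. No single weight recovers the whole front.
for w in (0.1, 0.5, 0.9):
    x = rls(n, lambda z, w=w: w * one_min_max(z)[0] + (1 - w) * one_min_max(z)[1])
    print(f"w={w}: (ones, zeros) = {one_min_max(x)}")
```
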
Citations: 0
ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics
Pub Date: 2024-08-08 DOI: arxiv-2408.04539
Zherui Zhang, Fan Yang, Ran Cheng, Yuxin Ma
Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics within MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.
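
ParetoTracker itself is a visual-analytics system, but any such tool needs per-generation bookkeeping before visualization. As a minimal, hypothetical sketch of that step, the following computes the non-dominated set of each logged generation (for minimization); the optimization log here is invented for illustration.

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical log: objective vectors of the population at each generation.
history = {
    0: [(3.0, 4.0), (2.0, 5.0), (4.0, 4.5)],
    1: [(2.5, 3.5), (2.0, 5.0), (3.0, 3.0)],
}
for gen, pop in history.items():
    print(f"generation {gen}: front = {non_dominated(pop)}")
```
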
Citations: 0
Solving QUBO on the Loihi 2 Neuromorphic Processor
Pub Date: 2024-08-06 DOI: arxiv-2408.03076
Alessandro Pierro, Philipp Stratmann, Gabriel Andres Fonseca Guerra, Sumedh Risbud, Timothy Shea, Ashish Rao Mangalore, Andreas Wild
In this article, we describe an algorithm for solving Quadratic Unconstrained Binary Optimization problems on the Intel Loihi 2 neuromorphic processor. The solver is based on a hardware-aware fine-grained parallel simulated annealing algorithm developed for Intel's neuromorphic research chip Loihi 2. Preliminary results show that our approach can generate feasible solutions in as little as 1 ms and is up to 37x more energy efficient than two baseline solvers running on a CPU. These advantages could be especially relevant for size-, weight-, and power-constrained edge computing applications.
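
QUBO seeks a binary vector $x$ minimizing $x^T Q x$. The following is a plain-CPU simulated annealing sketch of that formulation, the kind of baseline the paper compares against; it is not the fine-grained parallel Loihi 2 implementation.

```python
import math
import random

def qubo_energy(Q, x):
    # E(x) = x^T Q x for a binary vector x.
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def simulated_annealing(Q, steps=20000, t0=2.0, t1=0.01):
    n = len(Q)
    x = [random.randint(0, 1) for _ in range(n)]
    e = qubo_energy(Q, x)
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)  # geometric cooling schedule
        i = random.randrange(n)
        x[i] ^= 1                          # propose a single-bit flip
        e_new = qubo_energy(Q, x)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                      # accept the move
        else:
            x[i] ^= 1                      # reject: undo the flip
    return x, e

Q = [[-1, 2, 0], [2, -1, 2], [0, 2, -1]]   # tiny example instance
print(simulated_annealing(Q))
```
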
Citations: 0
Synaptic Modulation using Interspike Intervals Increases Energy Efficiency of Spiking Neural Networks
Pub Date: 2024-08-06 DOI: arxiv-2408.02961
Dylan Adams, Magda Zajaczkowska, Ashiq Anjum, Andrea Soltoggio, Shirin Dora
Despite basic differences between Spiking Neural Networks (SNN) and Artificial Neural Networks (ANN), most research on SNNs involves adapting ANN-based methods for SNNs. Pruning (dropping connections) and quantization (reducing precision) are often used to improve the energy efficiency of SNNs. These methods are very effective for ANNs, whose energy needs are determined by signals transmitted on synapses. However, the event-driven paradigm in SNNs implies that energy is consumed by spikes. In this paper, we propose a new synapse model whose weights are modulated by Interspike Intervals (ISI), i.e. the time difference between two spikes. SNNs composed of this synapse model, termed ISI Modulated SNNs (IMSNN), can use gradient descent to estimate how the ISI of a neuron changes after updating its synaptic parameters. A higher ISI implies fewer spikes and vice versa. The learning algorithm for IMSNNs exploits this information to selectively propagate gradients such that learning is achieved by increasing the ISIs, resulting in a network that generates fewer spikes. The performance of IMSNNs with dense and convolutional layers has been evaluated in terms of classification accuracy and the number of spikes using the MNIST and FashionMNIST datasets. The performance comparison with conventional SNNs shows that IMSNNs exhibit up to 90% reduction in the number of spikes while maintaining similar classification accuracy.
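
The abstract defines the ISI but not the exact modulation function, so the sketch below is a hypothetical stand-in: it computes ISIs from a spike train and applies an assumed exponential modulation g(ISI) = exp(-ISI / tau) to a synaptic weight. Both the modulation form and tau are our own placeholders.

```python
import numpy as np

def interspike_intervals(spike_times):
    # ISIs are the time differences between consecutive spikes of one neuron.
    return np.diff(np.asarray(spike_times))

def modulated_weight(w, isi, tau=20.0):
    # Hypothetical modulation: effective weight decays with the most recent ISI.
    return w * np.exp(-isi / tau)

spikes = [5.0, 12.0, 30.0, 31.5]        # spike times in ms
isis = interspike_intervals(spikes)
print(isis)                              # [ 7.  18.   1.5]
print(modulated_weight(0.8, isis[-1]))   # short ISI -> weight barely attenuated
```
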
Citations: 0
PENDRAM: Enabling High-Performance and Energy-Efficient Processing of Deep Neural Networks through a Generalized DRAM Data Mapping Policy
Pub Date: 2024-08-05 DOI: arxiv-2408.02412
Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Convolutional Neural Networks (CNNs), a prominent type of Deep Neural Networks (DNNs), have emerged as a state-of-the-art solution for solving machine learning tasks. To improve the performance and energy efficiency of CNN inference, the employment of specialized hardware accelerators is prevalent. However, CNN accelerators still face performance- and energy-efficiency challenges due to high off-chip memory (DRAM) access latency and energy, which are especially crucial for latency- and energy-constrained embedded applications. Moreover, different DRAM architectures have different profiles of access latency and energy, thus making it challenging to optimize them for high-performance and energy-efficient CNN accelerators. To address this, we present PENDRAM, a novel design space exploration methodology that enables high-performance and energy-efficient CNN acceleration through a generalized DRAM data mapping policy. Specifically, it explores the impact of different DRAM data mapping policies and DRAM architectures, across different CNN partitioning and scheduling schemes, on the DRAM access latency and energy, then identifies the Pareto-optimal design choices. The experimental results show that our DRAM data mapping policy improves the energy-delay-product of DRAM accesses in the CNN accelerator over other mapping policies by up to 96%. In this manner, our PENDRAM methodology offers high-performance and energy-efficient CNN acceleration under any given DRAM architecture for diverse embedded AI applications.
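
The abstract names the optimization target (energy-delay product of DRAM accesses) without giving the concrete policies or profiles, so the toy sketch below invents placeholder mapping policies and per-access numbers purely to illustrate design-space exploration by EDP.

```python
# Hypothetical per-access latency (ns) and energy (nJ) for candidate DRAM data
# mapping policies; real numbers would come from an architecture model.
policies = {
    "row_major": {"latency": 45.0, "energy": 5.2},
    "bank_parallel": {"latency": 30.0, "energy": 6.1},
    "subarray_interleaved": {"latency": 28.0, "energy": 4.9},
}

def edp(profile):
    # Energy-delay product: lower is better for both speed and energy.
    return profile["energy"] * profile["latency"]

for name, profile in policies.items():
    print(f"{name}: EDP = {edp(profile):.1f}")
print("best policy:", min(policies, key=lambda name: edp(policies[name])))
```
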
Citations: 0
MARCO: A Memory-Augmented Reinforcement Framework for Combinatorial Optimization
Pub Date: 2024-08-05 DOI: arxiv-2408.02207
Andoni I. Garmendia, Quentin Cappart, Josu Ceberio, Alexander Mendiburu
Neural Combinatorial Optimization (NCO) is an emerging domain where deep learning techniques are employed to address combinatorial optimization problems as a standalone solver. Despite their potential, existing NCO methods often suffer from inefficient search space exploration, frequently leading to local optima entrapment or redundant exploration of previously visited states. This paper introduces a versatile framework, referred to as Memory-Augmented Reinforcement for Combinatorial Optimization (MARCO), that can be used to enhance both constructive and improvement methods in NCO through an innovative memory module. MARCO stores data collected throughout the optimization trajectory and retrieves contextually relevant information at each state. This way, the search is guided by two competing criteria: making the best decision in terms of the quality of the solution and avoiding revisiting already explored solutions. This approach promotes a more efficient use of the available optimization budget. Moreover, thanks to the parallel nature of NCO models, several search threads can run simultaneously, all sharing the same memory module, enabling an efficient collaborative exploration. Empirical evaluations, carried out on the maximum cut, maximum independent set, and travelling salesman problems, reveal that the memory module effectively increases the exploration, enabling the model to discover diverse, higher-quality solutions. MARCO achieves good performance at a low computational cost, establishing a promising new direction in the field of NCO.
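
MARCO's actual storage and retrieval mechanism is not detailed in the abstract. A minimal sketch of the underlying idea (remember visited solutions, then trade raw quality off against novelty when scoring candidates) might look like this:

```python
class VisitMemory:
    """Minimal sketch: remember visited solutions and penalize revisits."""

    def __init__(self, penalty=0.5):
        self.counts = {}
        self.penalty = penalty

    def score(self, solution, objective_value):
        # Two competing criteria: solution quality and avoiding known states.
        revisits = self.counts.get(tuple(solution), 0)
        return objective_value - self.penalty * revisits

    def record(self, solution):
        key = tuple(solution)
        self.counts[key] = self.counts.get(key, 0) + 1

mem = VisitMemory()
x = [1, 0, 1]
mem.record(x)
print(mem.score(x, objective_value=3.0))  # 2.5: quality discounted for a revisit
```
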
Citations: 0
An investigation on the use of Large Language Models for hyperparameter tuning in Evolutionary Algorithms
Pub Date: 2024-08-05 DOI: arxiv-2408.02451
Leonardo Lucio Custode, Fabio Caraffini, Anil Yaman, Giovanni Iacca
Hyperparameter optimization is a crucial problem in Evolutionary Computation. In fact, the values of the hyperparameters directly impact the trajectory taken by the optimization process, and their choice requires extensive reasoning by human operators. Although a variety of self-adaptive Evolutionary Algorithms have been proposed in the literature, no definitive solution has been found. In this work, we perform a preliminary investigation to automate the reasoning process that leads to the choice of hyperparameter values. We employ two open-source Large Language Models (LLMs), namely Llama2-70b and Mixtral, to analyze the optimization logs online and provide novel real-time hyperparameter recommendations. We study our approach in the context of step-size adaptation for (1+1)-ES. The results suggest that LLMs can be an effective method for optimizing hyperparameters in Evolution Strategies, encouraging further research in this direction.
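
For context, the classical hand-designed baseline for step-size adaptation in a (1+1)-ES is the 1/5 success rule, the kind of rule that online LLM recommendations would replace or augment. A minimal sketch (our own illustration, not the paper's experimental setup):

```python
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def one_plus_one_es(f, x, sigma=1.0, iters=2000):
    # (1+1)-ES with a per-iteration variant of the 1/5 success rule:
    # enlarge sigma on success, shrink it a quarter as fast on failure.
    for _ in range(iters):
        y = [v + sigma * random.gauss(0, 1) for v in x]
        if f(y) <= f(x):
            x = y
            sigma *= math.exp(1 / 3)
        else:
            sigma *= math.exp(-1 / 12)
    return x, sigma

x, sigma = one_plus_one_es(sphere, [5.0] * 10)
print(f"final fitness: {sphere(x):.3e}, final step size: {sigma:.3e}")
```
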
Citations: 0
A Landscape-Aware Differential Evolution for Multimodal Optimization Problems
Pub Date: 2024-08-05 DOI: arxiv-2408.02340
Guo-Yun Lin, Zong-Gan Chen, Yuncheng Jiang, Zhi-Hui Zhan, Jun Zhang
How to simultaneously locate multiple global peaks and achieve certain accuracy on the found peaks are two key challenges in solving multimodal optimization problems (MMOPs). In this paper, a landscape-aware differential evolution (LADE) algorithm is proposed for MMOPs, which utilizes landscape knowledge to maintain sufficient diversity and provide efficient search guidance. In detail, the landscape knowledge is efficiently utilized in the following three aspects. First, a landscape-aware peak exploration helps each individual evolve adaptively to locate a peak and simulates the regions of the found peaks according to the search history to avoid an individual locating a found peak. Second, a landscape-aware peak distinction distinguishes whether an individual locates a new global peak, a new local peak, or a found peak. Accuracy refinement can thus only be conducted on the global peaks to enhance the search efficiency. Third, a landscape-aware reinitialization specifies the initial position of an individual adaptively according to the distribution of the found peaks, which helps explore more peaks. The experiments are conducted on 20 widely-used benchmark MMOPs. Experimental results show that LADE obtains generally better or competitive performance compared with seven well-performing algorithms proposed recently and four winner algorithms in the IEEE CEC competitions for multimodal optimization.
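
The three landscape-aware mechanisms are only summarized above; for reference, the core differential evolution step such variants build on (classic DE/rand/1/bin with greedy selection, for minimization) is sketched below.

```python
import random

def de_rand_1_bin(pop, f, F=0.5, CR=0.9):
    # One generation of DE/rand/1/bin (minimization) with greedy selection.
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(dim)  # guarantees at least one mutated coordinate
        trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR or d == j_rand else x[d]
                 for d in range(dim)]
        new_pop.append(trial if f(trial) <= f(x) else x)
    return new_pop

f = lambda v: sum(x * x for x in v)                      # sphere function
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(50):
    pop = de_rand_1_bin(pop, f)
print(min(f(p) for p in pop))
```
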
Citations: 0
Abstraction in Neural Networks 神经网络中的抽象
Pub Date: 2024-08-04 DOI: arxiv-2408.02125
Nancy Lynch
We show how brain networks, modeled as Spiking Neural Networks, can be viewed at different levels of abstraction. Lower levels include complications such as failures of neurons and edges. Higher levels are more abstract, making simplifying assumptions to avoid these complications. We show precise relationships between executions of networks at different levels, which enables us to understand the behavior of lower-level networks in terms of the behavior of higher-level networks. We express our results using two abstract networks, A1 and A2, one to express firing guarantees and the other to express non-firing guarantees, and one detailed network D. The abstract networks contain reliable neurons and edges, whereas the detailed network has neurons and edges that may fail, subject to some constraints. Here we consider just initial stopping failures. To define these networks, we begin with abstract network A1 and modify it systematically to obtain the other two networks. To obtain A2, we simply lower the firing thresholds of the neurons. To obtain D, we introduce failures of neurons and edges, and incorporate redundancy in the neurons and edges in order to compensate for the failures. We also define corresponding inputs for the networks, and corresponding executions of the networks. We prove two main theorems, one relating corresponding executions of A1 and D and the other relating corresponding executions of A2 and D. Together, these give both firing and non-firing guarantees for the detailed network D. We also give a third theorem, relating the effects of D on an external reliable actuator neuron to the effects of the abstract networks on the same actuator neuron.
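
The paper's formal network definitions are not given in the abstract. As a toy illustration of the A1-to-A2 transformation (lowering firing thresholds so a network still fires despite missing input spikes), consider a simple discrete threshold neuron; the weights and threshold values are our own placeholders.

```python
def fires(weights, inputs, threshold):
    # Discrete threshold neuron, a common abstraction of spiking neurons:
    # it fires iff the weighted sum of input spikes reaches the threshold.
    return sum(w * i for w, i in zip(weights, inputs)) >= threshold

w = [1.0, 1.0, 1.0]
x = [1, 0, 1]                       # one input edge silent, e.g. a failed neuron
print(fires(w, x, threshold=3.0))   # False: A1-style strict threshold
print(fires(w, x, threshold=2.0))   # True: lowered (A2-style) threshold still fires
```
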
Citations: 0