Latest articles from ACM Transactions on Modeling and Computer Simulation

RayNet: A Simulation Platform for Developing Reinforcement Learning-Driven Network Protocols
IF 0.9 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-03-30 | DOI: 10.1145/3653975
Luca Giacomoni, Basil Benny, George Parisis

Reinforcement Learning (RL) has gained significant momentum in the development of network protocols. However, RL-based protocols are still in their infancy, and substantial research is required to build deployable solutions. Developing a protocol based on RL is a complex and challenging process that involves several model design decisions and requires significant training and evaluation in real and simulated network topologies. Network simulators offer an efficient training environment for RL-based protocols, because they are deterministic and can run in parallel. In this paper, we introduce RayNet, a scalable and adaptable simulation platform for the development of RL-based network protocols. RayNet integrates OMNeT++, a fully programmable network simulator, with Ray/RLlib, a scalable training platform for distributed RL. RayNet facilitates the methodical development of RL-based network protocols so that researchers can focus on the problem at hand and not on implementation details of the learning aspect of their research. We developed a simple RL-based congestion control approach as a proof of concept showcasing that RayNet can be a valuable platform for RL-based research in computer networks, enabling scalable training and evaluation. We compared RayNet with ns3-gym, a platform with similar objectives to RayNet, and showed that RayNet performs better in terms of how fast agents can collect experience in RL environments.
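The congestion-control proof of concept can be pictured as a Gym-style environment loop. The sketch below is purely illustrative: the class name, observation layout, reward shaping, and dynamics are all assumptions for exposition, not RayNet's actual API.

```python
import random

class CongestionControlEnv:
    """Toy stand-in for an RL congestion-control environment (illustrative only)."""

    def __init__(self, bandwidth=100.0, seed=0):
        self.bandwidth = bandwidth          # link capacity (packets per RTT)
        self.rng = random.Random(seed)
        self.cwnd = 10.0                    # congestion window

    def reset(self):
        self.cwnd = 10.0
        return self._obs()

    def _obs(self):
        # observation: (cwnd, noisy RTT, throughput proxy)
        rtt = 0.05 * (1.0 + max(0.0, self.cwnd - self.bandwidth) / self.bandwidth)
        return (self.cwnd, rtt + self.rng.uniform(0.0, 0.005),
                min(self.cwnd, self.bandwidth))

    def step(self, action):
        # action in {-1, 0, +1}: multiplicative window adjustment
        self.cwnd = max(1.0, self.cwnd * (1.0 + 0.1 * action))
        cwnd, rtt, thr = self._obs()
        # reward: throughput minus a latency penalty
        reward = thr - 50.0 * max(0.0, rtt - 0.05)
        return (cwnd, rtt, thr), reward, False, {}

env = CongestionControlEnv()
obs = env.reset()
for _ in range(20):
    # naive policy: grow the window while below the link capacity
    action = 1 if obs[2] < env.bandwidth else -1
    obs, reward, done, info = env.step(action)
```

In RayNet, the environment side of this loop runs inside OMNeT++ while Ray/RLlib drives the agent; the toy above only mirrors the interface shape.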

Overlapping Batch Confidence Intervals on Statistical Functionals Constructed from Time Series: Application to Quantiles, Optimization, and Estimation
IF 0.9 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-03-14 | DOI: 10.1145/3649437
Ziwei Su, Raghu Pasupathy, Yingchieh Yeh, Peter W. Glynn

We propose a general-purpose confidence interval procedure (CIP) for statistical functionals constructed using data from a stationary time series. The procedures we propose are based on derived distribution-free analogues of the χ² and Student's t random variables for the statistical functional context, and hence apply in a wide variety of settings including quantile estimation, gradient estimation, M-estimation, CVaR estimation, and arrival process rate estimation, apart from more traditional statistical settings. Like the method of subsampling, we use overlapping batches of time series data to estimate the underlying variance parameter; unlike subsampling and the bootstrap, however, we assume that the implied point estimator of the statistical functional obeys a central limit theorem (CLT) to help identify the weak asymptotics (called OB-x limits, x = I, II, III) of batched Studentized statistics. The OB-x limits, certain functionals of the Wiener process parameterized by the size of the batches and the extent of their overlap, form the essential machinery for characterizing dependence, and consequently the correctness of the proposed CIPs. The message from extensive numerical experimentation is that in settings where a functional CLT on the point estimator is in effect, using large overlapping batches alongside OB-x critical values yields confidence intervals that are often of significantly higher quality than those obtained from more generic methods like subsampling or the bootstrap. We illustrate using examples from CVaR estimation, ARMA parameter estimation, and NHPP rate estimation; R and MATLAB code for OB-x critical values is available at web.ics.purdue.edu/~pasupath.
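For the simplest functional, the mean, the overlapping-batch-means construction behind such intervals can be sketched in a few lines. This is a pure-Python toy: the batch size is fixed by hand and the normal critical value 1.96 stands in for the paper's OB-x critical values.

```python
import math
import random

def obm_ci(data, batch_size, crit=1.96):
    """Confidence interval for the mean of a stationary series using the
    overlapping-batch-means (OBM) variance estimator. Illustrative sketch."""
    n, m = len(data), batch_size
    batch_means = [sum(data[i:i + m]) / m for i in range(n - m + 1)]
    grand = sum(data) / n
    # OBM estimator of the variance parameter (n * Var of the sample mean)
    v_hat = (n * m / ((n - m + 1) * (n - m))) * sum(
        (bm - grand) ** 2 for bm in batch_means
    )
    half = crit * math.sqrt(v_hat / n)
    return grand - half, grand + half

# generate a correlated (AR(1)-like) stationary series with mean 0
random.seed(1)
x, series = 0.0, []
for _ in range(2000):
    x = 0.5 * x + random.gauss(0.0, 1.0)
    series.append(x)

lo, hi = obm_ci(series, batch_size=50)
```

The overlap is what distinguishes this from classical (non-overlapping) batch means: every window of length `m` contributes, which reduces the variance of the variance estimate at the cost of correlated batch means, exactly the dependence the OB-x limits characterize.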

Performance Evaluation of Spintronic-Based Spiking Neural Networks Using Parallel Discrete-Event Simulation
IF 0.9 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-03-05 | DOI: 10.1145/3649464
Elkin Cruz-Camacho, Siyuan Qian, Ankit Shukla, Neil McGlohon, Shaloo Rakheja, Christopher D. Carothers

Spintronics devices that use the spin of electrons as the information state variable have the potential to emulate neuro-synaptic dynamics and can be realized within a compact form factor, while operating at an ultra-low energy-delay point. In this paper, we benchmark the performance of a spintronics hardware platform designed for handling neuromorphic tasks.

To explore the benefits of spintronics-based hardware on realistic neuromorphic workloads, we developed a Parallel Discrete-Event Simulation model called Doryta, which is further integrated with a materials-to-systems benchmarking framework. The benchmarking framework allows us to obtain quantitative metrics on the throughput and energy of spintronics-based neuromorphic computing and compare these against standard CMOS-based approaches. Although spintronics hardware offers significant energy and latency advantages, we find that for larger neuromorphic circuits, the performance is limited by the interconnection networks rather than the spintronics-based neurons and synapses. This limitation can be overcome by architectural changes to the network.

Through Doryta we are also able to show the power of neuromorphic computing by simulating Conway’s Game of Life (GoL), thus showing that it is Turing complete. We show that Doryta obtains over 300× speedup using 1,024 CPU cores when tested on a convolutional, sparse, neural architecture. When scaled up 64 times, to a 200-million-neuron model, the simulation ran in 3:42 minutes for a total of 2000 virtual clock steps. The conservative approach of execution was found to be faster in most cases than the optimistic approach, even when a tie-breaking mechanism to guarantee deterministic execution was deactivated.
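At its core, a spiking-network discrete-event simulation processes time-stamped spike deliveries from a priority queue. The sequential toy below (integrate-and-fire without leak, unit weights, fixed delay — deliberate simplifications of what Doryta models, and without Doryta's parallel synchronization) shows the event loop:

```python
import heapq

def simulate_spikes(adjacency, thresholds, input_spikes, horizon=100.0, delay=1.0):
    """Minimal sequential discrete-event spiking simulation (illustrative).

    adjacency: dict neuron -> list of downstream neurons (unit weights)
    input_spikes: list of (time, target-neuron) external spike deliveries
    """
    potential = [0.0] * len(thresholds)
    events = list(input_spikes)          # event = (delivery time, target neuron)
    heapq.heapify(events)
    fired = []
    while events:
        t, n = heapq.heappop(events)     # always process the earliest event
        if t > horizon:
            break
        potential[n] += 1.0              # one unit of charge per delivered spike
        if potential[n] >= thresholds[n]:
            potential[n] = 0.0           # fire and reset
            fired.append((t, n))
            for dst in adjacency.get(n, []):
                heapq.heappush(events, (t + delay, dst))
    return fired

# feed-forward chain 0 -> 1 -> 2; one external spike into neuron 0 at t = 0
spikes = simulate_spikes({0: [1], 1: [2]}, [1.0, 1.0, 1.0], [(0.0, 0)])
```

A parallel discrete-event simulator partitions the neurons across processes and, in the conservative mode the paper finds faster, only processes events guaranteed not to be preceded by an undelivered remote spike.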

Projected Gaussian Markov Improvement Algorithm for High-dimensional Discrete Optimization via Simulation
IF 0.9 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-03-01 | DOI: 10.1145/3649463
Xinru Li, Eunhye Song

This paper considers a discrete optimization via simulation (DOvS) problem defined on a graph embedded in the high-dimensional integer grid. Several DOvS algorithms that model the responses at the solutions as a realization of a Gaussian Markov random field (GMRF) have been proposed exploiting its inferential power and computational benefits. However, the computational cost of inference increases exponentially in dimension. We propose the projected Gaussian Markov improvement algorithm (pGMIA), which projects the solution space onto a lower-dimensional space, creating the region-layer graph to reduce the cost of inference. Each node on the region-layer graph can be mapped to a set of solutions projected to the node; these solutions form a lower-dimensional solution-layer graph. We define the response at each region-layer node to be the average of the responses within the corresponding solution-layer graph. From this relation, we derive the region-layer GMRF to model the region-layer responses. The pGMIA alternates between the two layers to make a sampling decision at each iteration; it first selects a region-layer node based on the lower-resolution inference provided by the region-layer GMRF, then makes a sampling decision among the solutions within the solution-layer graph of the node based on the higher-resolution inference from the solution-layer GMRF. To solve even higher-dimensional problems (e.g., 100 dimensions), we also propose the pGMIA+, a multi-layer extension of the pGMIA. We show that both pGMIA and pGMIA+ converge to the optimum almost surely asymptotically and empirically demonstrate their competitiveness against state-of-the-art high-dimensional Bayesian optimization algorithms.
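The region-layer construction can be illustrated with a toy projection: fix a subset of coordinates as the region key, and let each region's response be the average over the solutions that project onto it. Everything concrete here (the 4-D grid, the quadratic response, the choice of projected dimensions) is invented for illustration; the actual algorithm replaces plain averaging with GMRF inference on both layers.

```python
import itertools
import statistics
from collections import defaultdict

def region_key(solution, region_dims):
    # project a high-dimensional integer solution onto the coordinates
    # chosen for the region layer
    return tuple(solution[d] for d in region_dims)

def response(sol):
    # deterministic stand-in for a (normally stochastic) simulation output,
    # minimized at the all-twos solution
    return sum((c - 2) ** 2 for c in sol)

# toy 4-D solution space {0,1,2,3}^4; region layer defined by dims (0, 1)
region_dims = (0, 1)
regions = defaultdict(list)
for sol in itertools.product(range(4), repeat=4):
    regions[region_key(sol, region_dims)].append(response(sol))

# region-layer response = average over the region's solution-layer graph
region_mean = {k: statistics.fmean(v) for k, v in regions.items()}
best_region = min(region_mean, key=region_mean.get)
```

Because the response separates across coordinates, the region averages preserve the ordering in the projected dimensions, so the lower-resolution layer points toward the region containing the optimum — the intuition pGMIA exploits before refining within the selected region.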

End-to-End Statistical Model Checking for Parameterization and Stability Analysis of ODE Models
IF 0.9 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-02-24 | DOI: 10.1145/3649438
David Julien, Gilles Ardourel, Guillaume Cantin, Benoît Delahaye

We propose a simulation-based technique for the parameterization and stability analysis of parametric Ordinary Differential Equations. This technique adapts Statistical Model Checking, often used to verify the validity of biological models, to the setting of Ordinary Differential Equation systems. The aim of our technique is to estimate the probability of satisfying a given property under the variability of the parameters or initial conditions of the ODE, under any metric of choice. To do so, we discretize the value space and use statistical model checking to evaluate each individual value with respect to the provided data. Contrary to other existing methods, we provide statistical guarantees regarding our results that take into account the unavoidable approximation errors introduced through the numerical integration of the ODE system performed while simulating. In order to show the potential of our technique, we present its application to two case studies taken from the literature, one relative to the growth of a jellyfish population, and the other concerning a well-known oscillator model.
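The core estimation loop amounts to sampling the uncertain parameter, integrating the ODE, and checking the property on each trajectory. The sketch below uses forward Euler on a logistic-growth model with an uncertain growth rate; the model, property, and parameter range are invented for illustration, and it omits the paper's key contribution of accounting for the numerical-integration error in the statistical guarantee.

```python
import random

def simulate_logistic(r, K=10.0, x0=0.5, dt=0.01, T=5.0):
    """Forward-Euler integration of x' = r * x * (1 - x/K); illustrative model."""
    x, t = x0, 0.0
    while t < T:
        x += dt * r * x * (1.0 - x / K)
        t += dt
    return x

def estimate_satisfaction(prop, sample_param, n=2000, seed=0):
    """Monte Carlo estimate of P(property holds) under parameter variability."""
    rng = random.Random(seed)
    hits = sum(prop(simulate_logistic(sample_param(rng))) for _ in range(n))
    return hits / n

# property: the final population exceeds half the carrying capacity,
# with the growth rate r drawn uniformly from an uncertainty interval
p_hat = estimate_satisfaction(
    prop=lambda x_final: x_final > 5.0,
    sample_param=lambda rng: rng.uniform(0.1, 1.5),
)
```

A statistical model checker would additionally attach a confidence bound (e.g., Chernoff–Hoeffding) to `p_hat` as a function of the sample size `n`.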

Hyperparameter Tuning with Gaussian Processes for Optimal Abstraction Control in Simulation-based Optimization of Smart Semiconductor Manufacturing Systems
IF 0.9 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-02-17 | DOI: 10.1145/3646549
Moon Gi Seok, Wen Jun Tan, Boyi Su, Wentong Cai, Jisu Kwon, Seon Han Choi

Smart manufacturing utilizes digital twins, virtual counterparts of production plants, for analyzing and optimizing decisions. Digital twins have mainly been developed as discrete-event models (DEMs) to represent the detailed and stochastic dynamics of production in the plants. The optimum decision is achieved after simulating the DEM-based digital twins under various what-if decision candidates; thus, simulation acceleration is crucial for rapidly determining the optimum for a given problem. To accelerate discrete-event simulations, adaptive abstraction-level conversion approaches have previously been proposed to switch each machine group’s active model between a set of DEM components and a corresponding lookup-table-based mean-delay model during runtime. The switching is decided by detecting the machine group’s convergence into (or divergence from) a steady state. However, there is a tradeoff between speedup and accuracy loss in adaptive abstraction convertible simulation (AACS), and inaccurate simulation can degrade the quality of the optimum (i.e., increase the distance between the calculated optimum and the actual optimum). In this paper, we propose a simulation-based optimization (SBO) approach that optimizes the problem with a genetic algorithm (GA) while tuning specific hyperparameters (related to the tradeoff control) to maximize the speedup of AACS under a specified accuracy constraint. For each individual, the proposed method distributes the overall computing budget for multiple simulation runs (considering the digital twin’s probabilistic nature) between hyperparameter optimization (HPO) and fitness evaluation. We propose an efficient HPO method that manages multiple Gaussian process models (as speedup-estimation models) to acquire promising optimal hyperparameter candidates (that maximize the simulation speedups) with few attempts. The method also reduces each individual’s exploration overhead (as the population evolves) by estimating each hyperparameter’s expected speedup from previous exploration results of neighboring individuals, without actual simulation executions. The proposed method was applied to optimize raw-material releases of a large-scale manufacturing system to prove the concept and evaluate the performance under various situations.
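The neighbor-based speedup estimation can be caricatured with a distance-weighted average over previously explored candidates — a deliberately crude stand-in for the paper's Gaussian process models, with a single scalar hyperparameter and made-up measurements:

```python
def predicted_speedup(candidate, history, k=3):
    """Estimate a candidate hyperparameter's speedup from the k nearest
    previously measured candidates, weighted by inverse distance.
    Illustrative stand-in for a Gaussian-process posterior mean."""
    if not history:
        return 1.0  # no information: assume no speedup
    nearest = sorted(history, key=lambda h: abs(h[0] - candidate))[:k]
    weights = [1.0 / (1e-9 + abs(h - candidate)) for h, _ in nearest]
    total = sum(weights)
    return sum(w * s for w, (_, s) in zip(weights, nearest)) / total

# (hyperparameter value, measured AACS speedup) from earlier individuals
history = [(0.1, 1.2), (0.5, 2.0), (0.9, 1.5)]
candidates = [0.2, 0.4, 0.6, 0.8]

# pick the candidate with the highest predicted speedup, without
# running any new simulations
best = max(candidates, key=lambda c: predicted_speedup(c, history))
```

The paper's method differs in two essential ways: the GP also supplies a predictive variance (so promising-but-uncertain candidates can be explored), and the accuracy constraint is enforced alongside the speedup objective.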

Sufficient Conditions for Central Limit Theorems and Confidence Intervals for Randomized Quasi-Monte Carlo Methods
IF 0.9 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-02-14 | DOI: 10.1145/3643847
Marvin K. Nakayama, Bruno Tuffin

Randomized quasi-Monte Carlo methods have been introduced with the main purpose of yielding a computable measure of error for quasi-Monte Carlo approximations through the implicit application of a central limit theorem over independent randomizations. But to increase precision for a given computational budget, the number of independent randomizations is usually set to a small value so that a large number of points are used from each randomized low-discrepancy sequence to benefit from the fast convergence rate of quasi-Monte Carlo. While a central limit theorem has been previously established for a specific but computationally expensive type of randomization, it is also known in general that fixing the number of randomizations and increasing the length of the sequence used for quasi-Monte Carlo can lead to a non-Gaussian limiting distribution. This paper presents sufficient conditions on the relative growth rates of the number of randomizations and the quasi-Monte Carlo sequence length to ensure a central limit theorem and also an asymptotically valid confidence interval. We obtain several results based on the Lindeberg condition for triangular arrays and expressed in terms of the regularity of the integrand and the convergence speed of the quasi-Monte Carlo method. We also analyze the resulting estimator’s convergence rate.
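The randomization the paper studies can be sketched with a Cranley–Patterson random shift applied to a one-dimensional van der Corput sequence: each independent shift yields an unbiased estimate of the integral, and a Student-t interval is formed over the R replicates. The sequence choice, R = 10, and the hard-coded critical value are illustrative simplifications.

```python
import math
import random
import statistics

def van_der_corput(n, base=2):
    """First n points of the radical-inverse (van der Corput) sequence in [0, 1)."""
    seq = []
    for i in range(n):
        q, bk, x = i, 1.0 / base, 0.0
        while q > 0:
            q, r = divmod(q, base)
            x += r * bk
            bk /= base
        seq.append(x)
    return seq

def rqmc_ci(f, n_points=1024, n_rand=10, seed=0, crit=2.262):  # t_{0.975, 9}
    """Randomized-QMC confidence interval via Cranley-Patterson random shifts."""
    rng = random.Random(seed)
    base_pts = van_der_corput(n_points)
    estimates = []
    for _ in range(n_rand):
        shift = rng.random()
        # each shifted point set gives one unbiased estimate of the integral
        estimates.append(sum(f((x + shift) % 1.0) for x in base_pts) / n_points)
    mean = statistics.fmean(estimates)
    half = crit * statistics.stdev(estimates) / math.sqrt(n_rand)
    return mean - half, mean + half

lo, hi = rqmc_ci(lambda x: x * x)  # integral of x^2 over [0, 1] is 1/3
```

The regime the paper analyzes is exactly the one above: `n_rand` small and fixed while `n_points` grows, which is where the Gaussian limit (and hence the t-interval's validity) needs the sufficient conditions they derive.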

Parallel Simulation of Quantum Networks with Distributed Quantum State Management
IF 0.9 · CAS Tier 4 (Computer Science) · Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-01-31 · DOI: 10.1145/3634701
Xiaoliang Wu, Alexander Kolar, Joaquin Chung, Dong Jin, Martin Suchara, Rajkumar Kettimuthu

Quantum network simulators offer the opportunity to cost-efficiently investigate potential avenues for building networks that scale with the number of users, communication distance, and application demands by simulating alternative hardware designs and control protocols. Several quantum network simulators have been recently developed with these goals in mind. As the size of the simulated networks increases, however, sequential execution becomes time-consuming. Parallel execution presents a suitable method for scalable simulations of large-scale quantum networks, but the unique attributes of quantum information create unexpected challenges. In this work, we identify requirements for parallel simulation of quantum networks and develop the first parallel discrete-event quantum network simulator by modifying the existing serial simulator SeQUeNCe. Our contributions include the design and development of a quantum state manager (QSM) that maintains shared quantum information distributed across multiple processes. We also optimize our parallel code by minimizing the overhead of the QSM and decreasing the amount of synchronization needed among processes. Using these techniques, we observe a speedup of 2 to 25 times when simulating a 1,024-node linear network topology using 2 to 128 processes. We also observe an efficiency greater than 0.5 for up to 32 processes in a linear network topology of the same size and with the same workload. We repeat this evaluation with a randomized workload on a caveman network. We also introduce several methods for partitioning networks by mapping them to different parallel simulation processes. We have released the parallel SeQUeNCe simulator as an open-source tool alongside the existing sequential version.
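The core data structure the abstract describes — a manager through which qubits that become entangled share a single quantum state object — can be illustrated with a serial toy. Class and method names here are our own, not SeQUeNCe's; the real QSM additionally distributes this table across simulation processes and minimizes cross-process synchronization.

```python
import numpy as np

class QuantumStateManager:
    """Toy sketch of a quantum state manager: each qubit maps to a
    state id, and entangling two qubits merges their states into one
    shared joint amplitude vector (illustrative only)."""

    def __init__(self):
        self._states = {}   # state id -> complex amplitude vector
        self._key_of = {}   # qubit id -> state id
        self._next = 0

    def new_qubit(self, qubit):
        """Register a qubit in the |0> state and return its state id."""
        sid = self._next
        self._next += 1
        self._states[sid] = np.array([1.0, 0.0], dtype=complex)
        self._key_of[qubit] = sid
        return sid

    def entangle(self, q1, q2):
        """Merge the states of q1 and q2 into one joint vector, so both
        qubits now reference the same shared state object."""
        s1, s2 = self._key_of[q1], self._key_of[q2]
        if s1 == s2:
            return s1                      # already share a state
        self._states[s1] = np.kron(self._states[s1], self._states[s2])
        del self._states[s2]
        for q, s in self._key_of.items():  # repoint all holders of s2
            if s == s2:
                self._key_of[q] = s1
        return s1
```

In the parallel setting, the key difficulty is that such a shared vector may be referenced by qubits simulated on different processes, which is what forces the synchronization the paper works to minimize.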

Exact and Approximate Moment Derivation for Probabilistic Loops With Non-Polynomial Assignments
IF 0.9 · CAS Tier 4 (Computer Science) · Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-01-23 · DOI: 10.1145/3641545
Andrey Kofnov, Marcel Moosbrugger, Miroslav Stankovič, Ezio Bartocci, Efstathia Bura

Many stochastic continuous-state dynamical systems can be modeled as probabilistic programs with nonlinear non-polynomial updates in non-nested loops. We present two methods, one approximate and one exact, to automatically compute, without sampling, moment-based invariants for such probabilistic programs as closed-form solutions parameterized by the loop iteration. The exact method applies to probabilistic programs with trigonometric and exponential updates and is embedded in the Polar tool. The approximate method for moment computation applies to any nonlinear random function as it exploits the theory of polynomial chaos expansion to approximate non-polynomial updates as the sum of orthogonal polynomials. This translates the dynamical system to a non-nested loop with polynomial updates, and thus renders it conformable with the Polar tool that computes the moments of any order of the state variables. We evaluate our methods on an extensive number of examples ranging from modeling monetary policy to several physical motion systems in uncertain environments. The experimental results demonstrate the advantages of our approach with respect to the current state-of-the-art.
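The approximation step the abstract relies on — expanding a non-polynomial update of a Gaussian input as a sum of orthogonal polynomials — can be reproduced with NumPy's probabilists' Hermite module: project a function such as cos onto He_0..He_d by Gauss-Hermite quadrature. This is a generic polynomial chaos sketch, not the Polar implementation.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coeffs(f, degree, quad_pts=60):
    """Coefficients of f in the probabilists' Hermite basis He_n,
    orthogonal under N(0,1) with <He_m, He_n> = n! * delta_mn, so
    c_n = E[f(X) He_n(X)] / n!."""
    x, w = He.hermegauss(quad_pts)      # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2 * np.pi)          # normalize to the N(0,1) density
    coeffs = []
    for n in range(degree + 1):
        basis = He.hermeval(x, [0] * n + [1])   # He_n at quadrature nodes
        coeffs.append(np.sum(w * f(x) * basis) / math.factorial(n))
    return np.array(coeffs)

# Degree-10 polynomial surrogate for the non-polynomial update cos(x).
c = pce_coeffs(np.cos, degree=10)
```

Replacing cos by the polynomial `He.hermeval(x, c)` is exactly what turns the loop body into polynomial updates, after which moment-recurrence tools can take over.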

Bayesian Optimisation for Constrained Problems
IF 0.9 · CAS Tier 4 (Computer Science) · Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2024-01-22 · DOI: 10.1145/3641544
Juan Ungredda, Juergen Branke

Many real-world optimisation problems such as hyperparameter tuning in machine learning or simulation-based optimisation can be formulated as expensive-to-evaluate black-box functions. A popular approach to tackle such problems is Bayesian optimisation, which builds a response surface model based on the data collected so far, and uses the mean and uncertainty predicted by the model to decide what information to collect next. In this paper, we propose a generalisation of the well-known Knowledge Gradient acquisition function that allows it to handle constraints. We empirically compare the new algorithm with four other state-of-the-art constrained Bayesian optimisation algorithms and demonstrate its superior performance. We also prove theoretical convergence in the infinite budget limit.
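The paper's constrained Knowledge Gradient is an involved acquisition; to make the setting concrete, a common simpler acquisition for the same problem class — expected improvement weighted by the modelled probability of feasibility (constrained EI) — fits in a few lines with scikit-learn Gaussian processes. The toy problem and all names below are our own.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def constrained_ei(x_cand, gp_obj, gp_con, best_feasible):
    """Expected improvement (minimisation) times the modelled probability
    that the constraint c(x) <= 0 holds. This is classic constrained EI,
    not the paper's constrained Knowledge Gradient."""
    mu, sd = gp_obj.predict(x_cand, return_std=True)
    sd = np.maximum(sd, 1e-12)
    z = (best_feasible - mu) / sd
    ei = sd * (z * norm.cdf(z) + norm.pdf(z))
    mu_c, sd_c = gp_con.predict(x_cand, return_std=True)
    pof = norm.cdf(-mu_c / np.maximum(sd_c, 1e-12))
    return ei * pof

# Toy 1-D problem: minimise f(x) subject to c(x) <= 0, i.e. x >= 0.5.
f = lambda x: np.sin(3 * x) + x ** 2
c = lambda x: 0.5 - x
rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(-1, 2, (8, 1)), [[1.0]]])  # ensure a feasible point
gp_obj = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, f(X[:, 0]))
gp_con = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, c(X[:, 0]))
best = f(X[c(X[:, 0]) <= 0, 0]).min()
grid = np.linspace(-1, 2, 200).reshape(-1, 1)
x_next = float(grid[np.argmax(constrained_ei(grid, gp_obj, gp_con, best)), 0])
```

The Knowledge Gradient generalisation the paper proposes instead values a sample by how much it improves the predicted-best feasible solution, which handles constraints without the multiplicative feasibility heuristic above.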
