
ACM Transactions on Modeling and Computer Simulation: Latest Publications

Divergence Reduction in Monte Carlo Neutron Transport with On-GPU Asynchronous Scheduling
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-10-19 DOI: 10.1145/3626957
Braxton Cuneo, Mike Bailey
While Monte Carlo Neutron Transport (MCNT) is near-embarrassingly parallel, the effectively unpredictable lifetime of neutrons can lead to divergence when MCNT is evaluated on GPUs. Divergence is the phenomenon of adjacent threads in a warp executing different control flow paths; on GPUs, it reduces performance because each work group may only execute one path at a time. The process of Thread Data Remapping (TDR) resolves these discrepancies by moving data across hardware such that data in the same warp will be processed through similar paths. A common issue among prior implementations of TDR is the synchronous nature of its remapping and processing cycles, which exhaustively sort data produced by prior processing passes and exhaustively evaluate the sorted data. In another paper, we defined a method of remapping data through an asynchronous scheduler which allows for work to be stored in shared memory and deferred arbitrarily until that work is a viable option for low-divergence evaluation. This paper surveys a wider set of cases, with the goal of characterizing performance trends across a more comprehensive set of parameters. These parameters include cross sections of scattering/capturing/fission, use of implicit capture, source neutron counts, simulation time spans, and tuned memory allocations. Across these cases, we have recorded minimum and average execution times, as well as a heuristically tuned near-optimal memory allocation size for both synchronous and asynchronous scheduling. Across the collected data, it is shown that the asynchronous method is faster and more memory efficient in the majority of cases, and that it requires less tuning to achieve competitive performance.
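The effect of thread data remapping can be illustrated with a toy model. Here, sorting work items by the branch they will take next stands in for remapping; the warp size, branch labels, and item counts are illustrative assumptions, not the authors' asynchronous GPU scheduler:

```python
import random

WARP_SIZE = 32  # threads per warp (illustrative; matches common GPU hardware)

def divergent_warps(items):
    """Count warps whose items would take more than one control-flow path."""
    warps = [items[i:i + WARP_SIZE] for i in range(0, len(items), WARP_SIZE)]
    return sum(1 for warp in warps if len(set(warp)) > 1)

random.seed(0)
# Tag each in-flight neutron with the branch its next event will take.
paths = [random.choice(["scatter", "capture", "fission"]) for _ in range(1024)]

before = divergent_warps(paths)
# TDR in miniature: remap data so items taking the same path share a warp.
after = divergent_warps(sorted(paths))
assert after < before
```

After remapping, only the warps straddling a boundary between two path groups remain divergent; every other warp executes a single path.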
Citations: 0
Using Cache or Credit for Parallel Ranking and Selection
IF 0.9 Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-09-04 DOI: 10.1145/3618299
Harun Avci, Barry L. Nelson, Eunhye Song, Andreas Wächter
In this paper, we focus on ranking and selection procedures that sequentially allocate replications to systems by applying some acquisition function. We propose an acquisition function, called gCEI, which exploits the gradient of the complete expected improvement with respect to the number of replications. We prove that the gCEI procedure, which adopts gCEI as the acquisition function in a serial computing environment, achieves the asymptotically optimal static replication allocation of Glynn and Juneja in the limit under a normality assumption. We also propose two procedures, called caching and credit, that extend any acquisition-function-based procedure in a serial environment into both synchronous and asynchronous parallel environments. While allocating replications to systems, both procedures use persistence forecasts for the unavailable outputs of the currently running replications, but differ in usage of the available outputs. We prove that under certain assumptions, the caching procedure achieves the same asymptotic allocation as in the serial environment. A similar result holds for the credit procedure using gCEI as the acquisition function. In terms of efficiency and effectiveness, the credit procedure empirically performs as well as the caching procedure despite not carefully controlling the output history as the caching procedure does, and is faster than the serial version without any number-of-replications penalty due to using persistence forecasts. Both procedures are designed to solve small-to-medium-sized problems on computers with a modest number of processors, such as laptops and desktops as opposed to high-performance clusters, and are superior to state-of-the-art parallel procedures in this setting.
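The persistence-forecast idea from the abstract can be sketched as follows. The toy acquisition score below is a stand-in for gCEI, and the systems, outputs, and variance heuristic are illustrative assumptions, not the paper's procedure:

```python
import statistics

def persistence_augmented(observed, n_running):
    """Fill in each unavailable output of a still-running replication with the
    most recent observed output for that system (a persistence forecast)."""
    return observed + [observed[-1]] * n_running

def acquisition(samples):
    """Toy acquisition score (stand-in for gCEI): favour systems whose mean
    estimate is still uncertain, i.e. high variance per replication."""
    return statistics.variance(samples) / len(samples)

# System A has two replications running whose outputs are not yet available.
outputs = {"A": [10.0, 12.0, 11.0], "B": [10.5, 10.6, 10.4, 10.5]}
running = {"A": 2, "B": 0}

scores = {s: acquisition(persistence_augmented(ys, running.get(s, 0)))
          for s, ys in outputs.items()}
next_system = max(scores, key=scores.get)
assert next_system == "A"
```

Because pending outputs are forecast rather than waited on, the allocation decision can be made immediately in a parallel environment.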
Citations: 1
Stochastic Approximation for Multi-period Simulation Optimization with Streaming Input Data
IF 0.9 Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-08-29 DOI: 10.1145/3617595
Linyun He, U. Shanbhag, Eunhye Song
We consider a continuous-valued simulation optimization (SO) problem, where a simulator is built to optimize an expected performance measure of a real-world system while parameters of the simulator are estimated from streaming data collected periodically from the system. At each period, a new batch of data is combined with the cumulative data and the parameters are re-estimated with higher precision. The system requires the decision variable to be selected in all periods. Therefore, it is sensible for the decision-maker to update the decision variable at each period by solving a more precise SO problem with the updated parameter estimate to reduce the performance loss with respect to the target system. We define this decision-making process as the multi-period SO problem and introduce a multi-period stochastic approximation (SA) framework that generates a sequence of solutions. Two algorithms are proposed: Re-start SA (ReSA) reinitializes the stepsize sequence in each period, whereas Warm-start SA (WaSA) carefully tunes the stepsizes, taking both fewer and shorter gradient-descent steps in later periods as parameter estimates become increasingly more precise. We show that under suitable strong convexity and regularity conditions, ReSA and WaSA achieve the best possible convergence rate in expected sub-optimality either when an unbiased or a simultaneous perturbation gradient estimator is employed, while WaSA accrues significantly lower computational cost as the number of periods increases. In addition, we present the regularized ReSA which obviates the need to know the strong convexity constant and achieves the same convergence rate at the expense of additional computation.
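The contrast between the two stepsize regimes can be sketched in miniature. The a/k schedule and the period sizes are illustrative assumptions, and only WaSA's shorter steps (not its fewer steps) are shown:

```python
def resa_steps(periods, iters_per_period, a=1.0):
    """Re-start SA: the a/k stepsize sequence is reinitialized every period."""
    return [a / k for _ in range(periods)
                  for k in range(1, iters_per_period + 1)]

def wasa_steps(periods, iters_per_period, a=1.0):
    """Warm-start SA (simplified): one continuing a/k sequence, so later
    periods, where parameter estimates are more precise, take shorter steps."""
    return [a / k for k in range(1, periods * iters_per_period + 1)]

r = resa_steps(periods=3, iters_per_period=4)
w = wasa_steps(periods=3, iters_per_period=4)
# By the final period the warm-start stepsizes are uniformly smaller.
assert w[-1] < r[-1]
```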
Citations: 0
DSMC Evaluation Stages: Fostering Robust and Safe Behavior in Deep Reinforcement Learning – Extended Version
IF 0.9 Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-07-12 DOI: https://dl.acm.org/doi/10.1145/3607198
Timo P. Gros, Joschka Groß, Daniel Höller, Jörg Hoffmann, Michaela Klauck, Hendrik Meerkamp, Nicola J. Müller, Lukas Schaller, Verena Wolf

Neural networks (NN) are gaining importance in sequential decision-making. Deep reinforcement learning (DRL), in particular, is extremely successful in learning action policies in complex and dynamic environments. Despite this success, however, DRL technology is not without its failures, especially in safety-critical applications: (i) the training objective maximizes average rewards, which may disregard rare but critical situations and hence lack local robustness; (ii) optimization objectives targeting safety typically yield degenerated reward structures which, for DRL to work, must be replaced with proxy objectives. Here we introduce a methodology that can help to address both deficiencies. We incorporate evaluation stages (ES) into DRL, leveraging recent work on deep statistical model checking (DSMC), which verifies NN policies in Markov decision processes. Our ES apply DSMC at regular intervals to determine state space regions with weak performance. We adapt the subsequent DRL training priorities based on the outcome, (i) focusing DRL on critical situations, and (ii) allowing arbitrary objectives to be fostered.

We run case studies on two benchmarks. One of them is the Racetrack, an abstraction of autonomous driving that requires navigating a map without crashing into a wall. The other is MiniGrid, a widely used benchmark in the AI community. Our results show that DSMC-based ES can significantly improve both (i) and (ii).
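The evaluation-stage loop can be caricatured as follows. The region model, episode counts, and reweighting rule are illustrative assumptions, not the DSMC implementation:

```python
import random

def estimate_region_quality(policy, regions, episodes=1000):
    """DSMC-style check (illustrative): Monte Carlo estimate of each region's
    goal-reaching probability under the current policy."""
    return {r: sum(policy(r) for _ in range(episodes)) / episodes
            for r in regions}

def reweight_priorities(quality, floor=0.05):
    """Focus subsequent DRL training on weakly performing regions by giving
    them proportionally more sampling weight."""
    weights = {r: max(1.0 - q, floor) for r, q in quality.items()}
    total = sum(weights.values())
    return {r: w / total for r, w in weights.items()}

random.seed(1)
# Hypothetical policy: succeeds 90% of the time from region A, 30% from B.
toy_policy = lambda region: random.random() < {"A": 0.9, "B": 0.3}[region]

q = estimate_region_quality(toy_policy, ["A", "B"])
p = reweight_priorities(q)
assert p["B"] > p["A"]  # the weaker region gets higher training priority
```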

Citations: 0
Optimizing reachability probabilities for a restricted class of Stochastic Hybrid Automata via Flowpipe-Construction
IF 0.9 Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-07-11 DOI: https://dl.acm.org/doi/10.1145/3607197
Carina da Silva, Stefan Schupp, Anne Remke

Stochastic hybrid automata (SHA) are a powerful tool to evaluate the dependability and safety of critical infrastructures. However, the resolution of nondeterminism, which is present in many purely hybrid models, is often only implicitly considered in SHA. This paper instead proposes algorithms for computing maximum and minimum reachability probabilities for singular automata with urgent transitions and random clocks which follow arbitrary continuous probability distributions. We borrow a well-known approach from hybrid systems reachability analysis, namely flowpipe construction, which is then extended to optimize nondeterminism in the presence of random variables. Firstly, valuations of random clocks which ensure reachability of specific goal states are extracted from the computed flowpipes and secondly, reachability probabilities are computed by integrating over these valuations. We compute maximum and minimum probabilities for history-dependent prophetic and non-prophetic schedulers using set-based methods. The implementation featuring the library HyPro and the complexity of the approach are discussed in detail. Two case studies featuring nondeterministic choices show the feasibility of the approach.
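For a single exponentially distributed random clock, the final integration step has a closed form. The sketch below assumes the flowpipe computation has already yielded the interval of clock valuations that reach the goal; both the Exp(rate) clock and the interval are illustrative assumptions:

```python
import math

def reach_probability(goal_interval, rate=1.0):
    """Integrate an Exp(rate) random-clock density over the valuations
    (extracted from a flowpipe) that ensure reachability of the goal.
    For an exponential clock this is P(lo <= X <= hi) in closed form."""
    lo, hi = goal_interval
    return math.exp(-rate * lo) - math.exp(-rate * hi)

# Suppose the flowpipe shows the goal is reached iff the clock fires in [0.5, 2].
p = reach_probability((0.5, 2.0))
assert 0.0 < p < 1.0
```

For arbitrary continuous distributions, the same integral would be evaluated numerically over each extracted valuation set rather than in closed form.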

Citations: 0
Optimizing reachability probabilities for a restricted class of Stochastic Hybrid Automata via Flowpipe-Construction 用Flowpipe构造优化一类受限随机混合自动机的可达性概率
IF 0.9 4区 计算机科学 Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-07-11 DOI: 10.1145/3607197
Carina Pilch, Stefan Schupp, Anne Remke
Stochastic hybrid automata (SHA) are a powerful tool to evaluate the dependability and safety of critical infrastructures. However, the resolution of nondeterminism, which is present in many purely hybrid models, is often only implicitly considered in SHA. This paper instead proposes algorithms for computing maximum and minimum reachability probabilities for singular automata with urgent transitions and random clocks which follow arbitrary continuous probability distributions. We borrow a well-known approach from hybrid systems reachability analysis, namely flowpipe construction, which is then extended to optimize nondeterminism in the presence of random variables. Firstly, valuations of random clocks which ensure reachability of specific goal states are extracted from the computed flowpipes and secondly, reachability probabilities are computed by integrating over these valuations. We compute maximum and minimum probabilities for history-dependent prophetic and non-prophetic schedulers using set-based methods. The implementation featuring the library HyPro and the complexity of the approach are discussed in detail. Two case studies featuring nondeterministic choices show the feasibility of the approach.
随机混合自动机(SHA)是评估关键基础设施可靠性和安全性的有力工具。然而,在许多纯混合模型中存在的不确定性的解决方案,通常只在SHA中被隐含地考虑。相反,本文提出了计算具有紧急转移和随机时钟的奇异自动机的最大和最小可达性概率的算法,这些奇异自动机遵循任意连续概率分布。我们借用了混合系统可达性分析中的一种众所周知的方法,即流管构造,然后将其扩展到在存在随机变量的情况下优化不确定性。首先,从计算的流管道中提取确保特定目标状态可达性的随机时钟的估值,其次,通过对这些估值进行积分来计算可达性概率。我们使用基于集合的方法计算历史相关的预言和非预言调度器的最大和最小概率。详细讨论了以HyPro库为特征的实现以及该方法的复杂性。两个以不确定性选择为特征的案例研究表明了该方法的可行性。
{"title":"Optimizing reachability probabilities for a restricted class of Stochastic Hybrid Automata via Flowpipe-Construction","authors":"Carina Pilch, Stefan Schupp, Anne Remke","doi":"10.1145/3607197","DOIUrl":"https://doi.org/10.1145/3607197","url":null,"abstract":"Stochastic hybrid automata (SHA) are a powerful tool to evaluate the dependability and safety of critical infrastructures. However, the resolution of nondeterminism, which is present in many purely hybrid models, is often only implicitly considered in SHA. This paper instead proposes algorithms for computing maximum and minimum reachability probabilities for singular automata with urgent transitions and random clocks which follow arbitrary continuous probability distributions. We borrow a well-known approach from hybrid systems reachability analysis, namely flowpipe construction, which is then extended to optimize nondeterminism in the presence of random variables. Firstly, valuations of random clocks which ensure reachability of specific goal states are extracted from the computed flowpipes and secondly, reachability probabilities are computed by integrating over these valuations. We compute maximum and minimum probabilities for history-dependent prophetic and non-prophetic schedulers using set-based methods. The implementation featuring the library HyPro and the complexity of the approach are discussed in detail. 
Two case studies featuring nondeterministic choices show the feasibility of the approach.","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":" ","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46721864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
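The probability computation described in this abstract — extract the valuations of a random clock that lead into the goal states from the flowpipes, then integrate the clock's distribution over them — can be sketched as follows. The exponential distribution, the rate, and the candidate intervals here are illustrative assumptions only; the paper supports arbitrary continuous distributions and computes the valuation sets via flowpipe construction:

```python
import math

def reach_probability(lo, hi, rate=1.0):
    """P(lo <= c <= hi) for an exponentially distributed random clock c
    with the given rate -- an assumed example distribution; the extracted
    interval [lo, hi] stands in for a valuation set from the flowpipes."""
    cdf = lambda t: 1.0 - math.exp(-rate * t)
    return cdf(hi) - cdf(lo)

# A maximizing (prophetic-style) scheduler would pick, among the valuation
# sets extracted for its nondeterministic choices, the one with the
# largest probability mass; a minimizing one would pick the smallest.
candidates = [(0.0, 1.0), (0.5, 2.0)]  # hypothetical extracted intervals
p_max = max(reach_probability(lo, hi) for lo, hi in candidates)
p_min = min(reach_probability(lo, hi) for lo, hi in candidates)
```

This reduces the scheduler optimization to a comparison of integrals once the valuation sets are known, which is the step the paper's set-based methods automate.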
Toward Data Center Digital Twins via Knowledge-based Model Calibration and Reduction
IF 0.9 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-06-10 DOI: https://dl.acm.org/doi/10.1145/3604283
Ruihang Wang, Deneng Xia, Zhiwei Cao, Yonggang Wen, Rui Tan, Xin Zhou

Computational fluid dynamics (CFD) models have been widely used for prototyping data centers. Evolving them into high-fidelity and real-time digital twins is desirable for online operations of data centers. However, CFD models often have unsatisfactory accuracy and high computation overhead. Manually calibrating the CFD model parameters is tedious and labor-intensive. Existing automatic calibration approaches apply heuristics to search the model configurations. However, each search step requires a long-lasting process of repeatedly solving the CFD model, rendering them impractical, especially for complex CFD models. This paper presents Kalibre, a knowledge-based neural surrogate approach that calibrates a CFD model by iterating four steps of i) training a neural surrogate model, ii) finding the optimal parameters through neural surrogate retraining, iii) configuring the found parameters back to the CFD model, and iv) validating the CFD model using sensor-measured data. Thus, the parameter search is offloaded to the lightweight neural surrogate. To speed up Kalibre’s convergence, we incorporate prior knowledge in training data initialization and surrogate architecture design. With about ten hours of computation on a 64-core processor, Kalibre achieves mean absolute errors (MAEs) of 0.57°C and 0.88°C in calibrating the CFD models of two production data halls hosting thousands of servers. To accelerate CFD-based simulation, we further propose Kalibreduce, which incorporates the energy balance principle to reduce the order of the calibrated CFD model. Evaluation shows the model reduction only introduces 0.1°C to 0.27°C extra errors, while accelerating the CFD-based simulations by a thousand times.
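Kalibre's four-step loop can be sketched in miniature. Everything here is a stand-in, not the paper's code: `cfd_solve` replaces the slow CFD solver, its output at hidden "true" parameters replaces the sensor data, and a linear least-squares fit replaces the neural surrogate, so that the shape of the iteration — train a surrogate, search it cheaply, configure the result back into the solver, validate against measurements — stays visible without a CFD code or a neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(5, 2))                    # stand-in "physics"
def cfd_solve(params):
    """Stand-in for the slow CFD model: 2 parameters -> 5 sensor temps."""
    return W @ params + 0.05 * np.sin(params).sum()

true_params = np.array([0.3, 0.7])
measured = cfd_solve(true_params)              # stand-in sensor-measured data

def calibrate(n_iters=5, n_samples=32):
    xs_all, ys_all = [], []
    best = rng.uniform(-1.0, 1.0, size=2)
    for _ in range(n_iters):
        # (i) evaluate the solver near the incumbent and (re)train the
        #     surrogate -- here a linear least-squares fit
        xs = best + 0.3 * rng.normal(size=(n_samples, 2))
        ys = np.array([cfd_solve(x) for x in xs])
        xs_all.append(xs); ys_all.append(ys)
        X, Y = np.vstack(xs_all), np.vstack(ys_all)
        A, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], Y, rcond=None)
        surrogate = lambda x: np.append(x, 1.0) @ A
        # (ii) search the cheap surrogate for parameters matching sensors
        errs = [np.sum((surrogate(x) - measured) ** 2) for x in xs]
        cand = xs[int(np.argmin(errs))]
        # (iii) configure the candidate back into the solver and
        # (iv) validate it against the measurements before accepting it
        if np.mean(np.abs(cfd_solve(cand) - measured)) < \
           np.mean(np.abs(cfd_solve(best) - measured)):
            best = cand
    return best

params = calibrate()
mae = float(np.mean(np.abs(cfd_solve(params) - measured)))
```

The point of the design is that the expensive solver is only called to generate training data and to validate accepted candidates; the inner parameter search runs entirely on the surrogate, which is what makes calibration of production-scale CFD models tractable.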

{"title":"Toward Data Center Digital Twins via Knowledge-based Model Calibration and Reduction","authors":"Ruihang Wang, Deneng Xia, Zhiwei Cao, Yonggang Wen, Rui Tan, Xin Zhou","doi":"https://dl.acm.org/doi/10.1145/3604283","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3604283","url":null,"abstract":"<p>Computational fluid dynamics (CFD) models have been widely used for prototyping data centers. Evolving them into high-fidelity and real-time digital twins is desirable for online operations of data centers. However, CFD models often have unsatisfactory accuracy and high computation overhead. Manually calibrating the CFD model parameters is tedious and labor-intensive. Existing automatic calibration approaches apply heuristics to search the model configurations. However, each search step requires a long-lasting process of repeatedly solving the CFD model, rendering them impractical especially for complex CFD models. This paper presents <i>Kalibre</i>, a knowledge-based neural surrogate approach that calibrates a CFD model by iterating four steps of i) training a neural surrogate model, ii) finding the optimal parameters through neural surrogate retraining, iii) configuring the found parameters back to the CFD model, and iv) validating the CFD model using sensor-measured data. Thus, the parameter search is offloaded to the lightweight neural surrogate. To speed up Kalibre’s convergence, we incorporate prior knowledge in training data initialization and surrogate architecture design. With about ten hours computation on a 64-core processor, Kalibre achieves mean absolute errors (MAEs) of 0.57°C and 0.88°C in calibrating the CFD models of two production data halls hosting thousands of servers. To accelerate CFD-based simulation, we further propose <i>Kalibreduce</i> that incorporates the energy balance principle to reduce the order of the calibrated CFD model. Evaluation shows the model reduction only introduces 0.1°C to 0.27°C extra errors, while accelerating the CFD-based simulations by thousand times.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"77 3","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138523750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Uncertainty-aware Simulation of Adaptive Systems
IF 0.9 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-05-13 DOI: https://dl.acm.org/doi/10.1145/3589517
Jean-Marc Jézéquel, Antonio Vallecillo

Adaptive systems manage and regulate the behavior of devices or other systems using control loops to automatically adjust the value of some measured variables to equal the value of a desired set-point. These systems normally interact with physical parts or operate in physical environments, where uncertainty is unavoidable. Traditional approaches to manage that uncertainty use either robust control algorithms that consider bounded variations of the uncertain variables and worst-case scenarios or adaptive control methods that estimate the parameters and change the control laws accordingly. In this article, we propose to include the sources of uncertainty in the system models as first-class entities using random variables to simulate adaptive and control systems more faithfully, including not only the use of random variables to represent and operate with uncertain values but also to represent decisions based on their comparisons. Two exemplar systems are used to illustrate and validate our proposal.
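The article's idea of uncertain values as first-class entities — random variables that propagate through arithmetic, and whose comparisons yield decisions based on probabilities rather than point estimates — can be sketched as follows. The class name `UReal`, its methods, and the thermostat scenario are hypothetical illustrations, not the article's actual API:

```python
import numpy as np

rng = np.random.default_rng(1)

class UReal:
    """An uncertain real carried as an array of Monte Carlo samples
    (an assumed sample-based representation for illustration)."""

    def __init__(self, samples):
        self.samples = np.asarray(samples, dtype=float)

    @classmethod
    def normal(cls, mean, std, n=10_000):
        return cls(rng.normal(mean, std, size=n))

    def __add__(self, other):
        # Arithmetic propagates the samples, so uncertainty flows
        # through the model instead of being collapsed to a mean.
        other = other.samples if isinstance(other, UReal) else other
        return UReal(self.samples + other)

    def gt(self, other):
        """Comparison returns P(self > other), not a bare boolean."""
        other = other.samples if isinstance(other, UReal) else other
        return float(np.mean(self.samples > other))

# A thermostat-style control decision on an uncertain measurement:
temp = UReal.normal(mean=24.0, std=1.0)   # noisy sensor reading
setpoint = 25.0
p_too_hot = temp.gt(setpoint)             # roughly P(N(24,1) > 25)
cooling_on = p_too_hot > 0.5              # branch on the probability
```

Branching on `p_too_hot > 0.5` instead of `temp > setpoint` is the essential change: the control loop's decisions explicitly account for how confident the comparison is, which is what makes the simulation uncertainty-aware.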

{"title":"Uncertainty-aware Simulation of Adaptive Systems","authors":"Jean-Marc Jézéquel, Antonio Vallecillo","doi":"https://dl.acm.org/doi/10.1145/3589517","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3589517","url":null,"abstract":"<p>Adaptive systems manage and regulate the behavior of devices or other systems using control loops to automatically adjust the value of some measured variables to equal the value of a desired set-point. These systems normally interact with physical parts or operate in physical environments, where uncertainty is unavoidable. Traditional approaches to manage that uncertainty use either robust control algorithms that consider bounded variations of the uncertain variables and worst-case scenarios or adaptive control methods that estimate the parameters and change the control laws accordingly. In this article, we propose to include the sources of uncertainty in the system models as first-class entities using random variables to simulate adaptive and control systems more faithfully, including not only the use of random variables to represent and operate with uncertain values but also to represent decisions based on their comparisons. Two exemplar systems are used to illustrate and validate our proposal.</p>","PeriodicalId":50943,"journal":{"name":"ACM Transactions on Modeling and Computer Simulation","volume":"48 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2023-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138523765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0