
Performance Evaluation: Latest Articles

A comprehensive exploration of approximate DNN models with a novel floating-point simulation framework
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-25 | DOI: 10.1016/j.peva.2024.102423
Myeongjin Kwak, Jeonggeun Kim, Yongtae Kim

This paper introduces TorchAxf, a framework for fast simulation of diverse approximate deep neural network (DNN) models, including spiking neural networks (SNNs). The proposed framework utilizes various approximate adders and multipliers, supports industry-standard reduced-precision floating-point formats, such as bfloat16, and accommodates user-customized precision representations. Leveraging GPU acceleration on the PyTorch framework, TorchAxf accelerates approximate DNN training and inference. In addition, it allows seamless integration of arbitrary approximate arithmetic algorithms with C/C++ behavioral models to emulate approximate DNN hardware accelerators.

We utilize the proposed TorchAxf framework to assess twelve popular DNN models under approximate multiply-and-accumulate (MAC) operations. Through comprehensive experiments, we determine the suitable degree of floating-point arithmetic approximation for these DNN models without significant accuracy loss and offer the optimal reduced precision formats for each DNN model. Additionally, we demonstrate that approximate-aware re-training can rectify errors and enhance pre-trained DNN models under reduced precision formats. Furthermore, TorchAxf, operating on GPU, remarkably reduces simulation time for complex DNN models using approximate arithmetic by up to 131.38× compared to the baseline optimized CPU implementation. Finally, we compare the proposed framework with state-of-the-art frameworks to highlight its superiority.
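TorchAxf's code is not reproduced in this listing, but the core idea of emulating reduced-precision MAC arithmetic on top of PyTorch can be sketched as follows. This is a minimal illustration under our own assumptions: the function name approx_linear, the choice of bfloat16, and the float32 accumulation are not part of the TorchAxf API.

```python
# A minimal sketch (not the TorchAxf API): emulate a reduced-precision
# linear layer by rounding the operands to bfloat16 before the MAC step,
# accumulating in float32, and comparing against a float32 reference.
import torch

def approx_linear(x: torch.Tensor, w: torch.Tensor,
                  fmt: torch.dtype = torch.bfloat16) -> torch.Tensor:
    """Hypothetical reduced-precision linear layer: operands are rounded
    to `fmt` and the products are accumulated in float32."""
    x_lp = x.to(fmt).float()      # round activations to the reduced format
    w_lp = w.to(fmt).float()      # round weights to the reduced format
    return x_lp @ w_lp.t()        # float32 accumulation of the MACs

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(64, 256)              # a batch of activations
    w = torch.randn(128, 256)             # weights of a 256 -> 128 layer
    ref = x @ w.t()                       # full-precision reference
    out = approx_linear(x, w)
    rel_err = (out - ref).norm() / ref.norm()
    print(f"relative error of the bfloat16 emulation: {rel_err:.2e}")
```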

Citations: 0
Performance analysis of a collision channel with abandonments
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-25 | DOI: 10.1016/j.peva.2024.102424
Dieter Fiems, Tuan Phung-Duc

We consider a Markovian retrial queueing system with customer collisions and abandonment in the context of carrier-sense multiple access systems. Using z-transform techniques, we find a set of first-order differential equations for the probability generating functions of the orbit size when the server is empty, busy, or in the collision phase. We then rely on series expansion techniques to extract approximations for relevant performance measures from this set of differential equations. More precisely, we construct a numerical algorithm to calculate the terms in the series expansions of various factorial moments of the orbit size. To improve the accuracy of our series expansion approach, we apply Wynn’s epsilon algorithm which not only speeds up convergence, but also extends the region of convergence. We illustrate the accuracy of our approach by means of some numerical examples, and find that the method is both fast and accurate for a wide range of the parameter values.
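As a point of reference for the series-acceleration step, a textbook implementation of Wynn's epsilon algorithm applied to a sequence of partial sums might look as follows; the algorithm and the slowly converging test series (the alternating series for ln 2) are standard, but this is our own sketch, not the authors' code.

```python
# A minimal, textbook sketch of Wynn's epsilon algorithm for accelerating
# the convergence of a sequence of partial sums (not the authors' code).
import math

def wynn_epsilon(partial_sums):
    """Build the epsilon table column by column and return the accelerated
    estimate in the first row of the deepest even-order column."""
    n = len(partial_sums)
    # eps[j][c]: c = 0 stores eps_{-1} = 0, c = 1 stores eps_0 = S_j, ...
    eps = [[0.0] * (n + 1) for _ in range(n)]
    for j in range(n):
        eps[j][1] = partial_sums[j]
    for c in range(2, n + 1):
        for j in range(n - c + 1):
            diff = eps[j + 1][c - 1] - eps[j][c - 1]
            eps[j][c] = eps[j + 1][c - 2] + (1.0 / diff if diff != 0.0 else math.inf)
    best = n if n % 2 == 1 else n - 1     # deepest odd column = even-order epsilon
    return eps[0][best]

if __name__ == "__main__":
    # Alternating series for ln 2, which converges very slowly on its own.
    sums, total = [], 0.0
    for k in range(10):
        total += (-1) ** k / (k + 1)
        sums.append(total)
    print("last partial sum :", sums[-1])
    print("Wynn epsilon     :", wynn_epsilon(sums))
    print("ln(2)            :", math.log(2.0))
```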

Citations: 0
Dynamic load balancing in energy packet networks
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-04-23 | DOI: 10.1016/j.peva.2024.102414
A. Bušić, J. Doncel, J.M. Fourneau

Energy Packet Networks (EPNs) model the interaction between renewable sources generating energy following a random process and communication devices that consume energy. This network is formed by cells and, in each cell, there is a queue that handles energy packets and another queue that handles data packets. We assume Poisson arrivals of energy packets and of data packets to all the cells and exponential service times. We consider an EPN model with dynamic load balancing where a cell without data packets can poll other cells to migrate jobs. This migration can only take place when there is enough energy in both interacting cells, in which case a batch of data packets is transferred and the required energy is consumed (i.e. it disappears). We also assume that routing a data packet to the next station consumes energy. Our main result shows that the steady-state distribution of jobs in the queues admits a product-form solution provided that a stable solution of a fixed point equation exists. We prove sufficient conditions for irreducibility. Under these conditions and when the fixed point equation has a solution, the Markov chain is ergodic. We also provide sufficient conditions for the existence of a solution of the fixed point equation. We then focus on layered networks and we study the polling rates that must be set to achieve fair load balancing, i.e., such that, within the same layer, the load of the queues handling data packets is the same. Our numerical experiments illustrate that dynamic load balancing satisfies several interesting properties such as performance improvement and fair load balancing.
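The product-form result hinges on solving a fixed point equation for the per-queue loads. The exact equation depends on the EPN parameters, but numerically it reduces to a damped fixed-point iteration; the sketch below uses a made-up two-cell balance function purely as a placeholder, not the paper's equations.

```python
# A minimal sketch of the numerical pattern behind product-form results:
# iterate rho <- F(rho) with damping until the per-queue loads stabilize.
# The balance function F below is a made-up two-cell example.
from typing import Callable, List

def fixed_point(F: Callable[[List[float]], List[float]],
                rho0: List[float],
                damping: float = 0.5,
                tol: float = 1e-10,
                max_iter: int = 10_000) -> List[float]:
    rho = list(rho0)
    for _ in range(max_iter):
        nxt = F(rho)
        new = [(1 - damping) * r + damping * x for r, x in zip(rho, nxt)]
        if max(abs(a - b) for a, b in zip(new, rho)) < tol:
            return new
        rho = new
    raise RuntimeError("fixed-point iteration did not converge")

if __name__ == "__main__":
    # Hypothetical balance: each cell's load depends on the other's idle probability.
    lam, mu = 0.4, 1.0
    F = lambda rho: [min(0.99, lam / mu * (1 + 0.3 * (1 - rho[1]))),
                     min(0.99, lam / mu * (1 + 0.3 * (1 - rho[0])))]
    print(fixed_point(F, [0.5, 0.5]))
```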

Citations: 0
Network slicing: Is it worth regulating in a network neutrality context?
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-04-22 | DOI: 10.1016/j.peva.2024.102422
Yassine Hadjadj-Aoul, Maël Le Treust, Patrick Maillé, Bruno Tuffin

Network slicing is a key component of 5G-and-beyond networks, but it raises many questions about the associated business model and about whether it needs to be regulated, given its uneasy co-existence with the network neutrality debate. We propose in this paper a slicing model for heterogeneous users/applications in which a service provider may purchase a slice in a wireless network and offer a “premium” service whose improved quality stems from higher prices, leading to less demand and less congestion than the basic service offered by the network owner, a scheme known as Paris Metro Pricing. Using game theory, we obtain the economically optimal slice size and the prices charged by all actors. We also compare with the case of a unique “pipe” (no premium service), corresponding to a fully neutral scenario, and with the case of vertical integration, to evaluate the impact of slicing on all actors and identify the “best” economic scenario and the eventual need for regulation.
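As a toy numerical illustration of the Paris Metro Pricing idea (not the paper's game-theoretic model), the sketch below splits a fixed demand between a basic and a premium slice so that the generalized cost, price plus a delay penalty modeled by an M/M/1 sojourn time, is equalized; all prices, capacities, and the delay-sensitivity parameter are arbitrary assumptions.

```python
# A toy sketch of Paris Metro Pricing: a fixed demand splits between a
# basic and a premium slice until both offer equal generalized cost
# (price + alpha * M/M/1 delay). Parameter values are illustrative only.

def delay(lam: float, cap: float) -> float:
    """M/M/1 mean sojourn time; effectively infinite when overloaded."""
    return 1.0 / (cap - lam) if lam < cap else float("inf")

def equilibrium_split(total: float, cap1: float, cap2: float,
                      p1: float, p2: float, alpha: float) -> tuple:
    """Bisect on the premium-slice load so both slices cost the same."""
    def gap(lam2: float) -> float:
        lam1 = total - lam2
        return (p1 + alpha * delay(lam1, cap1)) - (p2 + alpha * delay(lam2, cap2))
    lo, hi = 0.0, min(total, cap2 - 1e-9)
    if gap(lo) <= 0:                  # basic slice already cheaper for everyone
        return total, 0.0
    for _ in range(200):              # plain bisection on the cost gap
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    return total - lo, lo

if __name__ == "__main__":
    lam1, lam2 = equilibrium_split(total=1.5, cap1=1.0, cap2=1.0,
                                   p1=0.0, p2=2.0, alpha=1.0)
    print(f"basic slice load: {lam1:.3f}, premium slice load: {lam2:.3f}")
```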

Citations: 0
Analyzing the age of information in prioritized status update systems under an interruption-based hybrid discipline
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-04-15 | DOI: 10.1016/j.peva.2024.102415
Tamer E. Fahim, Sherif I. Rabia, Ahmed H. Abd El-Malek, Waheed K. Zahra

Motivated by real-life applications, research interest has recently turned to prioritized status update systems, which prioritize update streams according to their timeliness constraints. The preferential service treatment between priority classes is commonly based on the classical disciplines of preemption and non-preemption. However, neither discipline satisfies all classes evenly. In our work, an interruption-based hybrid preemptive/non-preemptive discipline is proposed for a single-buffer system modeled as an M/M/1/2 priority queueing system. Each class being served (resp. buffered) can be preempted unless its recorded number of service preemptions reaches the predetermined in-service (resp. in-waiting) threshold. The thresholds between classes are the controlling parameters of the whole system’s performance. Using the stochastic hybrid system approach, the age of information (AoI) performance metric is analyzed in terms of its statistical average along with the higher-order moments, considering a general number of priority classes. Closed-form results are also obtained for some special cases, giving analytical insights about AoI stability under heavy loading conditions. The average AoI and its dispersion are numerically investigated for a three-class network. The significance of the proposed model lies in achieving a compromise satisfaction between all priority classes through careful adjustment of its threshold parameters. Two approaches are proposed to clarify the adjustment of these parameters. It turns out that the proposed hybrid discipline compensates for the limited buffer resource, achieving more promising performance with low design complexity and low cost. Moreover, the proposed scheme can operate under a wider span of the total offered load, through which the satisfaction of the whole network can be optimized under some legitimate constraints on the age-sensitive classes.
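The paper analyses an M/M/1/2 priority queue with preemption thresholds via the stochastic hybrid system approach; as a much simpler point of reference for how AoI is measured, the sketch below estimates the average AoI of a single-class M/M/1 system under preemptive last-come-first-served service by simulation and compares it against the known closed form 1/λ + 1/μ. It is an illustration of the metric, not the paper's model.

```python
# A simplified sketch (single class, preemptive LCFS, not the paper's
# threshold-based hybrid discipline): estimate the average age of
# information (AoI) of an M/M/1 status update system by simulation.
import random

def avg_aoi_preemptive(lam: float, mu: float, horizon: float,
                       seed: int = 1) -> float:
    rng = random.Random(seed)
    # Poisson arrivals of update packets up to the time horizon.
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t >= horizon:
            break
        arrivals.append(t)
    # Preemptive LCFS: a packet is delivered only if its exponential
    # service finishes before the next packet arrives and preempts it.
    deliveries = []                     # (generation time, delivery time)
    for i, a in enumerate(arrivals):
        done = a + rng.expovariate(mu)
        nxt = arrivals[i + 1] if i + 1 < len(arrivals) else float("inf")
        if done <= nxt and done <= horizon:
            deliveries.append((a, done))
    # Integrate the sawtooth age process across the delivery instants.
    area, last_gen, last_t = 0.0, 0.0, 0.0
    for gen, dt in deliveries:
        a0, a1 = last_t - last_gen, dt - last_gen   # age just after previous / just before this delivery
        area += 0.5 * (a0 + a1) * (dt - last_t)
        last_gen, last_t = gen, dt
    return area / last_t if last_t > 0 else float("inf")

if __name__ == "__main__":
    lam, mu = 0.5, 1.0
    print("simulated average AoI  :", round(avg_aoi_preemptive(lam, mu, 200_000.0), 3))
    print("closed form 1/lam + 1/mu:", 1 / lam + 1 / mu)
```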

Citations: 0
Enhanced performance prediction of ATL model transformations
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-04-05 | DOI: 10.1016/j.peva.2024.102413
Raffaela Groner, Peter Bellmann, Stefan Höppner, Patrick Thiam, Friedhelm Schwenker, Hans A. Kestler, Matthias Tichy

Model transformation languages are domain-specific languages used to define transformations of models. Such a transformation either translates a model from one modeling formalism into another or simply updates a given model. These transformations are often described declaratively and are often implemented based on very small models that cover the language of the input model. As a result, transformation developers are often unable to assess the time required to transform a larger model.

Hence, we propose a prediction approach based on machine learning which uses a set of model characteristics as input and provides a prediction of the execution time of a transformation defined in the Atlas Transformation Language (ATL). In our previous work (Groner et al., 2023), we already showed that support vector regression in combination with a model characterization based on the number of model elements, the number of references, and the number of attributes is the best choice in terms of usability and prediction accuracy for the transformations considered in our experiments.

A major weakness of our previous approach is that it fails to predict the performance of transformations that also transform attribute values of arbitrary length, such as string values. Therefore, we investigate in this work whether an extension of our feature sets that describes the average size of string attributes can help to overcome this weakness.

Our results show that the random forest approach, in combination with model characterizations based on the number of model elements, the number of references, the number of attributes, and the average size of string attributes filtered by the 85th percentile of their variance, is the best choice in terms of the simplicity of the model description and the quality of the obtained prediction. With this combination, we obtain a mean absolute percentage error (MAPE) of 5.07% over all modules and a MAPE of 4.82% over all modules excluding the transformation for which our previous approach failed. Previously, we obtained a MAPE of 38.48% over all modules and a MAPE of 4.45% over all modules excluding that transformation.
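In outline, the prediction pipeline described above corresponds to a random forest regressor over the four model characteristics, scored with MAPE. The sketch below uses scikit-learn on synthetic data standing in for the authors' ATL measurements, and it omits the 85th-percentile variance filtering; the feature ranges and the ground-truth formula are our own placeholders.

```python
# A minimal sketch of the prediction pipeline (not the authors' code):
# random forest regression over simple model characteristics, scored with
# the mean absolute percentage error (MAPE). The data set is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(10, 10_000, n),      # number of model elements
    rng.integers(10, 50_000, n),      # number of references
    rng.integers(10, 20_000, n),      # number of attributes
    rng.uniform(1, 200, n),           # average size of string attributes
])
# Hypothetical ground truth: execution time grows with the characteristics.
y = (0.1 + 0.5e-3 * X[:, 0] + 1e-4 * X[:, 1]
     + 5e-5 * X[:, 2] * X[:, 3] + rng.normal(0, 0.05, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
print(f"MAPE on held-out transformations: {100 * mape:.2f}%")
```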

Citations: 0
Age- and deviation-of-information of hybrid time- and event-triggered systems: What matters more, determinism or resource conservation?
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-03-19 | DOI: 10.1016/j.peva.2024.102412
Mahsa Noroozi, Markus Fidler

Age-of-information is a metric that quantifies the freshness of information obtained by sampling a remote sensor. In signal-agnostic sampling, sensor updates are triggered at certain times without being conditioned on the actual sensor signal. Optimal update policies have been researched and it is accepted that periodic updates achieve smaller age-of-information than random updates. We contribute a study of a signal-aware policy, where updates are triggered randomly by a defined sensor event. By definition, this implies random updates and as a consequence inferior age-of-information. Considering a notion of deviation-of-information as a signal-aware metric, our results show, however, that event-triggered systems can perform equally well as time-triggered systems while causing smaller mean network utilization. We use the stochastic network calculus to derive bounds of age- and deviation-of-information that are exceeded at most with a small, defined probability. We include simulation results that confirm the tail decay of the bounds. We also evaluate a hybrid time- and event-triggered policy where the event-triggered system is complemented by a minimal and a maximal update interval.
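The trade-off can be made concrete with a toy discrete-time simulation: a random-walk sensor signal is reported either every T slots (time-triggered) or whenever it drifts more than a threshold away from the last report (event-triggered), and we compare the mean update rate with the mean deviation between the signal and the monitor's copy. This is only a schematic of the signal-aware metric, not the stochastic network calculus analysis of the paper; the signal model and parameter values are our own assumptions.

```python
# A toy discrete-time sketch of time- vs event-triggered sampling: compare
# mean update rate and mean absolute deviation between a random-walk signal
# and the value last reported to the monitor (updates assumed instantaneous).
import random

def simulate(policy: str, steps: int = 100_000, period: int = 10,
             threshold: float = 3.0, seed: int = 7):
    rng = random.Random(seed)
    signal, reported = 0.0, 0.0
    updates, deviation = 0, 0.0
    for t in range(1, steps + 1):
        signal += rng.gauss(0.0, 1.0)              # sensor follows a random walk
        trigger = (t % period == 0) if policy == "time" \
                  else (abs(signal - reported) > threshold)
        if trigger:
            reported = signal                      # update delivered instantly
            updates += 1
        deviation += abs(signal - reported)
    return updates / steps, deviation / steps

if __name__ == "__main__":
    for policy in ("time", "event"):
        rate, dev = simulate(policy)
        print(f"{policy:>5}-triggered: update rate {rate:.3f}, "
              f"mean deviation {dev:.2f}")
```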

Citations: 0
Stepwise migration of a monolith to a microservice architecture: Performance and migration effort evaluation
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-03-12 | DOI: 10.1016/j.peva.2024.102411
Diogo Faustino, Nuno Gonçalves, Manuel Portela, António Rito Silva

Due to scalability requirements and the split of large software development projects into small agile teams, there is a current trend toward the migration of monolith systems to the microservice architecture. However, the split of the monolith into microservices, its encapsulation through well-defined interfaces, and the introduction of inter-microservice communication add a cost in terms of performance. In this paper, we describe a case study of the migration of a monolith to a microservice architecture, where a modular monolith architecture is used as an intermediate step. The impact on migration effort and performance is measured for both steps. The current state of the art analyses the migration of monolith systems to a microservice architecture, but we observed that migration effort and performance issues are already significant in the migration to a modular monolith. Therefore, a clear distinction is established for each of the steps, which may inform software architects when planning the migration of monolith systems. In particular, we consider the trade-offs of doing the full migration process versus migrating only to a modular monolith.

Citations: 0
The impact of load comparison errors on the power-of-d load balancing
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-02-28 | DOI: 10.1016/j.peva.2024.102408
Sanidhay Bhambay, Arpan Mukhopadhyay, Thirupathaiah Vasantam

We consider a system with n unit-rate servers where jobs arrive according to a Poisson process with rate nλ (λ < 1). In the standard Power-of-d or Pod scheme with d ≥ 2, for each incoming job, a dispatcher samples d servers uniformly at random and sends the incoming job to the least loaded of the d sampled servers. However, in practice, load comparisons may not always be accurate. In this paper, we analyse the effects of noisy load comparisons on the performance of the Pod scheme. To test the robustness of the Pod scheme against load comparison errors, we assume an adversarial setting where, in the event of an error, the adversary assigns the incoming job to the worst possible server, i.e., the server with the maximum load among the d sampled servers. We consider two error models: load-dependent and load-independent errors. In the load-dependent error model, the adversary has limited power in that it is able to cause an error with probability ϵ ∈ [0,1] only when the difference between the minimum and the maximum queue lengths of the d sampled servers is bounded by a constant threshold g ≥ 0. For this type of error, we show that, in the large system limit, the benefits of the Pod scheme are retained even if g and ϵ are arbitrarily large, as long as the system is heavily loaded, i.e., λ is close to 1. In the load-independent error model, the adversary is assumed to be more powerful in that it can cause an error with probability ϵ independent of the loads of the sampled servers. For this model, we show that the performance benefits of the Pod scheme are retained only if ϵ ≤ 1/d; for ϵ > 1/d we show that the stability region of the system shrinks and the system performs poorly in comparison to the random scheme. Our mean-field analysis uses a new approach to characterise fixed points which neither have closed-form solutions nor admit any recursion. Furthermore, we develop a generic approach to prove tightness and stability for any state-dependent load balancing scheme.
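A small simulation makes the load-dependent error model concrete: with probability ϵ, and only when the sampled minimum and maximum queue lengths differ by at most g, the dispatcher errs and sends the job to the most loaded of the d sampled servers. The sketch below estimates the resulting mean queue length via a uniformized Markov chain; it is a schematic of the policy, not the paper's mean-field analysis, and all parameter values are illustrative.

```python
# A schematic simulation of power-of-d routing under load-dependent
# comparison errors: with probability eps, and only when the sampled
# queues differ by at most g jobs, the job goes to the most loaded sample.
import random

def simulate_pod(n: int = 100, lam: float = 0.9, d: int = 2,
                 eps: float = 0.2, g: int = 2,
                 events: int = 500_000, seed: int = 3) -> float:
    rng = random.Random(seed)
    queue = [0] * n                     # jobs at each unit-rate server
    total = 0                           # current number of jobs in the system
    p_arrival = lam / (lam + 1.0)       # uniformized chain: arrival vs. service event
    acc = samples = 0
    for k in range(events):
        if rng.random() < p_arrival:
            sampled = rng.sample(range(n), d)
            best = min(sampled, key=queue.__getitem__)
            worst = max(sampled, key=queue.__getitem__)
            err = queue[worst] - queue[best] <= g and rng.random() < eps
            queue[worst if err else best] += 1
            total += 1
        else:
            s = rng.randrange(n)
            if queue[s] > 0:            # null event when the chosen server is idle
                queue[s] -= 1
                total -= 1
        if k >= events // 10:           # discard a warm-up period
            acc += total
            samples += 1
    return acc / (samples * n)          # mean queue length per server

if __name__ == "__main__":
    for eps in (0.0, 0.2, 0.5):
        print(f"eps = {eps:.1f}: mean queue length per server = "
              f"{simulate_pod(eps=eps):.3f}")
```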

Citations: 0
A dependence graph pattern mining method for processor performance analysis
IF 2.2 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-02-28 | DOI: 10.1016/j.peva.2024.102409
Yawen Zheng, Chenji Han, Tingting Zhang, Fuxin Zhang, Jian Wang

As the complexity of processor microarchitecture and applications increases, obtaining performance optimization knowledge, such as critical dependent chains, becomes more challenging. To tackle this issue, this paper employs pattern mining methods to analyze the critical path of processor micro-execution dependence graphs. We propose a high average utility pattern mining algorithm called Dependence Graph Miner (DG-Miner) based on the characteristics of dependence graphs. DG-Miner overcomes the limitations of current pattern mining algorithms for dependence graph pattern mining by offering support for variable utility, candidate generation using endpoint matching, the adjustable upper bound, and the concise pattern judgment mechanism. Experiments reveal that, compared with existing upper bound candidate generation methods, the adjustable upper bound reduces the number of candidate patterns by 28.14% and the running time by 27% on average. The concise pattern judgment mechanism enhances the conciseness of mining results by 16.31% and reduces the running time by 39.82%. Furthermore, DG-Miner aids in identifying critical dependent chains, critical program regions, and performance exceptions.
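High-average-utility mining generally ranks a pattern by its accumulated utility divided by its length. The toy sketch below applies that metric to chains in a small, hypothetical dependence graph whose edge weights stand for latencies; none of DG-Miner's candidate generation, endpoint matching, or upper-bound pruning is reproduced, and the graph itself is made up.

```python
# A toy illustration of the average-utility metric: enumerate short chains
# in a small dependence graph (edge weights = latencies) and rank them by
# total utility divided by chain length. Not the DG-Miner algorithm.

def chains(graph, max_len):
    """Yield (path, total_utility) for every simple chain with 1..max_len edges."""
    def dfs(node, path, util, visited):
        if len(path) > 1:                       # at least one edge: a valid chain
            yield tuple(path), util
        if len(path) - 1 == max_len:            # stop at the maximum chain length
            return
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                yield from dfs(nxt, path + [nxt], util + w, visited | {nxt})
    for start in graph:
        yield from dfs(start, [start], 0.0, {start})

if __name__ == "__main__":
    # Hypothetical micro-execution dependence graph.
    graph = {
        "load":  [("add", 4.0)],
        "add":   [("mul", 1.0), ("store", 1.0)],
        "mul":   [("store", 3.0)],
        "store": [],
    }
    ranked = sorted(chains(graph, max_len=3),
                    key=lambda pu: pu[1] / (len(pu[0]) - 1), reverse=True)
    for path, util in ranked:
        print(" -> ".join(path), f"(average utility {util / (len(path) - 1):.2f})")
```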

Citations: 0