
Latest publications in Performance Evaluation

Quality competition among internet service providers
IF 2.2 | CAS Q4, Computer Science | Q2 Mathematics | Pub Date: 2023-10-12 | DOI: 10.1016/j.peva.2023.102375
Simon Scherrer, Seyedali Tabaeiaghdaei, Adrian Perrig

Internet service providers (ISPs) have a variety of quality attributes that determine their attractiveness for data transmission, ranging from quality-of-service metrics such as jitter to security properties such as the presence of DDoS defense systems. ISPs should optimize these attributes in line with their profit objective, i.e., maximize revenue from attracted traffic while minimizing attribute-related cost, all in the context of alternative offers by competing ISPs. However, this attribute optimization is difficult not least because many aspects of ISP competition are barely understood on a systematic level, e.g., the multi-dimensional and cost-driving nature of path quality, and the distributed decision making of ISPs on the same path.

In this paper, we improve this understanding by analyzing how ISP competition affects path quality and ISP profits. To that end, we develop a game-theoretic model in which ISPs (i) affect path quality via multiple attributes that entail costs, (ii) are on paths together with other selfish ISPs, and (iii) are in competition with alternative paths when attracting traffic. The model enables an extensive theoretical analysis, surprisingly showing that competition can have both positive and negative effects on path quality and ISP profits, depending on the network topology and the cost structure of ISPs. However, a large-scale simulation, which draws on real-world data to instantiate the model, shows that the positive effects will likely prevail in practice: If the number of selectable paths towards any destination increases from 1 to 5, the prevalence of quality attributes increases by at least 50%, while 75% of ISPs improve their profit.

Citations: 0
Load balancing policies without feedback using timed replicas
IF 2.2 | CAS Q4, Computer Science | Q2 Mathematics | Pub Date: 2023-10-11 | DOI: 10.1016/j.peva.2023.102381
Rooji Jinan, Ajay Badita, Tejas Bodas, Parimal Parag

Dispatching policies such as join the shortest queue (JSQ), join the queue with smallest workload (JSW), and their power-of-two variants are used in load balancing systems where the instantaneous queue length or workload information at all queues or a subset of them can be queried. In situations where the dispatcher has an associated memory, one can minimize this query overhead by maintaining a list of idle servers to which jobs can be dispatched. Recent alternative approaches that do not require querying such information include the cancel-on-start and cancel-on-complete replication policies. The downside of such policies, however, is that the servers must communicate either the start or the completion time instant of each service to the dispatcher and must allow the coordinated and instantaneous cancellation of all redundant replicas. In practice, the requirements of query messaging, memory, and replica cancellation pose challenges in their implementation and their advantages are not clear. In this work, we consider load-balancing policies that do not need to query load information, do not need memory, and do not need to cancel replicas. Our policies allow the dispatcher to append a timer to each job or its replica. A job or a replica is discarded if its timer expires before it starts receiving service. We analyze several variants of this policy which are novel and simple to implement. We numerically observe that the variants of the proposed policy outperform popular feedback-based policies for low arrival rates, despite no feedback from servers to the dispatcher.
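As a rough illustration of the timer mechanism described above (not the paper's exact model), the sketch below simulates a dispatcher that routes each job to a uniformly random FCFS server with an attached timer; a job whose timer expires before its service begins is discarded. The number of servers, rates, and timer value are illustrative assumptions.

```python
import random

def simulate_timed_jobs(n_servers=4, n_jobs=2000, arrival_rate=2.0,
                        service_rate=1.0, timer=1.5, seed=0):
    """Each arriving job is dispatched to one uniformly random FCFS
    server with a timer; if service cannot start within `timer` time
    units of arrival, the job is discarded instead of served."""
    rng = random.Random(seed)
    free_at = [0.0] * n_servers      # when each server next becomes idle
    t = 0.0
    completed, discarded, total_wait = 0, 0, 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)   # Poisson arrivals
        s = rng.randrange(n_servers)         # uniform random dispatch
        start = max(t, free_at[s])
        if start - t > timer:                # timer expired before service
            discarded += 1
            continue
        free_at[s] = start + rng.expovariate(service_rate)
        total_wait += start - t
        completed += 1
    return completed, discarded, total_wait / max(completed, 1)
```

By construction, every completed job waits at most `timer`, so the mean wait of served jobs is bounded; comparing timer variants against feedback-based baselines such as JSQ would mirror the paper's numerical study.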

Citations: 0
The RESET and MARC techniques, with application to multiserver-job analysis
IF 2.2 | CAS Q4, Computer Science | Q2 Mathematics | Pub Date: 2023-10-11 | DOI: 10.1016/j.peva.2023.102378
Isaac Grosof, Yige Hong, Mor Harchol-Balter, Alan Scheller-Wolf

Multiserver-job (MSJ) systems, where jobs need to run concurrently across many servers, are increasingly common in practice. The default service ordering in many settings is First-Come First-Served (FCFS) service. Virtually all theoretical work on MSJ FCFS models focuses on characterizing the stability region, with almost nothing known about mean response time.

We derive the first explicit characterization of mean response time in the MSJ FCFS system. Our formula characterizes mean response time up to an additive constant, which becomes negligible as arrival rate approaches throughput, and allows for general phase-type job durations.

We derive our result by utilizing two key techniques: REduction to Saturated for Expected Time (RESET) and MArkovian Relative Completions (MARC).

Using our novel RESET technique, we reduce the problem of characterizing mean response time in the MSJ FCFS system to an M/M/1 with Markovian service rate (MMSR). The Markov chain controlling the service rate is based on the saturated system, a simpler closed system which is far more analytically tractable.

Unfortunately, the MMSR has no explicit characterization of mean response time. We therefore use our novel MARC technique to give the first explicit characterization of mean response time in the MMSR, again up to constant additive error. We specifically introduce the concept of “relative completions,” which is the cornerstone of our MARC technique.
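A minimal sketch of the MMSR abstraction that the RESET reduction targets: a single FCFS queue whose service rate is modulated by a background two-state Markov chain. For simplicity the environment chain is stepped once per job here, and the rates and switching probability are illustrative, not from the paper. The mean response time follows from the Lindley recursion.

```python
import random

def simulate_mmsr(lam=0.5, mu=(0.8, 1.6), p_switch=0.3,
                  n_jobs=5000, seed=1):
    """FCFS queue with Markovian service rate (MMSR): a two-state
    environment chain selects the exponential rate used for each job."""
    rng = random.Random(seed)
    state, t, depart, total_resp = 0, 0.0, 0.0, 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)           # next arrival
        if rng.random() < p_switch:         # environment chain steps
            state = 1 - state
        service = rng.expovariate(mu[state])
        depart = max(t, depart) + service   # Lindley recursion (FCFS)
        total_resp += depart - t
    return total_resp / n_jobs              # mean response time
```

Estimating this mean by simulation is exactly what the MARC technique replaces with an explicit characterization (up to constant additive error).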

Citations: 0
POBO: Safe and optimal resource management for cloud microservices
IF 2.2 | CAS Q4, Computer Science | Q2 Mathematics | Pub Date: 2023-10-10 | DOI: 10.1016/j.peva.2023.102376
Hengquan Guo, Hongchen Cao, Jingzhu He, Xin Liu, Yuanming Shi

Resource management in microservices is challenging due to the uncertain latency–resource relationship, dynamic environment, and strict Service-Level Agreement (SLA) guarantees. This paper presents a Pessimistic and Optimistic Bayesian Optimization framework, named POBO, for safe and optimal resource configuration for microservice applications. POBO leverages Bayesian learning to estimate the uncertain latency–resource functions and combines primal–dual and penalty-based optimization to maximize resource efficiency while guaranteeing strict SLAs. We prove that POBO can achieve sublinear regret and SLA violation against the optimal resource configuration in hindsight. We have implemented a prototype of POBO and conducted extensive experiments on a real-world microservice application. Our results show that POBO can find the safe and optimal configuration efficiently, outperforming Kubernetes’ built-in auto-scaling module and the state-of-the-art algorithms.
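POBO itself couples Bayesian learning with primal-dual and penalty-based optimization; as a much simpler stand-in that conveys only the penalty idea, the sketch below scores candidate resource configurations by resource cost plus a penalty on the estimated SLA violation, using noisy latency samples. The latency function, SLA value, and penalty weight are hypothetical, not the paper's algorithm.

```python
import random

def penalty_search(latency_fn, sla, configs, n_iters=200,
                   penalty=10.0, seed=3):
    """Sample noisy latencies for each candidate config, then pick the
    config minimizing resource cost plus a penalty on SLA violation."""
    rng = random.Random(seed)
    sums = {c: 0.0 for c in configs}
    counts = {c: 0 for c in configs}
    for _ in range(n_iters):
        c = rng.choice(configs)                          # uniform exploration
        sums[c] += latency_fn(c) + rng.gauss(0.0, 0.05)  # noisy sample
        counts[c] += 1
    def score(c):
        mean = sums[c] / max(counts[c], 1)        # estimated mean latency
        return c + penalty * max(0.0, mean - sla) # cost + SLA penalty
    return min(configs, key=score)
```

With a hypothetical latency curve `1/c` and an SLA of 0.5, the search settles on the cheapest configuration that meets the SLA; POBO's contribution is doing this safely online with sublinear regret guarantees.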

Citations: 0
Simulation modeling of Zoom traffic on a campus network: A case study
IF 2.2 | CAS Q4, Computer Science | Q2 Mathematics | Pub Date: 2023-10-10 | DOI: 10.1016/j.peva.2023.102382
Mehdi Karamollahi, Carey Williamson, Martin Arlitt

In this paper, we develop a synthetic workload model for the Zoom network application based on empirical Zoom traffic measurements from a campus network. We then use this model in a simulation study of Zoom network traffic at the campus scale. The simulation results show that hybrid learning places a substantial load on the campus network. Additional simulation experiments investigate the potential benefits of locally-hosted Zoom infrastructure, improved load balancing strategies for Zoom servers, and multicast delivery for Zoom network traffic. The simulation results show that the multicast approach offers the greatest potential benefit for improving Zoom performance on our campus network.

Citations: 0
Minimizing age of information under arbitrary arrival model with arbitrary packet size
IF 2.2 | CAS Q4, Computer Science | Q2 Mathematics | Pub Date: 2023-10-10 | DOI: 10.1016/j.peva.2023.102373
Kumar Saurav, Rahul Vaze

We consider a single source–destination pair, where information updates (in short, updates) arrive at the source at arbitrary time instants. For each update, its size, i.e. the service time required for complete transmission to the destination, is also arbitrary. At any time, the source may choose which update to transmit, while incurring a transmission cost that is proportional to the duration of transmission. We consider the age of information (AoI) metric that quantifies the staleness of the update (information) at the destination. At any time, AoI is equal to the difference between the current time and the arrival time of the latest update (at the source) that has been completely transmitted (to the destination). The goal is to find a causal (i.e. online) scheduling policy that minimizes the sum of the AoI and the transmission cost, where the possible decisions at any time are (i) whether to preempt the update under transmission upon arrival of a new update, and (ii) if no update is under transmission, then choose which update to transmit (among the available updates). In this paper, we propose a causal policy called SRPT+ that, at each time, (i) preempts the update under transmission if a new update arrives with a smaller size (compared to the remaining size of the update under transmission), and (ii) if no update is under transmission, then from the set of available updates with size less than a threshold (which is a function of the transmission cost and the current AoI), begins to transmit the update for which the ratio of the reduction in AoI upon complete transmission (if not preempted in future) to the remaining size is maximum. We characterize the performance of SRPT+ using a metric called the competitive ratio, i.e. the ratio of the cost of the causal policy to the cost of an optimal offline policy (that knows the entire input in advance), maximized over all possible inputs. We show that the competitive ratio of SRPT+ is at most 5. In the special case when there is no transmission cost, we further show that the competitive ratio of SRPT+ is at most 3.
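The AoI definition above can be turned into a small calculation: the age grows linearly over time and drops to (delivery time minus arrival time) at each completed delivery, so the time-average AoI is the area under this sawtooth. The helper below computes that average for given (arrival, delivery) pairs; it is a generic AoI computation, not the SRPT+ policy itself, and it assumes an initial update at time 0 so the age is well defined from the start.

```python
def average_aoi(deliveries, horizon):
    """Time-average age of information over [0, horizon].
    `deliveries` holds (arrival_time, delivery_time) pairs of updates
    that were completely transmitted to the destination."""
    t, freshest, area = 0.0, 0.0, 0.0
    for arrival, delivery in sorted(deliveries, key=lambda p: p[1]):
        if delivery > horizon:
            break
        # age rises linearly from (t - freshest) to (delivery - freshest)
        area += (delivery - t) * ((t - freshest) + (delivery - freshest)) / 2
        t = delivery
        freshest = max(freshest, arrival)   # freshest delivered update
    area += (horizon - t) * ((t - freshest) + (horizon - freshest)) / 2
    return area / horizon
```

For instance, a single update arriving at time 1 and delivered at time 2 gives an average AoI of 1.5 over the interval [0, 4].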

Citations: 0
The saturated Multiserver Job Queuing Model with two classes of jobs: Exact and approximate results
IF 2.2 | CAS Q4, Computer Science | Q2 Mathematics | Pub Date: 2023-10-06 | DOI: 10.1016/j.peva.2023.102370
Diletta Olliaro, Marco Ajmone Marsan, Simonetta Balsamo, Andrea Marin

We consider a multiserver queue where jobs request for a varying number of servers for a random service time. The requested number of servers is assigned to each job following a First-In First-Out (FIFO) order. When the number of free servers is not sufficient to accommodate the next job in line, that job and any subsequent jobs in the queue are forced to wait. As a result, not all available servers are allocated to jobs if the next job requires more servers than are currently free. This queuing system is often called a Multiserver Job Queuing Model (MJQM).

In this paper, we study the behavior of a MJQM under saturation, i.e., when the waiting line always contains jobs to be served. We categorize jobs into two classes: the first class consists of jobs that only require one server, while the second class includes jobs that require a larger number of servers. We obtain the system utilization and the throughput of the two job classes for the case in which the number of servers requested by jobs in the second class is equal to the number of available servers, using a simple approach that allows for a general distribution of the service time of jobs in the second class. Hence, we derive the stability condition of the non-saturated MJQM under these assumptions. Additionally, we develop an approximate analysis for the case in which the jobs of the second class require a fraction of the available servers.

Based on analytical and numerical results, we highlight interesting system properties and insights.
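The saturated regime described above can be simulated directly for the case the paper solves exactly: class-2 jobs request all k servers, so a class-2 job at the head of the FIFO line drains the whole system before starting. The class mix, rates, and server count below are illustrative assumptions (exponential service, jobs drawn i.i.d.).

```python
import heapq
import random

def saturated_mjqm(k=4, p2=0.3, mu1=1.0, mu2=0.5,
                   n_jobs=20000, seed=2):
    """Saturated two-class MJQM served in FIFO order: class-1 jobs need
    one server, class-2 jobs need all k. Returns per-class throughput."""
    rng = random.Random(seed)
    busy = []                 # heap of completion times, one per busy server
    t, started = 0.0, [0, 0]
    for _ in range(n_jobs):
        cls = 1 if rng.random() < p2 else 0
        need = k if cls == 1 else 1
        while len(busy) > k - need:          # head-of-line blocking
            t = max(t, heapq.heappop(busy))
        rate = mu2 if cls == 1 else mu1
        finish = t + rng.expovariate(rate)
        for _ in range(need):                # occupy `need` servers
            heapq.heappush(busy, finish)
        started[cls] += 1
    end = max(busy)
    return started[0] / end, started[1] / end
```

The per-class throughputs from such a run are the quantities the paper characterizes in closed form, and they yield the stability condition of the non-saturated system.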

Citations: 0
Probabilistic indoor tracking of Bluetooth Low-Energy beacons
IF 2.2 | CAS Q4, Computer Science | Q2 Mathematics | Pub Date: 2023-10-05 | DOI: 10.1016/j.peva.2023.102374
F. Serhan Daniş, Cem Ersoy, A. Taylan Cemgil

We construct a practical and real-time probabilistic framework for fine target tracking. In our scenario, a Bluetooth Low-Energy (BLE) device navigating in the environment publishes BLE packets that are captured by stationary BLE sensors. The aim is to accurately estimate the live position of the BLE device emitting these packets. The framework is built upon a hidden Markov model (HMM), the components of which are determined with a combination of heuristic and data-driven approaches. In the data-driven part, we rely on fingerprints formed beforehand by extracting received signal strength indicators (RSSI) from the packets. These data are then transformed into probabilistic radio-frequency maps that are used for measuring the likelihood of an RSSI observation at a given position. The heuristic part involves the movement of the tracked object. Having no access to any inertial information of the object, we model this movement with Gaussian densities whose parameters are to be determined heuristically. The practicality of the framework comes from the associated small parameter set used to discretize the components of the HMM. By tuning these parameters, such as the grid size of the area, the mask size, and the covariance of the Gaussian, probabilistic filtering becomes tractable for discrete state spaces. The filtering is then performed by the forward algorithm given the instantaneous sequential RSSI measurements. The performance of the system is evaluated by taking the mean squared errors of the most probable positions at each time step to their corresponding ground-truth positions. We report the statistics of the error distributions and see that we achieve promising results. Finally, the approach is also evaluated in terms of its runtime and memory usage.
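The filtering step the abstract describes is the standard HMM forward recursion: fold the motion model into the current belief, weight by the RSSI likelihood from the radio-frequency map, and renormalize. A dimension-agnostic sketch over a discretized position grid (toy matrices, not the paper's fingerprint maps):

```python
def forward_step(belief, transition, likelihood):
    """One HMM forward-filtering step on a discrete grid of positions.
    belief[i]: current P(position i); transition[i][j]: motion model
    P(j | i); likelihood[j]: P(observed RSSI | position j)."""
    n = len(belief)
    predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                 for j in range(n)]                      # motion update
    posterior = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(posterior)                                   # normalizer
    return [p / z for p in posterior]
```

Repeating this step for each incoming RSSI measurement yields the live belief over positions; the position estimate is, e.g., the arg-max of the belief.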

Citations: 0
A hardware-independent time estimation method for inference process of convolutional layers on GPU
IF 2.2 · CAS Tier 4, Computer Science · Q2 Mathematics · Pub Date: 2023-09-20 · DOI: 10.1016/j.peva.2023.102368
Chengzhen Meng, Hongjun Dai

Nowadays, various AI applications based on Convolutional Neural Networks (CNNs) are widely deployed on GPU-accelerated devices. However, due to the lack of visibility into GPU internal scheduling, it is challenging to accurately model the performance of CNN inference tasks or to estimate the latency of CNN tasks that are executing or waiting on the GPU. This hinders multi-model scheduling across multiple devices and real-time CNN inference. Therefore, in this paper, we propose a method to estimate the forward execution time of a convolutional layer of arbitrary shape on a GPU. The proposed method divides an explicit General Matrix Multiplication (GEMM) convolution operation into a series of individually estimable GPU operations and constructs performance models at the level of sub-operations rather than hardware instructions or entire models. The method is also easily adapted to different hardware devices or underlying algorithm implementations, since it focuses on how execution time varies with the input data scale rather than on specific instructions or hardware actions. In experiments on four typical CUDA-compatible platforms, the proposed method achieves an average error rate below 5% for the convolutional layers of several practical CNN models, and an error rate of about 8% when estimating the GEMM convolution implementations provided by the cuDNN library. The experiments show that the proposed method can predict the forward execution time of convolutional layers of arbitrary size in CNN inference tasks on different GPU models.
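As a rough illustration of the sub-operation decomposition, an explicit-GEMM convolution can be viewed as an im2col unfolding followed by a single matrix multiply, with a simple cost model fitted per sub-operation. The decomposition into exactly these two sub-operations, and the coefficient values, are assumptions for the sketch; in practice the constants would be regressed from microbenchmarks on each device:

```python
def gemm_conv_shape(n, c_in, h, w, c_out, kh, kw, stride=1, pad=0):
    """GEMM dimensions of an explicit im2col convolution: the kernel
    becomes an (M x K) matrix and the unfolded input a (K x N) matrix,
    so the GEMM performs M*K*N multiply-accumulates."""
    out_h = (h + 2 * pad - kh) // stride + 1
    out_w = (w + 2 * pad - kw) // stride + 1
    m = c_out                   # rows: one per output channel
    k = c_in * kh * kw          # reduction: one per kernel element
    n_cols = n * out_h * out_w  # columns: one per output position
    return m, k, n_cols

def estimate_layer_time(shape, coeffs):
    """Sum simple linear cost models over the two sub-operations,
    each a function of the input data scale only."""
    m, k, n_cols = shape
    t_im2col = coeffs["im2col_per_elem"] * k * n_cols
    t_gemm = coeffs["gemm_per_fma"] * m * k * n_cols + coeffs["gemm_fixed"]
    return t_im2col + t_gemm

# Hypothetical device constants (in practice fitted from timings).
COEFFS = {"im2col_per_elem": 2e-10, "gemm_per_fma": 1e-11, "gemm_fixed": 5e-6}
shape = gemm_conv_shape(n=1, c_in=64, h=56, w=56, c_out=128,
                        kh=3, kw=3, stride=1, pad=1)
t_est = estimate_layer_time(shape, COEFFS)  # seconds, under these constants
```

Because the model depends only on the layer shape and the fitted constants, re-targeting a new GPU means re-fitting `COEFFS`, not changing the model structure.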

Citations: 0
On the regret of online edge service hosting
IF 2.2 · CAS Tier 4, Computer Science · Q2 Mathematics · Pub Date: 2023-09-11 · DOI: 10.1016/j.peva.2023.102367
R. Sri Prakash, Nikhil Karamchandani, Sharayu Moharir

We consider the problem of service hosting, where a service provider can dynamically rent edge resources via short-term contracts to ensure better quality of service to its customers. The service can also be partially hosted at the edge, in which case customers' requests can be partially served at the edge. The total cost incurred by the system is modeled as a combination of the rent cost, the service cost incurred due to latency in serving customers, and the fetch cost incurred by the bandwidth used to fetch the service's code/databases from the cloud servers in order to host the service at the edge. In this paper, we compare multiple hosting policies using regret as the metric, defined as the difference between the cost incurred by a policy and that of the optimal policy over a time horizon T. In particular, we consider the Retro Renting (RR) and Follow The Perturbed Leader (FTPL) policies proposed in the literature and provide performance guarantees on their regret. We show that under i.i.d. stochastic arrivals, the RR policy has linear regret while the FTPL policy has constant regret. Next, we propose a variant of FTPL, namely Wait then FTPL (W-FTPL), which also has constant regret while exhibiting much better dependence on the fetch cost. We also show that under adversarial arrivals, the RR policy has linear regret while both FTPL and W-FTPL have regret O(√T), which is order-optimal.
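The FTPL idea can be sketched as follows: each round, play the action whose cumulative past cost, minus a one-off random perturbation, is smallest. The exponential perturbation scale, the two-action cost stream, and the function below are illustrative assumptions; the paper's actual policies (including W-FTPL's initial wait period keyed to the fetch cost) involve details not reproduced here:

```python
import random

def ftpl_decisions(cost_matrix, eta, seed=0):
    """Follow The Perturbed Leader over a finite action set.

    cost_matrix[t][a] is the cost of action a in round t (e.g. the
    rent + latency + fetch cost of a hosting configuration). Each
    round we play the action minimizing cumulative past cost minus a
    fixed exponential perturbation of scale 1/eta.
    """
    rng = random.Random(seed)
    n_actions = len(cost_matrix[0])
    perturb = [rng.expovariate(1.0) / eta for _ in range(n_actions)]
    cum = [0.0] * n_actions
    plays = []
    for costs in cost_matrix:
        a = min(range(n_actions), key=lambda i: cum[i] - perturb[i])
        plays.append(a)
        for i, c in enumerate(costs):
            cum[i] += c
    return plays

# Action 0 = serve from the cloud (high latency cost each round),
# action 1 = host at the edge (cheaper per round in this toy stream).
costs = [[1.0, 0.2]] * 50
plays = ftpl_decisions(costs, eta=10.0)
```

With a consistently cheaper action, the perturbed-leader rule locks onto it after a few rounds; the perturbation is what protects the policy against adversarial cost sequences.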

Citations: 0