Proceedings of the ACM on Measurement and Analysis of Computing Systems: Latest Publications
POMACS V7, N1, March 2023 Editorial
K. Avrachenkov, P. Gill, B. Urgaonkar
The Proceedings of the ACM on Measurement and Analysis of Computing Systems (POMACS) focuses on the measurement and performance evaluation of computer systems and operates in close collaboration with the ACM Special Interest Group SIGMETRICS. All papers in this issue of POMACS will be presented at the ACM SIGMETRICS 2023 conference on June 19-23, 2023, in Orlando, Florida, USA. These papers have been selected during the fall submission round by the 91 members of the ACM SIGMETRICS 2023 program committee via a rigorous review process. Each paper was conditionally accepted (and shepherded), allowed a "one-shot" revision (to be resubmitted to one of the subsequent three SIGMETRICS deadlines), or rejected (with re-submission allowed after a year). For this issue, which represents the fall deadline, POMACS is publishing 26 papers out of 119 submissions. All submissions received at least 3 reviews and borderline cases were extensively discussed during the online program committee meeting. Based on the indicated track(s), roughly 21% of the submissions were in the Theory track, 40% were in the Measurement & Applied Modeling track, 29% were in the Systems track, and 39% were in the Learning track. Many individuals contributed to the success of this issue of POMACS. First, we would like to thank the authors, who submitted their best work to SIGMETRICS/POMACS. Second, we would like to thank the program committee members who provided constructive feedback in their reviews to authors and participated in the online discussions and program committee meeting. We also thank the several external reviewers who provided their expert opinion on specific submissions that required additional input. We are also grateful to the SIGMETRICS Board Chair, Giuliano Casale, and to past program committee Chairs, Niklas Carlsson, Edith Cohen, and Philippe Robert, who provided a wealth of information and guidance. 
Finally, we are grateful to the Organization Committee and to the SIGMETRICS Board for their ongoing efforts and initiatives for creating an exciting program for ACM SIGMETRICS 2023.
DOI: 10.1145/3579311
Citations: 0
SLITS: Sparsity-Lightened Intelligent Thread Scheduling
Wangkai Jin, Xiangjun Peng
A diverse set of scheduling objectives (e.g., resource contention, fairness, priority) breeds a series of objective-specific schedulers for multi-core architectures. Existing designs incorporate thread-to-thread statistics at runtime and schedule threads based on such an abstraction (we formalize thread-to-thread interaction as the Thread-Interaction Matrix). However, this abstraction also reveals a consistently overlooked issue: the Thread-Interaction Matrix (TIM) is highly sparse. Therefore, existing designs can only deliver sub-optimal decisions, since the sparsity issue limits the number of thread permutations (and their statistics) that can be exploited when making scheduling decisions. We introduce Sparsity-Lightened Intelligent Thread Scheduling (SLITS), a general scheduler design that mitigates the sparsity issue of the TIM while remaining customizable for different scheduling objectives. SLITS is built on the key insight that the sparsity of the TIM can be effectively mitigated via advanced Machine Learning (ML) techniques. SLITS has three components. First, SLITS profiles thread interactions for only a small number of thread permutations and forms the TIM from the run-time statistics. Second, SLITS estimates the missing values in the TIM using a Factorization Machine (FM), an ML technique that can fill in the missing values of a large-scale sparse matrix from limited information. Third, SLITS leverages Lazy Reschedule, a general mechanism that serves as the building block for customizing scheduling policies to different objectives. We show how SLITS can be (1) customized for different scheduling objectives, including resource contention and fairness, and (2) implemented with only negligible hardware costs. We also discuss how SLITS can potentially be applied to other thread-scheduling contexts. We evaluate two SLITS variants against four state-of-the-art scheduler designs. Averaged across 11 benchmarks, SLITS achieves a speedup of 1.08x over the de facto standard thread scheduler, the Completely Fair Scheduler, under a 16-core setting for a variety of thread counts (32, 64, and 128). Our analysis reveals that the benefits of SLITS stem from significant improvements in cache utilization. In addition, our experimental results confirm that SLITS is scalable and that its benefits remain robust as the number of threads increases. We also perform extensive studies to (1) break down the SLITS components to justify the synergy of our design choices, (2) examine the impact of varying the estimation coverage of the FM, (3) justify the necessity of Lazy Reschedule rather than periodic rescheduling, and (4) demonstrate that the hardware overheads of SLITS implementations can be marginal (<1% chip area and power).
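The core idea, that a sparse interaction matrix can be completed from a few profiled entries, can be sketched with generic low-rank matrix completion. The code below is an illustrative simplification on synthetic data (plain matrix factorization fitted by SGD), not the Factorization Machine or profiling pipeline of SLITS:

```python
import numpy as np

def complete_matrix(T, observed, rank, lr=0.02, epochs=2000, seed=0):
    """Fill in missing entries of a sparse interaction matrix by
    low-rank factorization T ~= U @ V.T, fitted with SGD on the
    observed entries only. A generic stand-in for the FM estimator
    named in the abstract, not the SLITS implementation."""
    rng = np.random.default_rng(seed)
    n, m = T.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    obs = list(zip(*np.nonzero(observed)))
    for _ in range(epochs):
        for i, j in obs:
            err = T[i, j] - U[i] @ V[j]
            # Simultaneous gradient step on both factors.
            U[i], V[j] = U[i] + lr * err * V[j], V[j] + lr * err * U[i]
    return U @ V.T

# Synthetic rank-2 "Thread-Interaction Matrix" with ~40% of entries missing.
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 2))
B = rng.standard_normal((12, 2))
truth = A @ B.T
mask = rng.random(truth.shape) < 0.6          # True = profiled entry
est = complete_matrix(truth * mask, mask, rank=2)
rmse_missing = np.sqrt(np.mean((est - truth)[~mask] ** 2))
```

When enough entries are observed relative to the rank, the fitted factors generalize to the unprofiled entries, which is the leverage the abstract describes.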
DOI: 10.1145/3579436
Citations: 1
Each at its Own Pace: Third-Party Dependency and Centralization Around the World
Rashna Kumar, Sana Asif, Elise Lee, F. Bustamante
We describe the results of a large-scale study of third-party dependencies around the world, based on the regional top-500 popular websites accessed from vantage points in 50 countries, together covering all inhabited continents. This broad perspective shows that dependency on a third-party DNS, CDN, or CA provider varies widely around the world, ranging from 19% to as much as 76% of websites across countries. Critical dependencies of websites, where a site depends on a single third-party provider, are similarly spread, ranging from 5% (CDN in Costa Rica) to 60% (DNS in China). Interestingly, despite this high variability, our results suggest a highly concentrated market of third-party providers: across all countries, three providers serve an average of 92% of the surveyed websites, and Google by itself serves an average of 70%. Even more concerning, these differences persist a year later with increasing dependencies, particularly for DNS and CDNs. We briefly explore various factors that may help explain the differences and similarities in degrees of third-party dependency across countries, including economic conditions, Internet development, economic trading partners, site categories, home countries, and the traffic skewness of each country's top-500 sites.
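The study's dependency metrics can be illustrated with a toy classifier: map each site's nameservers to a provider and compute the third-party and critical-dependency shares. The provider suffixes and sample sites below are hypothetical stand-ins; the study itself resolves real DNS/CDN/CA records from vantage points in 50 countries:

```python
# Hypothetical suffix -> provider map; a real measurement would
# classify NS records resolved from each vantage point.
PROVIDERS = {
    "awsdns": "AWS Route 53",
    "cloudflare.com": "Cloudflare",
    "googledomains.com": "Google",
}

def classify(ns_host):
    for suffix, provider in PROVIDERS.items():
        if suffix in ns_host:
            return provider
    return "self-hosted/other"

def dependency_stats(sites):
    """sites: {domain: [nameserver hostnames]}. Returns the share of
    sites using any third-party DNS provider and the share critically
    dependent on a single third party (no self-hosted fallback)."""
    third, critical = 0, 0
    for ns_list in sites.values():
        providers = {classify(ns) for ns in ns_list} - {"self-hosted/other"}
        if providers:
            third += 1
            if len(providers) == 1 and all(
                classify(ns) != "self-hosted/other" for ns in ns_list
            ):
                critical += 1
    n = len(sites)
    return third / n, critical / n

sample = {
    "a.example": ["ns-1.awsdns-07.org", "ns-2.awsdns-12.net"],  # all AWS
    "b.example": ["ali.ns.cloudflare.com", "ns1.b.example"],    # mixed
    "c.example": ["ns1.c.example", "ns2.c.example"],            # self-hosted
}
third_share, critical_share = dependency_stats(sample)
```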
DOI: 10.1145/3579437
Citations: 3
PEACH: Proactive and Environment-Aware Channel State Information Prediction with Depth Images
Serkut Ayvaşık, Fidan Mehmeti, Edwin Babaians, W. Kellerer
Up-to-date and accurate prediction of Channel State Information (CSI) is of paramount importance in Ultra-Reliable Low-Latency Communications (URLLC), especially in dynamic environments where unpredictable mobility is inherent. CSI can be meticulously tracked by means of frequent pilot transmissions, which, on the downside, increase metadata (overhead signaling) and latency, both detrimental to URLLC. To overcome these issues, in this paper we take a fundamentally different approach and propose PEACH, a machine learning system that uses environmental information in the form of depth images to predict CSI amplitude in beyond-5G systems, without requiring metadata radio resources such as pilot overheads or any feedback mechanism. PEACH exploits depth images by employing a convolutional neural network to predict the current and the next-100-ms CSI amplitudes. The proposed system is experimentally validated with extensive measurements conducted in an indoor environment, involving two static receivers and two transmitters, one of which is placed on top of a mobile robot. We show that environmental information can be instrumental in proactive CSI amplitude acquisition for both static and mobile users at base stations, delivering performance nearly on par with pilot-based methods while completely avoiding dependence on feedback and pilot transmission for both downlink and uplink CSI. Furthermore, compared to traditional pilot estimation based on demodulation reference signals under ideal, interference-free conditions, our experimental results show that PEACH matches the average bit error rate when channel conditions are poor (using low-order modulation), and is not much worse at higher modulation orders such as 16-QAM or 64-QAM. More importantly, in realistic cases with interference taken into account, our experiments demonstrate considerable improvements from PEACH in the normalized mean square error of CSI amplitude estimation, by up to 6 dB, compared to traditional approaches.
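As a rough illustration of the pipeline (depth image in, CSI amplitudes for the current slot and the slot 100 ms ahead out), a minimal untrained forward pass might look as follows. All layer shapes and the 64-subcarrier width are assumptions for illustration, not the architecture used by PEACH:

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode single-channel 2-D cross-correlation."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def predict_csi(depth, kernel, W_head, n_subcarriers=64):
    """Toy forward pass: depth image -> conv feature map -> ReLU ->
    flatten -> linear head emitting CSI amplitudes for the current
    slot and the next-100-ms slot (2 * n_subcarriers outputs)."""
    feat = np.maximum(conv2d(depth, kernel), 0.0).ravel()
    out = feat @ W_head
    return out[:n_subcarriers], out[n_subcarriers:]

rng = np.random.default_rng(0)
depth = rng.random((16, 16))                  # stand-in depth frame
kernel = rng.standard_normal((3, 3))          # untrained conv filter
W_head = rng.standard_normal((14 * 14, 2 * 64))
now_amp, next_amp = predict_csi(depth, kernel, W_head)
```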
DOI: 10.1145/3579450
Citations: 2
(Private) Kernelized Bandits with Distributed Biased Feedback
Fengjiao Li, Xingyu Zhou, Bo Ji
In this paper, we study kernelized bandits with distributed biased feedback. This problem is motivated by several real-world applications (such as dynamic pricing, cellular network configuration, and policy making), where users from a large population contribute to the reward of the action chosen by a central entity, but it is difficult to collect feedback from all users. Instead, only biased feedback (due to user heterogeneity) from a subset of users may be available. In addition to such partial biased feedback, we also face two practical challenges: communication cost and computation complexity. To tackle these challenges, we carefully design a new distributed phase-then-batch-based elimination (DPBE) algorithm, which samples users in phases to collect feedback and reduce bias, and employs maximum variance reduction to select actions in batches within each phase. By properly choosing the phase length, the batch size, and the confidence width used for eliminating suboptimal actions, we show that DPBE achieves a sublinear regret of Õ(T^{1-α/2} + √(γ_T T)), where α ∈ (0,1) is a tunable user-sampling parameter and γ_T is the maximum information gain. Moreover, DPBE can significantly reduce both communication cost and computation complexity in distributed kernelized bandits, compared to some variants of state-of-the-art algorithms (originally developed for standard kernelized bandits). Furthermore, by incorporating various differential privacy models (including the central, local, and shuffle models), we generalize DPBE to provide privacy guarantees for users participating in the distributed learning process. Finally, we conduct extensive simulations to validate our theoretical results and evaluate the empirical performance.
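The phase-then-batch idea can be illustrated on a plain finite-armed bandit: sample surviving arms in growing batches per phase, then eliminate arms using confidence bounds. This is a simplified analogue under Gaussian noise, not the kernelized, distributed DPBE algorithm:

```python
import numpy as np

def phased_elimination(means, noise=0.1, rounds=20000, seed=0):
    """Phase-based successive elimination for finitely many arms:
    each phase pulls every surviving arm in a batch of growing size,
    then drops arms whose upper confidence bound falls below the
    best lower confidence bound."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts, sums = np.zeros(k), np.zeros(k)
    arms, t, phase = list(range(k)), 0, 1
    while t < rounds:
        for a in arms:                        # batched pulls this phase
            for _ in range(2 ** phase):
                if t >= rounds:
                    break
                reward = means[a] + noise * rng.standard_normal()
                counts[a] += 1
                sums[a] += reward
                t += 1
        idx = np.array(arms)
        mu = sums[idx] / counts[idx]
        rad = np.sqrt(2.0 * np.log(rounds) / counts[idx])
        best_lcb = np.max(mu - rad)
        arms = [a for a, m, r in zip(arms, mu, rad) if m + r >= best_lcb]
        phase += 1
    return arms

# Arms with clearly suboptimal means get eliminated within the budget.
surviving = phased_elimination([0.2, 0.5, 0.9, 0.85])
```

Doubling the batch size each phase is what shrinks the confidence radius fast enough to discard suboptimal arms while spending most pulls on near-optimal ones.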
DOI: 10.1145/3579318
Citations: 3
Characterizing Cryptocurrency-themed Malicious Browser Extensions
Kailong Wang, Yuxi Ling, Yanjun Zhang, Zhou Yu, Haoyu Wang, Guangdong Bai, B. Ooi, J. Dong
Due to the surging popularity of various cryptocurrencies in recent years, a large number of browser extensions have been developed as portals to relevant services, such as cryptocurrency exchanges and wallets. This has stimulated a wild growth of cryptocurrency-themed malicious extensions that cause heavy financial losses to users and legitimate service providers. Such extensions have shown themselves capable of evading the stringent vetting processes of extension stores, highlighting a lack of understanding of this emerging type of malware in our community. In this work, we conduct the first systematic study to identify and characterize cryptocurrency-themed malicious extensions. We monitored seven official and third-party extension distribution venues for 18 months (December 2020 to June 2022) and collected around 3,600 unique cryptocurrency-themed extensions. Leveraging a hybrid analysis, we identified 186 malicious extensions belonging to five categories. We then characterize these extensions from various perspectives, including their distribution channels, life cycles, developers, illicit behaviors, and illegal gains. Our work unveils the status quo of cryptocurrency-themed malicious extensions and reveals the disguises and programmatic features on which detection techniques can be based. It serves as a warning to extension users and an appeal to extension store operators to enact dedicated countermeasures. To facilitate future research in this area, we release our dataset of the identified malicious extensions and open-source our analyzer.
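Static triage of extension manifests, one possible ingredient of such a hybrid analysis, can be sketched as a permission-based heuristic. The permission names follow the Chrome extension manifest format; the risk rules themselves are illustrative assumptions, not the detection features used in the paper:

```python
def triage(manifest):
    """Flag permission combinations often associated with
    cryptocurrency-themed abuse (illustrative heuristic only):
    clipboard access, broad host access, and traffic interception."""
    perms = set(manifest.get("permissions", []))
    hosts = manifest.get("host_permissions", [])
    flags = []
    if perms & {"clipboardRead", "clipboardWrite"}:
        flags.append("clipboard access (wallet-address-swapping risk)")
    if "<all_urls>" in hosts or "*://*/*" in hosts:
        flags.append("runs on all sites (phishing-overlay risk)")
    if "webRequest" in perms:
        flags.append("can observe or modify traffic")
    return flags

# Hypothetical manifest of a suspicious extension.
suspect = {
    "permissions": ["clipboardWrite", "webRequest"],
    "host_permissions": ["<all_urls>"],
}
flags = triage(suspect)
```

A real pipeline would pair such static signals with dynamic analysis of the extension's runtime behavior before labeling anything malicious.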
DOI: 10.1145/3570603
Citations: 2
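The collection-then-analysis pipeline described above can be loosely illustrated by a cheap pre-filter. This is a hypothetical sketch, not the paper's analyzer: it flags extensions whose manifests combine cryptocurrency-themed branding with sensitive permissions, as one might do before the deeper hybrid (static plus dynamic) analysis. The keyword and permission lists are illustrative assumptions.

```python
# Hypothetical triage pre-filter for cryptocurrency-themed extensions.
# Both lists below are illustrative, not taken from the paper.
CRYPTO_KEYWORDS = {"wallet", "bitcoin", "ethereum", "crypto", "exchange"}
SENSITIVE_PERMISSIONS = {"clipboardRead", "webRequest", "tabs", "<all_urls>"}

def is_suspicious(manifest: dict) -> bool:
    """Flag a manifest that uses crypto branding AND requests sensitive permissions."""
    text = (manifest.get("name", "") + " " + manifest.get("description", "")).lower()
    themed = any(k in text for k in CRYPTO_KEYWORDS)
    perms = set(manifest.get("permissions", []))
    return themed and bool(perms & SENSITIVE_PERMISSIONS)

sample = {
    "name": "SuperWallet Helper",
    "description": "Manage your crypto exchange accounts",
    "permissions": ["tabs", "clipboardRead"],
}
print(is_suspicious(sample))  # prints True
```

A filter like this would only narrow the candidate set; the paper's identification of the 186 malicious extensions relies on a full hybrid analysis.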
Dynamic Bin Packing with Predictions
Mozhengfu Liu, Xueyan Tang
The MinUsageTime Dynamic Bin Packing (DBP) problem aims to minimize the accumulated bin usage time for packing a sequence of items into bins. It is often used to model job dispatching for optimizing the busy time of servers, where the items and bins match the jobs and servers respectively. It is known that the competitiveness of MinUsageTime DBP has tight bounds of Θ(√(log μ)) and Θ(μ) in the clairvoyant and non-clairvoyant settings respectively, where μ is the max/min duration ratio of all items. In practice, the information about the items' durations (i.e., job lengths) obtained via predictions is usually prone to errors. In this paper, we study the MinUsageTime DBP problem with predictions of the items' durations. We find that an existing O(√(log μ))-competitive clairvoyant algorithm, if using predicted durations rather than real durations for packing, does not provide any bounded performance guarantee when the predictions are adversarially bad. We develop a new online algorithm with a competitive ratio of min{O(ε² √(log(ε² μ))), O(μ)} (where ε is the maximum multiplicative error of prediction among all items), achieving O(√(log μ)) consistency (competitiveness under perfect predictions, where ε = 1) and O(μ) robustness (competitiveness under terrible predictions), both of which are asymptotically optimal.
DOI: 10.1145/3570605 · Published: 2022-12-08
Citations: 2
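The objective above can be made concrete with a toy computation. This is a minimal sketch under a simplifying assumption: each item occupies a bin over an interval [arrival, departure], a bin's usage time is taken as the span from its earliest arrival to its latest departure (ignoring the possibility of a bin going idle and reopening), and the cost sums usage over bins.

```python
# Toy version of the MinUsageTime DBP objective (simplified: a bin's usage
# is the span from its first arrival to its last departure).
def total_usage_time(packing):
    """packing: list of bins; each bin is a list of (arrival, departure) items."""
    cost = 0.0
    for bin_items in packing:
        if bin_items:
            start = min(a for a, _ in bin_items)
            end = max(d for _, d in bin_items)
            cost += end - start
    return cost

# Three items packed into one bin vs. one bin each:
items = [(0, 4), (1, 3), (2, 6)]
print(total_usage_time([items]))               # one bin busy over [0, 6] -> 6.0
print(total_usage_time([[i] for i in items]))  # 4 + 2 + 4 -> 10.0
```

The example shows why packing overlapping items together reduces accumulated usage time, which is exactly what the duration predictions (and their errors) influence in the online setting.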
Malcolm: Multi-agent Learning for Cooperative Load Management at Rack Scale
Ali Hossein Abbasi Abyaneh, Maizi Liao, S. Zahedi
We consider the problem of balancing the load among servers in dense racks for microsecond-scale workloads. To balance the load in such settings, tens of millions of scheduling decisions have to be made per second. Achieving this throughput while providing microsecond-scale latency and high availability is extremely challenging. To address this challenge, we design a fully decentralized load-balancing framework. In this framework, servers collectively balance the load in the system. We model the interactions among servers as a cooperative stochastic game. To find the game's parametric Nash equilibrium, we design and implement a decentralized algorithm based on multi-agent-learning theory. We empirically show that our proposed algorithm is adaptive and scalable while outperforming state-of-the-art alternatives. In homogeneous settings, Malcolm performs as well as the best alternative among other baselines. In heterogeneous settings, compared to other baselines, for lower loads, Malcolm improves tail latency by up to a factor of four.
DOI: 10.1145/3570611 · Published: 2022-12-01
Citations: 1
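For contrast with the multi-agent-learning policy described above, a classic fully decentralized baseline is the "power of two choices": each arriving job samples two servers at random and joins the shorter queue, with no central dispatcher state. This sketch is not Malcolm's algorithm, only a well-known point of comparison for decentralized load balancing.

```python
# "Power of two choices" baseline: sample two servers, join the shorter queue.
# This is a standard decentralized baseline, not the paper's learned policy.
import random

def dispatch(queue_lengths, rng):
    """Route one job: pick two distinct servers at random, join the shorter queue."""
    a, b = rng.sample(range(len(queue_lengths)), 2)
    chosen = a if queue_lengths[a] <= queue_lengths[b] else b
    queue_lengths[chosen] += 1
    return chosen

rng = random.Random(0)
queues = [0] * 8
for _ in range(100):
    dispatch(queues, rng)
print(sum(queues), max(queues) - min(queues))  # total jobs, spread across queues
```

Even this simple rule keeps the spread across queues small; the paper's contribution is a policy that additionally adapts to heterogeneous service rates and cooperates across servers via learning.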
Noise in the Clouds
Daniele De Sensi, T. De Matteis, Konstantin Taranov, Salvatore Di Girolamo, Tobias Rahn, Torsten Hoefler
Cloud computing represents an appealing opportunity for cost-effective deployment of HPC workloads on the best-fitting hardware. However, although cloud and on-premise HPC systems offer similar computational resources, their network architecture and performance may differ significantly. For example, these systems use fundamentally different network transport and routing protocols, which may introduce network noise that can eventually limit the application scaling. This work analyzes network performance, scalability, and cost of running HPC workloads on cloud systems. First, we consider latency, bandwidth, and collective communication patterns in detailed small-scale measurements, and then we simulate network performance at a larger scale. We validate our approach on four popular cloud providers and three on-premise HPC systems, showing that network (and also OS) noise can significantly impact performance and cost both at small and large scale.
DOI: 10.1145/3570609 · Published: 2022-12-01
Citations: 4
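The small-scale latency measurements mentioned above can be sketched in miniature. This toy (a local socket echo stands in for the network; it is not the paper's measurement harness) times many round trips and compares the median against the tail: a large p50-to-p99 gap is one simple signature of network and OS noise.

```python
# Toy latency measurement: time round trips over a local echo socket and
# report median vs. tail percentiles (a local stand-in for network probes).
import socket
import statistics
import threading
import time

def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(64):  # echo until the client closes
            conn.sendall(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # OS-assigned port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
samples = []
for _ in range(200):
    t0 = time.perf_counter()
    cli.sendall(b"x")
    cli.recv(64)
    samples.append((time.perf_counter() - t0) * 1e6)  # microseconds
cli.close()

samples.sort()
print(f"p50={statistics.median(samples):.1f}us "
      f"p99={samples[int(0.99 * len(samples))]:.1f}us")
```

Real measurements of the kind the paper performs would probe across machines and providers, and at large scale would need the simulation approach the abstract describes.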
Joint Learning and Control in Stochastic Queueing Networks with Unknown Utilities
Xinzhe Fu, E. Modiano
We study the optimal control problem in stochastic queueing networks with a set of job dispatchers connected to a set of parallel servers with queues. Jobs arrive at the dispatchers and get routed to the servers following some routing policy. The arrival processes of jobs and the service processes of servers are stochastic with unknown arrival rates and service rates. Upon the completion of each job from dispatcher u_n at server s_m, a random utility whose mean is unknown is obtained. We seek to design a control policy that makes routing decisions at the dispatchers and scheduling decisions at the servers to maximize the total utility obtained by the end of a finite time horizon T. The performance of policies is measured by regret, which is defined as the difference in total expected utility with respect to the optimal dynamic policy that has access to arrival rates, service rates and underlying utilities. We first show that the expected utility of the optimal dynamic policy is upper bounded by T times the solution to a static linear program, where the optimization variables correspond to rates of jobs from dispatchers to servers and the feasibility region is parameterized by arrival rates and service rates. We next propose a policy for the optimal control problem that is an integration of a learning algorithm and a control policy. The learning algorithm seeks to learn the optimal extreme point solution to the static linear program based on the information available in the optimal control problem. The control policy, a mixture of priority-based and Join-the-Shortest-Queue routing at the dispatchers and priority-based scheduling at the servers, makes decisions based on the graphical structure induced by the extreme point solutions provided by the learning algorithm. We prove that our policy achieves logarithmic regret whereas application of existing techniques to the optimal control problem would lead to Ω(√T)-regret.
The theoretical analysis is further complemented with simulations to evaluate the empirical performance of our policy.
DOI: 10.1145/3570619 · Published: 2022-12-01
Citations: 1
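The static linear program behind the upper bound above can be made concrete on a tiny instance: choose routing rates x[n][m] from dispatcher n to server m maximizing the total utility rate Σ u[n][m]·x[n][m], subject to Σ_m x[n][m] ≤ λ_n (arrival-rate budget at each dispatcher) and Σ_n x[n][m] ≤ μ_m (service-rate capacity at each server). The instance data below are made up for illustration, and a coarse grid search stands in for an LP solver.

```python
# Toy instance of the static LP that upper-bounds the optimal dynamic
# policy's utility (the bound is T times the LP value). Instance data are
# illustrative; a grid search substitutes for a proper LP solver.
import itertools

lam = [1.0, 1.0]   # arrival rates lambda_n at the two dispatchers
mu = [1.5, 0.5]    # service rates mu_m at the two servers
u = [[2.0, 1.0],   # mean utility per completed job for each (n, m) pair
     [1.0, 3.0]]

step = 0.05
grid = [i * step for i in range(int(1 / step) + 1)]
eps = 1e-9  # tolerance for floating-point boundary cases
best, best_x = -1.0, None
for x00, x01, x10, x11 in itertools.product(grid, repeat=4):
    x = [[x00, x01], [x10, x11]]
    if (x00 + x01 <= lam[0] + eps and x10 + x11 <= lam[1] + eps
            and x00 + x10 <= mu[0] + eps and x01 + x11 <= mu[1] + eps):
        val = sum(u[n][m] * x[n][m] for n in range(2) for m in range(2))
        if val > best:
            best, best_x = val, x
print(best, best_x)
```

For this instance the optimum routes dispatcher 0 entirely to the fast server 0 and splits dispatcher 1 across both servers, giving an LP value of 4.0; the paper's policy must learn such an extreme-point solution online, without knowing λ, μ, or u in advance.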