
Proceedings of the ACM on Measurement and Analysis of Computing Systems: Latest Publications

A First Look at Wi-Fi 6 in Action: Throughput, Latency, Energy Efficiency, and Security
Ruofeng Liu, Nakjung Choi
This paper presents a first-of-its-kind performance measurement of Wi-Fi 6 (IEEE 802.11ax) using real experiments. Our experiments focus on multi-client scenarios. The results reveal the impact of the new channel access mechanisms (i.e., OFDMA and TWT) on spectrum efficiency, energy consumption, latency, and network security. (i) A comparison with the legacy CSMA/CA scheme shows that commodity Wi-Fi 6 achieves 3× overall throughput and dramatically reduces latency (5×) when coexisting with a legacy Wi-Fi network. (ii) However, the current OFDMA implementation significantly increases power consumption (6×), implying a design tradeoff between throughput and latency gains and the cost of energy consumption. (iii) Finally, the TWT negotiation procedure is vulnerable to various malicious attacks. We believe that our findings provide critical insights for the scheduling algorithm design, power optimization, and security protection of next-generation WLANs.
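The reported 3× throughput gain and 6× power increase together imply roughly 2× higher energy per bit; the Python sketch below makes that arithmetic concrete. The absolute baseline figures are hypothetical placeholders, not measurements from the paper.

```python
# Back-of-envelope check of the throughput-versus-energy tradeoff above.
# The 3x throughput and 6x power factors come from the abstract; the
# absolute baseline numbers are hypothetical, chosen only to fix units.

baseline_throughput_mbps = 100.0   # hypothetical legacy Wi-Fi throughput
baseline_power_mw = 500.0          # hypothetical legacy client radio power

wifi6_throughput_mbps = 3 * baseline_throughput_mbps   # 3x gain (abstract)
wifi6_power_mw = 6 * baseline_power_mw                 # 6x cost (abstract)

# 1 mW / 1 Mbps = 1 nJ per bit, so power/throughput gives energy per bit.
legacy_nj_per_bit = baseline_power_mw / baseline_throughput_mbps
wifi6_nj_per_bit = wifi6_power_mw / wifi6_throughput_mbps

print(f"legacy: {legacy_nj_per_bit:.1f} nJ/bit")
print(f"wifi6:  {wifi6_nj_per_bit:.1f} nJ/bit "
      f"({wifi6_nj_per_bit / legacy_nj_per_bit:.0f}x worse per bit)")
```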
{"title":"A First Look at Wi-Fi 6 in Action: Throughput, Latency, Energy Efficiency, and Security","authors":"Ruofeng Liu, Nakjung Choi","doi":"10.1145/3579451","DOIUrl":"https://doi.org/10.1145/3579451","url":null,"abstract":"This paper presents a first-of-its-kind performance measurement of Wi-Fi 6 (IEEE 802.11ax) using real experiments. Our experiments focus on multi-client scenarios. The results reveal the impact of the new channel access mechanisms (i.e., OFDMA and TWT) on the spectrum efficiency, energy consumption, latency, and network security. (i) A comparison with the legacy CSMA/CA scheme shows that the commodity Wi-Fi 6 achieves 3× overall throughput and dramatically reduce the latency (5×) when coexisting with legacy Wi-Fi network. (ii) However, the current OFDMA implementation significantly increases the power consumption (6×), implying a design tradeoff between throughput and latency gain versus the cost of energy consumption. (iii) Finally, TWT negotiating procedure is vulnerable to various malicious attacks. We believe that our findings provide critical insights for the scheduling algorithm design, power optimization, and security protection of the next-generation WLANs.","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116123799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
DaeMon: Architectural Support for Efficient Data Movement in Fully Disaggregated Systems
Christina Giannoula, Kailong Huang, Jonathan Tang, N. Koziris, G. Goumas, Zeshan A. Chishti, Nandita Vijaykumar
Resource disaggregation offers a cost-effective solution to resource scaling, utilization, and failure-handling in data centers by physically separating hardware devices in a server. Servers are architected as pools of processor, memory, and storage devices, organized as independent failure-isolated components interconnected by a high-bandwidth network. A critical challenge, however, is the high performance penalty of accessing data from a remote memory module over the network. Addressing this challenge is difficult, as disaggregated systems have high runtime variability in network latencies/bandwidth, and page migration can significantly delay critical-path cache line accesses in other pages. This paper conducts a characterization analysis of different data movement strategies in fully disaggregated systems, evaluates their performance overheads across a variety of workloads, and introduces DaeMon, the first software-transparent mechanism to significantly alleviate data movement overheads in fully disaggregated systems. First, to enable scalability to multiple hardware components in the system, we enhance each compute and memory unit with specialized engines that transparently handle data migrations. Second, to achieve high performance and provide robustness across various network, architecture, and application characteristics, we implement a synergistic approach of bandwidth partitioning, link compression, decoupled data movement of multiple granularities, and adaptive granularity selection in data movements. We evaluate DaeMon on a wide variety of workloads at different network and architecture configurations using a state-of-the-art simulator. DaeMon improves system performance and data access costs by 2.39× and 3.06×, respectively, over the widely adopted approach of moving data at page granularity.
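As a concrete illustration of one ingredient named above, adaptive granularity selection, here is a minimal Python sketch of a plausible policy: fetch a whole page only when spatial locality is high and the network is uncongested. The thresholds and interface are invented for illustration and are not DaeMon's actual hardware policy.

```python
# A minimal sketch of adaptive granularity selection for remote-memory
# data movement. The policy (page granularity only under high spatial
# locality and low congestion) is a plausible heuristic for illustration,
# not DaeMon's actual hardware policy.

CACHE_LINE = 64   # bytes
PAGE = 4096       # bytes

def pick_granularity(recent_hits_in_page: int, recent_accesses: int,
                     queue_delay_us: float,
                     congestion_threshold_us: float = 5.0) -> int:
    """Return how many bytes to migrate for the next remote access."""
    locality = recent_hits_in_page / max(recent_accesses, 1)
    congested = queue_delay_us > congestion_threshold_us
    # Fetch the whole page only if it is likely to be reused soon and
    # the extra bytes will not sit behind a congested link.
    if locality > 0.5 and not congested:
        return PAGE
    return CACHE_LINE

print(pick_granularity(40, 50, queue_delay_us=2.0))   # 4096: good locality
print(pick_granularity(3, 50, queue_delay_us=2.0))    # 64: poor locality
```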
{"title":"DaeMon: Architectural Support for Efficient Data Movement in Fully Disaggregated Systems","authors":"Christina Giannoula, Kailong Huang, Jonathan Tang, N. Koziris, G. Goumas, Zeshan A. Chishti, Nandita Vijaykumar","doi":"10.1145/3579445","DOIUrl":"https://doi.org/10.1145/3579445","url":null,"abstract":"Resource disaggregation offers a cost effective solution to resource scaling, utilization, and failure-handling in data centers by physically separating hardware devices in a server. Servers are architected as pools of processor, memory, and storage devices, organized as independent failure-isolated components interconnected by a high-bandwidth network. A critical challenge, however, is the high performance penalty of accessing data from a remote memory module over the network. Addressing this challenge is difficult as disaggregated systems have high runtime variability in network latencies/bandwidth, and page migration can significantly delay critical path cache line accesses in other pages. This paper conducts a characterization analysis on different data movement strategies in fully disaggregated systems, evaluates their performance overheads in a variety of workloads, and introduces DaeMon, the first software-transparent mechanism to significantly alleviate data movement overheads in fully disaggregated systems. First, to enable scalability to multiple hardware components in the system, we enhance each compute and memory unit with specialized engines that transparently handle data migrations. Second, to achieve high performance and provide robustness across various network, architecture and application characteristics, we implement a synergistic approach of bandwidth partitioning, link compression, decoupled data movement of multiple granularities, and adaptive granularity selection in data movements. We evaluate DaeMon in a wide variety of workloads at different network and architecture configurations using a state-of-the-art simulator. DaeMon improves system performance and data access costs by 2.39× and 3.06×, respectively, over the widely-adopted approach of moving data at page granularity.","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123463262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Gacha Game Analysis and Design
Canhui Chen, Zhixuan Fang
A gacha game is a special opaque selling approach in which the seller sells gacha pulls to the buyer. Each gacha pull gives the buyer a certain probability of winning the gacha game reward. The gacha game has been enthusiastically embraced in numerous online video games and has a wide range of potential applications. In this work, we model the complex interaction between the seller and the buyer as a Stackelberg game, where the sequential decision of the buyer is modeled as a Markov Decision Process (MDP). We define the whale property in the context of gacha games. Then, we show that this property is a necessary condition for achieving optimal revenue. Moreover, we provide the revenue-optimal gacha game design and show that it is equivalent to the single-item single-bidder Myerson auction. We further explore two popular multi-item gacha games, namely, the sequential multi-item gacha game and the banner-based multi-item gacha game. We also discuss subsidies in the gacha game and demonstrate how subsidies may encourage the buyer to engage in grinding behavior. Finally, we provide a case study on blockchain systems as gacha games.
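To make the basic setup concrete, here is a toy single-reward model in Python: each pull wins independently with probability p, optionally with a guaranteed win at a "pity" cap. The probabilities and the pity mechanic are illustrative assumptions, not the paper's model.

```python
# A toy single-reward gacha in the spirit of the abstract's setup: each
# pull wins with probability p. Numbers are illustrative, not the paper's.

def win_within(p: float, n_pulls: int) -> float:
    """Probability of winning at least once in n_pulls independent pulls."""
    return 1.0 - (1.0 - p) ** n_pulls

def expected_pulls_with_pity(p: float, pity: int) -> float:
    """Expected pulls until a win when pull #pity is guaranteed (a common
    'pity' mechanic, assumed here for illustration)."""
    total = 0.0
    for k in range(1, pity):
        total += k * p * (1.0 - p) ** (k - 1)   # first win at pull k
    total += pity * (1.0 - p) ** (pity - 1)     # forced win at the cap
    return total

print(f"P(win in 50 pulls at p=1%): {win_within(0.01, 50):.3f}")
print(f"E[pulls] at p=1% with 90-pull pity: "
      f"{expected_pulls_with_pity(0.01, 90):.1f}")
```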
{"title":"Gacha Game Analysis and Design","authors":"Canhui Chen, Zhixuan Fang","doi":"10.1145/3579438","DOIUrl":"https://doi.org/10.1145/3579438","url":null,"abstract":"Gacha game is a special opaque selling approach, where the seller is selling gacha pulls to the buyer. Each gacha pull provides a certain probability for the buyer to win the gacha game reward. The gacha game has been enthusiastically embraced in numerous online video games and has a wide range of potential applications.In this work, we model the complex interaction between the seller and the buyer as a Stackelberg game, where the sequential decision of the buyer is modeled as a Markov Decision Process (MDP). We define the whale property in the context of gacha games. Then, we show that this is the necessary condition to achieve optimal revenue. Moreover, we provide the revenue-optimal gacha game design and show that it is equivalent to the single-item single-bidder Myerson auction.We further explore two popular multi-item gacha games, namely, the sequential multi-item gacha game and the banner-based multi-item gacha game. We also discuss the subsidies in the gacha game and demonstrate how subsidies may encourage the buyer to engage in grinding behavior. Finally, we provide a case study on blockchain systems as gacha games.","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124179393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Duo: A High-Throughput Reconfigurable Datacenter Network Using Local Routing and Control
Johannes Zerwas, Csaba Györgyi, Andreas Blenk, Stefan Schmid, C. Avin
The performance of many cloud-based applications critically depends on the capacity of the underlying datacenter network. A particularly innovative approach to improving throughput in datacenters is enabled by emerging optical technologies, which allow the physical network topology to be adjusted dynamically, in either an oblivious or a demand-aware manner. However, such topology engineering, i.e., the operation and control of dynamic datacenter networks, is considered complex and currently comes with restrictions and overheads. We present Duo, a novel demand-aware reconfigurable rack-to-rack datacenter network design realized with a simple and efficient control plane. Duo is based on the well-known de Bruijn topology (implemented using a small number of optical circuit switches) and the key observation that this topology can be enhanced using dynamic ("opportunistic") links between its nodes. In contrast to previous systems, Duo has several desired features: i) It makes effective use of the network capacity by supporting integrated and multi-hop routing (paths that combine both static and dynamic links). ii) It uses work-conserving queue scheduling, which enables out-of-the-box TCP support. iii) It employs greedy routing implemented with standard IP longest prefix match and small forwarding tables. iv) During topological reconfigurations, routing tables require only local updates, making this approach ideal for dynamic networks. We evaluate Duo in end-to-end packet-level simulations, comparing it to state-of-the-art static and dynamic network designs. We show that Duo provides higher throughput, shorter paths, lower flow completion times for high-priority flows, and minimal packet reordering, all using existing network and transport layer protocols. We also report on a proof-of-concept implementation of Duo's control and data plane.
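Since the static fabric is a de Bruijn graph, its greedy routing has a compact closed form: each hop left-shifts the node label and shifts in the next bit of the destination. Below is a minimal Python sketch of that static path computation on a binary de Bruijn graph; Duo's real forwarding additionally exploits dynamic opportunistic links and IP longest-prefix-match tables, which this sketch omits.

```python
# A minimal sketch of static greedy routing on a binary de Bruijn graph,
# the topology the abstract builds on. Node IDs are n-bit strings; each
# de Bruijn edge left-shifts the label and appends one bit.

def debruijn_path(src: str, dst: str) -> list[str]:
    """Return the static de Bruijn path from src to dst (same bit-length)."""
    assert len(src) == len(dst)
    # The longest overlap between a suffix of src and a prefix of dst
    # tells us how many destination bits are already in place.
    overlap = 0
    for k in range(len(dst), 0, -1):
        if src.endswith(dst[:k]):
            overlap = k
            break
    path, node = [src], src
    for bit in dst[overlap:]:      # shift in the remaining bits of dst
        node = node[1:] + bit      # de Bruijn edge: left-shift, append
        path.append(node)
    return path

print(debruijn_path("0000", "1011"))
# ['0000', '0001', '0010', '0101', '1011']
```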
{"title":"Duo: A High-Throughput Reconfigurable Datacenter Network Using Local Routing and Control","authors":"Johannes Zerwas, Csaba Györgyi, Andreas Blenk, Stefan Schmid, C. Avin","doi":"10.1145/3579449","DOIUrl":"https://doi.org/10.1145/3579449","url":null,"abstract":"The performance of many cloud-based applications critically depends on the capacity of the underlying datacenter network. A particularly innovative approach to improve the throughput in datacenters is enabled by emerging optical technologies, which allow to dynamically adjust the physical network topology, both in an oblivious or demand-aware manner. However, such topology engineering, i.e., the operation and control of dynamic datacenter networks, is considered complex and currently comes with restrictions and overheads. We present Duo, a novel demand-aware reconfigurable rack-to-rack datacenter network design realized with a simple and efficient control plane. Duo is based on the well-known de Bruijn topology (implemented using a small number of optical circuit switches) and the key observation that this topology can be enhanced using dynamic (''opportunistic'') links between its nodes. In contrast to previous systems, Duo has several desired features: i) It makes effective use of the network capacity by supporting integrated and multi-hop routing (paths that combine both static and dynamic links). ii) It uses a work-conserving queue scheduling which enables out-of-the-box TCP support. iii) Duo employs greedy routing that is implemented using standard IP longest prefix match with small forwarding tables. And iv) during topological reconfigurations, routing tables require only local updates, making this approach ideal for dynamic networks. We evaluate Duo in end-to-end packet-level simulations, comparing it to the state-of-the-art static and dynamic networks designs. We show that Duo provides higher throughput, shorter paths, lower flow completion times for high priority flows, and minimal packet reordering, all using existing network and transport layer protocols. We also report on a proof-of-concept implementation of Duo's control and data plane.","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133695455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Network Monitoring on Multi-Pipe Switches
Marco Chiesa, F. Verdi
Programmable switches have been widely used to design network monitoring solutions that operate at the fast data-plane level, e.g., detecting heavy hitters and super-spreaders, and computing flow size distributions and their entropy. Many existing works on network monitoring assume switches deploy a single memory that is accessible by each processed packet. However, high-speed ASIC switches increasingly deploy multiple independent pipes, each equipped with its own independent memory that cannot be accessed by other pipes. In this work, we initiate the study of deploying existing heavy-hitter data-plane monitoring solutions on multi-pipe switches, where packets of a "flow" may spread over multiple pipes, i.e., be stored into distinct memories. We first quantify the accuracy degradation due to splitting a monitoring data structure across multiple pipes (e.g., up to 3000x worse flow-size estimation average error). We then present PipeCache, a system that adapts existing data-plane mechanisms to multi-pipe switches by carefully storing all the monitoring information of each traffic class into exactly one specific pipe (as opposed to replicating the information on multiple pipes). PipeCache relies on the idea of briefly storing monitoring information into a per-pipe cache and then piggybacking this information onto existing data packets to the correct pipe entirely at data-plane speed. We implement PipeCache on ASIC switches and evaluate it using a real-world trace. We show that existing data-plane mechanisms achieve accuracy levels and memory requirements similar to single-pipe deployments when augmented with PipeCache (i.e., up to 16x lower memory requirements).
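The home-pipe-plus-cache idea above can be sketched in a few lines: hash each traffic class to a single home pipe, stash updates arriving at foreign pipes in a small local cache, and flush them when a carrier packet reaches the home pipe. The Python below is a software analogy with invented data structures, not the ASIC implementation.

```python
# A minimal software analogy of the PipeCache idea described above: each
# traffic class has one "home" pipe holding its monitoring state; foreign
# pipes cache updates and piggyback them to the home pipe later. All
# structures and sizes here are illustrative only.

from collections import Counter, defaultdict

N_PIPES = 4

def home_pipe(flow: str) -> int:
    return hash(flow) % N_PIPES

counters = [Counter() for _ in range(N_PIPES)]        # per-pipe state
pending = [defaultdict(int) for _ in range(N_PIPES)]  # per-pipe caches

def on_packet(flow: str, ingress_pipe: int) -> None:
    hp = home_pipe(flow)
    if ingress_pipe == hp:
        counters[hp][flow] += 1           # update home state directly
    else:
        pending[ingress_pipe][flow] += 1  # stash update in local cache

def piggyback(carrier_pipe: int) -> None:
    """Flush cached updates whose home is carrier_pipe (in hardware,
    piggybacked on a data packet traversing that pipe)."""
    for pipe in range(N_PIPES):
        cache = pending[pipe]
        for flow in [f for f in cache if home_pipe(f) == carrier_pipe]:
            counters[carrier_pipe][flow] += cache.pop(flow)

flow = "10.0.0.1->10.0.0.2"
on_packet(flow, ingress_pipe=(home_pipe(flow) + 1) % N_PIPES)  # foreign pipe
on_packet(flow, ingress_pipe=home_pipe(flow))                  # home pipe
piggyback(home_pipe(flow))
print(counters[home_pipe(flow)][flow])   # 2: full count in one pipe
```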
{"title":"Network Monitoring on Multi-Pipe Switches","authors":"Marco Chiesa, F. Verdi","doi":"10.1145/3579321","DOIUrl":"https://doi.org/10.1145/3579321","url":null,"abstract":"Programmable switches have been widely used to design network monitoring solutions that operate in the fast data-plane level, e.g., detecting heavy hitters, super-spreaders, computing flow size distributions and their entropy. Many existing works on networking monitoring assume switches deploy a single memory that is accessible by each processed packet. However, high-speed ASIC switches increasingly deploymultiple independent pipes, each equipped with its own independent memory thatcannot be accessed by other pipes. In this work, we initiate the study of deploying existing heavy-hitter data-plane monitoring solutions on multi-pipe switches where packets of a \"flow\" may spread over multiple pipes, i.e., stored into distinct memories. We first quantify the accuracy degradation due to splitting a monitoring data structure across multiple pipes (e.g., up to 3000x worse flow-size estimation average error). We then present PipeCache, a system that adaptsexisting data-plane mechanisms to multi-pipe switches by carefully storing all the monitoring information of each traffic class into exactly one specific pipe (as opposed to replicate the information on multiple pipes). PipeCache relies on the idea of briefly storing monitoring information into a per-pipe cache and then piggybacking this information onto existing data packets to the correct pipeentirely at data-plane speed. We implement PipeCache on ASIC switches and we evaluate it using a real-world trace. We show that existing data-plane mechanisms achieves accuracy levels and memory requirements similar to single-pipe deployments when augmented with PipeCache (i.e., up to 16x lower memory requirements).","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123020223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Go-to-Controller is Better: Efficient and Optimal LPM Caching with Splicing
Itamar Gozlan, C. Avin, Gil Einziger, Gabriel Scalosub
Modern data center networks are required to support huge and complex forwarding policies as they handle the traffic of the various tenants. However, these policies cannot be stored in their entirety within the limited memory available at commodity switches. The common approach in such scenarios is to have SDN controllers manage the memory available at the switch as a fast cache, updating and changing the forwarding rules in the cache according to the workload dynamics and the global policy at hand. Many such policies, such as longest-prefix-match (LPM) policies, introduce dependencies between the forwarding rules. Ensuring that the cache content is always consistent with the global policy often requires the switch to store (potentially many) superfluous rules, which may lead to suboptimal performance in terms of delay and throughput. To overcome these deficiencies, previous work suggested the concept of splicing, where modified go-to-controller rules can be inserted into the cache to improve performance while maintaining consistency. These works focused mostly on heuristics, and it was conjectured that the problem is computationally intractable. As our main result, we show that the problem of determining the optimal set of rules, with splicing, can actually be solved efficiently: we present a polynomial-time algorithm that produces an optimal solution, i.e., for a given cache size we find an optimal set of rules, some of which are go-to-controller, that maximizes the total weight of the cache while maintaining consistency. However, such optimality comes at a cost, reflected in the fact that our algorithm has a significantly larger running time than SoTA solutions that do not employ splicing. Therefore, we further present a heuristic exhibiting close-to-optimal performance with significantly improved running time, matching that of the best algorithm that does not employ splicing. In addition, we present the results of an evaluation study that compares the performance of our solutions with that of SoTA approaches, showing that splicing can reduce the cache miss ratio by as much as 30% without increasing the cache size. Lastly, we propose a simple and fast-to-compute (consistency-oblivious) metric to evaluate the potential benefits of splicing compared to classical LPM caching, for a given policy and traffic distribution. We show that our metric is highly correlated with such benefits, thus serving as an indication of whether splicing should be incorporated within the system architecture.
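The consistency problem that splicing solves can be shown with a two-rule policy: caching only a covering prefix would silently misroute traffic destined to an uncovered more-specific prefix, while a spliced go-to-controller rule keeps every cached answer either correct or deferred. The tiny policy and port names in the Python sketch below are made up for illustration.

```python
# A minimal sketch of why LPM caching needs splicing. Caching only
# 10.0.0.0/8 would wrongly absorb traffic for 10.1.0.0/16, so the cache
# instead stores a spliced "go-to-controller" rule for the uncovered
# more-specific prefix, keeping the cache consistent with the policy.
# The two-rule policy and port names are invented for illustration.

import ipaddress

policy = {                      # full policy (lives at the controller)
    "10.0.0.0/8": "port1",
    "10.1.0.0/16": "port2",
}
cache = {                       # switch cache: popular rule + spliced rule
    "10.0.0.0/8": "port1",
    "10.1.0.0/16": "go-to-controller",
}

def lpm(table: dict, ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    best = max((p for p in table if addr in ipaddress.ip_network(p)),
               key=lambda p: ipaddress.ip_network(p).prefixlen,
               default=None)
    return table[best] if best else "go-to-controller"

for ip in ("10.2.3.4", "10.1.3.4"):
    print(ip, "| cache:", lpm(cache, ip), "| policy:", lpm(policy, ip))
# 10.2.3.4 -> port1 in both; 10.1.3.4 -> deferred to controller by the
# cache instead of a wrong port1 answer.
```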
{"title":"Go-to-Controller is Better: Efficient and Optimal LPM Caching with Splicing","authors":"Itamar Gozlan, C. Avin, Gil Einziger, Gabriel Scalosub","doi":"10.1145/3579441","DOIUrl":"https://doi.org/10.1145/3579441","url":null,"abstract":"Modern data center networks are required to support huge and complex forwarding policies as they handle the traffic of the various tenants. However, these policies cannot be stored in their entirety within the limited memory available at commodity switches. The common approach in such scenarios is to have SDN controllers manage the memory available at the switch as a fast cache, updating and changing the forwarding rules in the cache according to the workloads dynamics and the global policy at hand. Many such policies, such as Longest-prefix-match (LPM) policies, introduce dependencies between the forwarding rules. Ensuring that the cache content is always consistent with the global policy often requires the switch to store (potentially many) superfluous rules, which may lead to suboptimal performance in terms of delay and throughput. To overcome these deficiencies, previous work suggested the concept of splicing, where modified Go-to-Controller rules can be inserted into the cache to improve performance while maintaining consistency. These works focused mostly on heuristics, and it was conjectured that the problem is computationally intractable. As our main result, we show that the problem of determining the optimal set of rules, with splicing, can actually be solved efficiently by presenting a polynomial-time algorithm that produces an optimal solution, i.e., for a given cache size we find an optimal set of rules, some of which are go-to-controller, which maximize the total weight of the cache while maintaining consistency. However, such optimality comes at a cost, encompassed by the fact that our algorithm has a significantly larger running time than SoTA solutions which do not employ splicing. Therefore, we further present a heuristic exhibiting close-to-optimal performance, with significantly improved running time, matching that of the best algorithm, which does not employ splicing. In addition, we present the results of an evaluation study that compares the performance of our solutions with that of SoTA approaches, showing that splicing can reduce the cache miss ratio by as much as 30%, without increasing the cache size. Lastly, we propose a simple and fast-to-compute metric (that is consistency-oblivious) in order to evaluate the potential benefits of splicing compared to classical LPM-caching, for a given policy and traffic distribution. We show that our metric is highly correlated with such benefits, thus serving as an indication of whether splicing should be incorporated within the system architecture.","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121377682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Detecting and Measuring Aggressive Location Harvesting in Mobile Apps via Data-flow Path Embedding
Haoran Lu, Qingchuan Zhao, Yongliang Chen, Xiaojing Liao, Zhiqiang Lin
Today, location-based services have become prevalent in the mobile platform, where mobile apps provide specific services to a user based on his or her location. Unfortunately, mobile apps can aggressively harvest location data with much higher accuracy and frequency than they need, because the coarse-grained access control mechanism currently implemented in mobile operating systems (e.g., Android) cannot regulate such behavior. This unnecessary data collection violates the data minimization policy, yet no previous studies have investigated privacy violations from this perspective, and existing techniques are insufficient to address this violation. To fill this knowledge gap, we take the first step toward detecting and measuring this privacy risk in mobile apps at scale. In particular, we annotate and release the first dataset to characterize those aggressive location harvesting apps and understand the challenges of automatic detection and classification. Next, we present a novel system, LocationScope, to address these challenges by (i) uncovering how an app collects locations and how it uses such data through a fine-tuned value set analysis technique, (ii) recognizing the fine-grained location-based services an app provides via embedding data-flow paths, a combination of program analysis and machine learning techniques, extracted from its location data usages, and (iii) identifying aggressive apps with an outlier detection technique, achieving a precision of 97% in aggressive app detection. Our technique has further been applied to millions of free Android apps from Google Play as of 2019 and 2021. Highlights of our measurements on detected aggressive apps include their growing trend from 2019 to 2021 and the significant share of aggressive location harvesting apps produced by app generators.
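Step (iii) above is a standard unsupervised-learning stage; the sketch below shows its shape with scikit-learn's IsolationForest over invented per-app feature vectors. The features, numbers, and contamination setting are placeholders; the paper's embeddings of data-flow paths are far richer than this.

```python
# A minimal sketch of the outlier-detection stage described above: flag
# apps whose location-usage feature vectors are anomalous. Features and
# data are invented placeholders, not the paper's path embeddings.
# Assumes numpy and scikit-learn are installed.

import numpy as np
from sklearn.ensemble import IsolationForest

# columns: [requests/hour, avg accuracy (meters), background fraction]
apps = np.array([
    [2.0, 500.0, 0.1],
    [1.5, 800.0, 0.0],
    [3.0, 300.0, 0.2],
    [60.0, 5.0, 0.9],    # harvests fine-grained location constantly
])

detector = IsolationForest(contamination=0.25, random_state=0).fit(apps)
labels = detector.predict(apps)          # -1 = outlier, 1 = inlier
for row, label in zip(apps, labels):
    print(row, "aggressive?", label == -1)
```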
{"title":"Detecting and Measuring Aggressive Location Harvesting in Mobile Apps via Data-flow Path Embedding","authors":"Haoran Lu, Qingchuan Zhao, Yongliang Chen, Xiaojing Liao, Zhiqiang Lin","doi":"10.1145/3579447","DOIUrl":"https://doi.org/10.1145/3579447","url":null,"abstract":"Today, location-based services have become prevalent in the mobile platform, where mobile apps provide specific services to a user based on his or her location. Unfortunately, mobile apps can aggressively harvest location data with much higher accuracy and frequency than they need because the coarse-grained access control mechanism currently implemented in mobile operating systems (e.g., Android) cannot regulate such behavior. This unnecessary data collection violates the data minimization policy, yet no previous studies have investigated privacy violations from this perspective, and existing techniques are insufficient to address this violation. To fill this knowledge gap, we take the first step toward detecting and measuring this privacy risk in mobile apps at scale. Particularly, we annotate and release thefirst dataset to characterize those aggressive location harvesting apps and understand the challenges of automatic detection and classification. Next, we present a novel system, LocationScope, to address these challenges by(i) uncovering how an app collects locations and how to use such data through a fine-tuned value set analysis technique,(ii) recognizing the fine-grained location-based services an app provides via embedding data-flow paths, which is a combination of program analysis and machine learning techniques, extracted from its location data usages, and(iii) identifying aggressive apps with an outlier detection technique achieving a precision of 97% in aggressive app detection. Our technique has further been applied to millions of free Android apps from Google Play as of 2019 and 2021. Highlights of our measurements on detected aggressive apps include their growing trend from 2019 to 2021 and the app generators' significant contribution of aggressive location harvesting apps.","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114493090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Fiat Lux: Illuminating IPv6 Apportionment with Different Datasets
Amanda Hsu, Frank H. Li, P. Pearce
IPv6 adoption continues to grow, making up more than 40% of client traffic to Google globally. While the ubiquity of the IPv4 address space makes it comparably easier to understand, the vast and less studied IPv6 address space has motivated a variety of works detailing methodology to collect and analyze IPv6 properties, many of which use knowledge from specific data sources as a lens for answering research questions. Despite such work, questions remain about basic properties such as the appropriate prefix size for different research tasks. Our work fills this knowledge gap by presenting an analysis of the apportionment of the IPv6 address space from the ground up, using data and knowledge from numerous data sources simultaneously, aimed at identifying how to leverage IPv6 address information for a variety of research tasks. Utilizing WHOIS data from RIRs, routing data, and hitlists, we highlight fundamental differences in apportionment sizes and structural properties depending on data source and examination method. We focus on the different perspectives each dataset offers and the disjoint, heterogeneous nature of these datasets when taken together. We additionally leverage a graph-based analysis method for these datasets that allows us to draw conclusions regarding when and how to intersect the datasets and their utility. The differences in each dataset's perspective are not due to dataset problems but rather stem from a variety of differing structural and deployment behaviors across RIRs and IPv6 providers alike. In light of these inconsistencies, we discuss network address partitioning, best practices, and considerations for future IPv6 measurement and analysis projects.
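One basic step in this kind of study is relating routed prefixes to the WHOIS allocations that cover them, then examining apportionment sizes per allocation. The Python sketch below shows that join with the standard ipaddress module; both tables are fabricated examples standing in for RIR bulk WHOIS and BGP dumps.

```python
# A minimal sketch of one measurement step suggested by the abstract:
# group routed IPv6 prefixes under the WHOIS allocations covering them
# and summarize apportionment per allocation. Both tables below are
# fabricated examples, not real RIR or BGP data.

import ipaddress
from collections import defaultdict

allocations = [ipaddress.ip_network(p) for p in
               ("2001:db8::/32", "2620:0:860::/46")]
routed = [ipaddress.ip_network(p) for p in
          ("2001:db8::/48", "2001:db8:ff::/48", "2620:0:860::/48")]

by_alloc = defaultdict(list)
for r in routed:
    for a in allocations:
        if r.subnet_of(a):           # route announced from this allocation
            by_alloc[a].append(r)
            break

for alloc, routes in by_alloc.items():
    lens = sorted(r.prefixlen for r in routes)
    print(f"{alloc}: {len(routes)} routed prefixes, lengths {lens}")
```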
{"title":"Fiat Lux: Illuminating IPv6 Apportionment with Different Datasets","authors":"Amanda Hsu, Frank H. Li, P. Pearce","doi":"10.1145/3579334","DOIUrl":"https://doi.org/10.1145/3579334","url":null,"abstract":"IPv6 adoption continues to grow, making up more than 40% of client traffic to Google globally. While the ubiquity of the IPv4 address space makes it comparably easier to understand, the vast and less studied IPv6 address space motivates a variety of works detailing methodology to collect and analyze IPv6 properties, many of which use knowledge from specific data sources as a lens for answering research questions. Despite such work, questions remain on basic properties such as the appropriate prefix size for different research tasks. Our work fills this knowledge gap by presenting an analysis of the apportionment of the IPv6 address space from the ground-up, using data and knowledge from numerous data sources simultaneously, aimed at identifying how to leverage IPv6 address information for a variety of research tasks. Utilizing WHOIS data from RIRs, routing data, and hitlists, we highlight fundamental differences in apportionment sizes and structural properties depending on data source and examination method. We focus on the different perspectives each dataset offers and the disjoint, heterogeneous nature of these datasets when taken together. We additionally leverage a graph-based analysis method for these datasets that allows us to draw conclusions regarding when and how to intersect the datasets and their utility. The differences in each dataset's perspective is not due to dataset problems but rather stems from a variety of differing structural and deployment behaviors across RIRs and IPv6 providers alike. In light of these inconsistencies, we discuss network address partitioning, best practices, and considerations for future IPv6 measurement and analysis projects.","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115083945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
DiffForward: On Balancing Forwarding Traffic for Modern Cloud Block Services via Differentiated Forwarding
Wenzheng Zhu, Yongkun Li, Erci Xu, Fei Li, Yinlong Xu, John C.S. Lui
Modern cloud block services provide cloud users with virtual block disks (VDisks), and they usually rely on a forwarding layer consisting of multiple proxy servers to forward block-level writes from applications to the underlying distributed storage. However, we discover that severe traffic imbalance exists among the proxy servers at the forwarding layer, creating a performance bottleneck that severely prolongs the latency of accessing VDisks. Worse yet, due to the diverse access patterns of VDisks, stable traffic and burst traffic coexist at the forwarding layer, making existing load balancing designs inefficient for balancing the traffic at the forwarding layer of VDisks, as they are unaware of, and lack the ability to differentiate, the decomposable burst and stable traffic. To this end, we propose a novel traffic forwarding scheme, DiffForward, for cloud block services. DiffForward differentiates burst traffic from stable traffic in an accurate and efficient way at the client side; it then forwards the burst traffic to a decentralized distributed log store to realize real-time load balance by writing the data in a round-robin manner, and balances the stable traffic by segmentation. DiffForward also judiciously coordinates the stable and burst traffic and preserves strong consistency under differentiated forwarding. Extensive experiments with real-life workloads on our prototype show that DiffForward effectively balances the traffic at the forwarding layer at a fine-grained subsecond level, thus significantly reducing the write latency of VDisks.
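The two forwarding paths described above reduce, at their core, to a per-write decision: classify, then either round-robin to a log node or map the block's segment to a fixed proxy. Here is a minimal Python sketch of that decision; the rate threshold, segment size, and node names are invented knobs, and the real system must also coordinate consistency across the two paths.

```python
# A minimal sketch of the client-side forwarding decision described
# above: writes classified as burst go round-robin to a distributed log
# store, while stable traffic is segmented across proxies by block
# offset. Threshold, segment size, and node names are invented knobs.

import itertools

N_PROXIES = 4
SEGMENT_BLOCKS = 1024
BURST_THRESHOLD = 500          # writes/sec per VDisk, hypothetical

log_store_rr = itertools.cycle(range(N_PROXIES))   # round-robin log nodes

def forward(vdisk_write_rate: float, block_addr: int) -> str:
    if vdisk_write_rate > BURST_THRESHOLD:
        # Burst path: spread writes evenly; ordering restored by the log.
        return f"log-node-{next(log_store_rr)}"
    # Stable path: the same segment always maps to the same proxy.
    return f"proxy-{(block_addr // SEGMENT_BLOCKS) % N_PROXIES}"

print(forward(vdisk_write_rate=50.0, block_addr=5000))    # proxy-0
print(forward(vdisk_write_rate=900.0, block_addr=5000))   # log-node-0
```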
{"title":"DiffForward: On Balancing Forwarding Traffic for Modern Cloud Block Services via Differentiated Forwarding","authors":"Wenzheng Zhu, Yongkun Li, Erci Xu, Fei Li, Yinlong Xu, John C.S. Lui","doi":"10.1145/3579444","DOIUrl":"https://doi.org/10.1145/3579444","url":null,"abstract":"Modern cloud block service provides cloud users with virtual block disks (VDisks), and it usually relies on a forwarding layer consisting of multiple proxy servers to forward the block-level writes from applications to the underlying distributed storage. However, we discover that severe traffic imbalance exists among the proxy servers at the forwarding layer, thus creating a performance bottleneck which severely prolongs the latency of accessing VDisks. Worse yet, due to the diverse access patterns of VDisk s, stable traffic and burst traffic coexist at the forwarding layer, and thus making existing load balancing designs inefficient for balancing the traffic at the forwarding layer of VDisk s, as they are unaware of and also lacks the ability to differentiate the decomposable burst and stable traffic. To this end, we propose a novel traffic forwarding scheme DiffForward for cloud block services. DiffForward differentiates the burst traffic from stable traffic in an accurate and efficient way at the client side, then it forwards the burst traffic to a decentralized distributed log store to realize real-time load balance by writing the data in a round-robin manner and balances the stable traffic by segmentation. DiffForward also judiciously coordinates the stable and burst traffic and preserves strong consistency under differentiated forwarding. Extensive experiments with reallife workloads on our prototype show that DiffForward effectively balances the traffic at the forwarding layer at a fine-grained subsecond level, thus significantly reducing the write latency of VDisks.","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121675958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Asynchronous Automata Processing on GPUs
Hongyuan Liu, Sreepathi Pai, Adwait Jog
Finite-state automata serve as compute kernels for many application domains such as pattern matching and data analytics. Existing approaches on GPUs exploit three levels of parallelism in automata processing tasks: 1) input-stream level, 2) automaton level, and 3) state level. Among these, only state-level parallelism is intrinsic to automata, while the other two levels of parallelism depend on the number of automata and input streams to be processed. As GPU resources increase, a parallelism-limited automata processing task can underutilize GPU compute resources. To this end, we propose AsyncAP, a low-overhead approach that optimizes for both scalability and throughput. Our insight is that most automata processing tasks have an additional source of parallelism originating from the input symbols, which has not been leveraged before. Making the matching process associated with the automata tasks asynchronous, i.e., having parallel GPU threads start processing an input stream from different input locations instead of processing it serially, improves throughput significantly and scales with input length. When the task does not have enough parallelism to utilize all the GPU cores, detailed evaluation across 12 applications shows that AsyncAP achieves up to 58× speedup on average over the state-of-the-art GPU automata processing engine. When the tasks have enough parallelism to utilize GPU cores, AsyncAP still achieves 2.4× speedup.
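The input-symbol parallelism described above can be illustrated with the classic trick of summarizing a chunk of input as a state-to-state function, so that workers starting at different offsets run independently and their summaries compose to the serial answer. The Python sketch below does this for a three-state toy DFA; it illustrates the idea only and is not AsyncAP's GPU implementation.

```python
# A toy illustration of input-symbol-level parallelism: workers start at
# different offsets of the same stream, each summarizing its chunk as a
# state->state function; composing the summaries reproduces the serial
# scan. The three-state DFA (accepts inputs containing "ab") is invented
# for illustration; AsyncAP itself targets large automata on real GPUs.

DFA = {  # state -> {symbol: next state}; state 2 is accepting
    0: {"a": 1, "b": 0},
    1: {"a": 1, "b": 2},
    2: {"a": 2, "b": 2},
}

def chunk_summary(chunk: str) -> dict:
    """Compute what this chunk does to every possible start state."""
    out = {}
    for start in DFA:
        state = start
        for symbol in chunk:
            state = DFA[state][symbol]
        out[start] = state
    return out

text = "bbabab"
mid = len(text) // 2
left = chunk_summary(text[:mid])    # these two calls could run in parallel
right = chunk_summary(text[mid:])
final = right[left[0]]              # compose, starting from state 0
print("accepted:", final == 2)      # matches a serial left-to-right scan
```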
{"title":"Asynchronous Automata Processing on GPUs","authors":"Hongyuan Liu, Sreepathi Pai, Adwait Jog","doi":"10.1145/3579453","DOIUrl":"https://doi.org/10.1145/3579453","url":null,"abstract":"Finite-state automata serve as compute kernels for many application domains such as pattern matching and data analytics. Existing approaches on GPUs exploit three levels of parallelism in automata processing tasks: 1)~input stream level, 2)~automaton-level and 3)~state-level. Among these, only state-level parallelism is intrinsic to automata while the other two levels of parallelism depend on the number of automata and input streams to be processed. As GPU resources increase, a parallelism-limited automata processing task can underutilize GPU compute resources. To this end, we propose AsyncAP, a low-overhead approach that optimizes for both scalability and throughput. Our insight is that most automata processing tasks have an additional source of parallelism originating from the input symbols which has not been leveraged before. Making the matching process associated with the automata tasks asynchronous, i.e., parallel GPU threads start processing an input stream from different input locations instead of processing it serially, improves throughput significantly and scales with input length. When the task does not have enough parallelism to utilize all the GPU cores, detailed evaluation across 12 evaluated applications shows that AsyncAP achieves up to 58× speedup on average over the state-of-the-art GPU automata processing engine. When the tasks have enough parallelism to utilize GPU cores, AsyncAP still achieves 2.4× speedup.","PeriodicalId":426760,"journal":{"name":"Proceedings of the ACM on Measurement and Analysis of Computing Systems","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121321660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1