
Proceedings of the 6th Asia-Pacific Workshop on Systems: Latest Publications

Go Gentle into the Good Night via Controlled Battery Discharging
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797035
Shih-Hao Liang, T. Chiueh, Welkin Ling
Prevalent battery management approaches for mobile devices treat a battery's residual capacity as a given budget and try to make the best of this budget by turning off lower-priority tasks. In contrast, the research reported in this paper aims to maximize the quantitative value of a battery's residual capacity by operating the battery according to its discharge characteristic curves (DCC), which describe a battery's discharging dynamics in terms of the correlation among its voltage level, capacity, and discharging current. According to the DCC theory, it is possible to increase a battery's effective capacity in terms of ampere-hours by capping the discharging current in a certain way after its capacity falls below a threshold. This paper describes a DCC-based Battery Discharging (DBD) technique that automatically derives a battery's DCC, uses the DCC to determine a suitable instantaneous discharge-current budget, and limits the total discharge current to that budget. Measurements on an operational prototype show that DBD can extend a battery's residual capacity by more than 20% after its SOC drops to 30%.
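The capping policy the abstract describes can be pictured as a lookup against a DCC-derived table. The function names, table entries, and thresholds below are illustrative assumptions for the sketch, not values taken from the paper:

```python
def current_budget(soc_percent, dcc_table):
    """Return the instantaneous discharge-current cap (in amperes) for the
    given state of charge, from a DCC-derived table of
    (soc_threshold, max_current) pairs."""
    for threshold, max_current in sorted(dcc_table):
        if soc_percent <= threshold:
            return max_current
    return float("inf")  # above every threshold: leave the current uncapped

# Illustrative table: cap at 0.5 A below 30% SOC, 0.3 A below 15% SOC.
DCC_TABLE = [(30, 0.5), (15, 0.3)]

def throttle(requested_current, soc_percent):
    """Limit the total discharge current to the DCC-derived budget."""
    return min(requested_current, current_budget(soc_percent, DCC_TABLE))
```

Under such a policy, a 1 A request at 20% SOC would be clipped to the 0.5 A budget, while requests above 30% SOC pass through unchanged.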
Citations: 1
For a Microkernel, a Big Lock Is Fine
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797042
S. Peters, A. Danis, Kevin Elphinstone, G. Heiser
It is well established that high-end scalability requires fine-grained locking, and for a system like Linux a big lock degrades performance even at moderate core counts. Nevertheless, we argue that a big lock may be fine-grained enough for a microkernel designed to run on closely coupled cores (sharing a cache): with the short system calls typical of a well-designed microkernel, lock contention remains low under realistic loads.
Citations: 18
Rethinking Compiler Optimizations for the Linux Kernel: An Explorative Study
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797030
Pengfei Yuan, Yao Guo, Xiangqun Chen
The performance of the operating system kernel is critical to many applications running on it. Although much effort has been spent on improving Linux kernel performance, little attention has been paid to GCC, the compiler used to build Linux. As a result, the vanilla Linux kernel is typically compiled with the same -O2 option as most user programs. This paper investigates how different configurations of GCC affect the performance of the Linux kernel. We have compared a number of compiler variations on the Linux kernel from different angles, including switching simple options, using different GCC versions, controlling specific optimizations, and performing profile-guided optimization. We present a detailed analysis of the experimental results and discuss potential compiler optimizations to further improve kernel performance. As the current GCC is far from optimal for compiling the Linux kernel, a future compiler for the kernel should include specialized optimizations, and more advanced compiler optimizations should also be incorporated to improve kernel performance.
Citations: 5
TotalCOW: Unleash the Power of Copy-On-Write for Thin-provisioned Containers
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797024
Xingbo Wu, Wenguang Wang, Song Jiang
Modern file systems leverage the Copy-on-Write (COW) technique to efficiently create snapshots. COW can significantly reduce demand on disk space and I/O bandwidth by not duplicating entire files at the time a snapshot is made. However, the memory space and I/O requests demanded by applications cannot benefit from this technique. In existing systems, a disk block shared by multiple files due to COW is read from the disk multiple times: each read is treated as an independent block in a different file and cached as a separate block in memory. This issue stems from the fact that current file access and caching are based on logical file addresses. It poses a significant challenge to emerging lightweight container virtualization techniques, such as Linux Containers and Docker, which rely on COW to quickly spawn a large number of thin-provisioned container instances. We propose a lightweight approach that addresses this issue by leveraging knowledge about files produced by COW. Experimental results show that a prototype system using this approach, named TotalCOW, can significantly remove redundant disk reads and caching without compromising the efficiency of accessing COW files.
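The fix the abstract motivates, keying the cache on physical rather than logical addresses, can be sketched as follows; the class and its interface are illustrative, not TotalCOW's actual design:

```python
class PhysicalBlockCache:
    """Cache disk blocks by physical block number, so a block shared by
    several COW-cloned files is read from disk and cached only once."""

    def __init__(self, read_block):
        self._read_block = read_block   # function: block number -> bytes
        self._cache = {}                # physical block number -> bytes
        self.disk_reads = 0

    def read(self, physical_block_no):
        if physical_block_no not in self._cache:
            self._cache[physical_block_no] = self._read_block(physical_block_no)
            self.disk_reads += 1
        return self._cache[physical_block_no]

# Two COW clones whose logical block 0 maps to physical block 7 share one
# disk read and one in-memory copy.
disk = {7: b"shared block"}
cache = PhysicalBlockCache(lambda n: disk[n])
a = cache.read(7)  # clone A: misses, reads from "disk"
b = cache.read(7)  # clone B: hits the cache
```

A logically-addressed cache would have issued two disk reads and kept two identical in-memory copies here; the physical key collapses both.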
Citations: 18
Anatomy of Cloud Monitoring and Metering: A case study and open problems
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797039
Ali Anwar, A. Sailer, Andrzej Kochut, A. Butt
Microservices-based architecture has recently gained traction among cloud service providers in the quest for a more scalable and reliable modular architecture. In parallel with this architectural choice, cloud providers also face market demand for fine-grained, usage-based pricing. Both the management of microservices' complex dependencies and fine-grained metering require providers to track and log detailed monitoring data from their deployed cloud setups. Hence, on one hand, providers need to record all such performance changes and events, while on the other hand, they are concerned with the additional cost of the resources required to store and process this ever-increasing amount of collected data. In this paper, we analyze the design of the monitoring subsystem provided by open-source cloud solutions such as OpenStack. Specifically, we analyze how monitoring data is collected by OpenStack and assess the characteristics of the data it collects, aiming to pinpoint the limitations of the current approach and suggest alternate solutions. Our preliminary evaluation of the proposed solutions reveals that it is possible to reduce the monitored data size by up to 80% and the missed-anomaly rate from 3% to as low as 0.05% to 0.1%.
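One plausible solution of the kind hinted at, change-based sampling, keeps a monitoring record only when the metric actually moves. This sketch is an illustrative assumption, not the paper's proposed design or OpenStack's implementation:

```python
def compress_samples(samples, tolerance=0.0):
    """Drop a sample unless it differs from the last kept value by more
    than `tolerance`; long runs of identical readings collapse to one."""
    kept = []
    for timestamp, value in samples:
        if not kept or abs(value - kept[-1][1]) > tolerance:
            kept.append((timestamp, value))
    return kept

# Five polls of a mostly idle metric shrink to two records.
raw = [(0, 10.0), (1, 10.0), (2, 10.0), (3, 42.0), (4, 42.0)]
compact = compress_samples(raw)
```

For mostly idle cloud metrics, this kind of run-length suppression is where large reductions in stored data can come from, at the cost of needing the last kept value to reconstruct the full series.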
Citations: 20
Zero-copy Migration for Lightweight Software Rejuvenation of Virtualized Systems
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797026
Kenichi Kourai, H. Ooba
Virtualized systems tend to suffer from software aging, the phenomenon that the state of a running system degrades with time. Software aging is countered by a technique called software rejuvenation, e.g., a system reboot. To reduce the downtime due to software rejuvenation, all the virtual machines (VMs) on an aged virtualized system have to be migrated in advance. However, VM migration stresses the system and causes performance degradation. In this paper, we propose VMBeam, which enables lightweight software rejuvenation of virtualized systems using zero-copy migration. When rejuvenating an aged virtualized system, VMBeam starts a new virtualized system on the same host using nested virtualization. It then migrates all the VMs from the aged virtualized system to the clean one. At this point, VMBeam directly relocates the memory of the VMs on the aged virtualized system to the clean virtualized system without any copying. We have implemented VMBeam in Xen and confirmed the decrease in system load.
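The zero-copy step can be pictured as reassigning frame ownership instead of copying page contents. The data structures below are an illustrative sketch under assumed names, not Xen's or VMBeam's actual implementation:

```python
class Host:
    """Toy model of one host running nested virtualized systems."""

    def __init__(self, frames, vm_pages):
        self.frames = frames        # physical frame number -> page contents
        self.vm_pages = vm_pages    # virtualized-system id -> frame numbers

    def zero_copy_migrate(self, src, dst):
        """Relocate a system's memory by moving frame ownership; no page
        contents are touched or copied."""
        self.vm_pages[dst] = self.vm_pages.pop(src)

host = Host(frames={0: b"kernel", 1: b"heap"},
            vm_pages={"aged": [0, 1]})
host.zero_copy_migrate("aged", "clean")
```

Because only the ownership mapping changes, the cost of the migration is independent of the guests' memory size, which is what makes the rejuvenation lightweight.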
Citations: 14
Mjölnir: The Magical Web Application Hammer
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797025
Jelle van den Hooff, David Lazar, James W. Mickens
Conventional wisdom suggests that rich, large-scale web applications are difficult to build and maintain. An implicit assumption behind this intuition is that a large web application requires massive numbers of servers, and complicated, one-off back-end architectures. We provide empirical evidence to disprove this intuition. We then propose new programming abstractions and a new deployment model that reduce the overhead of building and running web services.
Citations: 0
InterFS: An Interplanted Distributed File System to Improve Storage Utilization
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797036
Peng Wang, LeThanhMan Cao, Chunbo Lai, Leqi Zou, Guangyu Sun, J. Cong
Resource under-utilization is a common problem in modern data centers. Although researchers have proposed consolidation techniques to improve the utilization of computing resources, there is still no approach that mitigates the particularly low utilization of storage capacity in clusters serving online services. A potential solution is to "interplant" a distributed storage system alongside the services on these clusters to leverage the unused storage. However, avoiding performance interference with existing services is an essential prerequisite for interplanting. We therefore propose InterFS, a POSIX-compliant distributed file system that aims to fully exploit the storage resources of data center clusters. InterFS adopts intelligent resource isolation, peak-load dodging, and region-based replica placement. It can therefore be interplanted with other resource-intensive services without interfering with them, and amply fulfills the storage requirements of small-scale applications in the data center. Currently InterFS is deployed on 20,000+ servers at Baidu, providing 80 PB of storage space to 200+ long-tailed services.
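Of the three schemes, region-based replica placement is the easiest to sketch: each replica of a block lands in a distinct region, so an interplanted service's load spike in one region leaves other copies responsive. The rotation rule below is an illustrative assumption, not InterFS's actual policy:

```python
def place_replicas(block_id, regions, replicas=3):
    """Choose one server from each of `replicas` distinct regions,
    rotating the starting region by block id for rough balance."""
    names = sorted(regions)
    chosen = []
    for i in range(replicas):
        servers = regions[names[(block_id + i) % len(names)]]
        chosen.append(servers[block_id % len(servers)])
    return chosen

placement = place_replicas(
    0, {"r1": ["s1", "s2"], "r2": ["s3"], "r3": ["s4"]})
```

Since every replica set spans all-distinct regions, a read can always be redirected away from whichever region is currently experiencing a peak.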
Citations: 2
MemScope: Analyzing Memory Duplication on Android Systems
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797023
Byeoksan Lee, Seongmin Kim, Eru Park, Dongsu Han
Main memory is one of the most important and valuable resources in mobile devices. While resource efficiency, in general, is important in mobile computing where programs run on limited battery power and resources, managing main memory is especially critical because it has a significant impact on user experience. However, there is mounting evidence that Android systems do not utilize main memory efficiently, and actually cause page-level duplications in the physical memory. This paper takes the first step in accurately measuring the level of memory duplication and diagnosing the root cause of the problem. To this end, we develop a system called MemScope that automatically identifies and measures memory duplication levels for Android systems. It identifies which memory segment contains duplicate memory pages by analyzing the page table and the memory content. We present the design of MemScope and our preliminary evaluation. The results show that 10 to 20% of memory pages used by applications are redundant. We identify several possible causes of the problem.
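The core measurement, finding physical pages with identical contents, reduces to hashing page-sized chunks and counting collisions. This is a sketch of the idea only; MemScope itself additionally walks page tables on a live Android device:

```python
import hashlib

PAGE_SIZE = 4096

def duplication_ratio(memory):
    """Fraction of pages whose contents also appear in some other page."""
    pages = [memory[i:i + PAGE_SIZE]
             for i in range(0, len(memory), PAGE_SIZE)]
    counts = {}
    for page in pages:
        digest = hashlib.sha1(page).digest()
        counts[digest] = counts.get(digest, 0) + 1
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(pages)

# Three pages, two of them identical: 2/3 of the pages are copies of a
# content that exists elsewhere in memory.
ratio = duplication_ratio(b"A" * PAGE_SIZE * 2 + b"B" * PAGE_SIZE)
```

Hashing first and comparing raw bytes only on hash collisions is the usual way to keep such a scan cheap enough to run over a whole device's memory.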
Citations: 11
Anatomizing System Activities on Interactive Wearable Devices
Pub Date : 2015-07-27 DOI: 10.1145/2797022.2797032
Renju Liu, Lintong Jiang, Ningzhe Jiang, F. Lin
This paper presents a detailed, first-of-its-kind anatomy of a commodity interactive wearable system. We asked two questions: (1) do interactive wearables deliver "close-to-metal" energy efficiency and interactive performance, and if not, (2) what are the root causes preventing them from doing so? Recognizing that the usage of a wearable device is dominated by simple, short use scenarios, we profile a core set of such scenarios on two cutting-edge Android Wear devices. Following a drill-down approach, we capture system behaviors at a wide spectrum of granularities, from system power and user-perceived latencies, to OS activities, to function calls made in individual processes. To make such profiling possible, we have extensively customized profilers, analyzers, and kernel facilities. The profiling results suggest that current Android Wear devices are far from efficient and responsive: simply updating the displayed time keeps a device intermittently busy for 400 ms; touching to show a notification takes more than 1 second. Our results further suggest that the Android Wear OS, which inherits much of its architecture from its handheld counterpart, is largely responsible. For example, the OS's activity and window managers often dominate CPU usage, and a simple UI task, which should finish in a snap, is often scheduled to be interleaved with numerous CPU idle periods and other unrelated tasks. Our findings urge a rethink of the OS towards directly addressing wearables' unique usage.
{"title":"Anatomizing System Activities on Interactive Wearable Devices","authors":"Renju Liu, Lintong Jiang, Ningzhe Jiang, F. Lin","doi":"10.1145/2797022.2797032","DOIUrl":"https://doi.org/10.1145/2797022.2797032","url":null,"abstract":"This paper presents a detailed, first-of-its-kind anatomy of a commodity interactive wearable system. We asked two questions: (1) do interactive wearables deliver \"close-to-metal\" energy efficiency and interactive performance, and if not (2) what are the root causes preventing them from doing so? Recognizing that the usage of a wearable device is dominated by simple, short use scenarios, we profile a core set of the scenarios on two cutting-edge Android Wear devices. Following a drill down approach, we capture system behaviors at a wide spectrum of granularities, from system power and user-perceived latencies, to OS activities, to function calls happened in individual processes. To make such a profiling possible, we have extensively customized profilers, analyzers, and kernel facilities. The profiling results suggest that the current Android Wear devices are far from efficient and responsive: simply updating a displayed time keeps a device intermittently busy for 400 ms; touching to show a notification takes more than 1 second. Our results further suggest that the Android Wear OS, which inherits much of its architecture from handheld, be responsible. For example, the OS's activity and window managers often dominate CPU usage; a simple UI task, which should finish in a snap, is often scheduled to be interleaved with numerous CPU idle periods and other unrelated tasks. Our findings urge a rethink of the OS towards directly addressing wearable's unique usage.","PeriodicalId":125617,"journal":{"name":"Proceedings of the 6th Asia-Pacific Workshop on Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116374846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
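The abstract's drill-down finding — that a snap-length UI task is scheduled in short bursts interleaved with numerous idle periods — can be illustrated with a minimal analysis sketch. The trace format below (sorted on-CPU run intervals for a single task) and the sample numbers are hypothetical simplifications for illustration, not the authors' actual tooling or measurements:

```python
# Sketch: given the on-CPU run intervals of one short UI task, split its
# wall-clock time into actual CPU work vs. interleaved idle gaps.

def interleaving_stats(run_intervals):
    """run_intervals: sorted, non-overlapping (start_ms, end_ms) pairs during
    which the task was on-CPU. Returns (wall_ms, busy_ms, idle_ms)."""
    if not run_intervals:
        return (0.0, 0.0, 0.0)
    wall = run_intervals[-1][1] - run_intervals[0][0]      # first start to last end
    busy = sum(end - start for start, end in run_intervals)  # time actually on-CPU
    return (wall, busy, wall - busy)                        # the rest is idle/gap time

# Toy trace: a "time update" task run in four bursts over ~400 ms, mirroring
# the reported pattern of intermittent busyness.
trace = [(0, 40), (120, 150), (260, 300), (380, 400)]
wall, busy, idle = interleaving_stats(trace)
print(wall, busy, idle)  # 400 130 270 -> two thirds of the wall time is gaps
```

In a real study the intervals would come from kernel scheduler traces (e.g. via ftrace-style facilities such as those the authors customized); this sketch only shows the arithmetic behind "intermittently busy for 400 ms".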
Citations: 11
Journal
Proceedings of the 6th Asia-Pacific Workshop on Systems