
International Conference on Virtual Execution Environments: Latest Publications

Block storage virtualization with commodity secure digital cards
Pub Date : 2012-03-03 DOI: 10.1145/2151024.2151050
Harvey Tuch, Cyprien Laplace, K. Barr, Bi Wu
Smartphones, tablets and other mobile platforms typically accommodate bulk data storage with low-cost, FAT-formatted Secure Digital cards. When one uses a mobile device to run a full-system virtual machine (VM), there can be a mismatch between 1) the VM's I/O mixture, security and reliability requirements and 2) the properties of the storage media available for VM block storage and checkpoint images. To resolve this mismatch, this paper presents a new VM disk image format called the Logging Block Store (LBS). After motivating the need for a new format, LBS is described in detail with experimental results demonstrating its efficacy. As a result of this work, recommendations are made for future optimizations throughout the stack that may simplify and improve the performance of storage virtualization systems on mobile platforms.
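The abstract only names the format; as a rough illustration of what a logging block store does, the sketch below appends every write sequentially (which suits SD cards) and keeps an index from logical block numbers to log offsets. The class, file layout, and names are invented for illustration and are not taken from the LBS paper.

```python
# Minimal sketch of a log-structured block store: all writes are appended
# sequentially, and an index maps logical block numbers to their latest
# position in the log. Names and layout are illustrative, not LBS's.

BLOCK_SIZE = 4096

class LogBlockStore:
    def __init__(self, path):
        self.log = open(path, "a+b")        # append-only backing file
        self.index = {}                      # logical block -> log offset

    def write_block(self, block_no, data):
        assert len(data) == BLOCK_SIZE
        self.log.seek(0, 2)                  # seek to end of the log
        offset = self.log.tell()
        self.log.write(data)
        self.index[block_no] = offset        # newest copy wins

    def read_block(self, block_no):
        offset = self.index.get(block_no)
        if offset is None:
            return bytes(BLOCK_SIZE)         # unwritten blocks read as zeros
        self.log.seek(offset)
        return self.log.read(BLOCK_SIZE)

store = LogBlockStore("vm_disk.log")
store.write_block(7, b"x" * BLOCK_SIZE)
assert store.read_block(7)[:1] == b"x"
```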
Citations: 5
Enhancing TCP throughput of highly available virtual machines via speculative communication
Pub Date : 2012-03-03 DOI: 10.1145/2151024.2151038
Balazs Gerofi, Y. Ishikawa
Checkpoint-recovery based virtual machine (VM) replication is an attractive technique for accommodating VM installations with high availability. It provides seamless failover for the entire software stack executed in the VM regardless of the application or the underlying operating system (OS), it runs on commodity hardware, and it is inherently capable of dealing with the shared-memory non-determinism of symmetric multiprocessing (SMP) configurations. There have been several studies aimed at alleviating the overhead of replication; however, due to consistency requirements, the network performance of the basic replication mechanism remains extremely poor. In this paper we revisit the replication protocol and extend it with speculative communication. Speculative communication silently acknowledges TCP packets of the VM, enabling the guest's TCP stack to proceed with transmission without exposing the messages to the clients before the corresponding execution state is checkpointed to the backup host. Furthermore, we propose replication-aware congestion control, an extension to the guest's TCP stack that aggressively fills up the VMM's replication buffer so that speculative packets can be backed up and released earlier to the clients. We observe up to an order of magnitude improvement in bulk data transfer with speculative communication, and close to native VM network performance when replication awareness is enabled in the guest OS. We provide results of micro- as well as application-level benchmarks.
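As a rough illustration of the buffering idea described above (not the paper's actual implementation), the sketch below acknowledges guest transmissions immediately but releases them to clients only after the covering checkpoint has been committed to the backup; all names are hypothetical.

```python
# Illustrative sketch of speculative communication's buffering idea:
# outbound packets are acknowledged to the guest's TCP stack right away,
# but are only released to clients after the checkpoint covering them has
# been committed to the backup host. Names are hypothetical.

class ReplicationBuffer:
    def __init__(self, send_to_client):
        self.pending = []                # packets produced since the last commit
        self.send_to_client = send_to_client

    def on_guest_transmit(self, packet):
        # Silently "ack" the packet so the guest keeps transmitting,
        # but do not expose it to the outside world yet.
        self.pending.append(packet)
        return "ACK"

    def on_checkpoint_committed(self):
        # The backup now holds the execution state that generated these
        # packets, so they can be released safely.
        for packet in self.pending:
            self.send_to_client(packet)
        self.pending.clear()

sent = []
buf = ReplicationBuffer(sent.append)
buf.on_guest_transmit(b"segment-1")
buf.on_guest_transmit(b"segment-2")
assert sent == []                        # nothing visible before the checkpoint
buf.on_checkpoint_committed()
assert sent == [b"segment-1", b"segment-2"]
```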
Citations: 10
Execution mining
Pub Date : 2012-03-03 DOI: 10.1145/2151024.2151044
Geoffrey Lefebvre, Brendan Cully, Christopher C. D. Head, Mark Spear, N. Hutchinson, M. Feeley, A. Warfield
Operating systems represent large pieces of complex software that are carefully tested and broadly deployed. Despite this, developers frequently have little more than their source code to understand how they behave. This static representation of a system results in limited insight into execution dynamics, such as what code is important, how data flows through a system, or how threads interact with one another. We describe Tralfamadore, a system that preserves complete traces of machine execution as an artifact that can be queried and analyzed with a library of simple, reusable operators, making it easy to develop and run new dynamic analyses. We demonstrate the benefits of this approach with several example applications, including a novel unified source and execution browser.
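A minimal sketch of the "library of simple, reusable operators" idea: analyses built by chaining small stages over a stream of execution records. The record fields and operator names below are invented for illustration and are not Tralfamadore's API.

```python
# Sketch of composable trace-analysis operators: each stage is a small
# generator or fold over a stream of execution records, so new analyses
# are built by chaining stages. Record fields are illustrative.

def filter_function(trace, name):
    # Keep only records executed inside a given function.
    return (rec for rec in trace if rec["func"] == name)

def count_by(trace, key):
    # Fold a trace into per-key counts (e.g. records per thread).
    counts = {}
    for rec in trace:
        counts[rec[key]] = counts.get(rec[key], 0) + 1
    return counts

trace = [
    {"func": "sys_read", "thread": 1},
    {"func": "sys_read", "thread": 2},
    {"func": "sys_write", "thread": 1},
]
print(count_by(filter_function(trace, "sys_read"), "thread"))  # {1: 1, 2: 1}
```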
Citations: 24
CompSC: live migration with pass-through devices
Pub Date : 2012-03-03 DOI: 10.1145/2151024.2151040
Zhenhao Pan, Yaozu Dong, Yu Chen, Lei Zhang, Zhijiao Zhang
Live migration is one of the most important features of virtualization technology. With regard to recent virtualization techniques, the performance of network I/O is critical. Current network I/O virtualization (e.g. para-virtualized I/O, VMDq) has a significant performance gap with native network I/O. Pass-through network devices have near-native performance; however, they have thus far prevented live migration. No existing method solves the problem of live migration with pass-through devices perfectly. In this paper, we propose CompSC: a solution for hardware state migration that enables live migration support for pass-through devices. We go on to apply CompSC to SR-IOV network interface controllers. We discuss the attributes of different hardware states in pass-through devices and migrate them with corresponding techniques. Our experiments show that CompSC enables live migration on an Intel 82599 VF with a throughput 282.66% higher than para-virtualized devices. In addition, service downtime during live migration is 42.9% less than para-virtualized devices.
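As an illustration of the hardware-state-migration idea (not CompSC's actual mechanism), the sketch below copies registers that can be read back directly and replays logged operations for state that cannot be read back; the register and operation names are made up.

```python
# Sketch of device-state migration: readable registers are copied at the
# source and rewritten at the destination, while write-only state is
# reconstructed by replaying logged operations. Names are illustrative.

class PassThroughNIC:
    READABLE_REGS = ["rx_head", "rx_tail", "tx_head", "tx_tail"]

    def __init__(self):
        self.regs = {r: 0 for r in self.READABLE_REGS}
        self.op_log = []                     # ops whose effect can't be read back

    def configure(self, op, value):
        self.op_log.append((op, value))      # e.g. enabling a hardware filter

def save_state(nic):
    return {"regs": dict(nic.regs), "op_log": list(nic.op_log)}

def restore_state(nic, snapshot):
    nic.regs.update(snapshot["regs"])        # directly restorable state
    for op, value in snapshot["op_log"]:     # replay what can't be read back
        nic.configure(op, value)

src, dst = PassThroughNIC(), PassThroughNIC()
src.regs["tx_tail"] = 42
src.configure("add_mac_filter", "52:54:00:12:34:56")
restore_state(dst, save_state(src))
assert dst.regs["tx_tail"] == 42 and dst.op_log == src.op_log
```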
Citations: 41
Transparent dynamic instrumentation
Pub Date : 2012-03-03 DOI: 10.1145/2151024.2151043
Derek Bruening, Qin Zhao, Saman P. Amarasinghe
Process virtualization provides a virtual execution environment within which an unmodified application can be monitored and controlled while it executes. The provided layer of control can be used for purposes ranging from sandboxing to compatibility to profiling. The additional operations required for this layer are performed clandestinely alongside regular program execution. Software dynamic instrumentation is one method for implementing process virtualization which dynamically instruments an application such that the application's code and the inserted code are interleaved together. DynamoRIO is a process virtualization system implemented using software code cache techniques that allows users to build customized dynamic instrumentation tools. There are many challenges to building such a runtime system. One major obstacle is transparency. In order to support executing arbitrary applications, DynamoRIO must be fully transparent so that an application cannot distinguish between running inside the virtual environment and native execution. In addition, any desired extra operations for a particular tool must avoid interfering with the behavior of the application. Transparency has historically been provided on an ad-hoc basis, as a reaction to observed problems in target applications. This paper identifies a necessary set of transparency requirements for running mainstream Windows and Linux applications. We discuss possible solutions to each transparency issue, evaluate tradeoffs between different choices, and identify cases where maintaining transparency is not practically solvable. We believe this will provide a guideline for better design and implementation of transparent dynamic instrumentation, as well as other similar process virtualization systems using software code caches.
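As a conceptual sketch of how instrumentation can be interleaved with application code through a software code cache (much simplified relative to DynamoRIO), the toy example below translates each "basic block" into a cached fragment with an analysis callback in front of it; all names and structures are invented.

```python
# Toy illustration of interleaving application code with instrumentation:
# each application "basic block" is copied into a software cache with an
# analysis callback inserted in front of it, and execution runs from the
# cache. Conceptual only, not how DynamoRIO is implemented.

def build_fragment(block, on_block):
    def fragment(state):
        on_block(block["tag"])               # inserted instrumentation
        return block["code"](state)          # original application code
    return fragment

def run_with_instrumentation(blocks, entry, on_block):
    cache = {}                               # software "code cache"
    tag, state = entry, {}
    while tag is not None:
        if tag not in cache:                 # translate on first execution
            cache[tag] = build_fragment(blocks[tag], on_block)
        tag = cache[tag](state)
    return state

blocks = {
    "A": {"tag": "A", "code": lambda s: (s.update(x=1), "B")[1]},
    "B": {"tag": "B", "code": lambda s: (s.update(x=s["x"] + 1), None)[1]},
}
executed = []
print(run_with_instrumentation(blocks, "A", executed.append), executed)
```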
Citations: 122
Fast restore of checkpointed memory using working set estimation
Pub Date : 2011-03-09 DOI: 10.1145/1952682.1952695
Irene Zhang, Alex Garthwaite, Y. Baskakov, K. Barr
In order to make save and restore features practical, saved virtual machines (VMs) must be able to quickly restore to normal operation. Unfortunately, fetching a saved memory image from persistent storage can be slow, especially as VMs grow in memory size. One possible solution for reducing this time is to lazily restore memory after the VM starts. However, accesses to unrestored memory after the VM starts can degrade performance, sometimes rendering the VM unusable for even longer. Existing performance metrics do not account for performance degradation after the VM starts, making it difficult to compare lazily restoring memory against other approaches. In this paper, we propose both a better metric for evaluating the performance of different restore techniques and a better scheme for restoring saved VMs. Existing performance metrics do not reflect what is really important to the user -- the time until the VM returns to normal operation. We introduce the time-to-responsiveness metric, which better characterizes user experience while restoring a saved VM by measuring the time until there is no longer a noticeable performance impact on the restoring VM. We propose a new lazy restore technique, called working set restore, that minimizes performance degradation after the VM starts by prefetching the working set. We also introduce a novel working set estimator based on memory tracing that we use to test working set restore, along with an estimator that uses access-bit scanning. We show that working set restore can improve the performance of restoring a saved VM by more than 89% for some workloads.
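A minimal sketch of the access-bit-scanning style of working set estimation mentioned above, assuming a page-table interface invented for illustration: accessed bits are cleared, the guest runs for an interval, and pages whose bits are set again are added to the working set to be prefetched first during a lazy restore.

```python
# Sketch of access-bit scanning for working-set estimation: accessed bits
# are cleared, the VM runs for an interval, and pages whose bits were set
# again are counted as part of the working set. The page-table interface
# is a stand-in for the real hypervisor mechanism.

class PageTable:
    def __init__(self, num_pages):
        self.accessed = [False] * num_pages

    def touch(self, page):                   # simulates a guest memory access
        self.accessed[page] = True

def scan_working_set(page_table, intervals, run_interval):
    working_set = set()
    for _ in range(intervals):
        page_table.accessed = [False] * len(page_table.accessed)  # clear bits
        run_interval(page_table)             # let the guest run for a while
        working_set |= {p for p, bit in enumerate(page_table.accessed) if bit}
    return working_set

pt = PageTable(num_pages=16)
ws = scan_working_set(pt, intervals=2,
                      run_interval=lambda p: [p.touch(i) for i in (1, 2, 3)])
print(sorted(ws))   # pages to prefetch first when lazily restoring: [1, 2, 3]
```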
Citations: 58
Fast and correct performance recovery of operating systems using a virtual machine monitor
Pub Date : 2011-03-09 DOI: 10.1145/1952682.1952696
Kenichi Kourai
Rebooting an operating system is a final but effective recovery technique. However, the system performance largely degrades just after the reboot due to the page cache being lost in the main memory. For fast performance recovery, we propose a new reboot mechanism called the warm-cache reboot. The warm-cache reboot preserves the page cache during the reboot and enables an operating system to restore it after the reboot, with the help of a virtual machine monitor (VMM). To perform correct recovery, the VMM guarantees that the reused page cache is consistent with the corresponding files on disks. We have implemented the warm-cache reboot mechanism in the Xen VMM and the Linux operating system. Our experimental results showed that the warm-cache reboot decreased performance degradation just after the reboot. In addition, we confirmed that the file cache corrupted by faults was not reused. The overheads for maintaining cache consistency were not usually large.
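As an illustration of the consistency rule (not the paper's actual bookkeeping), the sketch below reuses a preserved cache page only if the on-disk block it caches has not been written since the page was cached; the version counters stand in for whatever metadata the VMM actually maintains.

```python
# Sketch of the consistency rule behind reusing the page cache after a
# reboot: a preserved cache page is reused only if the on-disk block it
# caches was not written after the page was cached. Version counters are
# illustrative stand-ins for the VMM's bookkeeping.

def reusable_pages(saved_cache, disk_versions):
    """saved_cache: block -> (data, version at caching time)."""
    reused, dropped = {}, []
    for block, (data, cached_version) in saved_cache.items():
        if disk_versions.get(block) == cached_version:
            reused[block] = data             # still consistent with the disk
        else:
            dropped.append(block)            # stale or corrupted: re-read later
    return reused, dropped

saved = {10: (b"inode data", 3), 11: (b"old contents", 5)}
disk  = {10: 3, 11: 6}                       # block 11 was rewritten
reused, dropped = reusable_pages(saved, disk)
print(sorted(reused), dropped)               # [10] [11]
```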
Citations: 10
Perfctr-Xen: a framework for performance counter virtualization
Pub Date : 2011-03-09 DOI: 10.1145/1952682.1952687
R. Nikolaev, Godmar Back
Virtualization is a powerful technique used in a variety of application domains, including emerging cloud environments that provide access to virtual machines as a service. Because of the interaction of virtual machines with multiple underlying software and hardware layers, the analysis of the performance of applications running in virtualized environments has been difficult. Moreover, performance analysis tools commonly used in native environments were not available in virtualized environments, a gap which our work closes. This paper discusses the challenges of performance monitoring inherent to virtualized environments and introduces a technique to virtualize access to low-level performance counters on a per-thread basis. The technique was implemented in perfctr-xen, a framework for the Xen hypervisor that provides an infrastructure for higher-level profilers. This framework supports both accumulative event counts and interrupt-driven event sampling. It is lightweight, providing direct user-mode access to logical counter values. perfctr-xen supports multiple modes of virtualization, including paravirtualization and hardware-assisted virtualization. perfctr-xen applies guest kernel-hypervisor coordination techniques to reduce virtualization overhead. We present experimental results based on microbenchmarks and SPEC CPU2006 macrobenchmarks that show the accuracy and usability of the obtained measurements when compared to native execution.
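A minimal sketch of per-thread counter virtualization, with an integer standing in for the hardware counter: on each context switch the counter's delta is charged to the outgoing thread, so every thread observes a logical count covering only its own execution. The interface is invented for illustration, not perfctr-xen's API.

```python
# Sketch of per-thread counter virtualization: on a context switch the
# hardware counter's delta is charged to the outgoing thread, so each
# thread sees a logical counter covering only its own execution.
# The "hardware counter" is just an integer standing in for an MSR.

class PerThreadCounters:
    def __init__(self):
        self.hw_counter = 0                  # stand-in for the physical counter
        self.saved = {}                      # thread id -> accumulated count
        self.current = None
        self.last_read = 0

    def context_switch(self, next_thread):
        delta = self.hw_counter - self.last_read
        if self.current is not None:
            self.saved[self.current] = self.saved.get(self.current, 0) + delta
        self.current, self.last_read = next_thread, self.hw_counter

    def read(self, thread):
        value = self.saved.get(thread, 0)
        if thread == self.current:           # include events since last switch
            value += self.hw_counter - self.last_read
        return value

c = PerThreadCounters()
c.context_switch("T1"); c.hw_counter += 100
c.context_switch("T2"); c.hw_counter += 40
print(c.read("T1"), c.read("T2"))            # 100 40
```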
Citations: 39
Dolly: virtualization-driven database provisioning for the cloud
Pub Date : 2011-03-09 DOI: 10.1145/1952682.1952691
E. Cecchet, Rahul Singh, Upendra Sharma, P. Shenoy
Cloud computing platforms are becoming increasingly popular for e-commerce applications that can be scaled on-demand in a very cost effective way. Dynamic provisioning is used to autonomously add capacity in multi-tier cloud-based applications that see workload increases. While many solutions exist to provision tiers with little or no state in applications, the database tier remains problematic for dynamic provisioning due to the need to replicate its large disk state. In this paper, we explore virtual machine (VM) cloning techniques to spawn database replicas and address the challenges of provisioning shared-nothing replicated databases in the cloud. We argue that being able to determine state replication time is crucial for provisioning databases and show that VM cloning provides this property. We propose Dolly, a database provisioning system based on VM cloning and cost models to adapt the provisioning policy to the cloud infrastructure specifics and application requirements. We present an implementation of Dolly in a commercial-grade replication middleware and evaluate database provisioning strategies for a TPC-W workload on a private cloud and on Amazon EC2. By being aware of VM-based state replication cost, Dolly can solve the challenge of automated provisioning for replicated databases on cloud platforms.
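As an illustration of why predictable state-replication time matters for provisioning, the sketch below uses a made-up cost model: the time to spawn a replica by VM cloning is estimated from the snapshot size and copy rate, which in turn fixes how far ahead of a predicted load spike cloning must start. All rates and sizes are example inputs, not figures from the paper.

```python
# Sketch of the kind of cost model a provisioning engine can use: because
# the time to copy a VM disk snapshot is predictable, the engine can decide
# how far in advance a replica must be spawned. Inputs are made-up examples.

def clone_time_s(disk_snapshot_gb, copy_rate_gb_per_s, resync_s):
    # Copy the whole VM image, then replay the writes that arrived meanwhile.
    return disk_snapshot_gb / copy_rate_gb_per_s + resync_s

def provisioning_deadline(spike_eta_s, time_to_replica_s):
    # Latest moment at which cloning must start to be ready for the spike.
    return spike_eta_s - time_to_replica_s

t_replica = clone_time_s(disk_snapshot_gb=50, copy_rate_gb_per_s=0.1, resync_s=60)
print(t_replica, provisioning_deadline(spike_eta_s=900, time_to_replica_s=t_replica))
# 560.0 seconds to spawn a replica; start cloning no later than t = 340.0 s
```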
Citations: 59
ReHype: enabling VM survival across hypervisor failures
Pub Date : 2011-03-09 DOI: 10.1145/1952682.1952692
Michael V. Le, Y. Tamir
With existing virtualized systems, hypervisor failures lead to overall system failure and the loss of all the work in progress of virtual machines (VMs) running on the system. We introduce ReHype, a mechanism for recovery from hypervisor failures by booting a new instance of the hypervisor while preserving the state of running VMs. VMs are stalled during the hypervisor reboot and resume normal execution once the new hypervisor instance is running. Hypervisor failures can lead to arbitrary state corruption and inconsistencies throughout the system. ReHype deals with the challenge of protecting the recovered hypervisor instance from such corrupted state and resolving inconsistencies between different parts of hypervisor state as well as between the hypervisor and VMs and between the hypervisor and the hardware. We have implemented ReHype for the Xen hypervisor. The implementation was done incrementally, using results from fault injection experiments to identify the sources of dangerous state corruption and inconsistencies. The implementation of ReHype involved only 880 LOC added or modified in Xen. The memory space overhead of ReHype is only 2.1MB for a pristine copy of the hypervisor code and static data plus a small reserved memory area. The fault injection campaigns used to evaluate the effectiveness of ReHype involved a system with multiple VMs running I/O and hypercall-intensive benchmarks. Our experimental results show that the ReHype prototype can successfully recover from over 90% of detected hypervisor failures.
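A highly simplified sketch of the recovery flow described above, with invented function names: VMs are stalled, a fresh hypervisor instance is booted while their memory is left in place, and each VM's preserved state is re-registered with the new instance only if it passes a consistency check.

```python
# Highly simplified sketch of hypervisor-failure recovery: on a detected
# failure, VMs are stalled, a fresh hypervisor instance is booted, and the
# VMs are re-registered with the new instance before resuming. Function
# names and data structures are illustrative only.

def recover_from_hypervisor_failure(preserved_vm_state, boot_new_hypervisor,
                                    validate):
    for vm in preserved_vm_state:
        vm["status"] = "stalled"             # VMs pause, their memory is kept

    hypervisor = boot_new_hypervisor()       # new instance in reserved memory

    for vm in preserved_vm_state:
        if validate(vm):                     # reject state corrupted by the fault
            hypervisor["vms"].append(vm)
            vm["status"] = "running"
        else:
            vm["status"] = "failed"          # only this VM is lost, not the host
    return hypervisor

vms = [{"name": "web", "status": "running"}, {"name": "db", "status": "running"}]
new_hv = recover_from_hypervisor_failure(
    vms, boot_new_hypervisor=lambda: {"vms": []}, validate=lambda vm: True)
print([vm["status"] for vm in new_hv["vms"]])   # ['running', 'running']
```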
Citations: 36