
International Conference on Virtual Execution Environments: Latest Publications

Introspection-based memory de-duplication and migration
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451525
J. Chiang, Han-Lin Li, T. Chiueh
Memory virtualization abstracts a physical machine's memory resource and presents to the virtual machines running on it a piece of physical memory that could be shared, compressed and moved. To optimize the memory resource utilization by fully leveraging the flexibility afforded by memory virtualization, it is essential that the hypervisor have some sense of how the guest VMs use their allocated physical memory. One way to do this is virtual machine introspection (VMI), which interprets byte values in a guest memory space into semantically meaningful data structures. However, identifying a guest VM's memory usage information such as free memory pool is non-trivial. This paper describes a bootstrapping VM introspection technique that could accurately extract free memory pool information from multiple versions of Windows and Linux without kernel version-specific hard-coding, how to apply this technique to improve the efficiency of memory de-duplication and memory state migration, and the resulting improvement in memory de-duplication speed, gain in additional memory pages de-duplicated, and reduction in traffic loads associated with memory state migration.
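Knowing the guest's free memory pool lets the hypervisor skip, or trivially share, pages whose contents are dead instead of hashing and comparing them. Below is a minimal sketch of that fast path; the inputs (`guest_pages`, `free_pfns`) and the zero-page convention are illustrative assumptions, not the paper's implementation.

```python
import hashlib

PAGE_SIZE = 4096

def deduplicate(guest_pages, free_pfns):
    """Map guest page frames onto shared backing frames.

    guest_pages: dict pfn -> bytes (page contents).
    free_pfns:   set of pfns found in the guest's free pool via
                 introspection; their contents are dead, so they can
                 all be backed by one zero page without comparison.
    """
    backing = {}   # content digest -> canonical backing pfn
    mapping = {}   # guest pfn -> backing pfn (or the shared zero page)
    for pfn, data in sorted(guest_pages.items()):
        if pfn in free_pfns:
            mapping[pfn] = "zero-page"        # reclaim immediately
            continue
        digest = hashlib.sha1(data).digest()  # candidate for sharing
        mapping[pfn] = backing.setdefault(digest, pfn)
    return mapping

# Two identical pages plus one free page: the free page is never hashed.
pages = {1: b"A" * PAGE_SIZE, 2: b"A" * PAGE_SIZE,
         3: b"junk".ljust(PAGE_SIZE, b"\0")}
print(deduplicate(pages, free_pfns={3}))   # {1: 1, 2: 1, 3: 'zero-page'}
```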
Citations: 39
Improving dynamic binary optimization through early-exit guided code region formation
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451519
Chun-Chen Hsu, Pangfeng Liu, Jan-Jan Wu, P. Yew, Ding-Yong Hong, W. Hsu, Chien-Min Wang
Most dynamic binary translators (DBT) and optimizers (DBO) target binary traces, i.e. frequently executed paths, as code regions to be translated and optimized. Code region formation is the most important first step in all DBTs and DBOs. The quality of the dynamically formed code regions determines the extent and the types of optimization opportunities that can be exposed to DBTs and DBOs, and thus determines the ultimate quality of the final optimized code. The Next-Executing-Tail (NET) trace formation method used in HP Dynamo is an early example of such techniques. Many existing trace formation schemes are variants of NET. They work very well for most binary traces, but they also suffer from a major problem: the formed traces may contain a large number of early exits that may be taken during execution. If this happens frequently, the program execution spends more time in the slow binary interpreter or in unoptimized code regions than in the optimized traces in the code cache, and the benefit of trace optimization is lost. Traces/regions with frequently taken early exits are called delinquent traces/regions. Our empirical study shows that at least 8 of the 12 SPEC CPU2006 integer benchmarks have delinquent traces. In this paper, we propose a light-weight region formation technique called Early-Exit Guided Region Formation (EEG) to improve the quality of the formed traces/regions. It iteratively identifies delinquent regions and merges them into larger code regions. We have implemented our EEG algorithm in two LLVM-based multi-threaded DBTs targeting the ARM and IA32 instruction set architectures (ISAs). Using the SPEC CPU2006 benchmark suite with reference inputs, our results show that, compared to an NET variant currently used in QEMU, a state-of-the-art retargetable DBT, EEG achieves significant performance improvements of up to 72% (27% on average) for IA32 and up to 49% (23% on average) for ARM.
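The merging step can be pictured as a profile-driven pass: find side exits taken too often, then pull their target traces into the region. The sketch below is an illustrative toy under assumed data structures and threshold, not the paper's EEG algorithm.

```python
def delinquent_exits(trace, exit_taken, block_runs, threshold=0.1):
    # Side exits of `trace` taken more than `threshold` of the time.
    return [b for b in trace
            if block_runs.get(b, 0)
            and exit_taken.get(b, 0) / block_runs[b] > threshold]

def merge_regions(traces, exit_target, exit_taken, block_runs):
    # One EEG-style pass: for every frequently taken early exit,
    # append the exit target's trace to the delinquent trace's region.
    regions = {tid: list(blocks) for tid, blocks in traces.items()}
    for tid, blocks in traces.items():
        for block in delinquent_exits(blocks, exit_taken, block_runs):
            target = exit_target.get(block)
            if target in traces:
                regions[tid] += traces[target]
    return regions

traces = {"T1": ["b1", "b2", "b3"], "T2": ["b7", "b8"]}
exit_target = {"b2": "T2"}              # b2's side exit jumps into T2
exit_taken = {"b2": 40}                 # taken 40 of b2's 100 runs
block_runs = {"b1": 100, "b2": 100, "b3": 60}
print(merge_regions(traces, exit_target, exit_taken, block_runs))
# {'T1': ['b1', 'b2', 'b3', 'b7', 'b8'], 'T2': ['b7', 'b8']}
```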
Citations: 13
Optimizing virtual machine live storage migration in heterogeneous storage environment
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451529
Ruijin Zhou, Fang Liu, Chao Li, Tao Li
Virtual machine (VM) live storage migration techniques significantly increase the mobility and manageability of virtual machines in the era of cloud computing. On the other hand, as solid state drives (SSDs) become increasingly popular in data centers, VM live storage migration will inevitably encounter heterogeneous storage environments. Nevertheless, conventional migration mechanisms do not consider the speed discrepancy between devices or the SSD's wear-out issue, which not only causes significant performance degradation but also shortens the SSD's lifetime. This paper, for the first time, addresses the efficiency of VM live storage migration in heterogeneous storage environments from a multi-dimensional perspective, i.e., user experience, device wearing, and manageability. We derive a flexible metric (migration cost), which captures various design preferences. Based on that, we propose and prototype three new storage migration strategies, namely: 1) Low Redundancy (LR), which generates the least amount of redundant writes; 2) Source-based Low Redundancy (SLR), which keeps the balance between IO performance and write redundancy; and 3) Asynchronous IO Mirroring, which seeks the highest IO performance. The evaluation of our prototyped system shows that our techniques outperform existing live storage migration by a significant margin. Furthermore, by adaptively mixing our proposed schemes, the cost of massive VM live storage migration can be even lower than that of only using the best individual mechanism.
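The flexible migration-cost metric can be read as a weighted sum over the three dimensions named above, with the weights encoding the design preference. Here is a hedged sketch with made-up placeholder profiles; the paper's actual metric and per-strategy numbers may differ.

```python
def migration_cost(io_penalty, redundant_writes, migration_time, weights):
    # Weighted sum over three dimensions: user experience (IO penalty),
    # device wearing (redundant SSD writes), and manageability (total
    # migration time). Inputs are assumed normalized to [0, 1].
    wu, ww, wm = weights
    return wu * io_penalty + ww * redundant_writes + wm * migration_time

# Placeholder profiles, NOT measured numbers: LR minimizes redundant
# writes, AIO (Asynchronous IO Mirroring) maximizes IO performance,
# SLR sits in between.
profiles = {
    "LR":  dict(io_penalty=0.40, redundant_writes=0.05, migration_time=0.90),
    "SLR": dict(io_penalty=0.25, redundant_writes=0.30, migration_time=0.70),
    "AIO": dict(io_penalty=0.10, redundant_writes=0.80, migration_time=0.60),
}

# A wear-sensitive operator weights device wearing three times higher:
best = min(profiles,
           key=lambda s: migration_cost(**profiles[s], weights=(1, 3, 1)))
print(best)   # -> "LR" under these placeholder numbers
```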
Citations: 35
Efficient live migration of virtual machines using shared storage
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451524
Changyeon Jo, E. Gustafsson, Jeongseok Son, Bernhard Egger
Live migration of virtual machines (VM) across distinct physical hosts is an important feature of virtualization technology for maintenance, load-balancing and energy reduction, especially for data center operators and cluster service providers. Several techniques have been proposed to reduce the downtime of the VM being transferred, often at the expense of the total migration time. In this work, we present a technique to reduce the total time required to migrate a running VM from one host to another while keeping the downtime to a minimum. Based on the observation that modern operating systems use the better part of the physical memory to cache data from secondary storage, our technique tracks the VM's I/O operations to the network-attached storage device and maintains an updated mapping of memory pages that currently reside in identical form on the storage device. During the iterative pre-copy live migration process, instead of transferring those pages from the source to the target host, the memory-to-disk mapping is sent to the target host, which then fetches the contents directly from the network-attached storage device. We have implemented our approach in the Xen hypervisor and ran a series of experiments with Linux HVM guests. The presented technique reduces the total transfer time by over 30% on average across a series of benchmarks.
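The key data structure is a page-to-disk mapping kept current by intercepting guest I/O; each pre-copy round then splits the dirty set into pages to send over the wire and pages the target can fetch from shared storage. A simplified sketch under assumed data layouts follows.

```python
def split_precopy_round(dirty_pages, disk_mapping):
    """Split one pre-copy round's dirty set.

    dirty_pages:  set of guest pfns dirtied since the last round.
    disk_mapping: dict pfn -> disk block, for pages known (by
                  intercepting guest I/O) to hold the same contents
                  as a block on the shared network-attached storage.
    Returns (pages to send over the network,
             pfn -> block hints the target fetches from storage).
    """
    fetch_hints = {p: disk_mapping[p] for p in dirty_pages if p in disk_mapping}
    to_send = dirty_pages - fetch_hints.keys()
    return to_send, fetch_hints

dirty = {10, 11, 12}
mapping = {11: 7042, 12: 7043}    # pages 11 and 12 mirror disk blocks
print(split_precopy_round(dirty, mapping))   # ({10}, {11: 7042, 12: 7043})
```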
Citations: 102
Leveraging phase change memory to achieve efficient virtual machine execution
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451547
Ruijin Zhou, Tao Li
Virtualization technology is being widely adopted by servers and data centers in the cloud computing era to improve resource utilization and energy efficiency. Nevertheless, the heterogeneous memory demands of multiple virtual machines (VM) make it more challenging to design efficient memory systems. Even worse, mission-critical VM management activities (e.g. checkpointing) can incur significant runtime overhead due to intensive IO operations. In this paper, we propose to leverage the adaptable and non-volatile features of the emerging phase change memory (PCM) to achieve efficient virtual machine execution. Towards this end, we exploit VM-aware PCM management mechanisms, which 1) smartly tune SLC/MLC page allocation within a single VM and across different VMs and 2) keep critical checkpointing pages in PCM to reduce I/O traffic. Experimental results show that our single-VM design (IntraVM) improves performance by 10% and 20% compared to pure SLC- and MLC-based systems, respectively. Further incorporating VM-aware resource management schemes (IntraVM+InterVM) increases system performance by 15%. In addition, our design saves 46% of checkpoint/restore duration and reduces the overall IO penalty to the system by 50%.
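A natural reading of the SLC/MLC tuning is hotness-ranked placement under a fixed SLC budget: frequently written pages go to fast SLC mode, the rest to dense MLC mode. The sketch below is an assumption-laden illustration, not the paper's mechanism; `write_freq` stands in for the VMM's write tracking.

```python
def assign_pcm_modes(pages, write_freq, slc_budget):
    # Rank pages by write frequency and place the hottest `slc_budget`
    # pages in fast SLC mode, the rest in dense MLC mode.
    ranked = sorted(pages, key=lambda p: write_freq.get(p, 0), reverse=True)
    return {p: ("SLC" if rank < slc_budget else "MLC")
            for rank, p in enumerate(ranked)}

freq = {"p0": 900, "p1": 3, "p2": 120, "p3": 0}   # hypothetical counters
print(assign_pcm_modes(freq.keys(), freq, slc_budget=2))
# {'p0': 'SLC', 'p2': 'SLC', 'p1': 'MLC', 'p3': 'MLC'}
```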
Citations: 19
A lightweight VMM on many core for high performance computing
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451535
Yue-hua Dai, Yong Qi, Jianbao Ren, Yi Shi, Xiaoguang Wang, Xuan Yu
A traditional Virtual Machine Monitor (VMM) virtualizes some devices and instructions, which imposes performance overhead on guest operating systems. Furthermore, virtualization contributes a large amount of code to the VMM, which makes a VMM prone to bugs and vulnerabilities. On the other hand, in cloud computing, the cloud service provider configures virtual machines based on requirements specified by customers in advance. As the resources in a multi-core server grow more than adequate, virtualization is not strictly necessary even though it provides convenience for cloud computing. Based on these observations, this paper presents an alternative way of constructing a VMM: configuring a booting interface instead of using virtualization technology. A lightweight virtual machine monitor, OSV, is proposed based on this idea. OSV can host multiple fully functional Linux kernels with little performance overhead. There are only 6 hyper-calls in OSV. Linux running on top of OSV is intercepted only for inter-processor interrupts. Resource isolation is implemented with hardware-assisted virtualization, and resource sharing is controlled by distributed protocols embedded in the current operating systems. We implemented a prototype of OSV on AMD Opteron processor based 32-core servers with SVM and cache-coherent NUMA architectures. OSV can host up to 8 Linux kernels on the server with fewer than 10 lines of code modifications to the Linux kernel. OSV has about 8000 lines of code, which can be easily tuned and debugged. The experimental results show that the OSV VMM has a 23.7% performance improvement compared with the Xen VMM.
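The abstract specifies only that OSV exposes six hyper-calls and intercepts little besides inter-processor interrupts; the concrete interface is not given. The dispatch table below therefore uses invented call numbers and handlers purely to illustrate how narrow such an interface can be.

```python
# Invented hyper-call numbers and handlers -- NOT the real OSV interface.
def hc_boot(vm):      vm["booted"] = True; return 0      # boot a kernel
def hc_cpus(vm):      return vm["cpus"]                  # CPUs owned by kernel
def hc_mem(vm):       return vm["mem_range"]             # physical memory slice
def hc_send_ipi(vm, target, vector):                     # the one hot path
    return ("deliver-ipi", target, vector)
def hc_peers(vm):     return vm["peers"]                 # co-hosted kernels
def hc_shutdown(vm):  vm["booted"] = False; return 0     # tear down

HYPERCALLS = {0: hc_boot, 1: hc_cpus, 2: hc_mem,
              3: hc_send_ipi, 4: hc_peers, 5: hc_shutdown}

def hypercall(vm, number, *args, **kwargs):
    # The entire VMM/guest interface: anything not in this table runs
    # natively under hardware-assisted isolation.
    return HYPERCALLS[number](vm, *args, **kwargs)

vm = {"cpus": [4, 5, 6, 7], "mem_range": (1 << 32, 2 << 32), "peers": ["k0"]}
print(hypercall(vm, 3, target=5, vector=0xFD))
```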
Citations: 12
Traveling forward in time to newer operating systems using ShadowReboot
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451536
H. Yamada, K. Kono
Operating system (OS) reboots are an essential part of updating kernels and applications on laptops and desktop PCs. Long downtime during OS reboots severely disrupts users' computational activities, and this disruption discourages users from rebooting, which in turn delays software updates. This paper presents ShadowReboot, a virtual machine monitor (VMM)-based approach that shortens the downtime of OS reboots during software updates. ShadowReboot conceals OS reboot activities from the user's applications by spawning a VM dedicated to the OS reboot and systematically producing the rebooted state in which the updated kernel and applications are ready for use. ShadowReboot gives users the illusion that the guest OS travels forward in time to the rebooted state. ShadowReboot offers the following advantages. It can be used to apply kernel patches and even system configuration updates. It does not require any special patch embedding detailed knowledge about the target kernels. Lastly, it does not require any modification of the target kernel. We implemented a prototype in VirtualBox 4.0.10 OSE. Our experimental results show that ShadowReboot successfully updated software on unmodified commodity OS kernels and shortened the downtime of commodity OS reboots on five Linux distributions (Fedora, Ubuntu, Gentoo, CentOS, and SUSE) by 91 to 98%.
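The control flow can be summarized as: snapshot the session, reboot the snapshot off to the side, then switch, so perceived downtime shrinks to the switch itself. A sequential toy sketch under an assumed state layout (the real system runs the shadow reboot concurrently with the user's session):

```python
import copy

def shadow_reboot(session, reboot):
    """ShadowReboot flow in miniature.

    session: dict holding the user-visible VM state (assumed layout).
    reboot:  function from VM state to post-reboot state, standing in
             for a full guest reboot with updates applied.
    """
    shadow = copy.deepcopy(session)   # fork a dedicated shadow VM
    rebooted = reboot(shadow)         # long-running, hidden from the user
    session.clear()                   # the only user-visible downtime
    session.update(rebooted)          # ...is this state switch
    return session

def apply_updates(state):
    state["kernel"] = "3.9-updated"
    state["uptime"] = 0
    return state

vm = {"kernel": "3.8", "uptime": 86400, "apps": ["editor", "browser"]}
print(shadow_reboot(vm, apply_updates))
```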
Citations: 10
Performance potential of optimization phase selection during dynamic JIT compilation
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451539
Michael R. Jantz, P. Kulkarni
Phase selection is the process of customizing the applied set of compiler optimization phases for individual functions or programs to improve performance of generated code. Researchers have recently developed novel feature-vector based heuristic techniques to perform phase selection during online JIT compilation. While these heuristics improve program startup speed, steady-state performance was not seen to benefit over the default fixed single sequence baseline. Unfortunately, it is still not conclusively known whether this lack of steady-state performance gain is due to a failure of existing online phase selection heuristics, or because there is, indeed, little or no speedup to be gained by phase selection in online JIT environments. The goal of this work is to resolve this question, while examining the phase selection related behavior of optimizations, and assessing and improving the effectiveness of existing heuristic solutions. We conduct experiments to find and understand the potency of the factors that can cause the phase selection problem in JIT compilers. Next, using long-running genetic algorithms we determine that program-wide and method-specific phase selection in the HotSpot JIT compiler can produce ideal steady-state performance gains of up to 15% (4.3% average) and 44% (6.2% average) respectively. We also find that existing state-of-the-art heuristic solutions are unable to realize these performance gains (in our experimental setup), discuss possible causes, and show that exploiting knowledge of optimization phase behavior can help improve such heuristic solutions. Our work develops a robust open-source production-quality framework using the HotSpot JVM to further explore this problem in the future.
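The long-running search the authors describe is a genetic algorithm over phase sequences. The skeleton below uses a placeholder fitness function and invented phase names; in a real setup `evaluate` would JIT-compile the method with the sequence applied and time the generated code.

```python
import random

PHASES = ["inline", "gvn", "licm", "dce", "unroll", "peephole"]

def evaluate(seq):
    # Placeholder fitness (lower is better): a deterministic pseudo-
    # score per sequence, standing in for compile-and-time measurement.
    return random.Random(" ".join(seq)).random()

def genetic_phase_search(seq_len=8, pop=20, generations=50):
    rng = random.Random(42)
    population = [[rng.choice(PHASES) for _ in range(seq_len)]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=evaluate)          # truncation selection
        survivors = population[: pop // 2]
        while len(survivors) < pop:
            a, b = rng.sample(survivors[: pop // 2], 2)
            cut = rng.randrange(1, seq_len)
            child = a[:cut] + b[cut:]          # single-point crossover
            if rng.random() < 0.1:             # point mutation
                child[rng.randrange(seq_len)] = rng.choice(PHASES)
            survivors.append(child)
        population = survivors
    return min(population, key=evaluate)

print(genetic_phase_search())
```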
Citations: 27
A framework for application guidance in virtual memory systems
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451543
Michael R. Jantz, Carl Strickland, Karthik Kumar, Martin Dimitrov, K. Doshi
This paper proposes a collaborative approach in which applications can provide guidance to the operating system regarding allocation and recycling of physical memory. The operating system incorporates this guidance to decide which physical page should be used to back a particular virtual page. The key intuition behind this approach is that application software, as a generator of memory accesses, is best equipped to inform the operating system about the relative access rates and overlapping patterns of usage of its own address space. It is also capable of steering its own algorithms in order to keep its dynamic memory footprint under check when there is a need to reduce power or to contain the spillover effects from bursts in demand. Application software, working cooperatively with the operating system, can therefore help the latter schedule memory more effectively and efficiently than when the operating system is forced to act alone without such guidance. It is particularly difficult to achieve power efficiency without application guidance since power expended in memory is a function not merely of the intensity with which memory is accessed in time but also how many physical ranks are affected by an application's memory usage. Our framework introduces an abstraction called "colors" for the application to communicate its intent to the operating system. We modify the operating system to receive this communication in an efficient way, and to organize physical memory pages into intermediate level grouping structures called "trays" which capture the physically independent access channels and self-refresh domains, so that it can apply this guidance without entangling the application in lower level details of power or bandwidth management. This paper describes how we re-architect the memory management of a recent Linux kernel to realize a three way collaboration between hardware, supervisory software, and application tasks.
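Under this design, a color names an intent and the kernel resolves it to the set of trays it may draw frames from. A small sketch with hypothetical color and tray names (not the paper's kernel interface):

```python
def allocate_frame(color, trays, policy):
    """Back a virtual page according to its application-supplied color.

    trays:  dict tray -> list of free physical frames; a tray groups
            frames that share one rank / self-refresh domain.
    policy: dict color -> ordered list of trays it may draw from.
    """
    for tray in policy[color]:
        if trays[tray]:
            return trays[tray].pop()   # frame from a preferred tray
    raise MemoryError(f"no free frames for color {color!r}")

# Steering 'cold' allocations away from rank0 keeps rank0's footprint
# concentrated, so unused ranks can stay in self-refresh.
trays = {"rank0": [0x1000, 0x1040], "rank1": [0x2000]}
policy = {"hot": ["rank0"], "cold": ["rank1", "rank0"]}
print(hex(allocate_frame("cold", trays, policy)))   # -> 0x2000
```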
Citations: 28
A modular approach to on-stack replacement in LLVM
Pub Date: 2013-03-16 | DOI: 10.1145/2451512.2451541
Nurudeen Lameed, L. Hendren
On-stack replacement (OSR) is a technique that allows a virtual machine to interrupt running code during the execution of a function/method, to re-optimize the function on-the-fly using an optimizing JIT compiler, and then to resume the interrupted function at the point and state at which it was interrupted. OSR is particularly useful for programs with potentially long-running loops, as it allows dynamic optimization of those loops as soon as they become hot. This paper presents a modular approach to implementing OSR for the LLVM compiler infrastructure. This is an important step forward because LLVM is gaining popular support, and adding the OSR capability allows compiler developers to develop new dynamic techniques. In particular, it will enable more sophisticated LLVM-based JIT compiler approaches. Indeed, other compiler/VM developers can use our approach because it is a clean modular addition to the standard LLVM distribution. Further, our approach is defined completely at the LLVM-IR level and thus does not require any modifications to the target code generation. The OSR implementation can be used by different compilers to support a variety of dynamic optimizations. As a demonstration of our OSR approach, we have used it to support dynamic inlining in McVM. McVM is a virtual machine for MATLAB which uses a LLVM-based JIT compiler. MATLAB is a popular dynamic language for scientific and engineering applications that typically manipulate large matrices and often contain long-running loops, and is thus an ideal target for dynamic JIT compilation and OSRs. Using our McVM example, we demonstrate reasonable overheads for our benchmark set, and performance improvements when using it to perform dynamic inlining.
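Conceptually, OSR places a check at a loop header; once the loop is hot, the live state is captured and execution resumes in optimized code from the same iteration. A language-neutral illustration in Python (not McVM/LLVM code; the closed-form "optimized" continuation is a stand-in for JIT output):

```python
def run_with_osr(n, osr_threshold=1_000):
    # Interpreted loop with an OSR check at the loop header. When the
    # loop becomes hot, capture the live state (i, total) and resume
    # in "optimized" code from the exact interruption point.
    def optimized_continuation(i, total):
        # Stand-in for JIT-compiled code: finishes the loop in one shot.
        return total + sum(range(i, n))

    total = 0
    for i in range(n):
        if i == osr_threshold:                       # OSR point
            return optimized_continuation(i, total)  # state transfer
        total += i                                   # slow interpreted body
    return total

assert run_with_osr(10_000) == sum(range(10_000))
print(run_with_osr(10_000))
```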
Citations: 19