
Proceedings of the Eleventh European Conference on Computer Systems: Latest Publications

From application requests to virtual IOPs: provisioned key-value storage with Libra
Pub Date : 2014-04-14 DOI: 10.1145/2592798.2592823
David Shue, M. Freedman
Achieving predictable performance in shared cloud storage services is hard. Tenants want reservations in terms of system-wide application-level throughput, but the provider must ultimately deal with low-level IO resources at each storage node where contention arises. Such a guarantee has thus proven elusive, due to the complexities inherent to modern storage stacks: non-uniform IO amplification, unpredictable IO interference, and non-linear IO performance. This paper presents Libra, a local IO scheduling framework designed for a shared SSD-backed key-value storage system. Libra guarantees per-tenant application-request throughput while achieving high utilization. To accomplish this, Libra leverages two techniques. First, Libra tracks the IO resource consumption of a tenant's application-level requests across complex storage stack interactions, down to low-level IO operations. This allows Libra to allocate per-tenant IO resources for achieving app-request reservations based on their dynamic IO usage profile. Second, Libra uses a disk-IO cost model based on virtual IO operations (VOP) that captures the non-linear relationship between SSD IO bandwidth and IO operation (IOP) throughput. Using VOPs, Libra can both account for the true cost of an IOP and determine the amount of provisionable IO resources available under IO interference. An evaluation shows that Libra, when applied to a LevelDB-based prototype with SSD-backed storage, satisfies tenant app-request reservations and achieves accurate low-level VOP allocations over a range of workloads, while still supporting high utilization.
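The abstract's central idea, a virtual-IO-operation (VOP) cost model that captures the non-linear relationship between IO size and device cost, can be sketched as below. The constants and the piecewise-linear shape are hypothetical illustrations, not Libra's calibrated model, and the function names are invented:

```python
# Toy virtual-IOP (VOP) cost model illustrating the idea in the abstract.
# All constants are assumed for illustration: each IO pays a fixed per-op
# cost plus an incremental cost per extra 4 KiB, so a large IO costs more
# than a single "IOP" but less than a linear byte count would suggest.

BASE_IO_BYTES = 4096      # size covered by the base per-op cost (assumed)
VOP_PER_OP = 1.0          # fixed cost of issuing any IO operation (assumed)
VOP_PER_EXTRA_4K = 0.25   # incremental cost per additional 4 KiB (assumed)

def vop_cost(io_bytes: int) -> float:
    """Virtual-IOP cost of a single low-level IO of `io_bytes` bytes."""
    extra = max(0, io_bytes - BASE_IO_BYTES)
    return VOP_PER_OP + VOP_PER_EXTRA_4K * (extra / 4096)

def provisionable_vops(device_vops: float, tenants: dict) -> dict:
    """Split a device's VOP budget across tenants in proportion to each
    tenant's reserved app-request rate times its measured VOPs/request."""
    demand = {t: rate * vops_per_req
              for t, (rate, vops_per_req) in tenants.items()}
    total = sum(demand.values())
    return {t: device_vops * d / total for t, d in demand.items()}
```

Under these assumed constants, a 4 KiB IO costs 1 VOP while a 64 KiB IO costs 4.75 VOPs, and a tenant whose requests amplify into more VOPs receives a proportionally larger share of the device budget.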
Pages: 17:1-17:14
Citations: 26
Callisto: co-scheduling parallel runtime systems
Pub Date : 2014-04-14 DOI: 10.1145/2592798.2592807
T. Harris, Martin Maas, Virendra J. Marathe
It is increasingly important for parallel applications to run together on the same machine. However, current performance is often poor: programs do not adapt well to dynamically varying numbers of cores, and the CPU time received by concurrent jobs can differ drastically. This paper introduces Callisto, a resource management layer for parallel runtime systems. We describe Callisto and the implementation of two Callisto-enabled runtime systems---one for OpenMP, and another for a task-parallel programming model. We show how Callisto eliminates almost all of the scheduler-related interference between concurrent jobs, while still allowing jobs to claim otherwise-idle cores. We use examples from two recent graph analytics projects and from SPEC OMP.
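The allocation idea described, letting each job keep its guaranteed cores while loaning otherwise-idle cores to jobs that can use them, can be sketched roughly as follows. This is a hypothetical model, not Callisto's implementation; the function and field names are invented:

```python
# Toy core-allocation model (assumed, not Callisto's code): each job first
# receives min(guaranteed share, runnable threads); cores left idle by jobs
# that cannot use them are then handed round-robin to jobs that can.

def allocate_cores(total_cores: int, jobs: dict) -> dict:
    """jobs maps name -> (guaranteed_cores, runnable_threads)."""
    alloc = {name: min(g, runnable) for name, (g, runnable) in jobs.items()}
    spare = total_cores - sum(alloc.values())
    while spare > 0:
        progressed = False
        for name, (g, runnable) in jobs.items():
            if spare > 0 and alloc[name] < runnable:
                alloc[name] += 1        # loan an otherwise-idle core
                spare -= 1
                progressed = True
        if not progressed:
            break                       # no job can use the remaining cores
    return alloc
```

For example, on 8 cores with two jobs each guaranteed 4, a job with only 2 runnable threads keeps 2 cores and the other job claims the idle ones, rather than letting them sit unused.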
Pages: 24:1-24:14
Citations: 63
Caching in video CDNs: building strong lines of defense
Pub Date : 2014-04-14 DOI: 10.1145/2592798.2592817
Kianoosh Mokhtarian, H. Jacobsen
Planet-scale video Content Delivery Networks (CDNs) deliver a significant fraction of the entire Internet traffic. Effective caching at the edge is vital for the feasibility of these CDNs, which can otherwise incur significant monetary costs and resource overloads in the Internet. We analyze the challenges and requirements for video caching on these CDNs which cannot be addressed by standard solutions. We develop multiple algorithms for caching in these CDNs: (i) An LRU-based baseline solution to address the requirements, (ii) an intelligent ingress-efficient algorithm, (iii) an offline cache aware of future requests (greedy) to estimate the maximum caching efficiency we can expect from any online algorithm, and (iv) an optimal offline cache (for limited scales). We use anonymized actual data from a large-scale, global CDN to evaluate the algorithms and draw conclusions on their suitability for different settings.
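Two of the flavors listed above, an LRU baseline and an ingress-conscious variant, can be sketched in simplified form. These are generic cache-admission techniques, not the paper's exact algorithms; the class names are invented, and the "ingress-efficient" variant shown is a standard admit-on-second-request filter that avoids churning the cache on one-hit wonders:

```python
from collections import OrderedDict

# Minimal LRU cache baseline plus an admission-filtered variant (assumed
# simplification, not the paper's algorithms). The filtered cache only
# admits an object after it has missed once before, so one-hit wonders
# never evict reusable objects.

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()
        self.hits = self.ingress = 0

    def request(self, obj):
        if obj in self.store:
            self.store.move_to_end(obj)     # refresh recency
            self.hits += 1
            return
        self.ingress += 1                   # fetch from origin/parent
        self.admit(obj)

    def admit(self, obj):
        self.store[obj] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

class IngressEfficientLRU(LRUCache):
    def __init__(self, capacity):
        super().__init__(capacity)
        self.seen = set()

    def admit(self, obj):
        if obj in self.seen:                # admit only on a repeat miss
            super().admit(obj)
        self.seen.add(obj)
```

On a small trace with one-hit wonders (`a b a c b d a b`, capacity 2), the filtered cache keeps the reusable objects `a` and `b` resident and ends up with more hits than plain LRU, illustrating why admission policy matters at CDN edges.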
Pages: 13:1-13:13
Citations: 19
TrustLite: a security architecture for tiny embedded devices
Pub Date : 2014-04-14 DOI: 10.1145/2592798.2592824
Patrick Koeberl, Steffen Schulz, A. Sadeghi, V. Varadharajan
Embedded systems are increasingly pervasive, interdependent and in many cases critical to our everyday life and safety. Tiny devices that cannot afford sophisticated hardware security mechanisms are embedded in complex control infrastructures, medical support systems and entertainment products [51]. As such devices are increasingly subject to attacks, new hardware protection mechanisms are needed to provide the required resilience and dependability at low cost. In this work, we present the TrustLite security architecture for flexible, hardware-enforced isolation of software modules. We describe mechanisms for secure exception handling and communication between protected modules, enabling seamless interoperability with untrusted operating systems and tasks. TrustLite scales from providing a simple protected firmware runtime to advanced functionality such as attestation and trusted execution of userspace tasks. Our FPGA prototype shows that these capabilities are achievable even on low-cost embedded systems.
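The hardware-enforced module isolation described can be modeled as an access-control table keyed on where the currently executing code lives: a module's private data is reachable only from that module's own code. The rule layout and addresses below are entirely hypothetical, meant only to illustrate the concept:

```python
# Toy model of hardware-enforced module isolation (assumed layout, not
# TrustLite's actual tables). Each rule grants the code region of one
# software module access to one data region, so other (possibly untrusted)
# code cannot reach a module's private state.

# (code_start, code_end, data_start, data_end) -- all addresses invented
PROTECTION_RULES = [
    (0x1000, 0x1FFF, 0x8000, 0x8FFF),   # trusted module A -> A's secrets
    (0x2000, 0x2FFF, 0x9000, 0x9FFF),   # module B -> B's own state
]
SHARED_REGIONS = [(0xA000, 0xAFFF)]     # readable by all code (assumed)

def access_allowed(pc: int, addr: int) -> bool:
    """Would the hardware permit code executing at `pc` to touch `addr`?"""
    for lo, hi in SHARED_REGIONS:
        if lo <= addr <= hi:
            return True
    for cs, ce, ds, de in PROTECTION_RULES:
        if cs <= pc <= ce and ds <= addr <= de:
            return True
    return False
```

The key point the sketch captures is that the decision depends on the program counter as well as the target address, which is what lets a tiny device isolate modules without a full MMU or hypervisor.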
Pages: 10:1-10:14
Citations: 326
Using restricted transactional memory to build a scalable in-memory database
Pub Date : 2014-04-14 DOI: 10.1145/2592798.2592815
Zhaoguo Wang, Hao Qian, Jinyang Li, Haibo Chen
The recent availability of Intel Haswell processors marks the transition of hardware transactional memory from research toys to mainstream reality. DBX is an in-memory database that uses Intel's restricted transactional memory (RTM) to achieve high performance and good scalability across multi-core machines. The main limitation (and also key to practicality) of RTM is its constrained working set size: an RTM region that reads or writes too much data will always be aborted. The design of DBX addresses this challenge in several ways. First, DBX builds a database transaction layer on top of an underlying shared-memory store. The two layers use separate RTM regions to synchronize shared memory access. Second, DBX uses optimistic concurrency control to separate transaction execution from its commit. Only the commit stage uses RTM for synchronization. As a result, the working set of the RTMs used scales with the meta-data of reads and writes in a database transaction as opposed to the amount of data read/written. Our evaluation using TPC-C workload mix shows that DBX achieves 506,817 transactions per second on a 4-core machine.
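The execution/commit split described above is classic optimistic concurrency control, and can be sketched as follows. This is an illustration of the general OCC pattern, not DBX's code; a lock stands in for the hardware RTM region, and the point is that the atomic section touches only version metadata and the write set, not all data read during execution:

```python
import threading

# OCC sketch (assumed simplification of the design described above):
# transactions read record versions freely during execution; only the short
# commit step runs atomically. A lock stands in for the RTM region here, so
# its "working set" covers just read/write metadata, not all data touched.

class Store:
    def __init__(self):
        self.data = {}       # key -> value
        self.version = {}    # key -> version number
        self._commit_region = threading.Lock()  # stand-in for hardware RTM

    def read(self, key, read_set):
        """Record the version observed, so commit can validate it later."""
        read_set[key] = self.version.get(key, 0)
        return self.data.get(key)

    def commit(self, read_set, write_set):
        with self._commit_region:            # the would-be RTM region
            for key, ver in read_set.items():
                if self.version.get(key, 0) != ver:
                    return False             # conflicting writer: abort
            for key, value in write_set.items():
                self.data[key] = value
                self.version[key] = self.version.get(key, 0) + 1
            return True
```

A transaction that read a record later overwritten by a concurrent committer fails validation and aborts, while the atomic section itself stays small enough to fit a constrained working set.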
Pages: 26:1-26:15
Citations: 101
Practical techniques to obviate setuid-to-root binaries
Pub Date : 2014-04-14 DOI: 10.1145/2592798.2592811
Bhushan Jain, Chia-che Tsai, J. John, Donald E. Porter
Trusted, setuid-to-root binaries have been a substantial, long-lived source of privilege escalation vulnerabilities on Unix systems. Prior work on limiting privilege escalation has only considered privilege from the perspective of the administrator, neglecting the perspective of regular users---the primary reason for having setuid-to-root binaries. The paper presents a study of the current state of setuid-to-root binaries on Linux, focusing on the 28 most commonly deployed setuid binaries in the Debian and Ubuntu distributions. This study reveals several points where Linux kernel policies and abstractions are a poor fit for the policies desired by the administrator, and root privilege is used to create point solutions. The majority of these point solutions address 8 system calls that require administrator privilege, but also export functionality required by unprivileged users. This paper demonstrates how least privilege can be achieved on modern systems for non-administrator users. We identify the policies currently encoded in setuid-to-root binaries, and present a framework for expressing and enforcing these policy categories in the kernel. Our prototype, called Protego, deprivileges over 10,000 lines of code by changing only 715 lines of Linux kernel code. Protego also adds additional utilities to keep the kernel policy synchronized with legacy, policy-relevant configuration files, such as /etc/sudoers. Although some previously-privileged binaries may require changes, Protego provides users with the same functionality as Linux and introduces acceptable performance overheads. For instance, a Linux kernel compile incurs less than 2% overhead on Protego.
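The framework idea, replacing a setuid binary's implicit "run everything as root" with explicit, kernel-checked rules about what unprivileged users may do, can be sketched abstractly. The rule set below is invented for illustration and does not reproduce Protego's actual policy categories:

```python
# Hypothetical sketch of encoding setuid binaries' implicit policies as
# explicit kernel-checked rules (rules invented for illustration; Protego's
# real categories differ). Each rule names a privileged operation and the
# constraint under which an unprivileged caller may perform it.

RULES = [
    # (operation, predicate over the request)
    ("bind_port",   lambda req: req["port"] >= 1024),          # anyone
    ("icmp_socket", lambda req: req["proto"] == "icmp-echo"),  # ping-like
    ("chown",       lambda req: req["uid"] == req["caller"]),  # own files
]

def check(operation: str, req: dict) -> bool:
    """Return True if an unprivileged caller may perform `operation`."""
    return any(op == operation and ok(req) for op, ok in RULES)
```

The contrast with setuid is that the binary never gains blanket root: the kernel evaluates a narrow predicate at the point of the privileged system call, so a compromised binary can do no more than the rule allows.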
Pages: 8:1-8:14
Citations: 14
Efficiently, effectively detecting mobile app bugs with AppDoctor
Pub Date : 2014-04-14 DOI: 10.1145/2592798.2592813
Gang Hu, Xinhao Yuan, Yang Tang, Junfeng Yang
Mobile apps bring unprecedented levels of convenience, yet they are often buggy, and their bugs offset the convenience the apps bring. A key reason for buggy apps is that they must handle a vast variety of system and user actions such as being randomly killed by the OS to save resources, but app developers, facing tough competition, lack time to thoroughly test these actions. AppDoctor is a system for efficiently and effectively testing apps against many system and user actions, and helping developers diagnose the resultant bug reports. It quickly screens for potential bugs using approximate execution, which runs much faster than real execution and exposes bugs but may cause false positives. From the reports, AppDoctor automatically verifies most bugs and prunes most false positives, greatly saving manual inspection effort. It uses action slicing to further speed up bug diagnosis. We implement AppDoctor in Android. It operates as a cloud of physical devices or emulators to scale up testing.
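The overall screen-then-verify loop, drive an app through sequences of system/user actions, record sequences that crash, then replay each one to prune false positives, can be sketched with a toy app model. Everything below (the action set, the `ToyApp` bug, the function names) is invented to illustrate the workflow, and the sketch omits approximate execution and action slicing entirely:

```python
import random

# Toy screen-then-verify testing loop (invented illustration of the
# workflow described above, not AppDoctor's implementation). The toy app
# has a planted bug: a click after kill_restart crashes because restart
# loses state the click handler expects.

ACTIONS = ["rotate", "pause", "resume", "kill_restart", "click"]

class ToyApp:
    def __init__(self):
        self.state_loaded = True
    def perform(self, action):
        if action == "kill_restart":
            self.state_loaded = False       # state lost on restart
        elif action == "resume":
            self.state_loaded = True        # handler restores state
        elif action == "click" and not self.state_loaded:
            raise RuntimeError("crash: state not loaded")

def run(seq):
    """Replay a sequence; return it if it crashes the app, else None."""
    app = ToyApp()
    try:
        for a in seq:
            app.perform(a)
        return None
    except RuntimeError:
        return list(seq)

def fuzz(trials, length, rng):
    """Screen random action sequences, keeping only reproducible crashes."""
    reports = []
    for _ in range(trials):
        seq = [rng.choice(ACTIONS) for _ in range(length)]
        bad = run(seq)
        if bad is not None and run(bad) is not None:   # verify by replay
            reports.append(bad)
    return reports
```

The replay step is what separates confirmed bugs from screening noise; in the real system the screening pass is additionally much cheaper than faithful execution.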
Pages: 18:1-18:15
Citations: 107
Cooperation and security isolation of library OSes for multi-process applications
Pub Date : 2014-04-14 DOI: 10.1145/2592798.2592812
Chia-che Tsai, Kumar Saurabh Arora, N. Bandi, Bhushan Jain, William Jannen, J. John, Harry A. Kalodner, Vrushali Kulkarni, Daniela Oliveira, Donald E. Porter
Library OSes are a promising approach for applications to efficiently obtain the benefits of virtual machines, including security isolation, host platform compatibility, and migration. Library OSes refactor a traditional OS kernel into an application library, avoiding overheads incurred by duplicate functionality. When compared to running a single application on an OS kernel in a VM, recent library OSes reduce the memory footprint by an order-of-magnitude. Previous library OS (libOS) research has focused on single-process applications, yet many Unix applications, such as network servers and shell scripts, span multiple processes. Key design challenges for a multi-process libOS include management of shared state and minimal expansion of the security isolation boundary. This paper presents Graphene, a library OS that seamlessly and efficiently executes both single and multi-process applications, generally with low memory and performance overheads. Graphene broadens the libOS paradigm to support secure, multi-process APIs, such as copy-on-write fork, signals, and System V IPC. Multiple libOS instances coordinate over pipe-like byte streams to implement a consistent, distributed POSIX abstraction. These coordination streams provide a simple vantage point to enforce security isolation.
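The coordination mechanism described, libOS instances implementing a cross-process POSIX abstraction by exchanging messages over pipe-like byte streams, can be sketched in miniature. The message format and names below are invented, not Graphene's protocol; a socketpair plus a thread stand in for two processes, and the example "delivers" a signal:

```python
import json
import socket
import threading

# Miniature sketch of coordinating over a pipe-like byte stream (message
# format invented, not Graphene's protocol). One "libOS instance" asks its
# peer to deliver a POSIX-style signal by sending a newline-framed message.

def send_msg(sock, msg):
    sock.sendall((json.dumps(msg) + "\n").encode())

def recv_msg(stream):
    return json.loads(stream.readline())

def libos_instance(sock, pending_signals):
    """Peer instance: consume one coordination message and act on it."""
    stream = sock.makefile("r")
    msg = recv_msg(stream)
    if msg["type"] == "signal":
        pending_signals.append(msg["signum"])   # deliver to local process

def demo():
    a, b = socket.socketpair()                  # the pipe-like byte stream
    pending = []
    peer = threading.Thread(target=libos_instance, args=(b, pending))
    peer.start()
    send_msg(a, {"type": "signal", "signum": 15})   # like kill(pid, SIGTERM)
    peer.join()
    return pending
```

Because all cross-process state flows through such streams, the host only has to mediate a narrow byte-stream interface, which is the "simple vantage point" for isolation the abstract refers to.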
Pages: 9:1-9:14
Citations: 147
A compiler-level intermediate representation based binary analysis and rewriting system
Pub Date : 2013-04-15 DOI: 10.1145/2465351.2465380
K. Anand, M. Smithson, Khaled Elwazeer, A. Kotha, Jim Gruen, Nathan Giles, R. Barua
This paper presents component techniques essential for converting executables to a high-level intermediate representation (IR) of an existing compiler. The compiler IR is then employed for three distinct applications: binary rewriting using the compiler's binary back-end, vulnerability detection using source-level symbolic execution, and source-code recovery using the compiler's C backend. Our techniques enable complex high-level transformations not possible in existing binary systems, address a major challenge of input-derived memory addresses in symbolic execution and are the first to enable recovery of a fully functional source-code. We present techniques to segment the flat address space in an executable containing undifferentiated blocks of memory. We demonstrate the inadequacy of existing variable identification methods for their promotion to symbols and present our methods for symbol promotion. We also present methods to convert the physically addressed stack in an executable (with a stack pointer) to an abstract stack (without a stack pointer). Our methods do not use symbolic, relocation, or debug information since these are usually absent in deployed executables. We have integrated our techniques with a prototype x86 binary framework called SecondWrite that uses LLVM as IR. The robustness of the framework is demonstrated by handling executables totaling more than a million lines of source-code, produced by two different compilers (gcc and Microsoft Visual Studio compiler), three languages (C, C++, and Fortran), two operating systems (Windows and Linux) and a real world program (Apache server).
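One of the steps named above, converting the physically addressed stack into an abstract stack, can be illustrated on a toy textual IR: stack-pointer-relative memory operands are promoted to named locals so the rewritten code no longer mentions the stack pointer. The IR syntax and naming scheme are invented for illustration and are far simpler than what real binaries require:

```python
import re

# Toy illustration of stack abstraction (invented IR and naming, not the
# paper's algorithm): every [esp+k] operand is rewritten into a symbolic
# local variable local_k, removing the stack pointer from the code.

SP_OPERAND = re.compile(r"\[esp\+(\d+)\]")

def abstract_stack(instructions):
    """Rewrite [esp+k] operands into named locals; return code + symbols."""
    symbols = {}
    rewritten = []
    for ins in instructions:
        def promote(match):
            off = int(match.group(1))
            symbols.setdefault(off, f"local_{off}")
            return symbols[off]
        rewritten.append(SP_OPERAND.sub(promote, ins))
    return rewritten, sorted(symbols.values())
```

In practice this step is much harder than the sketch suggests (the stack pointer moves, offsets alias, and addresses escape), which is precisely why the paper treats it as a research contribution.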
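The abstract's idea of converting a physically addressed stack into an abstract one can be illustrated with a toy sketch: every stack-pointer-relative slot is promoted to a named local symbol. This is not SecondWrite's actual algorithm (which lifts real machine code into LLVM IR and must handle aliasing, variable sizes, and input-derived addresses); the tuple-based "instructions" and the `promote_stack_slots` name below are invented purely for illustration.

```python
def promote_stack_slots(instrs):
    """Rename stack-pointer-relative slots to abstract locals.

    `instrs` is a toy instruction list of (op, esp_offset) tuples; real
    binary lifting works on machine code, not tuples like these. The
    sketch only shows the core idea: replacing physical stack offsets
    with named symbols so later passes need no stack pointer at all.
    """
    symbols = {}      # esp offset -> abstract local name
    rewritten = []
    for op, offset in instrs:
        # First time we see an offset, mint a fresh symbol for it;
        # later uses of the same offset reuse the same symbol.
        name = symbols.setdefault(offset, f"local_{len(symbols)}")
        rewritten.append((op, name))
    return rewritten, symbols

code = [("store", 8), ("store", 12), ("load", 8)]
new_code, table = promote_stack_slots(code)
# new_code == [("store", "local_0"), ("store", "local_1"), ("load", "local_0")]
```

Once every access goes through a symbol rather than an `esp` offset, the stack pointer itself can disappear from the IR, which is what makes source-level transformations possible downstream.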
Citations: 89
ChainReaction: a causal+ consistent datastore based on chain replication
Pub Date : 2013-04-15 DOI: 10.1145/2465351.2465361
Sérgio Almeida, J. Leitao, L. Rodrigues
This paper proposes a Geo-distributed key-value datastore, named ChainReaction, that offers causal+ consistency, with high performance, fault-tolerance, and scalability. ChainReaction enforces causal+ consistency which is stronger than eventual consistency by leveraging on a new variant of chain replication. We have experimentally evaluated the benefits of our approach by running the Yahoo! Cloud Serving Benchmark. Experimental results show that ChainReaction has better performance in read intensive workloads while offering competitive performance for other workloads. Also we show that our solution requires less metadata when compared with previous work.
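ChainReaction builds on classic chain replication, which the following minimal in-process sketch illustrates: writes enter at the head and propagate replica by replica, so a write is only visible at the tail once every node has applied it. The `Replica`/`make_chain` names are invented for this sketch, and it deliberately omits what ChainReaction adds (the causal+ metadata and its new chain variant) as well as networking and failure handling.

```python
class Replica:
    """One node in a replication chain (in-process toy, no networking)."""

    def __init__(self):
        self.store = {}
        self.next = None  # successor in the chain; None marks the tail

    def write(self, key, value):
        # Apply locally, then forward down the chain. In chain
        # replication a write counts as committed only once the tail
        # has applied it, so tail reads never see uncommitted data.
        self.store[key] = value
        if self.next is not None:
            self.next.write(key, value)

    def read(self, key):
        return self.store[key]


def make_chain(n):
    nodes = [Replica() for _ in range(n)]
    for pred, succ in zip(nodes, nodes[1:]):
        pred.next = succ
    return nodes


chain = make_chain(3)
head, tail = chain[0], chain[-1]
head.write("x", 1)
assert tail.read("x") == 1  # reads served at the tail see committed writes
```

Serving all reads at the tail is what gives classic chain replication its strong guarantees; ChainReaction's contribution is relaxing this so other replicas can serve reads while still preserving causal+ consistency.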
Citations: 165
Journal: Proceedings of the Eleventh European Conference on Computer Systems