
Proceedings of the -- USENIX Symposium on Operating Systems Design and Implementation (OSDI): Latest Publications

Towards higher disk head utilization: extracting free bandwidth from busy disk drives
Christopher R. Lumb, J. Schindler, G. Ganger, D. Nagle, E. Riedel
Freeblock scheduling is a new approach to utilizing more of a disk's potential media bandwidth. By filling rotational latency periods with useful media transfers, 20-50% of a never-idle disk's bandwidth can often be provided to background applications with no effect on foreground response times. This paper describes freeblock scheduling and demonstrates its value with simulation studies of two concrete applications: segment cleaning and data mining. Free segment cleaning often allows an LFS file system to maintain its ideal write performance when cleaning overheads would otherwise reduce performance by up to a factor of three. Free data mining can achieve over 47 full disk scans per day on an active transaction processing system, with no effect on its disk performance.
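The opportunity the abstract describes can be made concrete with a little arithmetic: while the head waits out rotational latency for a foreground request, it passes over sectors it could transfer for free. A toy model (illustrative numbers, not figures from the paper):

```python
# Toy model of the freeblock-scheduling opportunity: the fraction of each
# foreground request spent in rotational latency is potential background
# media bandwidth. Numbers below are assumptions for illustration.

def free_bandwidth(media_rate_mb_s, avg_rotational_latency_ms,
                   avg_service_time_ms):
    """Convert the rotational-latency fraction of a foreground request
    into usable background media bandwidth."""
    free_fraction = avg_rotational_latency_ms / avg_service_time_ms
    return media_rate_mb_s * free_fraction

# A 10,000 RPM disk rotates in 6 ms, so average rotational latency is
# ~3 ms; assume a 12 ms average foreground service time.
bg = free_bandwidth(media_rate_mb_s=25.0,
                    avg_rotational_latency_ms=3.0,
                    avg_service_time_ms=12.0)
print(f"potential free bandwidth: {bg:.2f} MB/s")  # 6.25 MB/s, i.e. 25%
```

The 25% free fraction in this made-up configuration sits at the low end of the 20-50% range the paper reports.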
DOI: 10.21236/ada382318 · pp. 87-102 · Published 2000-10-22
Citations: 172
Resource containers: a new facility for resource management in server systems
G. Banga, P. Druschel, J. Mogul
General-purpose operating systems provide inadequate support for resource management in large-scale servers. Applications lack sufficient control over scheduling and management of machine resources, which makes it difficult to enforce priority policies, and to provide robust and controlled service. There is a fundamental mismatch between the original design assumptions underlying the resource management mechanisms of current general-purpose operating systems, and the behavior of modern server applications. In particular, the operating system's notions of protection domain and resource principal coincide in the process abstraction. This coincidence prevents a process that manages large numbers of network connections, for example, from properly allocating system resources among those connections. We propose and evaluate a new operating system abstraction called a resource container, which separates the notion of a protection domain from that of a resource principal. Resource containers enable fine-grained resource management in server systems and allow the development of robust servers, with simple and firm control over priority policies.
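The core idea, separating the resource principal from the protection domain, can be sketched in a few lines. The API below is hypothetical, invented to illustrate the abstraction: threads explicitly bind to a container, and all work is charged to that container rather than to the enclosing process.

```python
# Sketch of the resource-container abstraction (hypothetical API): work
# is charged to an explicitly bound container, so one server process can
# account for many connections with different priorities.

class ResourceContainer:
    def __init__(self, name, cpu_share):
        self.name = name
        self.cpu_share = cpu_share   # scheduling weight for this principal
        self.cpu_used = 0.0          # accumulated CPU charge (ms)

    def charge(self, cpu_ms):
        self.cpu_used += cpu_ms

class Worker:
    def __init__(self):
        self.bound = None

    def bind(self, container):       # rebinding is explicit and cheap
        self.bound = container

    def run(self, cpu_ms):
        self.bound.charge(cpu_ms)    # charge the principal, not the process

premium = ResourceContainer("premium", cpu_share=0.8)
best_effort = ResourceContainer("best-effort", cpu_share=0.2)
worker = Worker()
worker.bind(premium);     worker.run(10)  # work for a premium connection
worker.bind(best_effort); worker.run(10)  # same thread, different principal
print(premium.cpu_used, best_effort.cpu_used)  # 10.0 10.0
```

In a process-as-principal system both charges would land on the single server process, which is exactly the mismatch the paper identifies.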
DOI: 10.1145/296806.296810 · pp. 45-58 · Published 1999-02-22
Citations: 817
Fine-grained dynamic instrumentation of commodity operating system kernels
Ariel Tamches, B. Miller
We have developed a technology, fine-grained dynamic instrumentation of commodity kernels, which can splice (insert) dynamically generated code before almost any machine code instruction of a completely unmodified running commodity operating system kernel. This technology is well-suited to performance profiling, debugging, code coverage, security auditing, runtime code optimizations, and kernel extensions. We have designed and implemented a tool called KernInst that performs dynamic instrumentation on a stock production Solaris kernel running on an UltraSPARC. On top of KernInst, we have implemented a kernel performance profiling tool, and used it to understand kernel and application performance under a Web proxy server workload. We used this information to make two changes (one to the kernel, one to the proxy) that cumulatively reduce the percentage of elapsed time that the proxy spends opening disk cache files from 40% to 7%.
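The splicing mechanism can be illustrated on a list of "instructions" rather than real machine code: the instruction at the splice point is relocated into a patch that first runs the instrumentation, then the original instruction, then falls through (the equivalent of jumping back). This is a toy stand-in for the binary-level technique, not KernInst's actual implementation.

```python
# Toy illustration of code splicing: overwrite one instruction with a
# branch to a patch containing [instrumentation, relocated instruction].

def splice(code, index, instrumentation):
    patch = [instrumentation, code[index]]   # hook + relocated instruction
    code[index] = ("jump_to_patch", patch)   # overwrite with a "branch"

def execute(code, trace):
    for insn in code:
        if isinstance(insn, tuple) and insn[0] == "jump_to_patch":
            for step in insn[1]:             # run patch, then fall through
                step(trace)
        else:
            insn(trace)

kernel_func = [lambda t: t.append("A"), lambda t: t.append("B")]
splice(kernel_func, 1, lambda t: t.append("count!"))  # instrument insn B
trace = []
execute(kernel_func, trace)
print(trace)  # ['A', 'count!', 'B']
```

The key property, visible even in this toy, is that the original code path is preserved: the relocated instruction still executes, just after the instrumentation hook.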
DOI: 10.1145/296806.296817 · pp. 117-130 · Published 1999-02-22
Citations: 190
Automatic I/O hint generation through speculative execution
Fay W. Chang, Garth A. Gibson
Aggressive prefetching is an effective technique for reducing the execution times of disk-bound applications; that is, applications that manipulate data too large or too infrequently used to be found in file or disk caches. While automatic prefetching approaches based on static analysis or historical access patterns are effective for some workloads, they are not as effective as manually-driven (programmer-inserted) prefetching for applications with irregular or input-dependent access patterns. In this paper, we propose to exploit whatever processor cycles are left idle while an application is stalled on I/O by using these cycles to dynamically analyze the application and predict its future I/O accesses. Our approach is to speculatively pre-execute the application’s code in order to discover and issue hints for its future read accesses. Coupled with an aggressive hint-driven prefetching system, this automatic approach could be applied to arbitrary applications, and should be particularly effective for those with irregular and, up to a point, input-dependent access patterns.
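A minimal sketch of the idea, with hypothetical interfaces: during an I/O stall, a speculative copy of the application runs ahead, and the reads it issues are recorded as prefetch hints. The pointer-chase workload below is exactly the input-dependent access pattern that static prefetchers handle poorly.

```python
# Sketch of hint generation by speculative pre-execution: run the
# application's own code ahead of time, recording the blocks it reads.
# The read interface and disk layout here are assumptions for illustration.

def app(read):
    """An app with an input-dependent access pattern: each block read
    names the next block to read (a pointer chase on disk)."""
    block = 0
    for _ in range(4):
        block = read(block)

disk = {0: 7, 7: 3, 3: 9, 9: 0}   # block -> next block in the chain

hints = []
def speculative_read(block):
    hints.append(block)            # record the access as a prefetch hint
    return disk[block]             # speculation must stay side-effect free

app(speculative_read)              # pre-execute while stalled on I/O
print("prefetch:", hints)          # prefetch: [0, 7, 3, 9]
```

The hints are correct here because the speculative run followed the real control flow; the paper's system must additionally cope with speculation diverging from the actual execution.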
DOI: 10.1145/296806.296807 · pp. 1-14 · Published 1999-02-22
Citations: 202
The Coign automatic distributed partitioning system
G. Hunt, M. Scott
Although successive generations of middleware (such as RPC, CORBA, and DCOM) have made it easier to connect distributed programs, the process of distributed application decomposition has changed little: programmers manually divide applications into sub-programs and manually assign those sub-programs to machines. Often the techniques used to choose a distribution are ad hoc and create one-time solutions biased to a specific combination of users, machines, and networks. We assert that system software, not the programmer, should manage the task of distributed decomposition. To validate our assertion we present Coign, an automatic distributed partitioning system that significantly eases the development of distributed applications. Given an application (in binary form) built from distributable COM components, Coign constructs a graph model of the application's inter-component communication through scenario-based profiling. Later, Coign applies a graph-cutting algorithm to partition the application across a network and minimize execution delay due to network communication. Using Coign, even an end user (without access to source code) can transform a non-distributed application into an optimized, distributed application. Coign has automatically distributed binaries from over 2 million lines of application code, including Microsoft's PhotoDraw 2000 image processor. To our knowledge, Coign is the first system to automatically partition and distribute binary applications.
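The partitioning step can be shown on a tiny example. The brute-force search below is a stand-in for Coign's graph-cutting algorithm, and the component graph is invented: edge weights are bytes exchanged, some components are pinned to a machine (e.g. the UI to the client, storage to the server), and the goal is the two-way placement that minimizes cross-network traffic.

```python
# Toy version of automatic distributed partitioning: choose client/server
# placements for unpinned components to minimize the weight of edges
# crossing the network. Data and component names are illustrative.
from itertools import product

comms = {("ui", "logic"): 50, ("logic", "store"): 5, ("ui", "render"): 80}
pinned = {"ui": "client", "store": "server"}
free = ["logic", "render"]

def cut_cost(placement):
    """Total communication that crosses the client/server boundary."""
    return sum(w for (a, b), w in comms.items()
               if placement[a] != placement[b])

best = min((dict(pinned, **dict(zip(free, sides)))
            for sides in product(["client", "server"], repeat=len(free))),
           key=cut_cost)
print(best, cut_cost(best))  # logic and render stay on the client; cost 5
```

With chatty edges to the UI, both free components land on the client and only the low-traffic logic-to-store edge crosses the network. Real applications need the polynomial-time min-cut algorithms the paper uses rather than this exponential enumeration.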
DOI: 10.1145/296806.296826 · pp. 187-200 · Published 1999-02-22
Citations: 293
MultiView and Millipage — fine-grain sharing in page-based DSMs
Ayal Itzkovitz, A. Schuster
In this paper we develop a novel technique, called MULTIVIEW, which enables implementation of page-based fine-grain DSMs. We show how the traditional techniques for implementing page-based DSMs can be extended to control the sharing granularity in a flexible way, even when the size of the sharing unit varies, and is smaller than the operating system's page size. The run-time overhead imposed in the proposed technique is negligible. We present a DSM system, called MILLIPAGE, which builds upon MULTIVIEW in order to support sharing in variable-size units. MILLIPAGE efficiently implements Sequential Consistency and shows comparable (sometimes superior) performance to related systems which use relaxed consistency models. It uses standard user-level operating system API and requires no compiler intervention, page twinning, diffs, code instrumentation, or sophisticated protocols. The resulting system is a thin software layer consisting mainly of a simple, clean protocol that handles page-faults.
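The MULTIVIEW trick can be mimicked in miniature: several "views" alias the same physical page, each covering one minipage with its own protection, so access faults are taken at minipage rather than page granularity. The classes below are a toy rendition of the concept, not the paper's virtual-memory implementation.

```python
# Toy rendition of MULTIVIEW: multiple views alias one physical page,
# each with its own protection, giving sub-page fault granularity.

PAGE = bytearray(8)                        # one shared "physical" page

class View:
    """A mapping of PAGE covering one minipage with its own protection."""
    def __init__(self, start, size, prot):
        self.start, self.size, self.prot = start, size, prot

    def write(self, offset, value):
        if "w" not in self.prot:
            raise PermissionError("fault: minipage is read-only")
        PAGE[self.start + offset] = value  # all views see the update

v1 = View(0, 4, prot="rw")                 # minipage 1: writable
v2 = View(4, 4, prot="r")                  # minipage 2: read-only
v1.write(0, 42)                            # succeeds, no fault
try:
    v2.write(0, 7)                         # faults at minipage granularity
except PermissionError as e:
    print(e)                               # fault: minipage is read-only
print(PAGE[0])                             # 42
```

In the real system the aliasing is done with multiple virtual mappings of the same physical frame, so the per-minipage protections are enforced by ordinary page-protection hardware at no extra run-time cost.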
DOI: 10.1145/296806.296830 · pp. 215-228 · Published 1999-02-22
Citations: 85
A feedback-driven proportion allocator for real-rate scheduling
D. Steere, Ashvin Goel, J. Gruenberg, D. McNamee, C. Pu, J. Walpole
In this paper we propose changing the decades-old practice of allocating CPU to threads based on priority to a scheme based on proportion and period. Our scheme allocates to each thread a percentage of CPU cycles over a period of time, and uses a feedback-based adaptive scheduler to assign automatically both proportion and period. Applications with known requirements, such as isochronous software devices, can bypass the adaptive scheduler by specifying their desired proportion and/or period. As a result, our scheme provides reservations to applications that need them, and the benefits of proportion and period to applications that do not need reservations. Adaptive scheduling using proportion and period has several distinct benefits over either fixed or adaptive priority based schemes: finer grain control of allocation, lower variance in the amount of cycles allocated to a thread, and avoidance of accidental priority inversion and starvation, including defense against denial-of-service attacks. This paper describes our design of an adaptive controller and proportion-period scheduler, its implementation in Linux, and presents experimental validation of our approach.
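The feedback loop can be sketched as a simple proportional controller: the scheduler monitors how far a thread is falling behind its real rate (here, via the fill level of its input queue) and nudges its CPU proportion toward the setpoint. The gain, setpoint, and progress metric below are made-up illustrations, not the paper's controller design.

```python
# Minimal sketch of feedback-driven proportion allocation: adjust a
# thread's CPU share based on queue fill level (0..1). Gains and the
# progress metric are assumptions for illustration.

def adjust(proportion, fill_level, setpoint=0.5, gain=0.4):
    """Above the setpoint the consumer is falling behind -> more CPU;
    below it the thread is over-provisioned -> less CPU."""
    error = fill_level - setpoint
    return min(1.0, max(0.0, proportion + gain * error))

p = 0.10
for fill in [0.9, 0.9, 0.6, 0.5]:   # queue drains as the proportion grows
    p = adjust(p, fill)
print(round(p, 2))                   # converges near 0.46
```

Once the fill level sits at the setpoint the error is zero and the proportion stops moving, which is the steady state the allocator seeks for real-rate threads.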
DOI: 10.1145/296806.296820 · pp. 145-158 · Published 1999-02-22
Citations: 333
Self-paging in the Nemesis operating system
S. Hand
In contemporary operating systems, continuous media (CM) applications are sensitive to the behaviour of other tasks in the system. This is due to contention in the kernel (or in servers) between these applications. To properly support CM tasks, we require Quality of Service Firewalling between different applications. This paper presents a memory management system supporting Quality of Service (QoS) within the Nemesis operating system. It combines application-level paging techniques with isolation, exposure and responsibility in a manner we call self-paging. This enables rich virtual memory usage alongside (or even within) continuous media applications.
DOI: 10.1145/296806.296812 · pp. 73-86 · Published 1999-02-22
Citations: 156
Practical Byzantine fault tolerance
M. Castro
This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine-fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3% slower than a standard unreplicated NFS.
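The replica-count arithmetic behind protocols of this kind is worth spelling out: tolerating f Byzantine faults requires n = 3f + 1 replicas, and each protocol step waits for a quorum of 2f + 1 matching replies, because any two such quorums intersect in at least f + 1 replicas and therefore share at least one correct replica.

```python
# Quorum sizing for Byzantine fault tolerance: n = 3f + 1 replicas,
# quorums of 2f + 1.

def bft_sizes(f):
    n = 3 * f + 1        # total replicas needed to tolerate f faults
    quorum = 2 * f + 1   # replies to wait for at each protocol step
    return n, quorum

for f in (1, 2, 3):
    n, q = bft_sizes(f)
    # two quorums overlap in at least 2q - n = f + 1 replicas, so even
    # with f faulty replicas in the overlap, one correct replica remains
    assert 2 * q - n == f + 1
    print(f"f={f}: n={n}, quorum={q}")
```

So the smallest useful configuration is f = 1: four replicas, with every step waiting for three matching replies.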
DOI: 10.1145/296806.296824 · pp. 173-186 · Published 1999-02-22
Citations: 3925
Tapeworm: high-level abstractions of shared accesses
P. Keleher
We describe the design and use of the tape mechanism, a new high-level abstraction of accesses to shared data for software DSMs. Tapes can be used to record shared accesses. These recordings can be used to predict future accesses. Tapes can be used to tailor data movement to application semantics. These data movement policies are layered on top of existing shared memory protocols. We have used tapes to create the Tapeworm prefetching library. Tapeworm implements sophisticated record/replay mechanisms across barriers, augments locks with data movement semantics and allows the use of producer-consumer segments, which move entire modified segments when any portion of the segment is accessed. We show that Tapeworm eliminates 85% of remote misses, reduces message traffic by 63%, and improves performance by an average of 29% for our application suite.
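The record/replay idea can be sketched with a hypothetical tape object: each barrier interval records which shared pages it touches, and on the next iteration of the loop the recording is replayed as a bulk prefetch before the accesses would fault in one at a time.

```python
# Sketch of the tape record/replay mechanism (hypothetical API): record
# shared-page accesses per barrier interval, replay them as prefetches
# when the same interval comes around again.

class Tape:
    def __init__(self):
        self.recordings = {}    # barrier id -> pages touched after it
        self.current = None

    def at_barrier(self, barrier_id):
        replay = self.recordings.get(barrier_id, set())
        self.current = self.recordings.setdefault(barrier_id, set())
        return replay           # pages to prefetch in bulk

    def record(self, page):
        self.current.add(page)

tape = Tape()
for _ in range(2):              # two iterations of a barrier-based loop
    prefetch = tape.at_barrier(0)
    for page in (4, 5, 9):      # the interval's shared accesses
        tape.record(page)
print(sorted(prefetch))         # second iteration prefetches [4, 5, 9]
```

This captures only the recording half; the paper's contribution is layering such recordings under higher-level policies like barrier replay and producer-consumer segments.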
DOI: 10.1145/296806.296828 · pp. 201-214 · Published 1999-02-22
Citations: 18
Journal
Proceedings of the -- USENIX Symposium on Operating Systems Design and Implementation (OSDI). USENIX Symposium on Operating Systems Design and Implementation