{"title":"Reminiscences on SOSP history day","authors":"P. Neumann","doi":"10.1145/2830903.2847551","DOIUrl":"https://doi.org/10.1145/2830903.2847551","url":null,"abstract":"","PeriodicalId":175724,"journal":{"name":"SOSP History Day 2015","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126040974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Perspectives on OS foundations," P. Denning. SOSP History Day 2015. doi:10.1145/2830903.2830904

My question is: how and when did the key OS principles emerge? Timelines of the evolution of operating systems follow available technologies and respond to market concerns. There were four stages from the 1950s to the present: batch, interactive, distributed network, and cloud-mobile. The SOSP symposia, founded to focus on developing and validating OS principles, have involved thousands of researchers over the past fifty years. OS research has contributed a dozen great principles to all of computer science, including processes, locality, interactive computing, concurrency control, location independent naming, and virtualization. I will look more closely at the research around two principles I was involved with: locality and location independent naming. Virtual memory -- a new, alluring, but controversial technology in the 1960s -- motivated both areas. The early concerns were whether the automation of paging would perform well, and whether name-to-location mappings could be done without significant performance degradation. Performance was a major concern for virtual memory because the speed gap between a main memory access and a disk access was a factor of 10,000 or more; even a few page faults hurt performance. (The gap is worse today.) We hypothesized that paging would perform well if memory managers could guarantee that each process's working set is in memory. We justified this from intuitions about locality, which predicts that the working set is the maximum likelihood predictor of the process's memory demand in the immediate future. These ideas were extensively validated through years of study of paging algorithms, multiprogramming, and thrashing, leading to control systems that measured working sets, avoided thrashing, and optimized system throughput. Locality is harnessed today at all levels of systems, including the many layers of cache built into chips and memory control systems, the platforms powering cloud computing, and the Internet itself, which caches pages near their frequent users to avoid bottlenecks at popular servers. Location independent naming is the other principle that has permeated all generations of virtual memory over the years. This principle gave us hierarchical systems to generate names and very fast mappings from names to the physical locations of objects. It was present in the original virtual memory, which had a contiguous address space made of pages, and is present in today's Internet, which provides a huge address space made of URLs, DOIs, and capabilities.
"Perspectives on protection and security," B. Lampson. SOSP History Day 2015. doi:10.1145/2830903.2830905

Butler Lampson traces a long history of protection mechanisms, in spite of which security remains a major problem. He considers isolation, access control, access policy, information flow control, cryptography, trust, and assurance. In the end, people dislike the inconvenience security causes.
"The rise of cloud computing systems," J. Dean. SOSP History Day 2015. doi:10.1145/2830903.2830913

In this talk I will describe the development of systems that underlie modern cloud computing. This development shares much of its motivation with the related fields of transaction processing systems and high performance computing, but because of scale, these systems tend to place more emphasis on fault tolerance through software techniques. Important developments in modern cloud systems include very high performance distributed file systems, such as the Google File System (Ghemawat et al., SOSP 2003); reliable computational frameworks such as MapReduce (Dean & Ghemawat, OSDI 2004) and Dryad (Isard et al., 2007); and large-scale structured storage systems such as BigTable (Chang et al., 2006), Dynamo (DeCandia et al., 2007), and Spanner (Corbett et al., 2012). Scheduling computations can be done either using virtual machines (exemplified by VMware's products) or as individual processes or containers. The development of public cloud platforms such as AWS, Microsoft Azure, and Google Cloud Platform allows external developers to utilize these large-scale services to build new and interesting services and products, benefiting from the economies of scale of large datacenters and the ability to grow and shrink computing resources on demand across millions of customers.
"The network and the OS," D. Clark. SOSP History Day 2015. doi:10.1145/2830903.2830912

Dave Clark digs through his long experience in getting network protocols (notably TCP/IP) to work efficiently with the OS. It was a long, hard slog to gain a deep understanding of the efficiency of each little part of the protocol software. Eventually the protocols were successfully integrated, and today's operating systems all include the network.
"Past and future of hardware and architecture," D. Patterson. SOSP History Day 2015. doi:10.1145/2830903.2830910

We start by looking back at 50 years of computer architecture, where philosophical debates on instruction sets (RISC vs. CISC, VLIW vs. RISC) and parallel architectures (NUMA vs. clusters) were settled with billion-dollar investments on both sides. In the second half, we look forward. First, Moore's Law is ending, so the free ride of software-oblivious performance increases is over. Since we've already played the multicore card, the most likely, and perhaps only, path left is domain-specific processors. The memory system is radically changing too. First, Jim Gray's decade-old prediction is finally true: "Tape is dead; flash is disk; disk is tape." New ways to connect to DRAM and new non-volatile memory technologies promise to make the memory hierarchy even deeper. Finally, and surprisingly, there is now widespread agreement on instruction set architecture, namely Reduced Instruction Set Computers. However, unlike in most other fields, despite this harmony there has been no open alternative to the proprietary offerings from ARM and Intel. RISC-V ("RISC Five") is the proposed free and open champion. It has a small base of classic RISC instructions that runs a full open-source software stack; opcodes reserved for tailoring a System-on-a-Chip (SoC) to applications; standard instruction extensions optionally included in an SoC; and it is unrestricted: there is no cost, no paperwork, and anyone can use it. The ability to prototype using ever-more-powerful FPGAs and astonishingly inexpensive custom chips, combined with collaboration on open-source software and hardware, offers hope of a new golden era for hardware/software systems.
"Overview of the day," Jeanna Neefe Matthews. SOSP History Day 2015. doi:10.1145/2830903.2839321

After a short summary of how the SOSP series began in 1967, emcee Jeanna Matthews introduces the speakers. She has photos of them in their younger days, when they were inventing OS principles.
"Evolution of file and memory management," M. Satyanarayanan. SOSP History Day 2015. doi:10.1145/2830903.2830907

Mahadev Satyanarayanan (Satya) presented his thoughts on "The Evolution of Memory and File Systems". He observed that over a 60-year period, there have been four drivers of progress: the quests for scale, performance, transparency, and robustness. At the dawn of computing, the quest for scale was dominant. Easing the memory limitations of early computers was crucial to the growth of computing and the creation of new applications, because memory was so scarce and so expensive. That quest has been phenomenally successful: on a cost-per-bit basis, volatile and persistent memory technologies have improved by nearly 13 orders of magnitude. The quest for performance has been dominated by the growing gap between processor performance and memory performance. This gap has been most apparent since the adoption of DRAM technology by the early 1980s, but it was already a serious issue 20 years before that, in the era of core memory. Over time, memory hierarchies of increasing depth have improved average-case performance by exploiting temporal and spatial locality. These hierarchies have been crucial in overcoming the processor-memory performance gap, with clever prefetching and write-back techniques also playing important roles. For the first decade or so, the price of improving scale and performance was the need to rewrite software as computers were replaced by new ones. By the early 1960s, this cost was becoming significant. Over time, as people costs have increased relative to hardware costs, disruptive software changes have become unacceptable. This has led to the quest for transparency. In its System/360, IBM pioneered the concept of an invariant architecture with multiple implementations at different price/performance points. The principle of transparent management of data across the levels of a memory hierarchy, which we broadly term "caching", was pioneered at the software level by the Atlas computer in the early 1960s. At the hardware level, it was first demonstrated in the IBM System/360 Model 85 in 1968. Since then, caching has been applied at virtually every system level and is today perhaps the most ubiquitous and powerful systems technique for achieving scale, performance, and transparency. By the late 1960s, as computers began to be used in mission-critical contexts, the negative impact of hardware and software failures escalated. This led to the emergence of techniques to improve robustness, even at the possible cost of performance or storage efficiency. The concept of separate address spaces emerged partly because it isolated the consequences of buggy software. Improved resilience to buggy software has also been one of the reasons that memory and file systems have remained distinct, even though systems based on the single-level storage concept have been proposed and experimentally demonstrated. In addition, to cope with hardware, software, and networking failures, techniques such as RAID, software replication, and disconnected operation emerged. The quest for robustness has grown in importance as the cost of failures has increased relative to the cost of memory and storage. Finally, Satya commented on recent predictions that the classic hierarchical file system will soon disappear. He noted that such predictions are not new. Classic file systems may be overlaid by non-hierarchical interfaces that use different abstractions (such as the Android interface for Java applications). However, they will continue to play an important role for unstructured data that must be preserved for a long time. Satya observed that the deep reasons for the endurance of the hierarchical file system model were extensively articulated in Herb Simon's 1962 work, "The Architecture of Complexity": in essence, hierarchy arises from the cognitive limitations of the human mind. File system implementations have evolved to be well suited to these cognitive limits. They are likely to be with us for a long time.
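To illustrate the caching principle described above -- a small fast store in front of a large slow one, exploiting temporal locality -- here is a minimal LRU cache sketch in Python. It is not any particular system's cache; the class name and the stand-in backing-store function are invented, and real hardware and file caches add prefetching, write-back, and multiple levels.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing_store = backing_store  # slow lookup, e.g. disk or DRAM
        self.cache = OrderedDict()          # key -> value, oldest entry first
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)     # mark as most recently used
        else:
            self.misses += 1
            self.cache[key] = self.backing_store(key)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
        return self.cache[key]

# Repeated references to a small set of "hot" blocks mostly hit the cache.
cache = LRUCache(capacity=3, backing_store=lambda block: f"data-{block}")
for block in [1, 2, 1, 3, 1, 2, 4, 1]:
    cache.get(block)
print(cache.hits, cache.misses)  # 4 4
```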
"The founding of the SOSP conferences," J. Dennis. SOSP History Day 2015. doi:10.1145/2830903.2839323

Jack Dennis launched the SOSP series in 1967. He saw an opportunity to bring out the emerging principles of operating systems and communication networks.
"Evolution of fault tolerance," K. Birman. SOSP History Day 2015. doi:10.1145/2830903.2830908

Ken Birman's talk focused on controversies surrounding fault tolerance and consistency. Looking at the 1990s, he pointed to the debate around the so-called CATOCS question (CATOCS refers to causally and totally ordered communication primitives) and drew a parallel to the more modern debate about consistency at cloud scale (often referred to as the CAP conjecture). Ken argued that the underlying tension is one that pits basic principles of the field against the seemingly unavoidable complexity of mechanisms strong enough to solve consensus, particularly the family of protocols with Paxos-like structures. Over time, this was resolved: he concluded that today we finally know how to build very fast and scalable solutions (those who attended SOSP 2015 itself saw ten or more papers on such topics). On the other hand, Ken sees a new generation of challenges on the horizon: cloud-scale applications that will need a novel mix of scalable consistency and real-time guarantees, will need to leverage new hardware options (RDMA, NVRAM, and other "middle memory" options), and may need to be restructured to reflect a control-plane/data-plane split. These trends invite a new look at what has become a core topic for the SOSP community.