Rethinking Programmed I/O for Fast Devices, Cheap Cores, and Coherent Interconnects

Anastasiia Ruzhanskaia, Pengcheng Xu, David Cock, Timothy Roscoe
Journal: arXiv - CS - Operating Systems
Published: 2024-09-12
DOI: arxiv-2409.08141
Citations: 0

Abstract

Conventional wisdom holds that an efficient interface between an OS running on a CPU and a high-bandwidth I/O device should be based on Direct Memory Access (DMA), descriptor rings, and interrupts: DMA offloads transfers from the CPU, descriptor rings provide buffering and queuing, and interrupts facilitate asynchronous interaction between cores and devices with a lightweight notification mechanism. In this paper we question this wisdom in the light of modern hardware and workloads, particularly in cloud servers. We argue that the assumptions that led to this model are obsolete, and in many use-cases use of programmed I/O, where the CPU explicitly transfers data and control information to and from a device via loads and stores, actually results in a more efficient system. We quantitatively demonstrate these advantages using three use-cases: fine-grained RPC-style invocation of functions on an accelerator, offloading of operators in a streaming dataflow engine, and a network interface targeting serverless functions. Moreover, we show that while these advantages are significant over a modern PCIe peripheral bus, a truly cache-coherent interconnect offers significant additional efficiency gains.