
arXiv - CS - Operating Systems: Latest Publications

Securing Monolithic Kernels using Compartmentalization
Pub Date : 2024-04-12 DOI: arxiv-2404.08716
Soo Yee Lim, Sidhartha Agrawal, Xueyuan Han, David Eyers, Dan O'Keeffe, Thomas Pasquier
Monolithic operating systems, where all kernel functionality resides in a single, shared address space, are the foundation of most mainstream computer systems. However, a single flaw, even in a non-essential part of the kernel (e.g., device drivers), can cause the entire operating system to fall under an attacker's control. Kernel hardening techniques might prevent certain types of vulnerabilities, but they fail to address a fundamental weakness: the lack of intra-kernel security that safely isolates different parts of the kernel. We survey kernel compartmentalization techniques that define and enforce intra-kernel boundaries and propose a taxonomy that allows the community to compare and discuss future work. We also identify factors that complicate comparisons among compartmentalized systems, suggest new ways to compare future approaches with existing work meaningfully, and discuss emerging research directions.
Citations: 0
HookChain: A new perspective for Bypassing EDR Solutions
Pub Date : 2024-04-04 DOI: arxiv-2404.16856
Helvio Carvalho Junior
In the current digital security ecosystem, where threats evolve rapidly and with complexity, companies developing Endpoint Detection and Response (EDR) solutions are in constant search for innovations that not only keep up but also anticipate emerging attack vectors. In this context, this article introduces HookChain, a look from another perspective at widely known techniques, which, when combined, provide an additional layer of sophisticated evasion against traditional EDR systems. Through a precise combination of IAT Hooking techniques, dynamic SSN resolution, and indirect system calls, HookChain redirects the execution flow of Windows subsystems in a way that remains invisible to the vigilant eyes of EDRs that only act on Ntdll.dll, without requiring changes to the source code of the applications and malware involved. This work not only challenges current conventions in cybersecurity but also sheds light on a promising path for future protection strategies, leveraging the understanding that continuous evolution is key to the effectiveness of digital security. By developing and exploring the HookChain technique, this study significantly contributes to the body of knowledge in endpoint security, stimulating the development of more robust and adaptive solutions that can effectively address the ever-changing dynamics of digital threats. This work aspires to inspire deep reflection and advancement in the research and development of security technologies that are always several steps ahead of adversaries.
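The IAT-hooking building block can be illustrated with a language-neutral analogy: compiled code resolves imported API calls through a table of function pointers, so swapping a table entry transparently redirects every call site. A minimal Python sketch of that idea follows; all names here are hypothetical illustrations, and the real technique patches a Windows PE's Import Address Table rather than a dict.

```python
# Conceptual analogy of IAT hooking. A "program" resolves API calls through an
# import table (here, a dict of callables); hooking swaps the table entry so
# calls are transparently redirected to a wrapper that can inspect arguments
# before forwarding to the original function.

def real_write(data):
    return f"wrote {len(data)} bytes"

# "Import address table": call sites look functions up here by name, just as
# compiled code calls through the IAT rather than a hard-coded address.
iat = {"write": real_write}

def program(data):
    return iat["write"](data)

def install_hook(table, name, wrapper):
    # Replace the table entry with a closure that forwards to the wrapper,
    # keeping a reference to the original target.
    original = table[name]
    table[name] = lambda *a, **kw: wrapper(original, *a, **kw)
    return original

def monitoring_hook(original, data):
    # An EDR-style hook could inspect or block the call here.
    print(f"[hook] intercepted write of {len(data)} bytes")
    return original(data)

install_hook(iat, "write", monitoring_hook)
print(program(b"hello"))   # routed through the hook, then the real function
```

The same indirection is what both EDRs and evasions like HookChain manipulate: whoever controls the table entry controls what the caller actually executes.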
Citations: 0
Memory Sharing with CXL: Hardware and Software Design Approaches
Pub Date : 2024-04-04 DOI: arxiv-2404.03245
Sunita Jain, Nagaradhesh Yeleswarapu, Hasan Al Maruf, Rita Gupta
Compute Express Link (CXL) is a rapidly emerging coherent interconnect standard that provides opportunities for memory pooling and sharing. Memory sharing is a well-established software feature that improves memory utilization by avoiding unnecessary data movement. In this paper, we discuss multiple approaches to enable memory sharing with different generations of the CXL protocol (i.e., CXL 2.0 and CXL 3.0), considering the challenges with each of the architectures from the device hardware and software viewpoint.
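As a rough software analogy for the data-movement savings that motivate memory sharing, two handles attached to one shared segment observe the same bytes with no copy in between. The sketch below uses Python's `multiprocessing.shared_memory` purely as an illustration; hardware-coherent sharing over CXL 2.0/3.0 is of course a far richer mechanism.

```python
from multiprocessing import shared_memory

# One party allocates a segment and writes into it...
seg = shared_memory.SharedMemory(create=True, size=16)
seg.buf[:5] = b"hello"

# ...and another attaches to the same segment by name. No copy is made:
# both views reference the same underlying pages, which is the utilization
# win that pooled/shared memory aims for.
view = shared_memory.SharedMemory(name=seg.name)
data = bytes(view.buf[:5])
print(data)   # b'hello'

view.close()
seg.close()
seg.unlink()
```

In the CXL setting the "attach by name" step is replaced by hardware address decoding, and (for CXL 3.0) coherence between sharers is maintained by the protocol itself rather than by software convention.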
Citations: 0
THEMIS: Time, Heterogeneity, and Energy Minded Scheduling for Fair Multi-Tenant Use in FPGAs
Pub Date : 2024-03-31 DOI: arxiv-2404.00507
Emre Karabulut, Arsalan Ali Malik, Amro Awad, Aydin Aysu
Using correct design metrics and understanding the limitations of the underlying technology is critical to developing effective scheduling algorithms. Unfortunately, existing scheduling techniques used incorrect metrics and made unrealistic assumptions for fair scheduling of multi-tenant FPGAs, where each tenant is meant to share approximately the same number of resources both spatially and temporally. This paper introduces an enhanced fair scheduling algorithm for multi-tenant FPGA use, addressing previous metric and assumption issues, with three specific improvements. First, our method ensures spatiotemporal fairness by considering both spatial and temporal aspects, addressing the limitation of prior work that assumed uniform task latency. Second, we incorporate energy considerations into fairness by adjusting scheduling intervals and accounting for energy overhead, thereby balancing energy efficiency with fairness. Third, we acknowledge overlooked aspects of FPGA multi-tenancy, including heterogeneous regions and the constraints on dynamically merging/splitting partially reconfigurable regions. We develop and evaluate our improved fair scheduling algorithm with these three enhancements. Inspired by the Greek goddess of law and personification of justice, we name our fair scheduling solution THEMIS: Time, Heterogeneity, and Energy Minded Scheduling. We used the Xilinx Zedboard XC7Z020 to quantify our approach's savings. Compared to previous algorithms, our improved scheduling algorithm enhances fairness by 24.2--98.4% and allows a trade-off between 55.3x in energy and 69.3x in fairness. The paper thus informs cloud providers about future scheduling optimizations for fairness, with related challenges and opportunities.
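The spatiotemporal-fairness idea (charging each tenant for region area multiplied by occupancy time, instead of assuming uniform task latency) can be sketched with a simple greedy policy. The policy and all names below are illustrative assumptions, not the paper's THEMIS algorithm, which also folds in energy overheads and heterogeneous-region constraints.

```python
# Hypothetical sketch of spatiotemporal fair scheduling: a tenant's usage is
# accounted as (region area) x (occupancy time), and each interval goes to
# the tenant with the least accumulated share.

def pick_next(usage):
    # usage: tenant -> accumulated (area * time) consumed so far
    return min(usage, key=usage.get)

def schedule(tasks, intervals):
    # tasks: tenant -> (region_area, task_latency)
    usage = {t: 0.0 for t in tasks}
    timeline = []
    for _ in range(intervals):
        tenant = pick_next(usage)
        area, latency = tasks[tenant]
        usage[tenant] += area * latency   # spatial x temporal share
        timeline.append(tenant)
    return timeline, usage

# Tenant A occupies a region 4x larger than B's, with equal task latency.
tasks = {"A": (4, 1.0), "B": (1, 1.0)}
timeline, usage = schedule(tasks, 5)
print(timeline)   # B runs more often, offsetting A's larger region
```

A count-based scheduler would alternate A and B and let A consume 4x the fabric-time; the area-weighted accounting instead converges both tenants to the same accumulated share.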
Citations: 0
PerOS: Personalized Self-Adapting Operating Systems in the Cloud
Pub Date : 2024-03-26 DOI: arxiv-2404.00057
Hongyu Hè
Operating systems (OSes) are foundational to computer systems, managing hardware resources and ensuring secure environments for diverse applications. However, despite their enduring importance, the fundamental design objectives of OSes have seen minimal evolution over decades. Traditionally prioritizing aspects like speed, memory efficiency, security, and scalability, these objectives often overlook the crucial aspect of intelligence as well as personalized user experience. The lack of intelligence becomes increasingly critical amid technological revolutions, such as the remarkable advancements in machine learning (ML). Today's personal devices, evolving into intimate companions for users, pose unique challenges for traditional OSes like Linux and iOS, especially with the emergence of specialized hardware featuring heterogeneous components. Furthermore, the rise of large language models (LLMs) in ML has introduced transformative capabilities, reshaping user interactions and software development paradigms. While existing literature predominantly focuses on leveraging ML methods for system optimization or accelerating ML workloads, there is a significant gap in addressing personalized user experiences at the OS level. To tackle this challenge, this work proposes PerOS, a personalized OS ingrained with LLM capabilities. PerOS aims to provide tailored user experiences while safeguarding privacy and personal data through declarative interfaces, self-adaptive kernels, and secure data management in a scalable cloud-centric architecture. Therein lies the main research question of this work: how can we develop intelligent, secure, and scalable OSes that deliver personalized experiences to thousands of users?
Citations: 0
UPSS: a User-centric Private Storage System with its applications
Pub Date : 2024-03-23 DOI: arxiv-2403.15884
Arastoo Bozorgi, Mahya Soleimani Jadidi, Jonathan Anderson
Strong confidentiality, integrity, user control, reliability and performance are critical requirements in privacy-sensitive applications. Such applications would benefit from a data storage and sharing infrastructure that provides these properties even in decentralized topologies with untrusted storage backends, but users today are forced to choose between systemic security properties and system reliability or performance. As an alternative to this status quo we present UPSS: the user-centric private sharing system, a cryptographic storage system that can be used as a conventional filesystem or as the foundation for security-sensitive applications such as redaction with integrity and private revision control. We demonstrate that both the security and performance properties of UPSS exceed those of existing cryptographic filesystems and that its performance is comparable to mature conventional filesystems and, in some cases, even superior. Whether used directly via its Rust API or as a conventional filesystem, UPSS provides strong security and practical performance on untrusted storage.
Citations: 0
LLM as a System Service on Mobile Devices
Pub Date : 2024-03-18 DOI: arxiv-2403.11805
Wangsong Yin, Mengwei Xu, Yuanchun Li, Xuanzhe Liu
Being more powerful and intrusive into user-device interactions, LLMs are eager for on-device execution to better preserve user privacy. In this work, we propose a new paradigm of mobile AI: LLM as a system service on mobile devices (LLMaaS). Unlike traditional DNNs that execute in a stateless manner, such a system service is stateful: LLM execution often needs to maintain persistent states (mainly the KV cache) across multiple invocations. To minimize the LLM context switching overhead under a tight device memory budget, this work presents LLMS, which decouples the memory management of app and LLM contexts with a key idea of fine-grained, chunk-wise, globally-optimized KV cache compression and swapping. By fully leveraging the KV cache's unique characteristics, it proposes three novel techniques: (1) Tolerance-Aware Compression: it compresses chunks based on their measured accuracy tolerance to compression. (2) IO-Recompute Pipelined Loading: it introduces recompute into swapping-in for acceleration. (3) Chunk Lifecycle Management: it optimizes the memory activities of chunks with ahead-of-time swapping-out and an LCTRU (Least Compression-Tolerable and Recently-Used) queue based eviction. In evaluations conducted on well-established traces and various edge devices, LLMS reduces context switching latency by up to 2 orders of magnitude when compared to competitive baseline solutions.
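The chunk-wise eviction idea can be sketched as a cache keyed by KV-cache chunks, where the victim is chosen among least-recently-used chunks by how well each tolerates compression. The data structures and scoring rule below are assumptions for illustration only, not the LLMS implementation.

```python
from collections import OrderedDict

# Illustrative sketch of an LCTRU-style eviction queue: among the
# least-recently-used chunks, the one most tolerant to compression is
# swapped out of device memory first.

class ChunkCache:
    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.chunks = OrderedDict()   # chunk_id -> measured compression tolerance

    def access(self, chunk_id, tolerance):
        # Touching a chunk makes it most-recently-used.
        if chunk_id in self.chunks:
            self.chunks.move_to_end(chunk_id)
        else:
            self.chunks[chunk_id] = tolerance
        return self._evict_if_needed()

    def _evict_if_needed(self):
        evicted = []
        while len(self.chunks) > self.capacity:
            # Restrict candidates to the least-recently-used half, then pick
            # the chunk that tolerates compression best as the victim.
            lru_half = list(self.chunks)[: max(1, len(self.chunks) // 2)]
            victim = max(lru_half, key=lambda c: self.chunks[c])
            del self.chunks[victim]
            evicted.append(victim)
        return evicted

cache = ChunkCache(capacity_chunks=2)
cache.access("ctx1-c0", tolerance=0.9)   # old and highly compressible
cache.access("ctx1-c1", tolerance=0.2)   # old but compression-sensitive
evicted = cache.access("ctx2-c0", tolerance=0.5)
print(evicted)   # the compressible, older chunk leaves first
```

A plain LRU would evict whichever chunk is oldest regardless of accuracy cost; weighting by tolerance keeps compression-sensitive chunks resident longer.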
Citations: 0
Next4: Snapshots in Ext4 File System
Pub Date : 2024-03-11 DOI: arxiv-2403.06790
Aditya Dani, Shardul Mangade, Piyush Nimbalkar, Harshad Shirwadkar
The growing value of data as a strategic asset has given rise to the necessity of implementing reliable backup and recovery solutions in the most efficient and cost-effective manner. The data backup methods available today on Linux are not effective enough because, while running, most of them block I/Os to guarantee data integrity. We propose and implement Next4, a file-system-based snapshot feature in Ext4 that creates an instant image of the file system to provide incremental versions of data, enabling reliable backup and data recovery. In our design, the snapshot feature is implemented by efficiently infusing the copy-on-write strategy into the write-in-place, extent-based Ext4 file system, without affecting its basic structure. Each snapshot is an incremental backup of the data within the system. What distinguishes Next4 is the way that the data is backed up, improving both space utilization and performance.
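The core copy-on-write mechanic behind such snapshots is small: taking a snapshot is instant because nothing is copied up front, and a later in-place write first preserves the old block for any snapshot that has not yet seen it. A minimal sketch follows; the structures are illustrative and say nothing about Next4's actual on-disk format.

```python
# Minimal copy-on-write snapshot sketch over a block map.

class CowFile:
    def __init__(self):
        self.blocks = {}       # block number -> data (live view)
        self.snapshots = []    # each: block number -> preserved old data

    def snapshot(self):
        # Instant: the snapshot starts empty and fills lazily on writes.
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def write(self, blockno, data):
        for snap in self.snapshots:
            # Preserve the pre-write version once per snapshot (the "copy"
            # in copy-on-write), then overwrite in place.
            if blockno in self.blocks and blockno not in snap:
                snap[blockno] = self.blocks[blockno]
        self.blocks[blockno] = data

    def read_snapshot(self, snap_id, blockno):
        # A block unchanged since the snapshot is read from the live view.
        snap = self.snapshots[snap_id]
        return snap.get(blockno, self.blocks.get(blockno))

f = CowFile()
f.write(0, b"v1")
s = f.snapshot()                 # instant image of the file
f.write(0, b"v2")                # old block copied aside before overwrite
print(f.read_snapshot(s, 0), f.blocks[0])   # b'v1' b'v2'
```

Because each snapshot only stores blocks that changed after it was taken, the snapshots behave as incremental backups, which is the space-utilization property the abstract highlights.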
Citations: 0
I/O Transit Caching for PMem-based Block Device
Pub Date : 2024-03-10 DOI: arxiv-2403.06120
Qing Xu, Qisheng Jiang, Chundong Wang
Byte-addressable non-volatile memory (NVM) sitting on the memory bus is employed to make persistent memory (PMem) in general-purpose computing systems and embedded systems for data storage. Researchers develop software drivers such as the block translation table (BTT) to build block devices on PMem, so programmers can keep using the mature and reliable conventional storage stack while expecting high performance by exploiting fast PMem. However, our quantitative study shows that BTT underutilizes PMem and yields inferior performance due to the absence of the imperative in-device cache. We add a conventional I/O staging cache made of DRAM space to BTT. As DRAM and PMem have comparable access latency, the I/O staging cache is likely to be fully filled over time. Continual cache evictions and fsyncs thus cause on-demand flushes with severe stalls, such that the I/O staging cache is concretely unappealing for PMem-based block devices. We accordingly propose an algorithm named Caiti with novel I/O transit caching. Caiti eagerly evicts buffered data to PMem through the CPU's multiple cores. It also conditionally bypasses a full cache and directly writes data into PMem to further alleviate I/O stalls. Experiments confirm that Caiti significantly boosts performance over BTT by up to 3.6x, without loss of block-level write atomicity.
在通用计算系统和嵌入式系统中,内存总线上的字节可寻址非易失性存储器(NVM)被用来制作用于数据存储的持久存储器(PMem)。研究人员开发了块转换表(BTT)等软件驱动程序,用于在 PMem 上构建块设备,这样程序员就可以继续使用成熟可靠的传统存储堆栈,同时期望通过利用快速 PMem 获得高性能。 然而,我们的定量研究表明,由于缺乏必要的设备内缓存,BTT 对 PMem 的利用不足,性能较差。我们在 BTT 中添加了一个由 DRAM 空间构成的传统 I/O 暂存缓存。由于 DRAM 和 PMem 的访问延迟相当,I/O 暂存缓存很可能会随着时间的推移而被完全填满。持续的缓存驱逐和同步会导致严重的按需刷新,因此对于基于 PMem 的块设备来说,I/O 暂存缓存并不理想。因此,我们提出了一种名为 Caiti 的算法,它具有新颖的 I/O 中转缓存功能。Caiti 通过CPU 的多核急切地将缓冲数据驱逐到 PMem。它还会有条件地绕过完整缓存,直接将数据写入 PMem,以进一步缓解 I/O 阻塞。实验证实,Caiti 将 BTT 的性能显著提高了 3.6 倍,而且不会丢失块级写原子性。
{"title":"I/O Transit Caching for PMem-based Block Device","authors":"Qing Xu, Qisheng Jiang, Chundong Wang","doi":"arxiv-2403.06120","DOIUrl":"https://doi.org/arxiv-2403.06120","url":null,"abstract":"Byte-addressable non-volatile memory (NVM) sitting on the memory bus is\u0000employed to make persistent memory (PMem) in general-purpose computing systems\u0000and embedded systems for data storage. Researchers develop software drivers\u0000such as the block translation table (BTT) to build block devices on PMem, so\u0000programmers can keep using mature and reliable conventional storage stack while\u0000expecting high performance by exploiting fast PMem. However, our quantitative\u0000study shows that BTT underutilizes PMem and yields inferior performance, due to\u0000the absence of the imperative in-device cache. We add a conventional I/O\u0000staging cache made of DRAM space to BTT. As DRAM and PMem have comparable\u0000access latency, I/O staging cache is likely to be fully filled over time.\u0000Continual cache evictions and fsyncs thus cause on-demand flushes with severe\u0000stalls, such that the I/O staging cache is concretely unappealing for\u0000PMem-based block devices. We accordingly propose an algorithm named Caiti with\u0000novel I/O transit caching. Caiti eagerly evicts buffered data to PMem through\u0000CPU's multi-cores. It also conditionally bypasses a full cache and directly\u0000writes data into PMem to further alleviate I/O stalls. 
Experiments confirm that\u0000Caiti significantly boosts the performance with BTT by up to 3.6x, without loss\u0000of block-level write atomicity.","PeriodicalId":501333,"journal":{"name":"arXiv - CS - Operating Systems","volume":"2016 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140106471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
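The two mechanisms in the Caiti abstract — eager eviction of buffered data and conditional bypass of a full staging cache — can be illustrated with a deterministic, single-threaded sketch. All class and method names here are invented for illustration; the actual Caiti drains asynchronously on spare CPU cores rather than via an explicit `drain()` call:

```python
class TransitCache:
    """Sketch of I/O transit caching: a dict stands in for the DRAM
    staging area and another dict for the PMem-backed block device.
    Writes that arrive while the cache is full bypass it entirely,
    avoiding an on-demand eviction stall on the critical path."""

    def __init__(self, capacity, backend):
        self.capacity = capacity
        self.cache = {}          # DRAM staging area: block -> data
        self.backend = backend   # PMem block device (a dict here)
        self.bypassed = 0        # how many writes skipped the cache

    def write(self, block, data):
        if block not in self.cache and len(self.cache) >= self.capacity:
            # Full cache: write straight to PMem instead of stalling
            # the writer behind a synchronous eviction.
            self.backend[block] = data
            self.bypassed += 1
        else:
            self.cache[block] = data

    def drain(self, budget=1):
        # Eager eviction: flush up to `budget` buffered blocks to PMem.
        for block in list(self.cache)[:budget]:
            self.backend[block] = self.cache.pop(block)

    def read(self, block):
        # Staging cache first, then the backing device.
        return self.cache.get(block, self.backend.get(block))
```

Because DRAM and PMem latencies are comparable, the win comes not from caching reads but from keeping evictions off the foreground write path — which is exactly what the bypass branch and the background-style `drain()` model.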
Virtuoso: An Open-Source, Comprehensive and Modular Simulation Framework for Virtual Memory Research Virtuoso:用于虚拟内存研究的开源、综合和模块化仿真框架
Pub Date : 2024-03-07 DOI: arxiv-2403.04635
Konstantinos Kanellopoulos, Konstantinos Sgouras, Onur Mutlu
Virtual memory is a cornerstone of modern computing systems. Introduced as one of the earliest instances of hardware-software co-design, VM facilitates programmer-transparent memory management, data sharing, process isolation and memory protection. Evaluating the efficiency of various virtual memory (VM) designs is crucial (i) given their significant impact on the system, including the CPU caches, the main memory, and the storage device and (ii) given that different system architectures might benefit from various VM techniques. Such an evaluation is not straightforward, as it heavily hinges on modeling the interplay between different VM techniques and the interactions of VM with the system architecture. Modern simulators, however, struggle to keep up with the rapid VM research developments, lacking the capability to model a wide range of contemporary VM techniques and their interactions. To this end, we present Virtuoso, an open-source, comprehensive and modular simulation framework that models various VM designs to establish a common ground for virtual memory research. We demonstrate the versatility and the potential of Virtuoso with four new case studies. Virtuoso is freely open-source and can be found at https://github.com/CMU-SAFARI/Virtuoso.
虚拟内存是现代计算系统的基石。作为硬件-软件协同设计的最早实例之一,虚拟内存为程序员透明的内存管理、数据共享、进程隔离和内存保护提供了便利。评估各种虚拟内存(VM)设计的效率至关重要:(i) 因为它们对系统(包括 CPU 高速缓存、主存储器和存储设备)有重大影响;(ii) 因为不同的系统架构可能受益于各种虚拟内存技术。这样的评估并不简单,因为它在很大程度上取决于对不同虚拟机技术之间的相互作用以及虚拟机与系统架构之间的相互作用进行建模。然而,现代模拟器难以跟上虚拟机研究的快速发展,缺乏对各种当代虚拟机技术及其交互进行建模的能力。为此,我们提出了一个开源、全面和模块化的仿真框架--Virtuoso,它可以模拟各种虚拟机设计,为虚拟内存研究建立一个共同基础。我们通过四个新案例研究展示了 Virtuoso 的多功能性和潜力。Virtuoso免费开源,可在https://github.com/CMU-SAFARI/Virtuoso。
{"title":"Virtuoso: An Open-Source, Comprehensive and Modular Simulation Framework for Virtual Memory Research","authors":"Konstantinos Kanellopoulos, Konstantinos Sgouras, Onur Mutlu","doi":"arxiv-2403.04635","DOIUrl":"https://doi.org/arxiv-2403.04635","url":null,"abstract":"Virtual memory is a cornerstone of modern computing systems.Introduced as one\u0000of the earliest instances of hardware-software co-design, VM facilitates\u0000programmer-transparent memory man agement, data sharing, process isolation and\u0000memory protection. Evaluating the efficiency of various virtual memory (VM)\u0000designs is crucial (i) given their significant impact on the system, including\u0000the CPU caches, the main memory, and the storage device and (ii) given that\u0000different system architectures might benefit from various VM techniques. Such\u0000an evaluation is not straightforward, as it heavily hinges on modeling the\u0000interplay between different VM techniques and the interactions of VM with the\u0000system architecture. Modern simulators, however, struggle to keep up with the\u0000rapid VM research developments, lacking the capability to model a wide range of\u0000contemporary VM techniques and their interactions. To this end, we present\u0000Virtuoso, an open-source, comprehensive and modular simulation framework that\u0000models various VM designs to establish a common ground for virtual memory\u0000research. We demonstrate the versatility and the potential of Virtuoso with\u0000four new case studies. 
Virtuoso is freely open-source and can be found at\u0000https://github.com/CMU-SAFARI/Virtuoso.","PeriodicalId":501333,"journal":{"name":"arXiv - CS - Operating Systems","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140070467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
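A simulator like Virtuoso earns its modularity by making each VM component pluggable. As a hypothetical illustration (this is not Virtuoso's actual API), consider a tiny address-translation model in which both the TLB and the page-table walker can be swapped out independently:

```python
class TLB:
    """Tiny fully-associative TLB model with FIFO replacement,
    counting hits and misses for evaluation."""

    def __init__(self, entries):
        self.entries = entries
        self.map = {}            # vpn -> pfn; dict order gives FIFO
        self.hits = self.misses = 0

    def lookup(self, vpn):
        if vpn in self.map:
            self.hits += 1
            return self.map[vpn]
        self.misses += 1
        return None

    def fill(self, vpn, pfn):
        if len(self.map) >= self.entries:
            self.map.pop(next(iter(self.map)))  # evict oldest entry
        self.map[vpn] = pfn


class MMU:
    """Composable translation pipeline: any object with lookup/fill can
    stand in for the TLB, and any callable can stand in for the
    page-table walker -- the kind of interchangeability a modular
    VM simulation framework aims to provide."""

    PAGE = 4096

    def __init__(self, tlb, walker):
        self.tlb, self.walker = tlb, walker

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, self.PAGE)
        pfn = self.tlb.lookup(vpn)
        if pfn is None:
            pfn = self.walker(vpn)    # model a page-table walk
            self.tlb.fill(vpn, pfn)
        return pfn * self.PAGE + offset
```

Swapping in a different replacement policy, a multi-level TLB, or a walker that models radix versus hashed page tables then only touches one component, which is what makes apples-to-apples comparison of VM designs tractable.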