
Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems: Latest Publications

DARTS: Distributed IoT Architecture for Real-Time, Resilient and AI-Compressed Workflows
Ragini Gupta, Bo Chen, Shengzhong Liu, Tianshi Wang, S. Sandha, Abel Souza, K. Nahrstedt, T. Abdelzaher, M. Srivastava, P. Shenoy, Jeffrey Smith, Maggie B. Wigness, Niranjan Suri
IoT (Internet of Things) sensor devices are becoming ubiquitous in diverse smart environments, including smart homes, smart cities, smart laboratories, and others. To handle their IoT sensor data, distributed edge-cloud infrastructures are emerging to capture, distribute, and analyze these data and deliver important services and utilities to different communities. However, these IoT-edge-cloud infrastructures face several challenges in providing efficient and effective services to users: (1) how to deliver real-time distributed services across diverse IoT devices, including cameras, meteorological and other sensors; (2) how to provide robustness and resilience of distributed services within the IoT-edge-cloud infrastructures to withstand failures or attacks; (3) how to handle AI workloads in an efficient manner under constrained network conditions. To address these challenges, we present DARTS, which is composed of different IoT, edge, and cloud services addressing application portability, real-time robust data transfer, and AI-driven capabilities. We benchmark and evaluate these services to showcase the practical deployment of DARTS catering to application-specific constraints.
{"title":"DARTS: Distributed IoT Architecture for Real-Time, Resilient and AI-Compressed Workflows","authors":"Ragini Gupta, Bo Chen, Shengzhong Liu, Tianshi Wang, S. Sandha, Abel Souza, K. Nahrstedt, T. Abdelzaher, M. Srivastava, P. Shenoy, Jeffrey Smith, Maggie B. Wigness, Niranjan Suri","doi":"10.1145/3524053.3542742","DOIUrl":"https://doi.org/10.1145/3524053.3542742","url":null,"abstract":"IoT (Internet of Things) sensor devices are becoming ubiquitous in diverse smart environments, including smart homes, smart cities, smart laboratories, and others. To handle their IoT sensor data, distributed edge-cloud infrastructures are emerging to capture, distribute, and analyze them and deliver important services and utilities to different communities. However, there are several challenges for these IoT-edge-cloud infrastructures to provide efficient and effective services to users: (1) how to deliver real-time distributed services under diverse IoT devices, including cameras, meteorological and other sensors; (2) how to provide robustness and resilience of distributed services within the IoT-edge-cloud infrastructures to withstand failures or attacks; (3) how to handle AI workloads are in an efficient manner under constrained network conditions. To address these challenges, we present DARTS, which is composed of different IoT, edge, cloud services addressing application portability, real-time robust data transfer and AI-driven capabilities. We benchmark and evaluate these services to showcase the practical deployment of DARTS catering to application-specific constraints.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123755472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Graph Neural Networks as Application of Distributed Algorithms
Roger Wattenhofer
At first sight, distributed computing and machine learning are two distant areas in computer science. However, there are many connections, for instance in the area of graphs, which are the focus of my talk. Distributed computing has studied distributed graph algorithms for many decades. Meanwhile, in machine learning, graph neural networks are picking up steam. When it comes to dealing with graphical inputs, one can almost claim that graph neural networks are an application of distributed algorithms. I will introduce central concepts in learning such as underreaching and oversquashing, which have been known in the distributed computing community for decades through the LOCAL and CONGEST models. In addition, I am going to present some algorithmic insights, and a software framework that helps with explaining learning. Generally speaking, I would like to present a path to learning for those who are familiar with distributed message passing algorithms. This talk is based on a number of papers recently published at learning conferences such as ICML and NeurIPS, co-authored by Pál András Papp and Karolis Martinkus.
{"title":"Graph Neural Networks as Application of Distributed Algorithms","authors":"Roger Wattenhofer","doi":"10.1145/3524053.3542745","DOIUrl":"https://doi.org/10.1145/3524053.3542745","url":null,"abstract":"At first sight, distributed computing and machine learning are two distant areas in computer science. However, there are many connections, for instance in the area of graphs, which are the focus of my talk. Distributed computing has studied distributed graph algorithms for many decades. Meanwhile in machine learning, graph neural networks are picking up steam. When it comes to dealing with graphical inputs, one can almost claim that graph neural networks are an application of distributed algorithms. I will introduce central concepts in learning such as underreaching and oversquashing, which have been known in the distributed computing community for decades, as local and congest models. In addition I am going to present some algorithmic insights, and a software framework that helps with explaining learning. Generally speaking, I would like to present a path to learning for those who are familiar with distributed message passing algorithms. This talk is based on a number of papers recently published at learning conferences such as ICML and NeurIPS, co-authored by Pál András Papp and Karolis Martinkus.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124694309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
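The analogy in the abstract above can be made concrete: a single GNN layer and a single round of a LOCAL-model distributed algorithm both have every node aggregate messages from its neighbours. The following minimal Python sketch is our own illustration, not material from the talk; running k rounds means information travels at most k hops, which is exactly the underreaching phenomenon mentioned above.

    # Minimal sketch (illustration only): one synchronous round of neighbourhood
    # aggregation, the step shared by a GNN layer and a round of a LOCAL-model
    # distributed algorithm. States are plain numbers; a real GNN layer would
    # combine feature vectors using learned weights.
    def message_passing_round(graph, state, aggregate=sum):
        """graph: node -> list of neighbours; state: node -> value."""
        new_state = {}
        for node, neighbours in graph.items():
            messages = [state[nbr] for nbr in neighbours]          # receive from neighbours
            new_state[node] = aggregate(messages + [state[node]])  # combine with own state
        return new_state

    graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    state = {"a": 1.0, "b": 2.0, "c": 3.0}
    for _ in range(2):   # k rounds = information from at most k hops ("underreaching" beyond k)
        state = message_passing_round(graph, state)
    print(state)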
A Closer Look at Detectable Objects for Persistent Memory
Mohammad Moridi, E. Wang, Amelia Cui, W. Golab
Research on multi-core algorithms is adapting rapidly to the new opportunities and challenges posed by persistent memory. One of these challenges is the fundamental problem of formalizing the behaviour of concurrent objects in the presence of crash failures, and giving precise meaning to the semantics of recovery from such failures. Li and Golab (DISC'21) recently proposed a sequential specification for such recoverable objects, called the detectable sequential specification or DSS. Building on their work, we explore examples of how DSS-based objects can be used by a sample application, and examine more closely the division of labour between the application's environment, the application code, and the recoverable object used by the application. We also propose an alternative formal definition of correctness, called the unified detectable sequential specification (UDSS), that simplifies both the object's interface and the application code. Using a black box transformation, we show how a UDSS-based object can be implemented from one that conforms to Li and Golab's specification. Finally, we present experiments conducted using Intel Optane persistent memory to quantify the performance overhead of our transformation.
{"title":"A Closer Look at Detectable Objects for Persistent Memory","authors":"Mohammad Moridi, E. Wang, Amelia Cui, W. Golab","doi":"10.1145/3524053.3542749","DOIUrl":"https://doi.org/10.1145/3524053.3542749","url":null,"abstract":"Research on multi-core algorithms is adapting rapidly to the new opportunities and challenges posed by persistent memory. One of these challenges is the fundamental problem of formalizing the behaviour of concurrent objects in the presence of crash failures, and giving precise meaning to the semantics of recovery from such failures. Li and Golab (DISC'21) recently proposed a sequential specification for such recoverable objects, called the detectable sequential specification or DSS. Building on their work, we explore examples of how DSS-based objects can be used by a sample application, and examine more closely the division of labour between the application's environment, the application code, and the recoverable object used by the application. We also propose an alternative formal definition of correctness, called the unified detectable sequential specification (UDSS), that simplifies both the object's interface and the application code. Using a black box transformation, we show how a UDSS-based object can be implemented from one that conforms to Li and Golab's specification. Finally, we present experiments conducted using Intel Optane persistent memory to quantify the performance overhead of our transformation.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127659266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
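As a rough illustration of the detectability idea discussed in the abstract above (the class and method names below are hypothetical and do not reproduce Li and Golab's DSS or the UDSS interface), a recoverable object can tag each operation with a caller-chosen identifier and persist its result under that identifier, so that after a crash the caller can detect whether the operation took effect.

    # Illustrative sketch only; names are hypothetical, not Li and Golab's API.
    # Idea: each operation carries a caller-chosen id and records its result in
    # persistent memory, so after a crash the caller can detect whether it took effect.
    class DetectableCounter:
        def __init__(self, persistent_log):
            self.value = 0
            self.log = persistent_log        # dict standing in for persistent memory

        def increment(self, op_id):
            if op_id in self.log:            # already applied before a crash: return the old result
                return self.log[op_id]
            self.value += 1
            self.log[op_id] = self.value     # persist the result under the operation id
            return self.value

        def detect(self, op_id):
            """After recovery: did op_id take effect, and with what result?"""
            return self.log.get(op_id)

    log = {}                                 # would live in persistent memory (e.g. Intel Optane)
    counter = DetectableCounter(log)
    counter.increment("op-1")
    print(counter.detect("op-1"))            # 1; None would mean the operation never completed
    print(counter.detect("op-2"))            # None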
A Roadmap To Post-Moore Era for Distributed Systems
Vincenzo De Maio, Atakan Aral, I. Brandić
We are reaching the limits of the von Neumann computing architectures (also called the Moore's law era), as performance no longer grows for free simply by shrinking transistor features. As one consequence, we are experiencing the rise of highly specialized architectures, ranging from neuromorphic to quantum computing, that exploit completely different physical phenomena and demand the development of entirely new architectures - architectures that, however, can perform computations with a fraction of the energy needed by the von Neumann architecture. Thus, we are experiencing a paradigm shift from the generalized architectures of the von Neumann era to highly specialized architectures in the Post-Moore era, where we expect the coexistence of multiple types of architectures specialized for different types of computation. In this paper, we discuss the implications of the post-Moore era for distributed systems.
{"title":"A Roadmap To Post-Moore Era for Distributed Systems","authors":"Vincenzo De Maio, Atakan Aral, I. Brandić","doi":"10.1145/3524053.3542747","DOIUrl":"https://doi.org/10.1145/3524053.3542747","url":null,"abstract":"We are reaching the limits of the von Neumann computing architectures (also called Moore's law era) as there is no free ride of the performance growth from simply shrinking the transistor features. As one of the consequences, we experience the rise of highly specialized architectures ranging from neuromorphic to quantum computing, exploiting completely different physical phenomena and demanding the development of entirely new architectures - that, however, can perform the computations within a fraction of the energy needed by the von Neumann architecture. Thus, we experience the paradigm shift from generalized architectures of the Von Neumann era to highly specialized architectures in the Post-Moore era where we expect the coexistence of multiple types of architectures specialized for different types of computation. In this paper, we discuss the implications of the post-Moore era for distributed systems.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127729497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cascade: An Edge Computing Platform for Real-time Machine Intelligence
Weijia Song, Yuting Yang, Thompson Liu, Andrea Merlina, Thiago Garrett, R. Vitenberg, Lorenzo Rosa, Aahil Awatramani, Zheng Wang, K. Birman
Intelligent IoT is a prerequisite for societal priorities such as a smart power grid, smart urban infrastructures, and smart highways. These applications bring requirements such as real-time guarantees, data and action consistency, fault tolerance, high availability, temporal data indexing, scalability, and even self-organization and self-stabilization. Existing platforms are oriented towards asynchronous, out-of-band upload of data to the cloud: important functionality, but not enough to address the need. Cornell's Cascade project seeks to close the gap by creating a new platform for hosting ML and AI, optimized to achieve sharply lower delay and substantially higher bandwidth than any existing platform. At the same time, Cascade introduces much stronger guarantees - a mix that we believe will be particularly appealing in applications where events should trigger a quick and trustworthy response. This short paper is intended as a brief overview of the effort, with details to be published elsewhere.
{"title":"Cascade: An Edge Computing Platform for Real-time Machine Intelligence","authors":"Weijia Song, Yuting Yang, Thompson Liu, Andrea Merlina, Thiago Garrett, R. Vitenberg, Lorenzo Rosa, Aahil Awatramani, Zheng Wang, K. Birman","doi":"10.1145/3524053.3542741","DOIUrl":"https://doi.org/10.1145/3524053.3542741","url":null,"abstract":"Intelligent IoT is a prerequisite for societal priorities such as a smart power grid, smart urban infrastructures and smart highways. These applications bring requirements such as real-time guarantees, data and action consistency, fault-tolerance, high availability, temporal data indexing, scalability, and even self-organization and self-stabilization. Existing platforms are oriented towards asynchronous, out of band upload of data to the cloud: Important functionality, but not enough to address the need. Cornell's Cascade project seeks to close the gap by creating a new platform for hosting ML and AI, optimized to achieve sharply lower delay and substantially higher bandwidth than in any existing platform. At the same time, Cascade introduces much stronger guarantees - a mix that we believe will be particularly appealing in applications where events should trigger a quick and trustworthy response. This short paper is intended as a brief overview of the effort, with details to be published elsewhere.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114768352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Research Summary: Deterministic, Explainable and Efficient Stream Processing
Dimitrios Palyvos-Giannas, M. Papatriantafilou, Vincenzo Gulisano
The vast amounts of data collected and processed by technologies such as Cyber-Physical Systems require new processing paradigms that can keep up with the increasing data volumes. Edge computing and stream processing are two such paradigms that, combined, allow users to process unbounded datasets in an online manner, delivering high-throughput, low-latency insights. Moving stream processing to the edge introduces challenges related to the heterogeneity and resource constraints of the processing infrastructure. In this work, we present state-of-the-art research results that improve the facilities of Stream Processing Engines (SPEs) with data provenance, custom scheduling, and other techniques that can support the usability and performance of streaming applications, spanning through the edge-cloud contexts, as needed.
{"title":"Research Summary: Deterministic, Explainable and Efficient Stream Processing","authors":"Dimitrios Palyvos-Giannas, M. Papatriantafilou, Vincenzo Gulisano","doi":"10.1145/3524053.3542750","DOIUrl":"https://doi.org/10.1145/3524053.3542750","url":null,"abstract":"The vast amounts of data collected and processed by technologies such as Cyber-Physical Systems require new processing paradigms that can keep up with the increasing data volumes. Edge computing and stream processing are two such paradigms that, combined, allow users to process unbounded datasets in an online manner, delivering high-throughput, low-latency insights. Moving stream processing to the edge introduces challenges related to the heterogeneity and resource constraints of the processing infrastructure. In this work, we present state-of-the-art research results that improve the facilities of Stream Processing Engines (SPEs) with data provenance, custom scheduling, and other techniques that can support the usability and performance of streaming applications, spanning through the edge-cloud contexts, as needed.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126637672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Drone-Truck Cooperated Delivery Under Time Varying Dynamics
A. Khanda, Federico Coró, Sajal K. Das
Rapid technological developments in autonomous unmanned aerial vehicles (or drones) could soon lead to their large-scale use in the last-mile delivery of products. However, drones have a number of limitations, such as a restricted energy budget and limited carrying capacity. Trucks, on the other hand, have a larger carrying capacity, but they cannot reach all places easily. Intriguingly, last-mile delivery cooperation between drones and trucks can synergistically improve delivery efficiency. In this paper, we present a drone-truck cooperated delivery framework under time-varying dynamics. Our framework minimizes the total delivery time while considering low energy consumption as the secondary objective. The empirical results support our claim and show that our algorithm completes deliveries in a time-efficient manner while saving energy.
{"title":"Drone-Truck Cooperated Delivery Under Time Varying Dynamics","authors":"A. Khanda, Federico Coró, Sajal K. Das","doi":"10.1145/3524053.3542743","DOIUrl":"https://doi.org/10.1145/3524053.3542743","url":null,"abstract":"Rapid technological developments in autonomous unmanned aerial vehicles (or drones) could soon lead to their large-scale implementation in the last-mile delivery of products. However, drones have a number of problems such as limited energy budget, limited carrying capacity, etc. On the other hand, trucks have a larger carrying capacity, but they cannot reach all the places easily. Intriguingly, last-mile delivery cooperation between drones and trucks can synergistically improve delivery efficiency. In this paper, we present a drone-truck co-operated delivery framework under time-varying dynamics. Our framework minimizes the total delivery time while considering low energy consumption as the secondary objective. The empirical results support our claim and show that our algorithm can help to complete the deliveries time efficiently and saves energy.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123674572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
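The objective described in the abstract above, minimizing total delivery time with energy consumption as a secondary criterion, is a lexicographic optimization. A minimal sketch with hypothetical numbers (not the paper's algorithm or dataset) follows.

    # Minimal sketch with hypothetical data: pick the plan with the smallest
    # total delivery time, breaking ties by lower energy consumption
    # (a lexicographic objective).
    candidate_plans = [
        {"route": "truck only",      "time_min": 95, "energy_kwh": 14.0},
        {"route": "drone + truck A", "time_min": 70, "energy_kwh": 9.5},
        {"route": "drone + truck B", "time_min": 70, "energy_kwh": 8.2},
    ]

    best = min(candidate_plans, key=lambda p: (p["time_min"], p["energy_kwh"]))
    print(best["route"])   # "drone + truck B": same time as plan A, but less energy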
Exploring the use of Strongly Consistent Distributed Shared Memory in 3D NVEs
T. Hadjistasi, N. Nicolaou, E. Stavrakis
Virtual and Augmented Reality is one of the key driving technologies of the 4th Industrial Revolution, which is expected to radically disrupt almost every business sector and transform the way we live and interact with our environment and each other. End-user devices will soon enable users to immerse themselves in 3D Virtual Environments (VEs) that offer access to remote services, such as health care, training and education, entertainment, and social interaction. The advent of fast, highly available network connectivity, in combination with affordable 3D hardware (GPUs, VR/AR HMDs, etc.), has made Networked Virtual Environments (NVEs) possible and available to multiple simultaneous end-users beyond the confines of expensive purpose-built 3D facilities and laboratories. However, the algorithms that make today's NVEs possible are already reaching their limits: they prove unreliable, suffer from asynchronies, and are deployed over an inherently fault-prone network infrastructure. Current distributed architectures used in NVEs handle concurrency either by providing weak consistency guarantees (e.g., eventual consistency) or by relying on the bounded life span of inconsistent states. Although sufficient for non-critical yet time-sensitive applications, those solutions will be incapable of handling the next generation of interactive Virtual Environments, where precise consistency guarantees will be required. Thus, new scalable, robust, and responsive strategies that can support the needs of the NVEs of tomorrow are necessary. Recent scientific works are shifting the viewpoint on the practicality of strongly consistent distributed storage by proposing latency-efficient algorithms for atomic R/W Distributed Shared Memory (DSM) with provable consistency guarantees. In this work we focus on transforming the theoretical findings on DSMs into tangible implementations and on investigating the practicality of those algorithmic solutions in Virtual Environments.
{"title":"Exploring the use of Strongly Consistent Distributed Shared Memory in 3D NVEs","authors":"T. Hadjistasi, N. Nicolaou, E. Stavrakis","doi":"10.1145/3524053.3542748","DOIUrl":"https://doi.org/10.1145/3524053.3542748","url":null,"abstract":"Virtual and Augmented Reality is one of the key driving technologies of the 4th Industrial Revolution, which is expected to radically disrupt almost every business sector and transform the way we live and interact with our environment and each other. End-user devices will soon enable users to immerse in 3D Virtual Environments (VEs) that offer access to remote services, such as health care, training and education, entertainment and social interaction. The advent of fast highly-available network connectivity in combination with afford- able 3D hardware (GPUs, VR/AR HMDs, etc.) has enabled making Networked Virtual Environments (NVEs) possible and available to multiple simultaneous end-users beyond the confines of expensive purpose-built 3D facilities and laboratories. However, the algorithms making possible the NVEs of today are already reaching their limits, proving unreliable, suffer asynchronies and deployed over an inherently fault-prone network infrastructure. Current developments of distributed architectures used in NVEs handle concurrency by either providing weak consistency guarantees (e.g., eventual consistency), or by relying on the bounded life span of inconsistent states. Although sufficient for non-critical, yet time sensitive applications, those solutions will be incapable of handling the next generation of interactive Virtual Environments, where precise consistency guarantees will be required. Thus, new scalable, robust, and responsive strategies that can support the needs of the NVEs of tomorrow are necessary. Recent scientific works are shifting the viewpoint around the practicality of strongly consistent distributed storage spaces by proposing latency-efficient algorithms of atomic R/W Distributed Shared Memory (DSM) with provable consistency guarantees. In this work we focus on transforming the theoretical findings of DSMs into tangible implementations and in investigating the practicality of those algorithmic solutions in Virtual Environments.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116050334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
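For readers unfamiliar with atomic R/W registers, the sketch below illustrates the classic majority-quorum intuition behind such DSM algorithms: any write majority intersects any read majority, so a reader always observes the latest completed write. This is a textbook-style, single-writer illustration in plain Python, not the latency-efficient algorithms the authors implement.

    # Textbook-style sketch of the majority-quorum idea behind atomic R/W
    # registers (single writer, no networking); replicas are plain dicts here.
    class MajorityRegister:
        def __init__(self, n_replicas):
            self.replicas = [{"ts": 0, "value": None} for _ in range(n_replicas)]
            self.quorum = n_replicas // 2 + 1
            self.ts = 0                                # the single writer's logical timestamp

        def write(self, value):
            self.ts += 1
            for replica in self.replicas[: self.quorum]:   # any majority of replicas
                replica["ts"], replica["value"] = self.ts, value

        def read(self):
            replies = self.replicas[-self.quorum:]         # another majority; must overlap the write's
            latest = max(replies, key=lambda r: r["ts"])   # the overlap holds the newest timestamp
            return latest["value"]

    register = MajorityRegister(5)
    register.write("avatar position (3, 1, 7)")
    print(register.read())   # overlapping majorities guarantee the latest write is observed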
Colder Than the Warm Start and Warmer Than the Cold Start! Experience the Spawn Start in FaaS Providers
S. Ristov, Christian Hollaus, Mika Hautz
Many researchers have reported considerable delays of up to a few seconds when invoking serverless functions for the first time. This phenomenon, known as a cold start, has an even greater impact when users run multiple serverless functions orchestrated in a workflow. However, in many cases users need to instantly spawn numerous serverless functions, usually as part of parallel loops. In this paper, we introduce the spawn start and analyze the behavior of three Function-as-a-Service (FaaS) providers, AWS Lambda, Google Cloud Functions, and IBM Cloud Functions, when running parallel loops, both as warm and cold starts. We conducted a series of experiments and observed three insights that are beneficial for the research community. Firstly, the cold start on IBM Cloud Functions, which adds up to 2 s of delay compared to a warm start, is negligible compared to the spawn start, because the latter generates an additional 20 s of delay. Secondly, Google Cloud Functions' cold start is "warmer" than the warm start of the same serverless function. Finally, while Google Cloud Functions and IBM Cloud Functions run the same serverless function with low concurrency faster than AWS Lambda, the spawn start effect on Google Cloud Functions and IBM Cloud Functions makes AWS the preferred provider when spawning numerous serverless functions.
{"title":"Colder Than the Warm Start and Warmer Than the Cold Start! Experience the Spawn Start in FaaS Providers","authors":"S. Ristov, Christian Hollaus, Mika Hautz","doi":"10.1145/3524053.3542751","DOIUrl":"https://doi.org/10.1145/3524053.3542751","url":null,"abstract":"Many researchers reported considerable delay of up to a few seconds when invoking serverless functions for the first time. This phenomenon, which is known as a cold start, affects even more when users are running multiple serverless functions orchestrated in a workflow. However, in many cases users need to instantly spawn numerous serverless functions, usually as a part of parallel loops. In this paper, we introduce the spawn start and analyze the behavior of three Function-as-a-Service (FaaS) providers AWS Lambda, Google Cloud Functions, and IBM Cloud Functions when running parallel loops, both as warm and cold starts. We conducted a series of experiments and observed three insights that are beneficial for the research community. Firstly, cold start on IBM Cloud Functions, which is up to 2s delay compared to the warm start, is negligible compared to the spawn start because the latter generates additional 20s delay. Secondly, Google Cloud Functions' cold start is \"warmer\" than the warm start of the same serverless function. Finally, while Google Cloud Functions and IBM Cloud Functions run the same serverless function with low concurrency faster than AWS Lambda, the spawn start effect on Google Cloud Functions and IBM Cloud Functions makes AWS the preferred provider when spawning numerous serverless functions.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129347869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
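A rough sketch of the kind of measurement that exposes the spawn start described above: fire many concurrent invocations of one function and record per-call latency, so the tail of the distribution reveals the extra delay of freshly spawned instances. This is our own illustration against AWS Lambda via boto3, not the authors' benchmark harness; the function name is a placeholder.

    # Rough sketch (not the authors' harness): invoke one AWS Lambda function
    # N times concurrently and record per-call latency; the tail of the
    # distribution exposes the extra delay of freshly spawned instances.
    # "my-benchmark-fn" is a placeholder name.
    import json
    import time
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    client = boto3.client("lambda")

    def timed_invoke(i):
        start = time.perf_counter()
        client.invoke(FunctionName="my-benchmark-fn", Payload=json.dumps({"request": i}))
        return time.perf_counter() - start

    N = 200                                    # size of the parallel loop
    with ThreadPoolExecutor(max_workers=N) as pool:
        latencies = sorted(pool.map(timed_invoke, range(N)))

    print("median:", latencies[N // 2], "p95:", latencies[int(N * 0.95)])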
Towards an Approximation-Aware Computational Workflow Framework for Accelerating Large-Scale Discovery Tasks: Invited paper
Michael Johnston, V. Vassiliadis
The use of approximation is fundamental in computational science. Almost all computational methods adopt approximations in some form in order to obtain a favourable cost/accuracy trade-off and there are usually many approximations that could be used. As a result, when a researcher wishes to measure a property of a system with a computational technique, they are faced with an array of options. Current computational workflow frameworks focus on helping researchers automate a sequence of steps on a particular platform. The aim is often to obtain a computational measurement of a property. However these frameworks are unaware that there may be a large number of ways to do so. As such, they cannot support researchers in making these choices during development or at execution-time. We argue that computational workflow frameworks should be designed to be approximation-aware - that is, support the fact that a given workflow description represents a task that could be performed in different ways. This is key to unlocking the potential of computational workflows to accelerate discovery tasks, particularly those involving searches of large entity spaces. It will enable efficiently obtaining measurements of entity properties, given a set of constraints, by directly leveraging the space of choices available. In this paper we describe the basic functions that an approximation-aware workflow framework should provide, how those functions can be realized in practice, and illustrate some of the powerful capabilities it would enable, including approximate memoization, surrogate model support, and automated workflow composition.
{"title":"Towards an Approximation-Aware Computational Workflow Framework for Accelerating Large-Scale Discovery Tasks: Invited paper","authors":"Michael Johnston, V. Vassiliadis","doi":"10.1145/3524053.3542746","DOIUrl":"https://doi.org/10.1145/3524053.3542746","url":null,"abstract":"The use of approximation is fundamental in computational science. Almost all computational methods adopt approximations in some form in order to obtain a favourable cost/accuracy trade-off and there are usually many approximations that could be used. As a result, when a researcher wishes to measure a property of a system with a computational technique, they are faced with an array of options. Current computational workflow frameworks focus on helping researchers automate a sequence of steps on a particular platform. The aim is often to obtain a computational measurement of a property. However these frameworks are unaware that there may be a large number of ways to do so. As such, they cannot support researchers in making these choices during development or at execution-time. We argue that computational workflow frameworks should be designed to beapproximation-aware - that is, support the fact that a given workflow description represents a task thatcould be performed in different ways. This is key to unlocking the potential of computational workflows to accelerate discovery tasks, particularly those involving searches of large entity spaces. It will enable efficiently obtaining measurements of entity properties, given a set of constraints, by directly leveraging the space of choices available. In this paper we describe the basic functions that an approximation-aware workflow framework should provide, how those functions can be realized in practice, and illustrate some of the powerful capabilities it would enable, including approximate memoization, surrogate model support, and automated workflow composition.","PeriodicalId":254571,"journal":{"name":"Proceedings of the 2022 Workshop on Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123996386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
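The approximate-memoization capability mentioned in the abstract above can be illustrated with a small sketch (our own, not the framework's implementation): results are cached under a coarsened key, so a new request whose inputs fall within a tolerance of a previous one reuses the stored result instead of recomputing it.

    # Illustration of approximate memoization (our sketch, not the framework):
    # cache results under a coarsened key so that inputs within the tolerance
    # of a previous request reuse the stored result instead of recomputing.
    def approx_memoize(tolerance):
        cache = {}
        def decorator(expensive_fn):
            def wrapper(x):
                key = round(x / tolerance)        # bucket the input by tolerance
                if key not in cache:
                    cache[key] = expensive_fn(x)  # compute at most once per bucket
                return cache[key]
            return wrapper
        return decorator

    @approx_memoize(tolerance=0.1)
    def simulate(x):
        print(f"running expensive simulation for x = {x}")
        return x ** 2

    simulate(1.00)   # computes
    simulate(1.03)   # within tolerance of the previous input: reuses the cached result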