
Latest publications from the 2021 IEEE 5th International Conference on Fog and Edge Computing (ICFEC)

A Privacy Preserving System for AI-assisted Video Analytics
Pub Date : 2021-05-01 DOI: 10.1109/ICFEC51620.2021.00018
Clemens Lachner, T. Rausch, S. Dustdar
The emerging Edge computing paradigm facilitates the deployment of distributed AI applications and hardware capable of processing video data in real time. AI-assisted video analytics can provide valuable information and benefits for parties in various domains. Face recognition, object detection, or movement tracing are prominent examples enabled by this technology. However, the widespread deployment of such mechanisms in public areas is a growing cause of privacy and security concerns. Data protection strategies need to be appropriately designed and correctly implemented in order to mitigate the associated risks. Most existing approaches focus on privacy- and security-related operations on the video stream itself or on protecting its transmission. In this paper, we propose a privacy-preserving system for AI-assisted video analytics that extracts relevant information from video data and governs secure access to that information. The system ensures that applications leveraging the extracted data have no access to the video stream. An attribute-based authorization scheme allows applications to query only a predefined subset of the extracted data. We demonstrate the feasibility of our approach by evaluating an application motivated by the recent COVID-19 pandemic, deployed on typical edge computing infrastructure.
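A minimal Python sketch of the attribute-based authorization idea described above: applications query extracted metadata through a policy that exposes only their permitted attributes, while the video stream itself is never reachable. The attribute names, policy layout, and data records are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of attribute-based access to extracted video metadata.
# The policy, attribute names, and records are illustrative assumptions,
# not the authors' actual implementation.

# Metadata extracted at the edge; the raw video stream is never exposed here.
extracted_records = [
    {"timestamp": 1620000000, "person_count": 4, "mask_ratio": 0.75, "face_ids": ["a1", "b2"]},
    {"timestamp": 1620000060, "person_count": 2, "mask_ratio": 1.00, "face_ids": ["c3"]},
]

# Each application is granted a fixed subset of attributes it may query.
authorization_policy = {
    "occupancy-dashboard": {"timestamp", "person_count"},
    "health-compliance-app": {"timestamp", "mask_ratio"},
}

def query(app_id, requested_attrs):
    """Return only the requested attributes the application is authorized to see."""
    allowed = authorization_policy.get(app_id, set())
    granted = set(requested_attrs) & allowed
    if not granted:
        raise PermissionError(f"{app_id} is not authorized for {requested_attrs}")
    return [{key: record[key] for key in granted} for record in extracted_records]

# The dashboard can read occupancy figures but never face identifiers.
print(query("occupancy-dashboard", {"timestamp", "person_count"}))
```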
Citations: 4
PA-Offload: Performability-Aware Adaptive Fog Offloading for Drone Image Processing
Pub Date : 2021-05-01 DOI: 10.1109/ICFEC51620.2021.00017
F. Machida, E. Andrade
Smart drone systems have built-in computing resources for processing real-world images captured by cameras to recognize their surroundings. Due to limited resources and battery constraints, resource-intensive image processing tasks cannot always run on drones. Thus, offloading computation tasks to any available node in a fog computing infrastructure can be considered a promising solution. An important challenge when applying fog offloading is deciding when to start or stop offloading, taking into account performance and availability impacts under varying workloads and communication link states. In this paper, we present a performability-aware adaptive offloading scheme called PA-Offload that controls the offloading of image processing tasks from a drone to a fog node. To incorporate uncertainty factors, we introduce Stochastic Reward Nets (SRNs) to model the entire system behavior and compute a performability metric that is a composite measure of service throughput and system availability. The estimated performability value is then used to determine when to start or stop the offloading in order to make a better trade-off between performance and availability. Our numerical experiments show the effectiveness of PA-Offload in terms of performability compared to non-adaptive fog offloading schemes.
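The decision rule implied by the abstract can be sketched as follows: compute a performability value that combines throughput and availability for the local and the offloaded path, and offload only when the fog path scores higher. The simple product formula and the example numbers are assumptions standing in for the paper's Stochastic Reward Net model.

```python
# Simplified sketch of a performability-driven offloading switch.
# The product formula below (throughput weighted by availability) is an
# assumption standing in for the paper's Stochastic Reward Net model.

def performability(throughput_fps, availability):
    """Composite measure of service throughput and system availability."""
    return throughput_fps * availability

def should_offload(local_fps, drone_avail, offload_fps, link_avail, fog_avail):
    """Offload image processing only if the fog path scores higher than onboard processing."""
    local_score = performability(local_fps, drone_avail)
    # The offloaded path depends on both the wireless link and the fog node.
    offload_score = performability(offload_fps, link_avail * fog_avail)
    return offload_score > local_score

# Example: a healthy link favours offloading, a congested one does not.
print(should_offload(local_fps=5.0, drone_avail=0.99,
                     offload_fps=20.0, link_avail=0.95, fog_avail=0.95))  # True
print(should_offload(local_fps=5.0, drone_avail=0.99,
                     offload_fps=20.0, link_avail=0.20, fog_avail=0.95))  # False
```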
Citations: 5
Priority-enabled Load Balancing for Dispersed Computing
Pub Date : 2021-05-01 DOI: 10.1109/ICFEC51620.2021.00009
Aaron M. Paulos, S. Dasgupta, J. Beal, Yuanqiu Mo, Jon Schewe, Alexander Wald, P. Pal, R. Schantz, J. B. Lyles
Opportunistic managed access to local in-network compute resources can improve the performance of distributed applications and reduce the dependence on shared network resources. Instead of backhauling application data to a centralized cloud data center for processing, networked services may be adaptively and continuously dispersed into shared compute resources that are closer to the source of need. While this approach has several benefits, support for mission-aware access to computation is often an afterthought, and is implemented as a brittle extension over traditional load-balancer solutions. In this work, we investigate the design of two priority-aware resource allocation strategies and two load-balancing dispatching strategies as first-class citizens in an open-source dispersed computing middleware. We present a control-theoretic analysis of these load-balancing primitives to identify weaknesses and strengths in our design, and recommend future directions. In parallel, we prototype two priority-aware allocation algorithms to validate our priority predictions. In initial experiments, our prototype shows substantial gains in processing prioritized load. Finally, we make our source code and experimental configurations open source.
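As a rough illustration of priority-enabled dispatching, the sketch below reserves a small amount of headroom on each node for high-priority tasks and routes low-priority ones only into the remaining capacity. The reservation heuristic, node model, and priority encoding are assumptions for illustration, not the middleware's actual strategies.

```python
# Illustrative sketch of priority-aware dispatching to in-network compute nodes.
# Higher `priority` means more important; the reservation heuristic and node
# model are assumptions for illustration, not the middleware's actual strategies.

class Node:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.load = name, capacity, 0

def dispatch(priority, nodes, reserve=2):
    """Pick the least-loaded node; keep `reserve` slots free for high-priority tasks."""
    for node in sorted(nodes, key=lambda n: n.load / n.capacity):
        free = node.capacity - node.load
        # Low-priority tasks may not consume the reserved headroom.
        if free > 0 and (priority > 0 or free > reserve):
            node.load += 1
            return node
    return None  # queue or reject when no node qualifies

nodes = [Node("edge-1", 4), Node("edge-2", 4)]
for priority in (1, 0, 0, 0):
    target = dispatch(priority, nodes)
    print(priority, target.name if target else "rejected")
```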
Citations: 3
Multilayer Resource-aware Partitioning for Fog Application Placement
Pub Date : 2021-05-01 DOI: 10.1109/ICFEC51620.2021.00010
Zahra Najafabadi Samani, Nishant Saurabh, R. Prodan
Fog computing has emerged as a crucial platform for the deployment of IoT applications. The complexity of such applications requires methods that handle the resource diversity and network structure of Fog devices, while maximizing service placement and reducing resource wastage. Prior studies in this domain primarily focus on optimizing application-specific requirements and fail to address the network topology combined with the different types of resources encountered in Fog devices. To overcome these problems, we propose a multilayer resource-aware partitioning method to minimize resource wastage and maximize the service placement and deadline satisfaction rates in a Fog infrastructure with high multi-user application placement requests. Our method represents the heterogeneous Fog resources as a multilayered network graph and partitions them based on network topology and resource features. Afterwards, it identifies the appropriate device partitions for placing an application according to its requirements, which need to overlap in the same network topology partition. Simulation results show that our multilayer resource-aware partitioning method is able to place twice as many services, satisfy deadlines for three times as many application requests, and reduce resource wastage by up to 15–32 times compared to two availability-aware and resource-aware state-of-the-art methods.
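The following toy sketch illustrates the placement constraint mentioned above, that all services of an application must land in the same network topology partition: devices are grouped by zone and a zone is accepted only if it can host every service. The device attributes, requirement format, and matching rule are invented for illustration and are far simpler than the paper's multilayer graph partitioning.

```python
# Toy sketch: group fog devices by network partition (zone) and pick a zone
# that can host every service of the application, so that the placement
# overlaps in a single topology partition.  Device attributes and the
# requirement format are invented for illustration.
from collections import defaultdict

devices = [
    {"id": "d1", "zone": "zoneA", "cpu": 4, "gpu": False},
    {"id": "d2", "zone": "zoneA", "cpu": 8, "gpu": True},
    {"id": "d3", "zone": "zoneB", "cpu": 2, "gpu": True},
]

def place(services):
    """Return (service, device) pairs if some zone can host the whole application."""
    by_zone = defaultdict(list)
    for dev in devices:
        by_zone[dev["zone"]].append(dev)
    for zone, members in by_zone.items():
        placement, used = [], set()
        for svc in services:
            match = next((d for d in members
                          if d["id"] not in used
                          and d["cpu"] >= svc["cpu"]
                          and (d["gpu"] or not svc["gpu"])), None)
            if match is None:
                break  # this zone cannot host the full application
            used.add(match["id"])
            placement.append((svc["name"], match["id"]))
        else:
            return placement
    return None

app = [{"name": "detector", "cpu": 4, "gpu": True}, {"name": "api", "cpu": 2, "gpu": False}]
print(place(app))
```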
Citations: 9
Mapping IoT Applications on the Edge to Cloud Continuum with a Filter Stream Model
Pub Date : 2021-05-01 DOI: 10.1109/ICFEC51620.2021.00016
Shuangsheng Lou, G. Agrawal
In the context of developing streaming applications for IoT (or Edge Computing) environments, this paper presents a framework for automated deployment with an emphasis on optimizing latency in the presence of resource constraints. A dynamic programming based deployment algorithm is developed to make deployment decisions. With battery power being a key constraint, a major component of our work is a power model that helps assess the power consumption of the edge devices at runtime. Using three applications, we show large reductions in both power consumption and response latency with our framework, as compared to a baseline involving cloud-only execution.
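A minimal dynamic-programming sketch of the placement decision described above: each stage of a linear filter pipeline is assigned to the edge or the cloud so that compute plus transfer latency is minimised. The stage costs and transfer delay are invented, and the battery power model that is central to the paper is omitted here for brevity.

```python
# Minimal dynamic-programming sketch for placing a linear filter pipeline on
# the edge or in the cloud to minimise end-to-end latency.  Stage costs and the
# transfer delay are invented; the paper's algorithm additionally accounts for
# edge battery power, which is omitted here.

stages = ["decode", "detect", "aggregate"]
compute_ms = {          # per-stage processing latency at each tier
    "edge":  {"decode": 5, "detect": 60, "aggregate": 4},
    "cloud": {"decode": 2, "detect": 15, "aggregate": 1},
}
transfer_ms = 40        # cost of moving intermediate data between tiers

def best_placement():
    # dp[tier] = (latency so far, placement) with the data currently at `tier`
    dp = {"edge": (0, []), "cloud": (transfer_ms, [])}   # source data starts at the edge
    for stage in stages:
        nxt = {}
        for tier in ("edge", "cloud"):
            options = []
            for prev_tier, (cost, path) in dp.items():
                hop = transfer_ms if prev_tier != tier else 0
                options.append((cost + hop + compute_ms[tier][stage], path + [tier]))
            nxt[tier] = min(options)
        dp = nxt
    return min(dp.values())

latency, placement = best_placement()
print(latency, placement)
```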
Citations: 0
Reducing the Mission Time of Drone Applications through Location-Aware Edge Computing
Pub Date : 2021-05-01 DOI: 10.1109/ICFEC51620.2021.00014
Theodoros Kasidakis, Giorgos Polychronis, Manos Koutsoubelias, S. Lalis
In data-driven applications that go beyond simple data collection, drones may need to process sensor measurements at certain locations during the mission. However, the onboard computing platforms typically have strong resource limitations, which may lead to significant delays and long mission times. To address this problem, we explore the potential of offloading heavyweight computations from the drone to nearby edge computing infrastructure. We discuss a concrete implementation for a service-oriented application software stack, which makes offloading decisions based on the expected service invocation time and the locations of the servers expected to be available in the mission area. We evaluate our implementation using an experimental setup that combines a hardware-in-the-loop and software-in-the-loop configuration. Our results show that the proposed approach can reduce the total mission time significantly, by up to 48% vs local-only processing and by 10% vs more naive opportunistic offloading, depending on the mission scenario.
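A hedged sketch of the location-aware offloading decision: estimate the round-trip service time for each edge server that should be reachable along the mission and invoke the best one only if it beats onboard execution. The latency model, coordinates, and server parameters are illustrative assumptions.

```python
# Hedged sketch of the offloading decision: invoke a nearby edge server only
# when its expected service time beats onboard execution.  The latency model,
# coordinates, and server parameters are invented for illustration.
import math

def expected_remote_ms(drone_xy, server, payload_mb):
    distance = math.dist(drone_xy, server["xy"])
    rtt_ms = server["base_rtt_ms"] + 0.5 * distance          # crude distance penalty
    upload_ms = payload_mb * 8 / server["uplink_mbps"] * 1000
    return rtt_ms + upload_ms + server["exec_ms"]

def choose_target(drone_xy, servers, local_exec_ms, payload_mb):
    best = min(servers, key=lambda s: expected_remote_ms(drone_xy, s, payload_mb))
    remote_ms = expected_remote_ms(drone_xy, best, payload_mb)
    if remote_ms < local_exec_ms:
        return "edge:" + best["name"], remote_ms
    return "onboard", local_exec_ms

servers = [
    {"name": "field-gw", "xy": (120, 80), "base_rtt_ms": 10, "uplink_mbps": 40, "exec_ms": 300},
    {"name": "hangar",   "xy": (900, 40), "base_rtt_ms": 25, "uplink_mbps": 80, "exec_ms": 150},
]
print(choose_target(drone_xy=(100, 100), servers=servers, local_exec_ms=2500, payload_mb=5.0))
```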
Citations: 2
CHANGE: Delay-Aware Service Function Chain Orchestration at the Edge
Pub Date : 2021-05-01 DOI: 10.1109/ICFEC51620.2021.00011
Lei Wang, Mahdi Dolati, Majid Ghaderi
In Mobile Edge Computing (MEC), the network’s edge is equipped with computing and storage resources in order to reduce latency by minimizing communication with remote clouds. However, the available computing capacity at the edge is limited compared to that of remote clouds. A promising solution for efficient utilization of the limited capacity at the edge is fine-grained processing of user demands via Virtual Network Functions (VNFs). In this approach, user service demands are expressed as Service Function Chains (SFCs), which are composed of virtual network functions. Such service composition allows constituent VNFs to be flexibly deployed at the edge or in the cloud such that the service latency is minimized. The increasing number of users, however, challenges the scalability of system-managed SFC orchestration. To address this problem, we propose a user-managed online SFC orchestration framework at the edge of the network, called CHANGE, that minimizes service latency by jointly considering the effect of user mobility, edge capacity and service migration. We first present the theoretical foundations of CHANGE and then evaluate its performance via model-driven simulations and realistic Mininet-WiFi emulations. Our results show that CHANGE can improve latency performance by nearly 20% compared to other approaches.
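One ingredient of delay-aware, mobility-conscious SFC orchestration can be sketched as a migration check: after the user moves, migrate the chain to a closer edge site only if the per-request latency saving amortises the one-off migration cost. The latency figures and the amortisation rule are placeholders, not the paper's formulation.

```python
# Simplified sketch of a delay-aware migration check for a service function
# chain (SFC) after a user handover.  The latency figures, migration penalty,
# and amortisation rule are placeholders, not the paper's formulation.

def chain_latency_ms(vnf_exec_ms, access_rtt_ms):
    """End-to-end latency: access link plus the per-VNF processing delays."""
    return access_rtt_ms + sum(vnf_exec_ms)

def should_migrate(current_rtt_ms, candidate_rtt_ms, vnf_exec_ms,
                   migration_cost_ms, remaining_requests):
    """Migrate only if the per-request saving amortises the one-off migration cost."""
    saving = (chain_latency_ms(vnf_exec_ms, current_rtt_ms)
              - chain_latency_ms(vnf_exec_ms, candidate_rtt_ms))
    return saving > 0 and saving * remaining_requests > migration_cost_ms

# After a handover the old edge site is 35 ms away and the new one 8 ms away.
print(should_migrate(current_rtt_ms=35.0, candidate_rtt_ms=8.0,
                     vnf_exec_ms=[4.0, 6.0, 3.0],
                     migration_cost_ms=400.0, remaining_requests=20))  # True
```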
Citations: 4
Performance Evaluation of Some Adaptive Task Allocation Algorithms for Fog Networks
Pub Date : 2021-05-01 DOI: 10.1109/ICFEC51620.2021.00020
Ioanna-Vasiliki Stypsanelli, O. Brun, B. Prabhu
Fog Computing brings resources closer to the end user and improves the user experience. Tasks with stringent QoS requirements can be processed locally at the Edge, while more elastic ones can be sent to the Cloud. For the benefits of this flexible architecture to be realized, task allocation algorithms should be dynamic and adapt to the load in the Fog and in the Cloud. Using a discrete-event simulation approach, we evaluate the performance of four simple adaptive algorithms based on congestion estimation and compare them with the standard nearest-node algorithm that uses non-adaptive routing. We consider a setting in which base stations (access nodes) forward traffic to computing nodes (Fog and Cloud nodes) in a distributed way, without coordination or sharing of state information between the access and computing nodes. The algorithms are tested for their adaptability to sudden changes in the arrival rate of requests (to model peak hours) as well as robustness to the variance of the request-size distributions, in order to understand the advantages and drawbacks of each of them. They are shown to perform well in scenarios with and without offloading.
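One representative congestion-aware rule of the kind evaluated in the paper can be sketched as follows: instead of always picking the nearest node, route each task to the node with the lowest estimated response time (network delay plus queue backlog plus service time). The estimator and the numbers are generic assumptions, not a specific algorithm from the paper.

```python
# Sketch of one congestion-aware allocation rule: route each task to the node
# with the lowest estimated response time rather than the nearest node.  The
# estimator and the example numbers are generic assumptions, not a specific
# algorithm from the paper.

def estimated_response_ms(node):
    backlog_ms = node["queued_tasks"] * node["service_ms"]
    return node["network_ms"] + backlog_ms + node["service_ms"]

def allocate(nodes):
    return min(nodes, key=estimated_response_ms)

nodes = [
    {"name": "edge-near", "network_ms": 5,  "service_ms": 30, "queued_tasks": 6},
    {"name": "edge-far",  "network_ms": 15, "service_ms": 30, "queued_tasks": 1},
    {"name": "cloud",     "network_ms": 90, "service_ms": 10, "queued_tasks": 0},
]
# The nearest node is congested, so the allocator prefers the farther edge node.
print(allocate(nodes)["name"])   # edge-far
```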
Citations: 2
TOD: Transprecise Object Detection to Maximise Real-Time Accuracy on the Edge
Pub Date : 2021-05-01 DOI: 10.1109/ICFEC51620.2021.00015
JunKyu Lee, B. Varghese, Roger Francis Woods, H. Vandierendonck
Real-time video analytics on the edge is challenging as the computationally constrained resources typically cannot analyse video streams at full fidelity and frame rate, which results in loss of accuracy. This paper proposes a Transprecise Object Detector (TOD) which maximises real-time object detection accuracy on an edge device by selecting an appropriate Deep Neural Network (DNN) on the fly with negligible computational overhead. TOD makes two key contributions over the state of the art: (1) TOD leverages characteristics of the video stream, such as object size and speed of movement, to identify networks with high prediction accuracy for the current frames; (2) it selects the best-performing network based on projected accuracy and computational demand using an effective and low-overhead decision mechanism. Experimental evaluation on a Jetson Nano demonstrates that TOD improves average object detection precision by 34.7% over the YOLOv4-tiny-288 model on the MOT17Det dataset. On the MOT17-05 test dataset, TOD utilises only 45.1% of the GPU resource and 62.7% of the GPU board power without losing accuracy, compared to the YOLOv4-416 model. We expect that TOD will maximise the application of edge devices to real-time object detection, since TOD maximises real-time object detection accuracy on a given edge device according to dynamic input features without increasing inference latency in practice.
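The runtime model-selection idea can be sketched as choosing the cheapest detector whose operating range covers the currently observed object size and motion speed. The thresholds, costs, and the selection rule below are illustrative assumptions; only the YOLOv4 variant names come from the abstract.

```python
# Rough sketch of runtime model selection: pick the cheapest detector whose
# operating range covers the observed object size and motion speed.  The
# thresholds and costs are illustrative assumptions; only the YOLOv4 variant
# names come from the abstract.

MODELS = [
    {"name": "yolov4-tiny-288", "cost": 1.0, "min_obj_px": 60, "max_speed_px": 15},
    {"name": "yolov4-tiny-416", "cost": 1.8, "min_obj_px": 35, "max_speed_px": 30},
    {"name": "yolov4-416",      "cost": 4.0, "min_obj_px": 15, "max_speed_px": 60},
]

def select_model(avg_object_px, avg_speed_px):
    """Cheapest model whose operating range covers the current stream conditions."""
    for model in sorted(MODELS, key=lambda m: m["cost"]):
        if avg_object_px >= model["min_obj_px"] and avg_speed_px <= model["max_speed_px"]:
            return model["name"]
    return MODELS[-1]["name"]   # fall back to the most capable network

print(select_model(avg_object_px=120, avg_speed_px=10))  # large, slow objects -> yolov4-tiny-288
print(select_model(avg_object_px=25,  avg_speed_px=40))  # small, fast objects -> yolov4-416
```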
Citations: 9
Exploring Task Placement for Edge-to-Cloud Applications using Emulation
Pub Date : 2021-04-07 DOI: 10.1109/ICFEC51620.2021.00019
André Luckow, Kartik Rattan, S. Jha
A vast and growing number of IoT applications connect physical devices, such as scientific instruments, technical equipment, machines, and cameras, across heterogeneous infrastructure from the edge to the cloud to provide responsive, intelligent services while complying with privacy and security requirements. However, the integration of heterogeneous IoT, edge, and cloud technologies and the design of end-to-end applications that seamlessly work across multiple layers and types of infrastructure is challenging. A significant issue is resource management and the need to ensure that the right type and scale of resources is allocated on every layer to fulfill the application’s processing needs. As edge and cloud layers are increasingly tightly integrated, imbalanced resource allocations and sub-optimally placed tasks can quickly deteriorate the overall system performance. This paper proposes an emulation approach for the investigation of task placements across the edge-to-cloud continuum. We demonstrate that emulation can address the complexity and many degrees of freedom of the problem, allowing us to investigate essential deployment patterns and trade-offs. We evaluate our approach using a machine learning-based workload, demonstrating its validity by comparing emulation and real-world experiments. Further, we show that the right task placement strategy has a significant impact on performance – in our experiments, between 5% and 65% depending on the scenario.
Citations: 5