Pub Date: 2021-09-01 | DOI: 10.1109/EDGE53862.2021.00018
Sanaz Rabinia, Haydar Mehryar, Marco Brocanelli, Daniel Grosu
Edge computing allows end-user devices to offload heavy computation to nearby edge servers for reduced latency, maximized profit, and/or minimized energy consumption. Data-dependent tasks that analyze locally-acquired sensing data are among the most common candidates for task offloading in edge computing. As a result, the total latency and network load are affected by the total amount of data transferred from end-user devices to the selected edge servers. Most existing solutions for task allocation in edge computing do not take into consideration that some user tasks may actually operate on the same data items. Making the task allocation algorithm aware of the data-sharing characteristics of tasks can help reduce network load at a negligible profit loss by allocating more tasks that share data on the same server. In this paper, we formulate the data sharing-aware task allocation problem, which makes task allocation decisions that maximize profit and minimize network load by taking into account the data-sharing characteristics of tasks. In addition, because the problem is NP-hard, we design the DSTA algorithm, which finds a solution to the problem in polynomial time. We analyze the performance of the proposed algorithm against a state-of-the-art baseline that only maximizes profit. Our extensive analysis shows that DSTA reduces the data load on the network by about a factor of 8 while achieving, on average, total profit within a factor of 1.03 of the state-of-the-art.
Title: Data Sharing-Aware Task Allocation in Edge Computing Systems | Venue: 2021 IEEE International Conference on Edge Computing (EDGE)
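The abstract above does not spell out DSTA's internals; as a minimal, hypothetical sketch of the data-sharing-aware idea (all names and the per-server task capacity are illustrative assumptions, not the paper's method), a greedy allocator can prefer the server that already holds most of a task's data, so only the missing items must be transferred:

```python
# Hypothetical sketch of data-sharing-aware greedy task allocation.
# Function and variable names are illustrative, not from the paper.

def allocate_sharing_aware(tasks, servers, capacity):
    """Greedily place tasks, preferring servers already holding a task's data.

    tasks: list of (task_id, profit, data_items) with data_items a set.
    servers: list of server ids.
    capacity: max tasks per server (a simplifying assumption).
    """
    placement = {}                        # task_id -> server
    cached = {s: set() for s in servers}  # data items already on each server
    load = {s: 0 for s in servers}

    # Consider high-profit tasks first, approximating profit maximization.
    for task_id, profit, data in sorted(tasks, key=lambda t: -t[1]):
        # Prefer the server with the largest overlap with this task's data,
        # so only the missing items add to the network load.
        best = max(
            (s for s in servers if load[s] < capacity),
            key=lambda s: len(data & cached[s]),
            default=None,
        )
        if best is None:
            continue                      # no capacity left; skip this task
        placement[task_id] = best
        cached[best] |= data
        load[best] += 1
    return placement
```

Tasks that share data items naturally end up co-located, which is the network-load reduction the paper targets.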
Pub Date: 2021-09-01 | DOI: 10.1109/EDGE53862.2021.00021
Andrea Morichetta
A captivating set of hypotheses from the field of neuroscience suggests that human and animal brain mechanisms result from a few powerful principles. If proven accurate, these hypotheses could yield a deep understanding of how humans and animals cope with the unpredictability of events and with imagination. Modern distributed systems also deal with uncertain scenarios, where environments, infrastructures, and applications are widely diverse. In the scope of Edge-Fog-Cloud computing, leveraging these neuroscience-inspired principles and mechanisms could aid in building more flexible solutions able to generalize over different environments. In this work, we focus on approaches centered on high-level, general strategies, such as the Free Energy Principle and Global Neuronal Workspace theories. The goal of exploring these techniques is to introduce principles that can help us build distributed systems able to work jointly across the whole computing continuum, from the Edge to the Cloud, with self-adapting capabilities, that is, systems that deal with uncertainty and the need for generalization, which remains an open issue.
Title: A roadmap on learning and reasoning for distributed computing continuum ecosystems
Pub Date: 2021-09-01 | DOI: 10.1109/edge53862.2021.00009
Title: EDGE 2021 Program Committee
Pub Date: 2021-09-01 | DOI: 10.1109/EDGE53862.2021.00023
Kaustabha Ray, A. Banerjee
Multi-Access Edge Computing (MEC) is a promising new paradigm enabling low-latency access to services deployed on edge servers, averting the network latencies often encountered when accessing cloud services. The cornerstone of a MEC environment is the resource allocation policy used to partition and allocate computational resources, such as the bandwidth and memory available on an edge server, among the user service invocations that access these services. In this work, we propose a generic data-driven framework to model and analyze such MEC resource allocation policies. We model a MEC system as a Turn-Based Stochastic Multi-Player Game and use Probabilistic Model Checking to derive quantitative guarantees on resource allocation policies against requirements expressed in Probabilistic Alternating-Time Temporal Logic with Rewards. We present results on state-of-the-art MEC resource allocation policies to demonstrate the effectiveness of our framework.
Title: A Framework for Analyzing Resource Allocation Policies for Multi-Access Edge Computing
Pub Date: 2021-09-01 | DOI: 10.1109/EDGE53862.2021.00014
Hamta Sedghani, Federica Filippini, D. Ardagna
Artificial Intelligence (AI) and Deep Learning (DL) are pervasive today, with applications spanning from personal assistants to healthcare. The accelerated migration towards mobile computing and the Internet of Things, where a huge amount of data is generated by widespread end devices, is driving the rise of the edge computing paradigm, in which computing resources are distributed among devices with highly heterogeneous capacities. In this fragmented scenario, efficient component placement and resource allocation algorithms are crucial to best orchestrate the computing continuum resources. In this paper, we propose a tool to effectively address the component placement problem for AI applications at design time. Through a randomized greedy algorithm, our approach identifies the minimum-cost placement that provides performance guarantees across heterogeneous resources, including edge devices, cloud GPU-based Virtual Machines, and Function-as-a-Service solutions. Finally, we compare the randomized greedy method with the HyperOpt framework and demonstrate that our approach converges to a near-optimal solution much faster, especially in large-scale systems.
Title: A Random Greedy based Design Time Tool for AI Applications Component Placement and Resource Selection in Computing Continua
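The paper's cost and performance models are not given in the abstract; as a hypothetical sketch of the randomized greedy pattern it describes (per-resource costs and latencies, the candidate-pool size, and all names are illustrative assumptions), each restart places components greedily among the cheapest feasible resources with some randomization, and the best plan across restarts is kept:

```python
import random

# Hypothetical sketch of randomized greedy component placement under a
# response-time constraint; names and the additive latency model are
# illustrative, not the paper's formulation.

def randomized_greedy(components, resources, cost, latency, max_latency,
                      iters=100, seed=0):
    """components: list of component ids.
    resources: list of resource ids (edge device, cloud VM, FaaS, ...).
    cost[r], latency[r]: per-component cost/latency on resource r
    (a strong simplification: real models depend on the component too).
    """
    rng = random.Random(seed)
    best_plan, best_cost = None, float("inf")
    for _ in range(iters):
        plan, total_cost, total_latency = {}, 0.0, 0.0
        for c in components:
            # Keep only resources that still satisfy the latency budget.
            feasible = [r for r in resources
                        if total_latency + latency[r] <= max_latency]
            if not feasible:
                break                      # infeasible partial plan; restart
            # Randomize among the cheapest half (greedy plus noise).
            feasible.sort(key=lambda r: cost[r])
            r = rng.choice(feasible[: max(1, len(feasible) // 2)])
            plan[c] = r
            total_cost += cost[r]
            total_latency += latency[r]
        else:
            if total_cost < best_cost:
                best_plan, best_cost = plan, total_cost
    return best_plan, best_cost
```

Randomizing only within the cheapest candidates keeps each restart near-greedy while letting repeated restarts escape locally poor placements.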
Pub Date: 2021-09-01 | DOI: 10.1109/EDGE53862.2021.00013
Ziwen Zhou, Tianming Zhao, Wei Li, Albert Y. Zomaya
Owing to their flexibility and scalability, Mobile Edge Servers (MES) are involved in an increasing number of mobile edge computing applications. MES introduce the new challenge of online resource scheduling: serving different requests under limited energy while offering functionality similar to that of stationary edge servers. Previous studies consider the case of identical servers, which scales poorly and is hard to apply to real-world applications. This work proposes a novel model, the distributed k-server problem, that formulates MES resource scheduling to address heterogeneity in both servers and requests. We design an algorithm named DWFA, based on the efficient network flow-based Work Function Algorithm (WFA) for the classic k-server problem, as an immediate solution to the proposed problem. DWFA inherits the competitiveness of WFA but has high computational complexity. To further increase scalability via the computing power of MES, we parallelise DWFA into a distributed algorithm named FD-WFA, which significantly reduces the computational complexity and increases practicality. Extensive simulations verify the theoretical results and show the advantages of FD-WFA over the benchmarks.
Title: Distributed Online Resource Scheduling for Mobile Edge Servers
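WFA itself tracks the optimal offline cost over all server configurations and is considerably more involved than the abstract can convey. As a hypothetical illustration of the online k-server setting only (not WFA, and with 1-D locations as a simplifying assumption), the classic greedy baseline serves each request with the cheapest server move:

```python
# Toy illustration of the online k-server model: k movable servers, a
# sequence of requests, and total movement cost to minimize. This is the
# greedy baseline, which is not competitive in general; that weakness is
# precisely why the Work Function Algorithm exists.

def greedy_k_server(positions, requests):
    """positions: list of k server locations (1-D for simplicity).
    requests: sequence of request locations, revealed online.
    Returns the total movement cost."""
    servers = list(positions)
    total = 0
    for r in requests:
        # Move the server whose relocation to the request is cheapest.
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total += abs(servers[i] - r)
        servers[i] = r
    return total
```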
Pub Date: 2021-09-01 | DOI: 10.1109/EDGE53862.2021.00015
M. Anisetti, C. Ardagna, Nicola Bena, Ruslan Bondaruc
Current distributed systems increasingly rely on hybrid architectures built on top of IoT, edge, and cloud, backed by dynamically configurable networking technologies like 5G. In this complex environment, traditional security governance solutions cannot provide the holistic view needed to manage these systems effectively and efficiently. In this paper, we propose a security assurance framework for edge and IoT systems based on an advanced architecture capable of dealing with 5G-native applications.
Title: Towards an Assurance Framework for Edge and IoT Systems
Pub Date: 2021-09-01 | DOI: 10.1109/edge53862.2021.00006
Title: Message from Congress General Chairs of IEEE SERVICES 2021
Pub Date: 2021-09-01 | DOI: 10.1109/EDGE53862.2021.00010
Qing Li, Shangguang Wang, Xiao Ma, Ao Zhou, Fangchun Yang
Recently, Low Earth Orbit (LEO) satellites have experienced rapid development, and satellite edge computing has emerged to address the limitations of the bent-pipe architecture in existing satellite systems. Introducing energy-consuming computing components into satellite edge computing increases the depth of battery discharge, which shortens battery life and affects the satellites' operation in orbit. In this paper, we aim to extend battery life by minimizing the depth of discharge for Earth observation missions. Facing the challenges of wireless uncertainty and energy harvesting dynamics, we develop an online energy scheduling algorithm within an online convex optimization framework. Our algorithm achieves sub-linear regret, and the constraint violation asymptotically approaches zero. Simulation results show that our algorithm can reduce the depth of discharge significantly.
Title: Towards Sustainable Satellite Edge Computing
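The paper's online convex optimization algorithm is not given in the abstract; as a hypothetical toy illustration of the objective only (all names, the slot model, and the single deferrable load are illustrative assumptions), one can see how shifting a deferrable computing load to a high-harvest slot reduces depth of discharge:

```python
# Toy model of the depth-of-discharge objective: shift one deferrable
# computing load to the slot with the most surplus harvested energy, then
# simulate the battery level. Illustrative only, not the paper's algorithm.

def min_depth_of_discharge(harvest, base_load, flexible, capacity):
    """harvest[t]: energy harvested at slot t; base_load[t]: fixed demand.
    flexible: one deferrable load to place in a single slot.
    Returns capacity minus the minimum battery level (depth of discharge).
    Assumes the battery is never fully drained in this toy model."""
    surplus = [h - b for h, b in zip(harvest, base_load)]
    best = max(range(len(harvest)), key=lambda t: surplus[t])
    load = list(base_load)
    load[best] += flexible                 # defer the load to the best slot

    level, min_level = capacity, capacity  # start fully charged
    for h, b in zip(harvest, load):
        level = min(capacity, level + h - b)
        min_level = min(min_level, level)
    return capacity - min_level
```

With harvest [5, 1], base load [1, 1], a deferrable load of 3, and capacity 10, placing the load in the high-harvest slot keeps the depth of discharge at 0, whereas forcing it into the low-harvest slot deepens the discharge.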