{"title":"Deep Reinforcement Learning (DRL) based data analytics framework for Edge based IoT devices latency and resource optimization","authors":"Sudhakar Majjari, K. R. Anne, Joseph George","doi":"10.1109/ACCESS57397.2023.10200511","DOIUrl":null,"url":null,"abstract":"Internet of Things (IoT) trends show rising data processing computational needs. Sensor data is uploaded to backend cloud nodes before data analyses at the network edge. IoT devices are usually resource-constrained and unable to execute operations quickly and accurately. Cloud servers are impractical and increase communication overhead. Cloud platforms offer machine learning services with pretrained models to understand IoT data. To use the cloud service, personal data must be transferred, and network problems may impede timely analysis results. Data and analysis are shifting to edge platforms to solve these concerns. Most edge devices can't analyze and train a lot of data. Edge-enabled systems provide efficient compute and control at the network edge to reduce scalability and latency. IoT applications provide large heterogeneous data, which makes edge computing difficult. To solve this issue, Deep Reinforcement Learning (DRL) based data analytics framework for Edge based IoT devices to enable devices to execute tasks jointly, leveraging proximity and resource complementarity. It supports parallel data input and strengthen the comprehensive communication overhead handling through data scheduling optimization. The simulation results conveys that the proposed approach uses DRL to optimize execution accuracy and time without requiring a priori IoT node information. Moreover, the average delay time, percentage of failure and cost of rewards are computed in which being compared with the existing scheduling methods includes Proximal Policy Optimization technique (PPO), and Deep Deterministic Policy Gradient technique (DDPG).","PeriodicalId":345351,"journal":{"name":"2023 3rd International Conference on Advances in Computing, Communication, Embedded and Secure Systems (ACCESS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 3rd International Conference on Advances in Computing, Communication, Embedded and Secure Systems (ACCESS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACCESS57397.2023.10200511","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Internet of Things (IoT) trends show rising computational demands for data processing. Sensor data is typically uploaded to backend cloud nodes rather than being analyzed at the network edge, because IoT devices are usually resource-constrained and unable to execute operations quickly and accurately. Relying on cloud servers, however, is often impractical and increases communication overhead. Cloud platforms offer machine learning services with pretrained models to interpret IoT data, but using such services requires transferring personal data, and network problems may delay analysis results. To address these concerns, data and analysis are shifting to edge platforms, yet most edge devices cannot analyze and train on large volumes of data. Edge-enabled systems provide efficient computation and control at the network edge to address scalability and latency concerns, but IoT applications generate large, heterogeneous data, which makes edge computing difficult. To solve this issue, a Deep Reinforcement Learning (DRL) based data analytics framework for edge-based IoT devices is proposed that enables devices to execute tasks jointly, leveraging proximity and resource complementarity. The framework supports parallel data input and strengthens overall communication-overhead handling through data-scheduling optimization. Simulation results show that the proposed approach uses DRL to optimize execution accuracy and time without requiring a priori information about IoT nodes. Moreover, the average delay time, percentage of failures, and reward cost are computed and compared with existing scheduling methods, including Proximal Policy Optimization (PPO) and Deep Deterministic Policy Gradient (DDPG).
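The abstract does not include the framework's algorithmic details. As an illustration only, the following is a minimal, hypothetical sketch of DRL-style task scheduling across edge nodes using tabular Q-learning; it is not the authors' framework, and all names and parameters (EdgeEnv, NODE_SPEED, the reward shaping) are assumptions. It demonstrates the key idea claimed in the abstract: an agent can learn a delay-minimizing scheduling policy without a priori knowledge of node capabilities.

```python
import random
from collections import defaultdict

# Hypothetical sketch: learning to assign arriving tasks to edge nodes.
# NOT the paper's framework; parameters and environment are assumptions.

NUM_NODES = 3
NODE_SPEED = [1.0, 2.0, 4.0]   # hidden from the agent (no a priori node info)
QUEUE_BUCKETS = 4              # coarse discretization of queue lengths

def bucket(q):
    return min(int(q), QUEUE_BUCKETS - 1)

class EdgeEnv:
    """Toy environment: each arriving task is assigned to one edge node."""
    def __init__(self):
        self.queues = [0.0] * NUM_NODES  # pending work per node

    def state(self):
        return tuple(bucket(q) for q in self.queues)

    def step(self, node, task_size):
        # Delay = queued work plus this task, scaled by the node's speed.
        delay = (self.queues[node] + task_size) / NODE_SPEED[node]
        self.queues[node] += task_size
        # All queues drain between arrivals according to node speed.
        self.queues = [max(0.0, q - s) for q, s in zip(self.queues, NODE_SPEED)]
        return self.state(), -delay  # reward = negative delay

# Tabular Q-learning as a stand-in for the DRL agent.
Q = defaultdict(lambda: [0.0] * NUM_NODES)
alpha, gamma, eps = 0.1, 0.9, 0.1

env = EdgeEnv()
s = env.state()
total_delay = 0.0
for t in range(10_000):
    task = random.uniform(0.5, 2.0)
    if random.random() < eps:
        a = random.randrange(NUM_NODES)          # explore
    else:
        a = max(range(NUM_NODES), key=lambda i: Q[s][i])  # exploit
    s2, r = env.step(a, task)
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    total_delay += -r
    s = s2

print(f"average delay per task: {total_delay / 10_000:.3f}")
```

In a full DRL setting, such as the PPO and DDPG baselines the paper compares against, a neural network would replace the Q-table to handle continuous, high-dimensional states, and per-episode metrics (average delay, failure percentage, reward cost) would be logged for comparison.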