{"title":"Computational Offloading and Resource Allocation for IoT applications using Decision Tree based Reinforcement Learning","authors":"Guneet Kaur Walia, Mohit Kumar","doi":"10.1016/j.adhoc.2024.103751","DOIUrl":null,"url":null,"abstract":"<div><div>The pervasive penetration of IoT devices in various domains such as autonomous vehicles, supply chain management, video surveillance, healthcare, industrial automation etc. necessitates for advanced computing paradigms to achieve real time response delivery. Edge computing offers prompt service response via its competent decentralized platform for catering disseminate workload, hence serving as front-runner for competently handling a wide spectrum of IoT applications. However, optimal distribution of workload in the form of incoming tasks to appropriate destinations remains a challenging issue due to multiple factors such as dynamic offloading decision, optimal resource allocation, heterogeneity of devices, unbalanced workload etc in collaborative Cloud-Edge layered architecture. Employing advanced Artificial Intelligence (AI)-based techniques, provides promising solutions to address the complex task assignment problem. However, existing solutions encounter significant challenges, including prolonged convergence time, extended learning periods for agents and inability to adapt to a stochastic environment. Hence, our work aims to design a unified framework for performing computational offloading and resource allocation in diverse IoT applications using Decision Tree Empowered Reinforcement Learning (DTRL) technique. The proposed work formulates the optimization problem for offloading decisions at runtime and allocates the optimal resources for incoming tasks to improve the Quality-of-Service parameters (QoS). The computational results conducted over a simulation environment proved that the proposed approach has the high convergence ability, exploration and exploitation capability and outperforms the existing state-of-the-art approaches in terms of delay, energy consumption, waiting time, task acceptance ratio and service cost.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"170 ","pages":"Article 103751"},"PeriodicalIF":4.4000,"publicationDate":"2025-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ad Hoc Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1570870524003627","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
The pervasive penetration of IoT devices into domains such as autonomous vehicles, supply chain management, video surveillance, healthcare, and industrial automation necessitates advanced computing paradigms capable of delivering real-time responses. Edge computing offers prompt service response through its decentralized platform for handling distributed workloads, making it a front-runner for serving a wide spectrum of IoT applications. However, optimally distributing the workload of incoming tasks to appropriate destinations remains challenging in a collaborative Cloud-Edge layered architecture due to factors such as dynamic offloading decisions, optimal resource allocation, device heterogeneity, and unbalanced workloads. Advanced Artificial Intelligence (AI)-based techniques provide promising solutions to this complex task-assignment problem. However, existing solutions face significant challenges, including prolonged convergence times, extended learning periods for agents, and an inability to adapt to stochastic environments. Hence, this work designs a unified framework for computational offloading and resource allocation across diverse IoT applications using a Decision Tree Empowered Reinforcement Learning (DTRL) technique. The proposed work formulates the optimization problem of making offloading decisions at runtime and allocates optimal resources to incoming tasks to improve Quality-of-Service (QoS) parameters. Computational results obtained in a simulation environment show that the proposed approach has strong convergence, exploration, and exploitation capabilities, and outperforms existing state-of-the-art approaches in terms of delay, energy consumption, waiting time, task acceptance ratio, and service cost.
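The abstract describes the DTRL idea only at a high level. The sketch below illustrates one plausible interpretation, not the authors' implementation: a decision-tree regressor serves as the Q-function approximator for an agent that decides whether each incoming task runs locally, on an edge node, or in the cloud, with a reward that penalizes delay and energy consumption. The action set, state features, reward weights, and class/function names are illustrative assumptions, not values taken from the paper.

```python
# Minimal DTRL-style sketch (assumed design, not the paper's method):
# a decision tree approximates Q(state, action) for task offloading.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

ACTIONS = [0, 1, 2]  # 0 = local device, 1 = edge node, 2 = cloud (assumed)

class DTRLAgent:
    def __init__(self, epsilon=0.1, gamma=0.9, max_depth=6):
        self.epsilon = epsilon               # exploration rate
        self.gamma = gamma                   # discount factor
        self.tree = DecisionTreeRegressor(max_depth=max_depth)
        self.X, self.y = [], []              # buffer of (state+action, Q-target)
        self.fitted = False

    def q_values(self, state):
        # Predict Q(s, a) for every action with the current tree.
        if not self.fitted:
            return np.zeros(len(ACTIONS))
        rows = np.array([np.append(state, a) for a in ACTIONS])
        return self.tree.predict(rows)

    def act(self, state):
        # Epsilon-greedy offloading decision over the tree's Q estimates.
        if np.random.rand() < self.epsilon:
            return int(np.random.choice(ACTIONS))
        return int(np.argmax(self.q_values(state)))

    def update(self, state, action, reward, next_state):
        # Fitted-Q style update: append the bootstrapped target, refit the tree.
        target = reward + self.gamma * np.max(self.q_values(next_state))
        self.X.append(np.append(state, action))
        self.y.append(target)
        self.tree.fit(np.array(self.X), np.array(self.y))
        self.fitted = True

def qos_reward(delay, energy, w_delay=0.6, w_energy=0.4):
    # Assumed QoS reward: lower delay and energy consumption yield higher reward.
    return -(w_delay * delay + w_energy * energy)

# Example step. State features (all assumed): task size (MB), required CPU
# cycles (Gcycles), edge queue length, uplink bandwidth (Mbps).
agent = DTRLAgent()
state = np.array([2.0, 1.5, 4.0, 10.0])
action = agent.act(state)
next_state = np.array([1.0, 0.8, 3.0, 10.0])
agent.update(state, action, qos_reward(delay=0.12, energy=0.05), next_state)
```

A shallow tree like this refits quickly on the growing buffer, which is consistent with the abstract's claim of fast convergence relative to agents that require long learning periods, though the actual state, reward, and update rule used in the paper may differ.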
Journal description:
Ad Hoc Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those working in ad hoc and sensor networking. The journal considers original, high-quality, unpublished contributions addressing all aspects of ad hoc and sensor networks. Specific areas of interest include, but are not limited to:
Mobile and Wireless Ad Hoc Networks
Sensor Networks
Wireless Local and Personal Area Networks
Home Networks
Ad Hoc Networks of Autonomous Intelligent Systems
Novel Architectures for Ad Hoc and Sensor Networks
Self-organizing Network Architectures and Protocols
Transport Layer Protocols
Routing protocols (unicast, multicast, geocast, etc.)
Media Access Control Techniques
Error Control Schemes
Power-Aware, Low-Power and Energy-Efficient Designs
Synchronization and Scheduling Issues
Mobility Management
Mobility-Tolerant Communication Protocols
Location Tracking and Location-based Services
Resource and Information Management
Security and Fault-Tolerance Issues
Hardware and Software Platforms, Systems, and Testbeds
Experimental and Prototype Results
Quality-of-Service Issues
Cross-Layer Interactions
Scalability Issues
Performance Analysis and Simulation of Protocols.