Sujit Bebortta;Subhranshu Sekhar Tripathy;Surbhi Bhatia Khan;Maryam M. Al Dabel;Ahlam Almusharraf;Ali Kashif Bashir
{"title":"TinyDeepUAV: A Tiny Deep Reinforcement Learning Framework for UAV Task Offloading in Edge-Based Consumer Electronics","authors":"Sujit Bebortta;Subhranshu Sekhar Tripathy;Surbhi Bhatia Khan;Maryam M. Al Dabel;Ahlam Almusharraf;Ali Kashif Bashir","doi":"10.1109/TCE.2024.3445290","DOIUrl":null,"url":null,"abstract":"Recently, there has been a rise in the use of Unmanned Areal Vehicles (UAVs) in consumer electronics, particularly for the critical situations. Internet of Things (IoT) technology and the accessibility of inexpensive edge computing devices present novel prospects for enhanced functionality in various domains through the utilization of IoT-based UAVs. One major difficulty of this perspective is the challenges of computation offloading between resource-constrained edge devices, and UAVs. This paper proposes an innovative framework to solve the computation offloading problem using a multi-objective Deep reinforcement learning (DRL) technique. The proposed approach helps in finding a balance between delays and energy consumption by using the concept of Tiny Machine Learning (TinyML). It develops a low complexity frameworks that make it feasible for offloading tasks to edge devices. Catering to the dynamic nature of edge-based UAV networks, TinyDeepUAV suggests a vector reinforcement that can change weights dynamically based on various user preferences. It is further conjectured that the structure can be enhanced by Double Dueling Deep Q Network (D3QN) for optimal improvement of the optimization problem. 
The simulation results depicts a trade-off between delay and energy consumption, enabling more effective offloading decisions while outperforming benchmark approaches.","PeriodicalId":13208,"journal":{"name":"IEEE Transactions on Consumer Electronics","volume":"70 4","pages":"7357-7364"},"PeriodicalIF":4.3000,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Consumer Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10643436/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Recently, there has been a rise in the use of Unmanned Aerial Vehicles (UAVs) in consumer electronics, particularly in critical situations. Internet of Things (IoT) technology and the availability of inexpensive edge computing devices open new prospects for enhanced functionality across various domains through IoT-based UAVs. A major difficulty in this setting is the challenge of computation offloading between resource-constrained edge devices and UAVs. This paper proposes an innovative framework that solves the computation offloading problem using a multi-objective Deep Reinforcement Learning (DRL) technique. The proposed approach balances delay against energy consumption by applying the concept of Tiny Machine Learning (TinyML), yielding a low-complexity framework that makes offloading tasks to edge devices feasible. Catering to the dynamic nature of edge-based UAV networks, TinyDeepUAV uses a vector of reinforcement weights that can change dynamically according to user preferences. It is further shown that the structure can be enhanced with a Dueling Double Deep Q-Network (D3QN) to better solve the optimization problem. The simulation results depict a trade-off between delay and energy consumption, enabling more effective offloading decisions while outperforming benchmark approaches.
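The abstract describes two mechanisms: a preference-weighted combination of delay and energy objectives, and a dueling Q-value decomposition as used in D3QN. The following is a minimal illustrative sketch of both ideas, not the paper's actual implementation; the function names, the two-action (local vs. edge) setup, and the specific weights are assumptions for illustration only.

```python
# Hypothetical sketch of the two ideas named in the abstract:
# (1) a scalarized multi-objective reward with dynamic preference weights,
# (2) the dueling decomposition Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a').

def scalarized_reward(delay, energy, w_delay, w_energy):
    """Combine delay and energy costs into one scalar reward.

    Weights encode user preferences and are assumed non-negative,
    summing to 1; larger costs yield a more negative reward.
    """
    return -(w_delay * delay + w_energy * energy)

def dueling_q(value, advantages):
    """Dueling head: state value plus mean-centered action advantages."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

# Example: two offloading actions (execute locally vs. offload to edge)
# under a delay-heavy preference (w_delay = 0.7).
q_values = dueling_q(value=1.0, advantages=[0.2, -0.2])
reward = scalarized_reward(delay=0.3, energy=0.5, w_delay=0.7, w_energy=0.3)
```

In a full D3QN agent, `dueling_q` would be the output head of a neural network, and the "double" part decouples action selection (online network) from action evaluation (target network); the sketch above only shows the arithmetic of the two decompositions.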
Journal Description:
The main focus of the IEEE Transactions on Consumer Electronics is the engineering and research aspects of the theory, design, construction, manufacture, and end use of mass-market electronics, systems, software, and services for consumers.