TinyDeepUAV: A Tiny Deep Reinforcement Learning Framework for UAV Task Offloading in Edge-Based Consumer Electronics

IEEE Transactions on Consumer Electronics, Vol. 70, No. 4, pp. 7357-7364 · IF 4.3 · Q1 (Engineering, Electrical & Electronic) · CAS Tier 2 (Computer Science) · Pub Date: 2024-08-21 · DOI: 10.1109/TCE.2024.3445290
Sujit Bebortta;Subhranshu Sekhar Tripathy;Surbhi Bhatia Khan;Maryam M. Al Dabel;Ahlam Almusharraf;Ali Kashif Bashir
Citation count: 0

Abstract

Recently, there has been a rise in the use of Unmanned Aerial Vehicles (UAVs) in consumer electronics, particularly in critical situations. Internet of Things (IoT) technology and the availability of inexpensive edge computing devices open up novel prospects for enhanced functionality across various domains through IoT-based UAVs. A major difficulty in this setting is the challenge of computation offloading between resource-constrained edge devices and UAVs. This paper proposes an innovative framework that solves the computation offloading problem using a multi-objective deep reinforcement learning (DRL) technique. The proposed approach balances delay against energy consumption by drawing on the concept of Tiny Machine Learning (TinyML), and develops a low-complexity framework that makes offloading tasks to edge devices feasible. Catering to the dynamic nature of edge-based UAV networks, TinyDeepUAV introduces a vector reward whose weights can change dynamically with user preferences. It is further conjectured that the structure can be enhanced with a Double Dueling Deep Q-Network (D3QN) to better solve the optimization problem. Simulation results depict a trade-off between delay and energy consumption, enabling more effective offloading decisions while outperforming benchmark approaches.
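The abstract describes two reusable ingredients: a preference-weighted trade-off between delay and energy, and a D3QN value estimate that combines double Q-learning with a dueling head. The following is a minimal NumPy sketch of both, assuming a linear scalarization of the two objectives; all names (`scalarized_reward`, `dueling_q`, `d3qn_target`) and arguments are illustrative, not the authors' implementation.

```python
import numpy as np

def scalarized_reward(delay, energy, w_delay, w_energy):
    """Preference-weighted trade-off: lower delay/energy -> higher reward.

    Approximates the paper's dynamically re-weighted objective with a
    linear scalarization; the weights can change with user preference.
    """
    return -(w_delay * delay + w_energy * energy)

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def d3qn_target(reward, done, gamma, q_online_next, q_target_next):
    """Double-DQN target: the online network selects the next action,
    the target network evaluates it."""
    a_star = np.argmax(q_online_next, axis=-1)            # action selection
    q_eval = np.take_along_axis(                          # action evaluation
        q_target_next, a_star[..., None], axis=-1).squeeze(-1)
    return reward + gamma * (1.0 - done) * q_eval

# A user who weights delay and energy equally:
r = scalarized_reward(delay=2.0, energy=3.0, w_delay=0.5, w_energy=0.5)  # -2.5
```

Decoupling action selection (online net) from evaluation (target net) is what reduces the overestimation bias of vanilla DQN, while the dueling decomposition lets the agent learn state values independently of per-action advantages.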
Journal metrics: CiteScore 7.70; self-citation rate 9.30%; 59 articles per year; average review time 3.3 months.
About the journal: The main focus for the IEEE Transactions on Consumer Electronics is the engineering and research aspects of the theory, design, construction, manufacture or end use of mass market electronics, systems, software and services for consumers.