Minimization of Energy and Service Latency Computation Offloading using Neural Network in 5G NOMA System

Journal: International Journal of Electronics and Telecommunications (JCR Q4, Telecommunications; impact factor 0.5)
DOI: 10.24425/ijet.2023.147685
Published: 2023-11-13 (journal article)
Citations: 0

Abstract: The coming Internet of Things (IoT) era is expected to support computation-intensive and time-critical applications using mobile edge computing (MEC), which is regarded as a promising technique. However, uplink transmission performance is strongly affected by the hostile wireless channel, the low bandwidth, and the low transmission power of IoT devices. Offloading tasks via MEC has therefore become a crucial technology for reducing service latency in computation-intensive applications and lowering the computational workload of mobile devices. Under constraints on computation latency and cloud computing capacity, the goal is to minimize the overall energy consumption of all users, comprising both transmission energy and local computation energy. This article applies a Deep Q Network Algorithm (DQNA) to manage data rates with respect to the user base in different time slots of a 5G NOMA network. The DQNA is optimized over an increasing number of cell structures (2, 4, 6, and 8), and it yields the optimal distribution of power among all three users in the 5G network, which increases the achievable data rates. Several existing power-allocation algorithms, namely frequent pattern (FP), weighted least squares mean error (WLSME), random power allocation, and maximal power allocation, are used as baselines to justify the proposed DQNA technique. The proposed technique delivers 81.6% higher data rates when the cell structure is increased to 8, about 25% more than the FP, WLSME, random-power, and maximal-power baselines.
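The abstract describes a deep Q-network choosing how to split transmit power among three NOMA users so as to maximize data rates. The paper publishes no simulation parameters or code, so the sketch below is purely illustrative: the channel gains, noise power, power budget, and discretized action space are assumptions, and a single-state tabular epsilon-greedy Q-learner stands in for the deep network. It shows the structure of the problem, per-user Shannon rates under successive interference cancellation (SIC) as the reward, with an exhaustive search over the same action space as a reference optimum.

```python
import math
import random

GAINS = [1.0, 0.5, 0.1]   # hypothetical channel gains, user 1 nearest the base station
NOISE = 0.01              # hypothetical noise power
P_TOTAL = 1.0             # hypothetical total transmit-power budget

def noma_rates(powers):
    """Per-user Shannon rates (bit/s/Hz) under downlink NOMA with SIC:
    each user cancels the signals of weaker-channel users, so only the
    signals of stronger-channel users remain as interference."""
    rates = []
    for k, (g, p) in enumerate(zip(GAINS, powers)):
        interference = g * sum(powers[j] for j in range(k))
        sinr = p * g / (interference + NOISE)
        rates.append(math.log2(1 + sinr))
    return rates

def power_splits(steps=10):
    """Discrete splits of P_TOTAL among the 3 users: the action space
    a (deep) Q-learner would explore."""
    for i in range(1, steps):
        for j in range(1, steps - i):
            k = steps - i - j
            yield (P_TOTAL * i / steps, P_TOTAL * j / steps, P_TOTAL * k / steps)

actions = list(power_splits())

# Exhaustive-search optimum over the discretized splits, as a reference point.
best = max(actions, key=lambda p: sum(noma_rates(p)))

# Single-state epsilon-greedy Q-learning over the same action space; a
# minimal stand-in for the paper's DQNA, which would use a neural network
# instead of this table.
random.seed(0)
Q = [0.0] * len(actions)
alpha, eps = 0.1, 0.2
for _ in range(5000):
    if random.random() < eps:
        a = random.randrange(len(actions))                # explore
    else:
        a = max(range(len(actions)), key=Q.__getitem__)   # exploit
    reward = sum(noma_rates(actions[a]))                  # sum data rate as reward
    Q[a] += alpha * (reward - Q[a])

learned = actions[max(range(len(actions)), key=Q.__getitem__)]
print("search optimum:", best, "learned split:", learned)
```

Note the design choice typical of NOMA sum-rate maximization: skewed splits favoring the strongest-channel user tend to beat a uniform split, which is why a learned allocation can outperform the random-power and maximal-power baselines named in the abstract.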
Source journal metrics: CiteScore 1.50; self-citation rate 14.30%; review time 12 weeks.