AI-federated novel delay-aware link-scheduling for Industry 4.0 applications in IoT networks

International Journal of Pervasive Computing and Communications | IF 0.6 | Q4 | Computer Science, Interdisciplinary Applications
Pub Date: 2022-06-22 | DOI: 10.1108/ijpcc-12-2021-0297
Suvarna Patil, Prasad Gokhale
{"title":"AI-federated novel delay-aware link-scheduling for Industry 4.0 applications in IoT networks","authors":"Suvarna Patil, Prasad Gokhale","doi":"10.1108/ijpcc-12-2021-0297","DOIUrl":null,"url":null,"abstract":"\nPurpose\nWith the advent of AI-federated technologies, it is feasible to perform complex tasks in industrial Internet of Things (IIoT) environment by enhancing throughput of the network and by reducing the latency of transmitted data. The communications in IIoT and Industry 4.0 requires handshaking of multiple technologies for supporting heterogeneous networks and diverse protocols. IIoT applications may gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications are responsible for generating a vast set of data based on diverse characteristics. To obtain an optimum throughput in an IIoT environment requires efficiently processing of IIoT applications over communication channels. Because computing resources in the IIoT are limited, equitable resource allocation with the least amount of delay is the need of the IIoT applications. Although some existing scheduling strategies address delay concerns, faster transmission of data and optimal throughput should also be addressed along with the handling of transmission delay. Hence, this study aims to focus on a fair mechanism to handle throughput, transmission delay and faster transmission of data. The proposed work provides a link-scheduling algorithm termed as delay-aware resource allocation that allocates computing resources to computational-sensitive tasks by reducing overall latency and by increasing the overall throughput of the network. First of all, a multi-hop delay model is developed with multistep delay prediction using AI-federated neural network long–short-term memory (LSTM), which serves as a foundation for future design. Then, link-scheduling algorithm is designed for data routing in an efficient manner. The extensive experimental results reveal that the average end-to-end delay by considering processing, propagation, queueing and transmission delays is minimized with the proposed strategy. Experiments show that advances in machine learning have led to developing a smart, collaborative link scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of AI-federated LSTM is compared with the existing approaches and it outperforms over other techniques by achieving 98.2% accuracy.\n\n\nDesign/methodology/approach\nWith an increase of IoT devices, the demand for more IoT gateways has increased, which increases the cost of network infrastructure. As a result, the proposed system uses low-cost intermediate gateways in this study. Each gateway may use a different communication technology for data transmission within an IoT network. As a result, gateways are heterogeneous, with hardware support limited to the technologies associated with the wireless sensor networks. Data communication fairness at each gateway is achieved in an IoT network by considering dynamic IoT traffic and link-scheduling problems to achieve effective resource allocation in an IoT network. The two-phased solution is provided to solve these problems for improved data communication in heterogeneous networks achieving fairness. In the first phase, traffic is predicted using the LSTM network model to predict the dynamic traffic. 
In the second phase, efficient link selection per technology and link scheduling are achieved based on predicted load, the distance between gateways, link capacity and time required as per different technologies supported such as Bluetooth, Wi-Fi and Zigbee. It enhances data transmission fairness for all gateways, resulting in more data transmission achieving maximum throughput. Our proposed approach outperforms by achieving maximum network throughput, and less packet delay is demonstrated using simulation.\n\n\nFindings\nOur proposed approach outperforms by achieving maximum network throughput, and less packet delay is demonstrated using simulation. It also shows that AI- and IoT-federated devices can communicate seamlessly over IoT networks in Industry 4.0.\n\n\nOriginality/value\nThe concept is a part of the original research work and can be adopted by Industry 4.0 for easy and seamless connectivity of AI and IoT-federated devices.\n","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":0.6000,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Pervasive Computing and Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/ijpcc-12-2021-0297","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 1

Abstract

Purpose
With the advent of AI-federated technologies, complex tasks can be performed in an industrial Internet of Things (IIoT) environment by enhancing network throughput and reducing the latency of transmitted data. Communication in IIoT and Industry 4.0 requires multiple technologies to interwork in support of heterogeneous networks and diverse protocols. IIoT applications gather and analyse sensor data, allowing operators to monitor and manage production systems, which yields considerable performance gains in automated processes. All IIoT applications generate vast data sets with diverse characteristics. Obtaining optimal throughput in an IIoT environment requires efficient processing of IIoT application traffic over the communication channels. Because computing resources in the IIoT are limited, IIoT applications need equitable resource allocation with the least possible delay. Although some existing scheduling strategies address delay concerns, faster data transmission and optimal throughput should be addressed alongside transmission delay. Hence, this study focuses on a fair mechanism that jointly handles throughput, transmission delay and faster data transmission. The proposed work provides a link-scheduling algorithm, termed delay-aware resource allocation, that allocates computing resources to computation-sensitive tasks while reducing overall latency and increasing the overall throughput of the network. First, a multi-hop delay model with multistep delay prediction is developed using an AI-federated long short-term memory (LSTM) neural network, which serves as the foundation for the subsequent design. Then, a link-scheduling algorithm is designed for efficient data routing. Extensive experimental results reveal that the proposed strategy minimizes the average end-to-end delay, accounting for processing, propagation, queueing and transmission delays. Experiments show that advances in machine learning enable a smart, collaborative link-scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of the AI-federated LSTM is compared with existing approaches; it outperforms the other techniques, achieving 98.2% accuracy.

Design/methodology/approach
With the increase in IoT devices, the demand for IoT gateways has grown, which raises the cost of network infrastructure. The proposed system therefore uses low-cost intermediate gateways. Each gateway may use a different communication technology for data transmission within the IoT network, so the gateways are heterogeneous, with hardware support limited to the technologies associated with wireless sensor networks. Data-communication fairness at each gateway is achieved by treating dynamic IoT traffic and link scheduling as a joint problem of effective resource allocation in the IoT network. A two-phase solution to these problems is provided to improve data communication in heterogeneous networks while achieving fairness. In the first phase, dynamic traffic is predicted using the LSTM network model. In the second phase, efficient per-technology link selection and link scheduling are performed based on the predicted load, the distance between gateways, the link capacity and the time required by each supported technology, such as Bluetooth, Wi-Fi and Zigbee; both phases are sketched below.
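The multi-hop delay model rests on the per-hop decomposition of end-to-end delay into the four components named above. As a hedged sketch (the paper's exact model may weight or estimate these terms differently), for a path of $H$ hops:

$$D_{\text{e2e}} = \sum_{h=1}^{H} \left( d_{\text{proc}}^{(h)} + d_{\text{prop}}^{(h)} + d_{\text{queue}}^{(h)} + d_{\text{trans}}^{(h)} \right)$$

where the four terms are the processing, propagation, queueing and transmission delays at hop $h$; the LSTM's multistep prediction then forecasts how the offered traffic, and hence these per-hop delays, evolve over future time steps.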
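To make the two-phase design concrete, here is a minimal, self-contained Python sketch: phase 1 trains a small Keras LSTM for multistep traffic prediction, and phase 2 applies a greedy minimum-transfer-time rule for per-technology link selection. Every concrete choice (window and horizon lengths, layer width, nominal data rates, the greedy scoring rule, and all identifiers such as `make_windows` and `transfer_time`) is an illustrative assumption, not the authors' published configuration.

```python
# Minimal sketch of the two-phase design described above. Every concrete
# choice here -- window/horizon lengths, layer width, nominal data rates and
# the greedy scoring rule -- is an illustrative assumption, not the authors'
# published configuration.
import numpy as np
import tensorflow as tf
from dataclasses import dataclass

# ---- Phase 1: multistep traffic prediction with an LSTM (assumed setup) ----
WINDOW, HORIZON = 16, 4  # past samples in, future samples out

def make_windows(series):
    """Slice a 1-D traffic trace into (input window, multistep target) pairs."""
    X, y = [], []
    for i in range(len(series) - WINDOW - HORIZON + 1):
        X.append(series[i:i + WINDOW])
        y.append(series[i + WINDOW:i + WINDOW + HORIZON])
    return np.asarray(X)[..., None], np.asarray(y)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(HORIZON),  # one output per future step
])
model.compile(optimizer="adam", loss="mse")

# Synthetic per-gateway traffic trace purely for demonstration.
trace = np.abs(np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500))
X, y = make_windows(trace)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
predicted_load_mbit = float(model.predict(X[-1:], verbose=0).sum())

# ---- Phase 2: per-technology link selection under the predicted load ----
RATE_MBPS = {"zigbee": 0.25, "bluetooth": 2.0, "wifi": 54.0}  # nominal rates (assumed)
PROP_SPEED_M_S = 3e8  # free-space propagation speed

@dataclass
class Link:
    dst: str            # destination gateway
    tech: str           # "wifi", "bluetooth" or "zigbee"
    distance_m: float
    capacity_mbps: float

def transfer_time(link: Link, load_mbit: float) -> float:
    """Transmission plus propagation time for the predicted load on a link."""
    rate = min(link.capacity_mbps, RATE_MBPS[link.tech])
    return load_mbit / rate + link.distance_m / PROP_SPEED_M_S

def schedule(links, load_mbit):
    """Greedy rule: for each destination gateway, pick the fastest link."""
    best = {}
    for link in links:
        t = transfer_time(link, load_mbit)
        if link.dst not in best or t < best[link.dst][1]:
            best[link.dst] = (link, t)
    return best

links = [
    Link("g2", "wifi", 40.0, 54.0),
    Link("g2", "zigbee", 40.0, 0.25),
    Link("g3", "bluetooth", 10.0, 2.0),
]
for dst, (link, t) in schedule(links, predicted_load_mbit).items():
    print(f"{dst}: use {link.tech} (estimated transfer time {t * 1e3:.2f} ms)")
```

In the actual system the trace would be real gateway traffic, and the scheduler would additionally enforce fairness across gateways; the greedy rule above only illustrates how predicted load, distance, capacity and per-technology rate enter the link-selection decision.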
The second-phase scheduling enhances data-transmission fairness across all gateways, so more data is transmitted and maximum throughput is achieved.

Findings
Simulation demonstrates that the proposed approach outperforms alternatives, achieving maximum network throughput with lower packet delay. It also shows that AI- and IoT-federated devices can communicate seamlessly over IoT networks in Industry 4.0.

Originality/value
The concept is part of the original research work and can be adopted by Industry 4.0 for easy and seamless connectivity of AI- and IoT-federated devices.
Source journal
International Journal of Pervasive Computing and Communications (Computer Science, Interdisciplinary Applications)
CiteScore: 6.60
Self-citation rate: 0.00%
Articles published: 54
Latest articles in this journal
Big data challenges and opportunities in Internet of Vehicles: a systematic review
Cooperative optimization techniques in distributed MAC protocols – a survey
Novel communication system for buried water pipe monitoring using acoustic signal propagation along the pipe
A new predictive approach for the MAC layer misbehavior in IEEE 802.11 networks
Clustering based EO with MRF technique for effective load balancing in cloud computing