Pub Date: 2024-03-18 | DOI: 10.1109/LNET.2024.3377355
Ammar Ibrahim El Sayed;Mahmoud Abdelaziz;Mohamed Hussein;Ashraf D. Elbayoumy
The Internet of Things (IoT) has brought about flexible data management and monitoring, but it is increasingly vulnerable to distributed denial-of-service (DDoS) attacks. To counter these threats and bolster IoT device trust and computational capacity, we propose an innovative solution by integrating machine learning (ML) techniques with blockchain as a supporting framework. Analyzing IoT traffic datasets, we reveal the presence of DDoS attacks, highlighting the need for robust defenses. After evaluating multiple ML models, we choose the most effective one and integrate it with blockchain for enhanced detection and mitigation of DDoS threats, reinforcing IoT network security. This approach enhances device resilience, presenting a promising contribution to the secure IoT landscape.
"DDoS Mitigation in IoT Using Machine Learning and Blockchain Integration," IEEE Networking Letters, vol. 6, no. 2, pp. 152-155.
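The letter does not include code, but the model-selection step it describes (evaluate several ML classifiers on labeled traffic, keep the most effective one) can be sketched as follows. Synthetic data stands in for the IoT traffic dataset; the candidate models, feature count, and class imbalance are illustrative assumptions, not details from the letter.

```python
# Sketch: evaluate multiple ML models on labeled traffic features
# (benign vs. DDoS) via cross-validation and select the best scorer.
# Synthetic data substitutes for the real IoT traffic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Imbalanced binary labels mimic mostly-benign traffic with some attacks.
X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.7, 0.3], random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
}

# Mean 5-fold accuracy per candidate; the argmax is the model that
# would then be paired with the blockchain framework.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In practice the scoring metric would be chosen for the imbalanced setting (e.g., F1 or recall on the attack class) rather than plain accuracy.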
Pub Date: 2024-03-18 | DOI: 10.1109/LNET.2024.3376435
Qiong Wu;Le Kuai;Pingyi Fan;Qiang Fan;Junhui Zhao;Jiangzhou Wang
In Internet of Things (IoT) networks, the amount of data sensed by user devices can be huge, resulting in serious network congestion. Intelligent data compression is therefore critical. The variational information bottleneck (VIB) approach, combined with machine learning, can be employed to train the encoder and decoder so that the required transmission data size is reduced significantly. However, VIB imposes a heavy computing burden and lacks network security guarantees. In this letter, we propose a blockchain-enabled VIB (BVIB) approach to relieve the computing burden while guaranteeing network security. Extensive simulations in Python and C++ demonstrate that BVIB outperforms VIB by 36% in time and CPU-cycle cost, 22% in mutual information, and 57% in accuracy under attack.
"Blockchain-Enabled Variational Information Bottleneck for IoT Networks," IEEE Networking Letters, vol. 6, no. 2, pp. 92-96.
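The VIB objective the encoder/decoder pair is trained on can be made concrete with a small sketch: a prediction (cross-entropy) term plus a beta-weighted KL term that compresses the representation. The Gaussian-encoder form and the numeric values below are standard VIB assumptions for illustration, not taken from the letter.

```python
# Sketch of the VIB training loss for one sample:
#   L = CE(decoder(z), y) + beta * KL(q(z|x) || r(z)),
# with Gaussian encoder q(z|x) = N(mu, diag(sigma^2)) and
# standard-normal prior r(z). The CE term rewards predictive
# information I(Z; Y); the KL term penalizes I(Z; X).
import numpy as np

def vib_loss(logits, label, mu, sigma, beta=1e-3):
    # Decoder term: numerically stable softmax cross-entropy.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    ce = -np.log(p[label])
    # Compression term: closed-form KL(N(mu, sigma^2) || N(0, 1)).
    kl = 0.5 * np.sum(mu**2 + sigma**2 - 2.0 * np.log(sigma) - 1.0)
    return ce + beta * kl

loss = vib_loss(np.array([2.0, 0.5]), 0,
                mu=np.array([0.1, -0.2]), sigma=np.array([0.9, 1.1]))
```

When the encoder matches the prior (mu = 0, sigma = 1) the KL term vanishes and the loss reduces to pure cross-entropy, which is a useful sanity check during implementation.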
Pub Date: 2024-03-13 | DOI: 10.1109/LNET.2024.3400764
Murat Arda Onsu;Poonam Lohan;Burak Kantarci;Emil Janulewicz;Sergio Slobodrian
Network function virtualization (NFV) is a key foundational technology for 5G and beyond networks, in which executing Virtual Network Functions (VNFs) in a defined sequence is crucial for high-quality Service Function Chaining (SFC) provisioning. To provide fast, reliable, and automatic VNF placement, Machine Learning (ML) algorithms such as Deep Reinforcement Learning (DRL) are being widely investigated. However, because DRL models require fixed-size inputs, these algorithms are highly dependent on the network configuration, such as the number of data centers (DCs) where VNFs can be placed and the logical connections among DCs. In this letter, a novel DRL-based approach to SFC provisioning is proposed that unlocks network reconfigurability: the same trained model can be applied to different network configurations without additional training. Moreover, an advanced Deep Neural Network (DNN) architecture with an attention layer is constructed for the DRL agent, improving SFC provisioning performance while accounting for efficient resource utilization and the End-to-End (E2E) delay of SFC requests via their priority points. Numerical results demonstrate that the proposed model surpasses the baseline heuristic method, increasing the overall SFC acceptance ratio by 20.3% and reducing resource consumption and E2E delay by 50% and 42.65%, respectively.
"Unlocking Reconfigurability for Deep Reinforcement Learning in SFC Provisioning," IEEE Networking Letters, vol. 6, no. 3, pp. 193-197.
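One way to see how an attention layer removes the fixed-input-size constraint is that attention pools a variable-length set of per-DC feature vectors into a fixed-size embedding, so the downstream policy head never depends on the DC count. The pooling form, feature dimension, and query vector below are illustrative assumptions, not the letter's exact architecture.

```python
# Sketch: attention pooling over a variable number of data centers.
# Regardless of how many DCs the network has, the output embedding
# has a fixed shape, so one DRL policy can serve reconfigured
# topologies without retraining.
import numpy as np

def attention_pool(dc_features, query):
    # dc_features: (num_dcs, d) per-DC state; query: (d,) learned vector.
    scores = dc_features @ query                # (num_dcs,) relevance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax attention weights
    return weights @ dc_features                # fixed-size (d,) embedding

rng = np.random.default_rng(0)
q = rng.normal(size=4)                          # stand-in for a learned query
small = attention_pool(rng.normal(size=(3, 4)), q)    # 3-DC network
large = attention_pool(rng.normal(size=(10, 4)), q)   # 10-DC network
```

Both calls yield a 4-dimensional embedding, which is what lets the same policy network accept either topology.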
Pub Date: 2024-03-01 | DOI: 10.1109/LNET.2024.3372407
Anne Bouillard
Several techniques can be used for computing deterministic performance bounds in FIFO networks. The most popular one in Network Calculus is Total Flow Analysis (TFA). Its advantages are its algorithmic efficiency, acceptable accuracy, and applicability to general topologies. However, handling cyclic dependencies has mostly been solved only for token-bucket arrival curves. Moreover, in many situations flows are shaped at their admission into a network, and the network analysis does not fully take advantage of this. In this letter, we generalize the approach to piecewise-linear concave arrival curves and to shaping several flows together at their admission into the network. We show through numerical evaluation that the performance bounds are drastically improved.
"Admission Shaping With Network Calculus," IEEE Networking Letters, vol. 6, no. 2, pp. 115-118.
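The building block behind TFA-style bounds can be sketched numerically: a flow with token-bucket arrival curve alpha(t) = b + r*t crossing a rate-latency server beta(t) = R*(t - T)+ has worst-case delay at most T + b/R when r <= R. A piecewise-linear concave arrival curve, as generalized in the letter, is the minimum of several token buckets. The numbers below are illustrative; this is textbook Network Calculus, not the letter's refined analysis.

```python
# Sketch of basic Network Calculus delay bounds.

def delay_bound(b, r, R, T):
    """Worst-case delay of a (b, r) token-bucket flow through an
    (R, T) rate-latency server: T + b/R, requiring stability r <= R."""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + b / R

def delay_bound_concave(buckets, R, T):
    """Piecewise-linear concave arrival curve = min over token buckets
    (b_i, r_i). Assuming every r_i <= R, the supremum of alpha(t) - R*t
    sits at t = 0, so the bound is T + min(b_i)/R -- i.e., the tightest
    of the individual token-bucket bounds."""
    assert all(r <= R for _, r in buckets), "assumes all rates <= R"
    return min(delay_bound(b, r, R, T) for b, r in buckets)

d_single = delay_bound(b=2.0, r=1.0, R=4.0, T=0.5)              # 0.5 + 2/4
d_concave = delay_bound_concave([(2.0, 1.0), (1.0, 3.0)], 4.0, 0.5)
```

The concave curve yields the smaller bound because its minimum operation caps the burst term, which is precisely the kind of tightening that admission shaping exploits.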
Pub Date: 2024-02-13 | DOI: 10.1109/LNET.2024.3365717
Washim Uddin Mondal;Veni Goyal;Satish V. Ukkusuri;Goutam Das;Di Wang;Mohamed-Slim Alouini;Vaneet Aggarwal
This letter presents a conditional generative adversarial network (cGAN) that translates base station location (BSL) information of any Region-of-Interest (RoI) to location-dependent coverage probability values within a subset of that region, called the region-of-evaluation (RoE). We train our network utilizing the BSL data of India, the USA, Germany, and Brazil. In comparison to the state-of-the-art convolutional neural networks (CNNs), our model improves the prediction error ( $L_{1}$