Title: A Contextual Aware Enhanced LoRaWAN Adaptive Data Rate for mobile IoT applications
Authors: Muhammad Ali Lodhi, Lei Wang, Arshad Farhad, Khalid Ibrahim Qureshi, Jenhu Chen, Khalid Mahmood, Ashok Kumar Das
Pub Date: 2024-12-29 | DOI: 10.1016/j.comcom.2024.108042 | Computer Communications, vol. 232, Article 108042

Long range wide area network (LoRaWAN) uses the Adaptive Data Rate (ADR) mechanism for static Internet of Things (IoT) applications such as smart parking in smart cities. Blind ADR (BADR) has been introduced to manage the resources of mobile applications such as asset tracking. However, its predetermined mechanism for allocating spreading factors (SFs) to mobile end devices is inadequate in terms of energy depletion. AI-based resource-allocation solutions have recently been proposed in the literature, but running complex models directly on low-power devices is impractical given their energy and processing constraints. Considering these challenges, this paper presents a novel Contextual Aware Enhanced LoRaWAN Adaptive Data Rate (CA-ADR) for mobile IoT applications. CA-ADR comprises two modes: offline and online. In the offline mode, we compile a dataset from the successful acknowledgments received by end devices; the dataset is then refined via contextual rule-based learning (CRL), after which we train a hybrid CNN-LSTM model. In the online mode, the pre-trained model performs efficient resource (e.g., SF) allocation for static and mobile end devices. CA-ADR is implemented with TinyML, which is recommended for low-power, computationally constrained devices, and shows improved packet success ratio and energy consumption.
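Resource allocation here mainly means choosing a spreading factor per device. As a toy illustration of the baseline that ADR-style schemes refine, a threshold rule can map a link's measured SNR to the smallest viable SF; the floor values and safety margin below are our assumptions (approximate LoRa demodulation limits), not figures from the paper.

```python
# Hypothetical threshold-based SF selection. SNR floors approximate the
# per-SF LoRa demodulation limits; margin_db is an assumed safety margin.
SNR_FLOOR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def select_sf(snr_db, margin_db=10.0):
    """Pick the smallest (fastest, lowest-energy) SF whose demodulation
    floor is still cleared by the measured SNR minus the margin."""
    for sf in sorted(SNR_FLOOR):
        if snr_db - margin_db >= SNR_FLOOR[sf]:
            return sf
    return 12  # worst link: fall back to the most robust SF

print(select_sf(5.0))   # strong link -> 7
print(select_sf(-8.0))  # weak link -> 12
```

A learned allocator such as the paper's CNN-LSTM effectively replaces this fixed table with a model conditioned on the device's context.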
Title: Blockchain-empowered multi-skilled crowdsourcing for mobile web 3.0
Authors: Yu Li, Yueheng Lu, Xinyu Yang, Wenjian Xu, Zhe Peng
Pub Date: 2024-12-19 | DOI: 10.1016/j.comcom.2024.108037 | Computer Communications, vol. 232, Article 108037

As the next generation of the World Wide Web, Web 3.0 is envisioned as a decentralized internet that improves data security and self-sovereign identity; mobile Web 3.0 focuses on this decentralized internet for mobile users and applications. With the rapid development of mobile crowdsourcing research, existing models can allocate tasks to responders efficiently. Benefiting from inherent decentralization and immutability, a growing number of crowdsourcing models over mobile Web 3.0 have been deployed on blockchains to enhance data verifiability. However, executing crowdsourcing-oriented smart contracts on a blockchain can consume a large amount of gas, imposing significant system costs and increasing users' expenses. Moreover, existing models do not account for the expected completion quality when matching tasks with responders, so some tasks fail to achieve their intended effect, harming the interests of task publishers. To solve these problems, this paper proposes a decentralized multi-skill mobile crowdsourcing model with guaranteed task quality and gas optimization (DMCQG), which performs task matching while considering both skill coverage and expected completion quality, guaranteeing the final quality of each task. DMCQG also optimizes the gas consumed by its smart contracts at the code level, reducing the cost of participating in crowdsourcing tasks. To validate DMCQG, we deployed the model on the Ethereum platform. The experiments show that the final expected quality of tasks matched by DMCQG exceeds that of other models and that, after optimization, its gas consumption is significantly reduced.
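The matching objective (cover a task's required skills while preferring responders of high expected quality) can be sketched off-chain as a greedy heuristic. All names and the scalar quality model below are hypothetical; DMCQG's actual on-chain matching is more involved.

```python
# Hypothetical quality-aware skill matching: greedily add responders, highest
# historical quality first, until the task's skill set is covered.
def match(task_skills, responders):
    """responders: {name: (skills, quality)}. Returns chosen names or None
    if the required skill set cannot be covered."""
    uncovered, chosen = set(task_skills), []
    pool = sorted(responders.items(), key=lambda kv: -kv[1][1])
    for name, (skills, _quality) in pool:
        gain = uncovered & set(skills)
        if gain:
            chosen.append(name)
            uncovered -= gain
        if not uncovered:
            return chosen
    return None

team = match({"solidity", "audit"},
             {"a": ({"solidity"}, 0.9), "b": ({"audit"}, 0.7),
              "c": ({"solidity", "audit"}, 0.6)})
print(team)  # ['a', 'b']
```

Preferring high-quality responders first is what gives each matched task a floor on its expected completion quality.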
Title: Towards proactive rumor control: When a budget constraint meets impression counts
Authors: Pengfei Xu, Zhiyong Peng, Liwei Wang
Pub Date: 2024-11-28 | DOI: 10.1016/j.comcom.2024.108010 | Computer Communications, vol. 230, Article 108010

The proliferation of rumors in online networks poses significant public-safety risks and economic repercussions. Addressing this, we investigate an understudied aspect of rumor control: the interplay between the influence-block effect and user impression counts under budget constraints. We introduce two problem variants, RCIC and RCICB, tailored to different application contexts. Our study confronts two inherent challenges: the NP-hardness of the problems and the non-submodularity of the influence block, which precludes direct greedy approaches. We develop a novel branch-and-bound framework for RCIC achieving a (1 - 1/e - ε) approximation ratio, and enhance its efficiency with a progressive upper-bound estimation, refining the ratio to (1 - 1/e - ε - ρ). Extending these techniques to RCICB, we attain approximation ratios of ((1/2)(1 - 1/e) - ε) and ((1/2)(1 - 1/e - ρ) - ε). Experiments on real-world datasets verify the efficiency, effectiveness, and scalability of our methods.
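For intuition, the (1 - 1/e) term in these ratios is the classic guarantee for greedy maximization of a monotone submodular set function under a budget. The minimal budgeted greedy below, with a toy blocking objective, is our own construction; the paper needs branch-and-bound precisely because its influence-block objective is not submodular, so this baseline does not apply directly.

```python
# Illustrative budgeted greedy for a monotone set function f: repeatedly pick
# the affordable candidate with the best marginal-gain-per-cost ratio.
def greedy(candidates, cost, f, budget):
    chosen, spent = [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for c in candidates:
            if c in chosen or spent + cost[c] > budget:
                continue
            gain = f(chosen + [c]) - f(chosen)
            if gain / cost[c] > best_ratio:
                best, best_ratio = c, gain / cost[c]
        if best is None:
            return chosen
        chosen.append(best)
        spent += cost[best]

# toy objective: number of distinct users whose rumor exposure is blocked
block = {"u": {1, 2}, "v": {2, 3}, "w": {4}}
f = lambda S: len(set().union(*(block[c] for c in S)) if S else set())
picked = greedy(list(block), {"u": 1, "v": 1, "w": 1}, f, budget=2)
print(picked)  # ['u', 'v']
```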
Title: Trustless privacy-preserving data aggregation on Ethereum with hypercube network topology
Authors: Goshgar C. Ismayilov, Can Özturan
Pub Date: 2024-11-25 | DOI: 10.1016/j.comcom.2024.108009 | Computer Communications, vol. 230, Article 108009

Privacy-preserving data aggregation is a critical problem for many applications in which multiple parties must collaborate privately to arrive at certain results. Blockchain, as a database shared across the network, provides an underlying platform on which such aggregations can be carried out in a decentralized manner. We therefore propose a scalable privacy-preserving data aggregation protocol for summation on the Ethereum blockchain that integrates several cryptographic primitives (a commitment scheme, asymmetric encryption, and zero-knowledge proofs) with a hypercube network topology. The protocol consists of four stages: contract deployment, user registration, private submission, and proof verification. We analyze the protocol from two main perspectives, security and scalability, including its computational, communication, and storage overheads. The paper provides the zero-knowledge proof, smart contract, and web user interface models for the protocol. An experimental study identifies the required gas costs per individual and per system, and a general formulation characterizes how gas costs change as the number of users increases. The zero-knowledge proof generation and verification times are also measured.
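The hypercube topology supports the classic recursive-doubling aggregation pattern: with 2^d nodes, node i exchanges its running sum in round k with the neighbor whose id differs in bit k (i XOR 2^k), so d rounds suffice for every node to hold the global sum. The plain, non-private sketch below illustrates only this topology pattern, not the paper's cryptographic protocol.

```python
# Recursive-doubling summation on a d-dimensional hypercube of 2^d nodes.
def hypercube_sum(values):
    n = len(values)                # must be a power of two
    d = n.bit_length() - 1         # hypercube dimension
    acc = list(values)
    for k in range(d):
        # each node i pairs with neighbor i XOR 2^k and adds its partial sum
        acc = [acc[i] + acc[i ^ (1 << k)] for i in range(n)]
    return acc                     # every node ends with the global total

print(hypercube_sum([3, 1, 4, 1, 5, 9, 2, 6]))  # all entries equal 31
```

The log-depth exchange is what keeps the per-user on-chain work scalable as the number of participants grows.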
Title: A survey on authentication protocols of dynamic wireless EV charging
Authors: Nethmi Hettiarachchi, Saqib Hakak, Kalikinkar Mandal
Pub Date: 2024-11-20 | DOI: 10.1016/j.comcom.2024.108008 | Computer Communications, vol. 230, Article 108008

Electric vehicles (EVs) are considered the predominant means of reducing fossil-fuel use and greenhouse-gas emissions. With the drastic growth of EVs, the future smart grid is expected to incorporate dynamic wireless charging (DWC) systems extensively, a significant advancement over traditional charging methods. DWC, which offers the unique ability to charge vehicles in motion, introduces new infrastructure, complex network models, and consequently a massive attack surface. For such an enormous smart grid accompanied by DWC, the security of EV charging infrastructure becomes a deciding factor. EV charging is vulnerable to cyberattacks, with many attack vectors and many challenges to combat; unlike a typical static charging station, DWC has a complex network architecture that exposes it to many forms of attack. Authentication plays a crucial role in safeguarding the frontline security of this ecosystem, yet the academic literature has devoted limited attention to authentication protocols for DWC. This motivates a comprehensive survey covering authentication protocols for dynamic wireless EV charging environments. This review examines the security requirements and network model of DWC, provides comprehensive insights into existing authentication protocols through a proper classification, addresses open challenges in DWC authentication schemes, and explores future research directions aimed at strengthening the security framework of this emerging technology.
Title: Trajectory design of UAV-aided energy-harvesting relay networks in the terahertz band
Authors: Saifur Rahman Sabuj, Yeongi Cho, Mahmoud Elsharief, Han-Shin Jo
Pub Date: 2024-11-16 | DOI: 10.1016/j.comcom.2024.108007 | Computer Communications, vol. 230, Article 108007

Unmanned aerial vehicle (UAV)-aided relaying benefits from easy deployment, strong communication channels, and mobility compared with traditional ground relaying, thereby enhancing the wireless connectivity of future industrial Internet of Things networks. In this paper, we design a UAV-assisted relay network that harvests energy from a source in the radio-frequency band and transmits information between each transmitter and its corresponding receiver in the terahertz (THz) band. The channel capacity is analytically derived using the finite-blocklength theorem for THz communication. We then formulate an optimization problem that determines the optimal UAV location maintaining the minimum channel capacity between the transmitter-receiver pair, and solve it with the augmented Lagrange multiplier approach. Around this optimal location, we propose an algorithm for two UAV trajectories, forward and backward, based on modified minimal-jerk trajectories. The numerical results indicate that the backward trajectory provides better channel capacity, and the simulations show that in urban, dense urban, and high-rise areas it improves upon the forward trajectory by approximately 41.07%, 59.02%, and 76.47%, respectively, at a blocklength of 400 bytes.
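The trajectories build on the standard minimal-jerk profile, which moves between two waypoints with zero velocity and acceleration at both endpoints via the polynomial s(τ) = 10τ³ − 15τ⁴ + 6τ⁵. The paper uses a modified variant, so the sketch below shows only this base curve.

```python
# Standard minimal-jerk position profile between waypoints x0 and xf over
# duration T. The paper's modified trajectories refine this base curve.
def min_jerk(x0, xf, t, T):
    tau = t / T                          # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s            # zero velocity/acceleration at ends

print(min_jerk(0.0, 100.0, 0.0, 10.0))   # 0.0   (start)
print(min_jerk(0.0, 100.0, 5.0, 10.0))   # 50.0  (midpoint, by symmetry)
print(min_jerk(0.0, 100.0, 10.0, 10.0))  # 100.0 (end)
```

Smoothness at the endpoints is what makes such profiles attractive for UAV waypoint following: no abrupt velocity or acceleration demands at takeover points.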
Title: A dual-tier adaptive one-class classification IDS for emerging cyberthreats
Authors: Md. Ashraf Uddin, Sunil Aryal, Mohamed Reda Bouadjenek, Muna Al-Hawawreh, Md. Alamin Talukder
Pub Date: 2024-11-14 | DOI: 10.1016/j.comcom.2024.108006 | Computer Communications, vol. 229, Article 108006

In today's digital age, our dependence on IoT (Internet of Things) and IIoT (Industrial IoT) systems has grown immensely; they facilitate sensitive activities such as banking transactions and the exchange of personal, enterprise, and legal documents. Cyberattackers consistently exploit weak security measures and tools, and the network intrusion detection system (IDS) is a primary defense against such threats. However, machine learning-based IDSs trained on specific attack patterns often misclassify newly emerging cyberattacks, and the limited availability of attack instances for training a supervised learner, together with the ever-evolving nature of cyber threats, further complicates the matter. This emphasizes the need for an adaptable IDS framework capable of recognizing and learning from unfamiliar attacks over time. In this research, we propose a one-class classification-driven IDS structured in two tiers. The first tier distinguishes normal activity from attacks; the second determines whether a detected attack is known or unknown, embedding a multi-class classifier coupled with a clustering algorithm. The model not only identifies unseen attacks but also clusters them for retraining, keeping the system future-proof and able to evolve with emerging threat patterns. Using one-class classifiers (OCC) at the first tier, our approach requires no attack samples, addressing data imbalance and zero-day concerns, while OCC at the second tier effectively separates unknown attacks from known ones. Our methodology and evaluations indicate that the framework holds promising potential for real-world deployment.
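The first tier's one-class idea (fit on normal traffic only, flag anything that deviates) can be illustrated with a toy centroid-distance classifier. The paper's OCC models are far more capable; this class, its threshold rule, and the feature values are our own simplification.

```python
# Toy one-class classifier: learn the centroid of normal traffic features and
# treat the farthest training point as the "normal" radius; anything outside
# that radius is flagged as an attack. No attack samples are needed to fit.
import math

class CentroidOCC:
    def fit(self, X):
        n, d = len(X), len(X[0])
        self.c = [sum(x[j] for x in X) / n for j in range(d)]  # centroid
        self.r = max(self._dist(x) for x in X)                 # radius
        return self

    def _dist(self, x):
        return math.dist(x, self.c)

    def predict(self, x):
        return "normal" if self._dist(x) <= self.r else "attack"

occ = CentroidOCC().fit([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]])
print(occ.predict([0.15, 0.18]))  # normal
print(occ.predict([5.0, 5.0]))    # attack
```

In the dual-tier design, a sample flagged here would pass to the second tier, where another one-class boundary around known attacks separates unknown ones out for clustering and retraining.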
Title: A deep dive into cybersecurity solutions for AI-driven IoT-enabled smart cities in advanced communication networks
Authors: Jehad Ali, Sushil Kumar Singh, Weiwei Jiang, Abdulmajeed M. Alenezi, Muhammad Islam, Yousef Ibrahim Daradkeh, Asif Mehmood
Pub Date: 2024-11-12 | DOI: 10.1016/j.comcom.2024.108000 | Computer Communications, vol. 229, Article 108000

The integration of the Internet of Things (IoT) and artificial intelligence (AI) in urban infrastructure, powered by advanced information communication technologies (ICT), has paved the way for smart cities. While these technologies promise enhanced quality of life, economic growth, and improved public services, they also introduce significant cybersecurity challenges. This article comprehensively examines the complex factors in securing AI-driven IoT-enabled smart cities within the framework of future communication networks, addressing critical questions about the evolving threat landscape, multi-layered security approaches, the role of AI in enhancing cybersecurity, and the necessary policy frameworks. We analyze cybersecurity solutions across the service, application, network, and physical layers, evaluating their effectiveness and their potential for integration with existing systems. The study examines AI-driven security approaches, particularly machine learning (ML) and deep learning (DL) techniques, assessing their applicability and limitations in smart-city environments, and incorporates real-world case studies to illustrate successful strategies and highlight areas requiring further research, especially in light of emerging communication technologies. Our contributions include a multi-layered classification of cybersecurity solutions, an assessment of AI-driven security approaches, and an exploration of future research directions. We also investigate the essential role of policy and regulatory frameworks in safeguarding smart-city security and, based on our analysis, offer recommendations for technical implementation and policy development aimed at a holistic approach that balances technological advancement with robust security measures. The study provides valuable insights for scholars, professionals, and policymakers on the cybersecurity challenges and solutions for AI-driven IoT-enabled smart cities in advanced communication networks.
Pub Date : 2024-11-10DOI: 10.1016/j.comcom.2024.107993
Ning Rao, Hua Xu, Zisen Qi, Dan Wang, Yue Zhang, Xiang Peng, Lei Jiang
Jamming decision-making is a pivotal component of modern electromagnetic warfare, and recent years have witnessed the extensive application of deep reinforcement learning techniques to enhance the autonomy and intelligence of wireless communication jamming decisions. However, existing research relies heavily on manually designed, task-customized jamming reward functions, leading to significant consumption of human and computational resources. To this end, while obviating the need to design task-customized reward functions, we propose a jamming policy optimization method that learns from imperfect demonstrations to effectively address the complex, high-dimensional jamming resource allocation problem against frequency-hopping spread spectrum (FHSS) communication systems. To achieve this, a policy network is architected to determine the jamming scheme for each jamming node in turn, facilitating the construction of the dynamic transition within the Markov decision process. Subsequently, anchored in the dual-trust-region concept, we design a policy improvement phase and a policy adversarial imitation phase. During the policy improvement phase, the trust region policy optimization method is used to refine the policy, while the policy adversarial imitation phase employs adversarial training to guide policy exploration using information embedded in the demonstrations. Extensive simulation results indicate that our proposed method can approach the optimal jamming performance obtained by training under customized reward functions, even with rough binary reward settings, and also significantly surpasses the demonstration performance.
{"title":"The pupil outdoes the master: Imperfect demonstration-assisted trust region jamming policy optimization against frequency-hopping spread spectrum","authors":"Ning Rao, Hua Xu, Zisen Qi, Dan Wang, Yue Zhang, Xiang Peng, Lei Jiang","doi":"10.1016/j.comcom.2024.107993","DOIUrl":"10.1016/j.comcom.2024.107993","url":null,"abstract":"<div><div>Jamming decision-making is a pivotal component of modern electromagnetic warfare, and recent years have witnessed the extensive application of deep reinforcement learning techniques to enhance the autonomy and intelligence of wireless communication jamming decisions. However, existing research relies heavily on manually designed, task-customized jamming reward functions, leading to significant consumption of human and computational resources. To this end, while obviating the need to design task-customized reward functions, we propose a jamming policy optimization method that learns from imperfect demonstrations to effectively address the complex, high-dimensional jamming resource allocation problem against frequency-hopping spread spectrum (FHSS) communication systems. To achieve this, a policy network is architected to determine the jamming scheme for each jamming node in turn, facilitating the construction of the dynamic transition within the Markov decision process. Subsequently, anchored in the dual-trust-region concept, we design a policy improvement phase and a policy adversarial imitation phase. During the policy improvement phase, the trust region policy optimization method is used to refine the policy, while the policy adversarial imitation phase employs adversarial training to guide policy exploration using information embedded in the demonstrations. Extensive simulation results indicate that our proposed method can approach the optimal jamming performance obtained by training under customized reward functions, even with rough binary reward settings, and also significantly surpasses the demonstration performance.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107993"},"PeriodicalIF":4.5,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
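To make the abstract's framing concrete, the sketch below casts jamming resource allocation as a sequential Markov decision process in which a policy assigns one jamming node per step and the episode yields only a rough binary reward, as in the paper's ablation setting. This is a toy illustration, not the authors' implementation: the node/channel counts, the coverage criterion, and the plain REINFORCE update (standing in for the trust-region and adversarial-imitation machinery) are all assumptions made for the demo.

```python
import numpy as np

# Toy sketch: jamming allocation as a sequential MDP with a rough binary
# episode reward (1 if enough distinct FHSS channels are covered, else 0).
# All names and parameters below are illustrative assumptions.
rng = np.random.default_rng(0)
N_NODES, N_CHANNELS, COVER_TARGET = 4, 6, 3

def rollout(policy_logits):
    """Assign a channel to each jamming node in turn; return (reward, actions)."""
    chosen = []
    for node in range(N_NODES):
        p = np.exp(policy_logits[node])
        p /= p.sum()                      # softmax over channels for this node
        chosen.append(int(rng.choice(N_CHANNELS, p=p)))
    distinct = len(set(chosen))
    return (1.0 if distinct >= COVER_TARGET else 0.0), chosen

def reinforce_update(policy_logits, lr=0.5, episodes=200):
    """Plain REINFORCE stand-in for the paper's trust-region policy improvement."""
    for _ in range(episodes):
        reward, chosen = rollout(policy_logits)
        baseline = 0.5                    # crude baseline for the binary reward
        for node, ch in enumerate(chosen):
            p = np.exp(policy_logits[node]); p /= p.sum()
            grad = -p
            grad[ch] += 1.0               # d log pi(ch) / d logits = onehot - p
            policy_logits[node] += lr * (reward - baseline) * grad
    return policy_logits

logits = reinforce_update(np.zeros((N_NODES, N_CHANNELS)))
reward, chosen = rollout(logits)
print(reward, chosen)
```

The demonstration-assisted part of the paper would add a second gradient signal derived from an adversarially trained discriminator over (state, action) pairs; the sketch keeps only the environment-reward path to stay minimal.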
Pub Date : 2024-11-08DOI: 10.1016/j.comcom.2024.107992
Datong Xu , Chaosheng Qiu , Wenshan Yin , Pan Zhao , Mingyang Cui
In the non-orthogonal multiple access scenario, users may suffer inter-multiuser eavesdropping due to the nature of successive interference cancellation, and the conditions required by the eavesdropping suppression methods of traditional schemes may not be satisfied. To combat this eavesdropping, we consider physical layer security and propose a novel scheme built on specially designed symbol conversion and constellation adjustment methods. Based on these methods, the amplitudes and phases of symbols are properly changed. When any user attempts to intercept information as an eavesdropper, he/she must either accept a high error probability or incur exorbitant overhead. Analytical and numerical results demonstrate that the proposed scheme protects the privacy of information, and this protection does not disrupt the execution of successive interference cancellation and symbol transmission.
{"title":"Symbol-level scheme for combating eavesdropping: Symbol conversion and constellation adjustment","authors":"Datong Xu , Chaosheng Qiu , Wenshan Yin , Pan Zhao , Mingyang Cui","doi":"10.1016/j.comcom.2024.107992","DOIUrl":"10.1016/j.comcom.2024.107992","url":null,"abstract":"<div><div>In the non-orthogonal multiple access scenario, users may suffer inter-multiuser eavesdropping due to the nature of successive interference cancellation, and the conditions required by the eavesdropping suppression methods of traditional schemes may not be satisfied. To combat this eavesdropping, we consider physical layer security and propose a novel scheme built on specially designed symbol conversion and constellation adjustment methods. Based on these methods, the amplitudes and phases of symbols are properly changed. When any user attempts to intercept information as an eavesdropper, he/she must either accept a high error probability or incur exorbitant overhead. Analytical and numerical results demonstrate that the proposed scheme protects the privacy of information, and this protection does not disrupt the execution of successive interference cancellation and symbol transmission.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107992"},"PeriodicalIF":4.5,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
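The general idea behind symbol-level phase adjustment can be illustrated with a minimal sketch, assuming a keyed per-symbol phase rotation (the paper's actual symbol conversion and constellation adjustment rules are more elaborate; the QPSK mapping and key generation here are stand-in assumptions). A receiver sharing the rotation key inverts it exactly, while an eavesdropper demodulating the raw symbols sees effectively randomized phases and a high symbol error rate.

```python
import numpy as np

# Illustrative sketch only, not the authors' exact scheme: secret per-symbol
# phase rotations applied to QPSK symbols. Key generation and mapping are
# assumptions for the demo.
rng = np.random.default_rng(1)
QPSK = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # 4 unit-circle points

def modulate(idx):
    """Map symbol indices (0..3) to QPSK constellation points."""
    return QPSK[idx]

def rotate(symbols, key_phases):
    """Constellation adjustment: rotate each symbol by its secret key phase."""
    return symbols * np.exp(1j * key_phases)

def demodulate(symbols):
    """Nearest-constellation-point decision."""
    return np.argmin(np.abs(symbols[:, None] - QPSK[None, :]), axis=1)

n = 1000
tx_idx = rng.integers(0, 4, n)
key = rng.uniform(0, 2 * np.pi, n)          # shared secret phase sequence

tx = rotate(modulate(tx_idx), key)          # transmitted, phase-scrambled
legit_rx = demodulate(rotate(tx, -key))     # legitimate user undoes the rotation
eave_rx = demodulate(tx)                    # eavesdropper decides without the key

legit_ser = float(np.mean(legit_rx != tx_idx))
eave_ser = float(np.mean(eave_rx != tx_idx))
print(legit_ser, eave_ser)                  # eavesdropper SER near 0.75
```

Because a uniform random rotation makes the eavesdropper's decision effectively uniform over the four constellation points, the eavesdropper's symbol error rate sits near 3/4, while the keyed receiver decodes error-free; this mirrors the "high error probability or exorbitant overhead" trade-off the abstract describes.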