Status update scheduling in remote sensing under variable activation and propagation delays
Pub Date: 2024-06-24 | DOI: 10.1016/j.adhoc.2024.103583
Leonardo Badia, Alberto Zancanaro, Giulia Cisotto, Andrea Munari
Sensor data exchanges in IoT applications can experience a variable delay due to changes in the communication environment and sharing of processing capabilities. This variability can impact the performance and effectiveness of the systems being controlled, and is especially reflected in Age of Information (AoI), a performance metric that quantifies the freshness of updates in remote sensing. In this work, we discuss the quantitative impact of activation and propagation delays, both taken as random variables, on AoI. In our analysis we consider offline scheduling over a finite horizon, derive a closed-form expression for the average AoI, and validate our results through numerical simulation. We also analyze which type of delay has the greater influence on the system, as well as the probability that the system fails to deliver all the scheduled updates due to excessive delays of either kind.
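As a hedged illustration of the metric under study, the sketch below runs a small Monte Carlo estimate of the average AoI over a finite horizon when each scheduled update suffers a random activation delay plus a random propagation delay. The exponential delay distributions, the schedule, and the zero-age initial condition are assumptions made only for this example; the sketch does not reproduce the paper's closed-form result.

```python
import random

def average_aoi(schedule, horizon, act_mean=0.3, prop_mean=0.2, dt=0.001, seed=0):
    """Monte Carlo estimate of average Age of Information over a finite horizon.

    Each update scheduled at time s is generated at s and received at
    s + activation_delay + propagation_delay (both exponential here, an
    illustrative assumption).  AoI(t) = t - generation time of the freshest
    update received by time t.
    """
    rng = random.Random(seed)
    # (reception time, generation time) for every scheduled update
    deliveries = sorted(
        (s + rng.expovariate(1 / act_mean) + rng.expovariate(1 / prop_mean), s)
        for s in schedule
    )
    freshest = 0.0          # assume a fresh update at t = 0, so AoI starts at zero
    aoi_sum, idx = 0.0, 0
    for k in range(int(horizon / dt)):
        t = k * dt
        while idx < len(deliveries) and deliveries[idx][0] <= t:
            freshest = max(freshest, deliveries[idx][1])
            idx += 1
        aoi_sum += (t - freshest) * dt
    return aoi_sum / horizon

# Example: five updates scheduled evenly over a horizon of length 10
print(average_aoi(schedule=[1, 3, 5, 7, 9], horizon=10))
```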
Context-aware resource allocation for vehicle-to-vehicle communications in cellular-V2X networks
Pub Date: 2024-06-19 | DOI: 10.1016/j.adhoc.2024.103582
Fuxin Zhang, Guangping Wang
Cellular Vehicle-to-Everything (C-V2X) networks provide critical support for intelligently connected vehicles (ICVs) and intelligent transport systems (ITS). C-V2X utilizes vehicle-to-vehicle (V2V) communication technology to exchange safety-critical information among neighbors. V2V communication has stringent high-reliability and low-latency requirements. Existing resource-allocation solutions for V2V communications mainly rely on channel state to optimize resource utilization but fail to consider vehicle safety requirements, and therefore cannot guarantee safety-application performance. In this paper, we focus on an application-driven channel resource allocation strategy for V2V communications. First, we propose an inter-packet reception model to represent the delay between two consecutive successfully received packets at a receiver. We then design an application-specific utility function whose value depends on the packet reception performance and the vehicle safety context. Finally, we formulate the channel resource allocation problem as a non-cooperative game. The game model guides each node to cooperate and achieves a trade-off between fairness and efficiency in channel resource allocation. The simulation results show that our work can significantly improve the reliability of V2V communications and guarantee vehicle safety-application performance.
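To make the game-theoretic idea concrete, here is a toy sketch in which each vehicle's utility is its packet-reception probability scaled by a safety-context weight, and vehicles run best-response dynamics over a small set of channels. The utility form, the interference model, and all parameters are assumptions for illustration only, not the authors' formulation.

```python
def utility(p_rx, safety_weight):
    """Toy application-specific utility: reception performance scaled by how
    safety-critical the vehicle's context is (assumed form, not the paper's)."""
    return safety_weight * p_rx

def reception_prob(channel, choices, me):
    """Reception probability drops with the number of vehicles sharing a channel
    (a crude interference model used only for illustration)."""
    sharing = sum(1 for v, c in choices.items() if c == channel and v != me)
    return 1.0 / (1 + sharing)

def best_response_allocation(safety, n_channels=2, rounds=20):
    """Each vehicle repeatedly switches to the channel that maximizes its own
    utility: a simple best-response dynamic for the non-cooperative game."""
    choices = {v: 0 for v in safety}                     # everyone starts on channel 0
    for _ in range(rounds):
        changed = False
        for v in safety:
            best = max(range(n_channels),
                       key=lambda ch: utility(reception_prob(ch, choices, v), safety[v]))
            if best != choices[v]:
                choices[v], changed = best, True
        if not changed:                                  # no vehicle wants to deviate
            break
    return choices

# Example: vehicle C is in the most safety-critical context (highest weight)
print(best_response_allocation({"A": 0.4, "B": 0.6, "C": 1.0}, n_channels=2))
```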
An online energy-saving offloading algorithm in mobile edge computing with Lyapunov optimization
Pub Date: 2024-06-18 | DOI: 10.1016/j.adhoc.2024.103580
Xiaoyan Zhao, Ming Li, Peiyan Yuan
Online computing offloading is an effective method to enhance the performance of mobile edge computing (MEC). However, existing research ignores the impact of system stability and device priority on system performance during task processing. To address the problem of computing offloading for computing-intensive tasks, we propose an online partial offloading algorithm that combines task queue length and energy consumption and requires no prior information. First, a queue model of IoT devices is created to describe their workload backlogs and reflect system stability. Then, using Lyapunov optimization, the computing offloading problem is decoupled into two sub-problems by calculating the optimal CPU computing rate and device priority, which determine the task offloading amount and offloading location to complete resource allocation. Finally, the online partial offloading algorithm based on device priority is solved by minimizing the upper bound of the drift-plus-penalty function to ensure system stability and reduce energy consumption. Theoretical analysis and extensive experiments demonstrate the effectiveness of the proposed algorithm in minimizing system energy consumption while adhering to system constraints, even under dynamically varying task arrival rates.
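The drift-plus-penalty structure referred to above can be sketched as follows: in every slot a device picks the offloading split that minimizes V·energy minus queue-weighted service, then updates its backlog queue. The energy and rate model, the candidate grid, and the value of V are placeholders, not the paper's formulation.

```python
import random

def drift_plus_penalty_step(Q, arrival, local_rate, offload_rate, V,
                            e_local=1.0, e_tx=0.6):
    """One Lyapunov drift-plus-penalty slot for a single device.

    Picks the fraction x of work to offload by minimizing
    V * energy(x) - Q * served(x), then updates the backlog queue
    Q(t+1) = max(Q(t) - served, 0) + arrival.  The energy/rate model
    is a placeholder used only to illustrate the structure.
    """
    best_x, best_cost = 0.0, float("inf")
    for x in (i / 10 for i in range(11)):           # candidate offloading fractions
        served = min(Q, (1 - x) * local_rate + x * offload_rate)
        energy = (1 - x) * e_local + x * e_tx
        cost = V * energy - Q * served
        if cost < best_cost:
            best_x, best_cost = x, cost
    served = min(Q, (1 - best_x) * local_rate + best_x * offload_rate)
    return max(Q - served, 0.0) + arrival, best_x

# Example: 1000 slots with random task arrivals; a larger V trades backlog for energy
random.seed(1)
Q = 0.0
for _ in range(1000):
    Q, x = drift_plus_penalty_step(Q, arrival=random.uniform(0, 2),
                                   local_rate=0.8, offload_rate=1.5, V=5.0)
print(f"final backlog {Q:.2f}, last offloading fraction {x}")
```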
Machine learning attack detection based-on stochastic classifier methods for enhancing of routing security in wireless sensor networks
Pub Date: 2024-06-18 | DOI: 10.1016/j.adhoc.2024.103581
Anselme R. Affane M., Hassan Satori
Wireless Sensor Networks (WSNs) are vulnerable to attacks during data transmission, and many techniques have been proposed to detect and secure routing data. In this paper, we introduce a novel stochastic predictive machine learning approach designed to discern untrustworthy events and unreliable routing attributes, aiming to establish an artificial intelligence-based attack detection system for WSNs. Our methodology leverages real-time analysis of the features of simulated WSN routing data. By integrating Hidden Markov Models (HMM) with Gaussian Mixture Models (GMM), we develop a robust classification framework. This framework effectively identifies outliers, pinpoints malicious network behaviors from their origins, and categorizes them as either trusted or untrusted network activities. In addition, dimensionality reduction techniques are used to improve interpretability, reduce computation and processing time, extract uncorrelated features from network data, and optimize performance. The main advantage of our approach is an efficient stochastic machine learning method capable of analyzing and filtering WSN traffic to block suspicious and unsafe data, reduce the large dissimilarity in the collected routing features, and rapidly detect attacks before they occur. In this work, we exploit a well-tuned dataset that provides extensive routing information without data loss. The experimental results show that the proposed stochastic attack detection system can effectively identify and categorize anomalies in wireless sensor networks with high accuracy. The classification rates of the system were approximately 83.65%, 84.94%, and 94.55%, significantly better than existing classification approaches. Furthermore, the proposed system achieved a positive predictive value 11.84% higher than that of existing approaches.
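A hedged sketch of the GMM side of such a pipeline, with PCA as the dimensionality-reduction step, is shown below using scikit-learn: a mixture model is fitted to trusted routing features, and records with unusually low likelihood are flagged as untrusted. The synthetic features, the percentile threshold, and the omission of the HMM stage are simplifications for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for routing features (hop count, RSSI, delay, ...); real data
# would come from the simulated WSN routing traces described in the paper.
rng = np.random.default_rng(0)
trusted = rng.normal(0.0, 1.0, size=(500, 6))          # normal routing behaviour
suspect = rng.normal(3.0, 1.5, size=(30, 6))           # injected anomalies

# Dimensionality reduction to decorrelate features and cut processing time
pca = PCA(n_components=3).fit(trusted)
Z_trusted = pca.transform(trusted)

# Gaussian Mixture Model of trusted behaviour (the HMM stage is omitted here)
gmm = GaussianMixture(n_components=2, random_state=0).fit(Z_trusted)

# Flag records whose log-likelihood falls below a low percentile of trusted scores
threshold = np.percentile(gmm.score_samples(Z_trusted), 1)
flags = gmm.score_samples(pca.transform(suspect)) < threshold
print(f"flagged {flags.sum()} of {len(suspect)} suspect records as untrusted")
```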
Comparative study of novel packet loss analysis and recovery capability between hybrid TLI-µTESLA and other variant TESLA protocols
Khouloud Eledlebi, Ahmed Alzubaidi, Ernesto Damiani, Victor Mateu, Yousof Al-Hammadi, Deepak Puthal, Chan Yeob Yeun
Pub Date: 2024-06-17 | DOI: 10.1016/j.adhoc.2024.103579
Analyzing packet loss, whether resulting from communication challenges or malicious attacks, is vital for broadcast authentication protocols. It ensures legitimate and continuous authentication across networks. While previous studies have mainly focused on countering the impact of Denial of Service (DoS) attacks on packet loss, our research presents an innovative investigation into packet loss and develops data-recovery capabilities within variant TESLA protocols. We highlight the efficacy of our proposed hybrid TLI-µTESLA protocol in maintaining continuous and robust connections among network members, while maximizing data recovery in adverse communication conditions. The study examines the unique packet structures associated with each TESLA protocol variant, emphasizing the implications of losing each packet type on network performance. We also introduce modifications to variant TESLA protocols to improve data recovery and alleviate the effects of packet loss. Using the Java programming language, we conducted simulation analyses that illustrate the adaptability of variant TESLA protocols in recovering lost packet keys and authenticating previously buffered packets, all while maintaining continuous and robust authentication between network members. Our findings also underscore the superiority of the hybrid TLI-µTESLA protocol in terms of packet-loss performance and data recovery, alongside its robust cybersecurity features, including confidentiality, integrity, availability, and accessibility. Additionally, we demonstrate the efficiency of our proposed protocol in terms of low computational and communication requirements compared to earlier TESLA protocol variants, as outlined in previous publications.
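The key-recovery property that makes such loss tolerance possible in TESLA-style protocols comes from the one-way key chain: a receiver that misses the packets disclosing some interval keys can recompute them by hashing a later disclosed key forward, and can verify authenticity against the pre-shared chain commitment. The minimal sketch below uses SHA-256 as the chain function and does not model the µTESLA or TLI-µTESLA packet formats.

```python
import hashlib

def forward(key: bytes, steps: int) -> bytes:
    """Apply the one-way chain function `steps` times (SHA-256 as a stand-in)."""
    for _ in range(steps):
        key = hashlib.sha256(key).digest()
    return key

# Sender: build a chain K_n -> ... -> K_0, then disclose keys in order K_1, K_2, ...
n = 10
seed = b"sender secret"
chain = [None] * (n + 1)
chain[n] = hashlib.sha256(seed).digest()
for i in range(n - 1, -1, -1):
    chain[i] = hashlib.sha256(chain[i + 1]).digest()    # K_i = H(K_{i+1})
commitment = chain[0]                                    # K_0 is pre-shared with receivers

# Receiver: the packets disclosing K_3 and K_4 were lost, but K_6 arrives later.
disclosed_k6 = chain[6]
recovered_k3 = forward(disclosed_k6, 3)                  # H^3(K_6) = K_3
recovered_k4 = forward(disclosed_k6, 2)                  # H^2(K_6) = K_4
assert recovered_k3 == chain[3] and recovered_k4 == chain[4]
# Authenticity check: hashing any disclosed key down to K_0 must hit the commitment
assert forward(disclosed_k6, 6) == commitment
print("lost interval keys recovered and verified against the chain commitment")
```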
LETM-IoT: A lightweight and efficient trust mechanism for Sybil attacks in Internet of Things networks
Pub Date: 2024-06-13 | DOI: 10.1016/j.adhoc.2024.103576
Jawad Hassan, Adnan Sohail, Ali Ismail Awad, M. Ahmed Zaka
The Internet of Things (IoT) has recently gained significance as a means of connecting various physical devices to the Internet, enabling many innovative applications. However, the security of IoT networks is a significant concern due to the large volume of data generated and transmitted over them. The limited resources of IoT devices, along with their mobility and diverse characteristics, pose significant challenges for maintaining security in routing protocols such as the Routing Protocol for Low-Power and Lossy Networks (RPL), which lacks effective defense mechanisms against routing attacks, including Sybil and rank attacks. Various techniques have been proposed to address this issue, including cryptography and intrusion-detection systems. The use of these techniques on IoT nodes is limited by the nodes' low power and lossy links, primarily because of the significant computational overhead involved. In addition, conventional trust-management systems for addressing security concerns need to be improved due to their high computation, memory, and energy costs. Therefore, this paper presents a novel Lightweight and Efficient Trust-based Mechanism (LETM-IoT) for resource-limited IoT networks to mitigate Sybil attacks. We conducted extensive simulations in Cooja, the Contiki OS simulator, to assess the efficacy of the proposed LETM-IoT against three types of Sybil attack (A, B, and C). A comparison was also made with standard RPL and state-of-the-art approaches. The experimental findings show that LETM-IoT outperforms both in terms of average packet-delivery ratio by 0.20 percentage points, true-positive ratio by 1.34 percentage points, energy consumption by 2.5%, and memory utilization by 19.42%. The results also show that LETM-IoT consumes 5.02% more storage than standard RPL because of its embedded security module.
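For intuition only, a toy direct-trust update is sketched below: a node scores a neighbor from the outcomes of forwarding interactions, penalizing failures more than it rewards successes, and flags the neighbor when trust drops below a threshold. The weights, threshold, and detection rule are illustrative assumptions, not LETM-IoT's actual mechanism.

```python
def update_trust(trust, success, alpha=0.2, w_fail=2.0):
    """Exponentially weighted trust update: failures are penalized more heavily
    than successes are rewarded (weights are illustrative, not from the paper)."""
    evidence = 1.0 if success else -w_fail
    return min(1.0, max(0.0, trust + alpha * (evidence - trust)))

def is_suspicious(trust, threshold=0.35):
    """Flag a neighbor as a potential Sybil/malicious node when its trust is low."""
    return trust < threshold

# Example: a neighbor that drops most packets after joining the network
trust = 0.5
for outcome in [True, False, False, True, False, False, False]:
    trust = update_trust(trust, outcome)
print(f"trust={trust:.2f}, suspicious={is_suspicious(trust)}")
```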
BPS-V: A blockchain-based trust model for the Internet of Vehicles with privacy-preserving
Pub Date: 2024-06-13 | DOI: 10.1016/j.adhoc.2024.103566
Chuanhua Wang, Quan Zhang, Xin Xu, Huimin Wang, ZhenYu Luo
The trust system is widely used to prevent malicious behaviors, and it is a key element for vehicles to establish interactions in the Internet of Vehicles (IoV). Nevertheless, trust and privacy remain unresolved concerns stemming from the distinctive features of the IoV. The IoV must thwart malicious attackers from spreading false data while ensuring that vehicles' evaluation data are not leaked, which is of utmost importance. In this paper, we propose a blockchain-based trust model (BPS-V) that supports ciphertext computation over the trust evaluation data submitted by different vehicles. We design a cooperative update method for vehicle trust that utilizes an improved distributed two-trapdoor public-key cryptography algorithm to compute trust cooperatively and reduce the risk of privacy leakage of evaluation data. On this basis, BPS-V introduces blockchain sharding technology to realize cross-domain storage and sharing of trust. Simulation results show that our scheme can effectively protect the privacy of evaluation data and maintain a high detection rate and low false-alarm rate in different road environments. Compared with traditional schemes, BPS-V improves the efficiency of trust updates and the detection of malicious vehicles by 9.5% and 32%, respectively.
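The paper's improved distributed two-trapdoor cryptosystem is not reproduced here; as a hedged stand-in, the sketch below uses textbook Paillier encryption, whose additive homomorphism lets an aggregator combine encrypted trust ratings so that only the key holder learns the aggregate, never an individual rating. Parameters are toy-sized and insecure, for illustration only.

```python
import math, random

# Textbook Paillier with toy primes (NOT secure parameters); needs Python 3.9+ for math.lcm.
p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)       # mu = lam^{-1} mod n

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Vehicles submit encrypted trust ratings; multiplying ciphertexts adds the
# underlying plaintexts, so the aggregator never sees any individual rating.
ratings = [72, 85, 60, 91]                        # e.g. trust scores on a 0-100 scale
aggregate_ct = 1
for r in ratings:
    aggregate_ct = (aggregate_ct * encrypt(r)) % n2
print("sum recovered by the key holder:", decrypt(aggregate_ct))   # 308
```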
Local search resource allocation algorithm for space-based backbone network in Deep Reinforcement Learning method
Pub Date: 2024-06-12 | DOI: 10.1016/j.adhoc.2024.103575
Peiying Zhang, Zixuan Cui, Neeraj Kumar, Jian Wang, Wei Zhang, Lizhuang Tan
With the evolution of space-based backbone networks, the demand for enhanced efficiency and stability in network resource allocation has become increasingly critical, presenting a substantial challenge to conventional allocation methods. In response, we introduce an innovative resource allocation algorithm for space-based backbone networks. This algorithm is a synergistic fusion of Deep Reinforcement Learning (DRL) and Local Search (LS) methodologies. It is specifically designed to reduce the extensive training duration associated with traditional policy networks, a crucial aspect in assuring optimal service quality. Our algorithm is structured within a two-stage framework that seamlessly integrates DRL and LS. A distinctive feature of our approach is the incorporation of link reliability into the algorithmic design, tailored to the dynamic and heterogeneous nature of space-based networks and ensuring effective resource management. The effectiveness of our approach is substantiated through extensive simulation results, which demonstrate that the integration of DRL with LS not only enhances training efficiency but also yields significant improvements in resource allocation outcomes. Our work contributes practical optimization strategies for space-based networks, merging DRL with traditional methodologies for improved performance.
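The two-stage pattern can be sketched generically: a (placeholder) policy proposes an initial assignment of requests to space-based nodes, and a local search then refines it under a reliability-aware objective. The scoring function, the random stand-in for the DRL policy, and all parameters below are assumptions for illustration, not the paper's algorithm.

```python
import random

def objective(assignment, reliability, capacity):
    """Toy objective: reward reliable links, penalize nodes loaded past capacity
    (illustrative stand-in for a reliability-aware allocation cost)."""
    load, score = {}, 0.0
    for request, node in assignment.items():
        score += reliability[(request, node)]
        load[node] = load.get(node, 0) + 1
    for node, l in load.items():
        score -= 2.0 * max(0, l - capacity[node])
    return score

def local_search(assignment, nodes, reliability, capacity, iters=200, seed=0):
    """Stage 2: hill-climbing refinement of the policy-proposed assignment."""
    rng = random.Random(seed)
    best, best_score = dict(assignment), objective(assignment, reliability, capacity)
    for _ in range(iters):
        cand = dict(best)
        cand[rng.choice(list(cand))] = rng.choice(nodes)   # move one request
        s = objective(cand, reliability, capacity)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

# Stage 1 stand-in: a "policy" that assigns each request to a random node
rng = random.Random(1)
nodes = ["sat1", "sat2", "sat3"]
requests = [f"r{i}" for i in range(6)]
reliability = {(r, n): rng.uniform(0.5, 1.0) for r in requests for n in nodes}
capacity = {n: 2 for n in nodes}
initial = {r: rng.choice(nodes) for r in requests}
refined, score = local_search(initial, nodes, reliability, capacity)
print(f"refined score {score:.2f} (initial {objective(initial, reliability, capacity):.2f})")
```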
Hyper-graph matching D2D offloading scheme for enhanced computation and communication capacity
Pub Date: 2024-06-08 | DOI: 10.1016/j.adhoc.2024.103526
Pan Zhao, Liuyuan Chen, Zhiliang Jiang, Datong Xu, Jianli Yang, Mingyang Cui, Tianfei Chen
As the Internet of Things (IoT) and its intelligent applications continue to proliferate, forthcoming 6G networks will confront the dual challenge of heightened communication and computing capacity demands. To address this, D2D collaborative computing is being explored. However, current D2D collaborative computing ignores the integration of computing and communication. For a single-task device, offloading operations intertwine computing and communication: internal coupling arises because local execution and D2D offloading run in parallel. In addition, external coupling arises among devices competing for limited radio and computing resources. Worse, internal and external coupling interact, exacerbating the situation. To address these challenges, this paper proposes a novel D2D offloading framework based on hyper-graph matching. Our goal is to minimize both delay and energy costs while ensuring service quality for all users by jointly optimizing task scheduling, offloading policies, and resource allocation. The original problem is formulated as a nonlinear integer program and, through a three-stage optimization decomposition, separated into several sub-problems. In the first stage, a polynomial-time algorithm optimizes the task offloading ratio, taking into account both its upper and lower bounds. In the second stage, a geometric programming algorithm addresses power allocation. In the third stage, a three-dimensional hyper-graph matching model derives the optimal offloading and channel allocation policies, based on analyzing the conflict graph and applying the claw theorem. Simulation results demonstrate that the proposed scheme outperforms other algorithms by approximately 12%, 20%, 28%, and 40%, respectively. Moreover, it enhances both spectral efficiency and computational efficiency.
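The first stage of such a decomposition can be illustrated with a simple grid search over the offloading ratio: because local execution and D2D offloading run in parallel, the per-task delay is the maximum of the two branches, and the ratio is chosen within its bounds to minimize a weighted delay-plus-energy cost. The cost model and weights below are placeholders, not the paper's formulation.

```python
def task_cost(ratio, task_bits, local_rate, d2d_rate,
              e_local=1.0, e_tx=0.5, w_delay=1.0, w_energy=0.5):
    """Delay + energy cost when a fraction `ratio` of the task is offloaded over
    D2D and the rest runs locally in parallel (placeholder model and weights)."""
    t_local = (1 - ratio) * task_bits / local_rate
    t_d2d = ratio * task_bits / d2d_rate
    delay = max(t_local, t_d2d)                      # the two branches execute in parallel
    energy = (1 - ratio) * task_bits * e_local + ratio * task_bits * e_tx
    return w_delay * delay + w_energy * energy

def best_ratio(task_bits, local_rate, d2d_rate, lower=0.0, upper=1.0, grid=101):
    """Stage-one search for the offloading ratio within its [lower, upper] bounds."""
    candidates = [lower + (upper - lower) * i / (grid - 1) for i in range(grid)]
    return min(candidates, key=lambda r: task_cost(r, task_bits, local_rate, d2d_rate))

print(best_ratio(task_bits=8.0, local_rate=2.0, d2d_rate=5.0))
```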
Protect your data and I’ll rank its utility: A framework for utility analysis of anonymized mobility data for smart city applications
Pub Date: 2024-06-06 | DOI: 10.1016/j.adhoc.2024.103567
Ekler Paulino de Mattos, Augusto C.S.A. Domingues, Fabrício A. Silva, Heitor S. Ramos, Antonio A.F. Loureiro
When designing smart cities’ building blocks, mobility data plays a fundamental role in applications and services. However, mobility data usually exposes the unrestricted locations of the corresponding entities (e.g., citizens and vehicles) and raises privacy concerns, among them the recovery of those entities’ identities through linking attacks. Location Privacy Protection Mechanisms (LPPMs) based on anonymization, such as mix-zones, have been proposed to protect users’ identities. Once the data is protected, a trade-off between privacy and utility arises. However, questions remain about the application of anonymized data to smart city development: which smart-city applications and services can best leverage mobility data anonymized by mix-zones? To answer this question, we propose the Utility Analysis Framework of Anonymized Trajectories for Smart Cities-Application Domains (UAFAT). This characterization framework measures utility through twelve metrics related to privacy, mobility, and social aspects, including performance metrics of the mix-zones computed from the anonymized trajectories they produce. The framework aims to identify applications and services in which the anonymized data provides more or less utility in various aspects. The results, evaluated on taxi-cab and private-car datasets and further characterized by distortion level, show that UAFAT ranks the smart-city application domains that best leverage mobility data anonymized by mix-zones. It also identifies which of the four smart-city application case studies retains the most utility. Additionally, different datasets exhibit different behaviors in terms of utility. These insights can contribute significantly to the utility of both open and private data markets for smart cities.
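One privacy-side metric such a framework can report is the entropy of the anonymity set formed inside each mix-zone. The sketch below computes it from a log of zone crossings under the simplifying assumption of uniform mixing; real linking attacks can exploit timing and direction, which this toy metric ignores.

```python
import math

def mixzone_entropy(crossings):
    """Entropy (in bits) of a mix-zone's anonymity set, assuming an observer
    considers every vehicle that crossed the zone in the same window equally
    likely to be the one re-emerging with a new pseudonym (uniform mixing is a
    simplifying assumption)."""
    n = len(set(crossings))
    return math.log2(n) if n > 0 else 0.0

def entropy_per_zone(trajectory_log):
    """trajectory_log: list of (zone_id, vehicle_id) crossings from anonymized traces."""
    per_zone = {}
    for zone, vehicle in trajectory_log:
        per_zone.setdefault(zone, []).append(vehicle)
    return {zone: mixzone_entropy(v) for zone, v in per_zone.items()}

log = [("z1", "cab7"), ("z1", "cab2"), ("z1", "cab9"), ("z2", "cab7"), ("z2", "cab2")]
print(entropy_per_zone(log))   # {'z1': 1.58..., 'z2': 1.0}
```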