
Latest publications in Computer Networks

A deep learning sparse urban sensing scheme based on spatiotemporal correlations
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111015
Zihao Wei, Yantao Yu, Guojin Liu, Yucheng Wu
Sparse Mobile Crowdsensing (SMCS) provides vital support for wide-area urban sensing by collecting data from only a few sub-regions and inferring the data of unsensed sub-regions from the spatiotemporal relationships of the collected data. However, because of the complex spatiotemporal correlations among sensing data, extracting nonlinear spatiotemporal features from sparse data is exceptionally challenging, yet it is crucial for accurate data inference and future data prediction. Furthermore, existing cell selection methods often overlook the temporal variation of urban sensing data and fail to adequately exploit historical and predicted data, which is crucial for obtaining the optimal subset of sensing regions. To address these issues, a deep learning sparse urban sensing scheme based on spatiotemporal correlations is proposed, comprising data completion, short-term spatiotemporal prediction, and cell selection, with the aim of producing high-quality urban sensing maps within budget constraints. First, to handle sparse sensing data, a Spatio-Temporal Deep Matrix Factorization (STDMF) is proposed to accurately recover the current full map. Subsequently, leveraging predicted and completed historical data, the study constructs spatiotemporal states, rewards, and actions for deep reinforcement learning. A cell selection algorithm called Spatio-Temporal Prediction Assisted Dueling Double Deep Q Network (STPA-D3QN) is proposed, which uses a spatiotemporal dueling deep Q-network to discern spatiotemporal features both within and across observation states and then identifies the optimal choice for each state. Finally, extensive experimental evaluations on four air quality monitoring sensing tasks verify the effectiveness of the proposed algorithm.
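To make the sparse-completion step concrete, the sketch below fits a plain low-rank matrix factorization to the observed cells of a synthetic sensing map and reports the error on the unobserved cells. It is a simplified stand-in for the paper's STDMF (which uses a deep, spatiotemporally regularized factorization); the map size, sparsity level, rank, and learning rate are illustrative choices, not values from the paper.

```python
import numpy as np

# Recover a full sensing map from sparsely observed cells by factorizing it as
# U @ V and fitting only the observed entries (a low-rank stand-in for STDMF).
rng = np.random.default_rng(0)
n_regions, n_steps, rank = 30, 48, 4

true_map = rng.normal(size=(n_regions, rank)) @ rng.normal(size=(rank, n_steps))
mask = rng.random((n_regions, n_steps)) < 0.3          # ~30% of cells are sensed

U = 0.1 * rng.normal(size=(n_regions, rank))
V = 0.1 * rng.normal(size=(rank, n_steps))
lr, lam = 0.01, 0.01                                   # step size, L2 regularization

for _ in range(3000):
    err = mask * (U @ V - true_map)                    # residual on observed cells only
    U -= lr * (err @ V.T + lam * U)
    V -= lr * (U.T @ err + lam * V)

rmse = np.sqrt(np.mean((U @ V - true_map)[~mask] ** 2))
print(f"RMSE on unobserved cells: {rmse:.3f}")
```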
{"title":"A deep learning sparse urban sensing scheme based on spatiotemporal correlations","authors":"Zihao Wei,&nbsp;Yantao Yu,&nbsp;Guojin Liu,&nbsp;Yucheng Wu","doi":"10.1016/j.comnet.2024.111015","DOIUrl":"10.1016/j.comnet.2024.111015","url":null,"abstract":"<div><div>Sparse Mobile Crowdsensing (SMCS) provides vital support for wide-range urban sensing by collecting data from only a few sub-regions and inferring data of unperceived sub-regions based on the spatiotemporal relationships of the collected data. However, due to the complex spatiotemporal correlations among perception data, extracting nonlinear spatiotemporal features from sparse data is exceptionally challenging, which is crucial for accurate data inference and future data prediction. Furthermore, existing cell selection methods often overlook the temporal variation of urban sensing data, failing to adequately utilize historical and predicted data, which is crucial for obtaining the optimal subset of perception regions. To address these issues, a deep learning sparse urban sensing scheme based on spatiotemporal correlations is proposed, which comprises data completion, short-term spatiotemporal prediction, and cell selection, aiming to produce high-quality urban sensing maps within budget constraints. Firstly, to handle sparse sensing data, a Spatio-Temporal Deep Matrix Factorization (STDMF) is proposed to accurately recover the current full map. Subsequently, leveraging predicted and completed historical data, this study constructs spatiotemporal states, rewards, and actions for deep reinforcement learning. A cell selection algorithm called Spatio-Temporal Prediction Assisted Dueling Double Deep Q Network (STPA-D3QN) is proposed, which uses spatiotemporal dueling deep Q-network to discern spatiotemporal features both within and across observation states,then identifies optimal choices for specific states. Finally, extensive experimental evaluations conducted on four sensing tasks in air quality monitoring verify the effectiveness of the proposed algorithm.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 111015"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Replay attacks in RPL-based Internet of Things: Comparative and empirical study
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110996
Hussah Albinali, Farag Azzedin
Routing Protocol for Low-Power and Lossy Networks (RPL) is widely used to enable IP-based communication in constrained environments. However, RPL is vulnerable to several security threats, including replay attacks, which can compromise network performance. Malicious nodes can easily replay RPL control messages and thereby disrupt network topology and operation. Although this issue is significant, current studies are limited and mainly focus on replay attacks aimed at DIO messages; there is little discussion of other kinds of replay attacks, especially those involving DAO messages. To fill this gap, we offer an empirical analysis of different types of replay attacks, with particular emphasis on DAO replay attacks, including the often-neglected route table falsification attack, which has received little attention in the existing literature. Our research methodically examines the effects of various replay attacks on RPL network topology by conducting comprehensive experiments to assess their influence on packet delivery and network latency. Furthermore, we investigate how these attacks affect information security by applying the CIA triad of confidentiality, integrity, and availability. We also highlight security measures aimed at enhancing resilience against these attacks. Our results indicate that the majority of these attacks significantly affect availability and have a serious impact on integrity: DIO suppression and copycat attacks lead to a 36% reduction in the average delivery ratio, and neighbor attacks cause a 50% increase in communication latency in specific attack scenarios. These findings highlight the impact of these attacks and underscore the necessity of developing countermeasures to address them.
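As a small illustration of the metrics the study reports (packet delivery ratio and end-to-end latency), the snippet below computes both from per-packet send/receive records. The log format and numbers are hypothetical, not taken from the paper's experimental setup.

```python
# Packet delivery ratio and mean end-to-end latency from per-packet records.
sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30, 5: 0.40}   # packet id -> send time (s)
received = {1: 0.45, 2: 0.62, 4: 0.91}                  # packet id -> receive time (s)

pdr = len(received) / len(sent)                         # fraction of packets delivered
latency = sum(received[i] - sent[i] for i in received) / len(received)
print(f"delivery ratio = {pdr:.0%}, mean latency = {latency * 1000:.0f} ms")
```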
{"title":"Replay attacks in RPL-based Internet of Things: Comparative and empirical study","authors":"Hussah Albinali ,&nbsp;Farag Azzedin","doi":"10.1016/j.comnet.2024.110996","DOIUrl":"10.1016/j.comnet.2024.110996","url":null,"abstract":"<div><div>Routing Protocol for Low-Power and Lossy Networks (RPL) is widely used to enable IP-based communication in constrained environments. However, RPL is vulnerable to several security threats, including replay attacks, which can compromise network performance. Malicious nodes can easily replay RPL control messages and hence disrupt network topology and operation. Although this issue is significant, current studies are constrained and mainly focus on replay attacks aimed at DIO messages. There is little discussion about other kinds of replay attacks, especially those involving DAO messages. To fill this gap, we offer an empirical analysis of different types of replay attacks, with a particular emphasis on DAO replay attacks, including the often-neglected route table falsification attack, which has not received much attention in the existing literature. Our research methodically examines the effects of various replay attacks on RPL network topology by conducting comprehensive experiments to assess their influence on packet delivery and network latency. Furthermore, we investigate how these attacks affect information security by applying the CIA triad, which encompasses confidentiality, integrity, and availability. We also emphasize security measures aimed at enhancing resilience against these attacks. Our research indicates that the majority of these attacks significantly affect availability and have a serious impact on integrity. DIO suppression and copycat attacks lead to a 36% reduction in the average delivery ratio and neighbor attacks cause a 50% increase in communication latency in specific attack scenarios. These findings highlight the impact of these attacks and underscore the necessity of developing countermeasures to address them.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110996"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep reinforcement learning with dual-Q and Kolmogorov–Arnold Networks for computation offloading in Industrial IoT
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110987
Jinru Wu, Ruizhong Du, Ziyuan Wang
In the Industrial Internet of Things (IIoT), the rapid development of smart mobile devices and 5G network technology has driven the application of mobile edge computing, reducing the delay of task computation offloading to some extent. However, the increasing complexity of the IIoT environment presents challenges for communication management and offloading performance. To achieve efficient computation offloading communication, we designed a cloud–edge–device IIoT system model that uses Voronoi diagrams to partition the service areas of edge servers, thereby adapting to the complex IIoT environment and improving communication efficiency. Additionally, considering that different offloading strategies may carry different levels of offloading security risk, we developed a principal component analysis-based offloading security evaluation model (PCA-OSEM) to analyze potential security risks during the offloading process and identify key factors. Finally, to optimize offloading strategies so as to reduce offloading delay and security risks, we proposed a deep reinforcement learning computation offloading method that combines dual-Q learning with Kolmogorov–Arnold Networks (D2KCO). This method enhances the neural network's approximation capability and training stability. Experimental results show that the proposed PCA-OSEM is effective and that D2KCO reduces offloading delay by 13% and 23.52% compared to the D3PG and DDPG algorithms, respectively, while also reducing security risks.
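For readers unfamiliar with the dual-Q idea, the snippet below computes the standard double-Q (double DQN) learning target on which such methods are built: the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of vanilla Q-learning. The Q-values here are toy arrays standing in for network outputs; this is not the paper's D2KCO architecture, which additionally uses Kolmogorov–Arnold Network layers.

```python
import numpy as np

gamma = 0.99
q_online_next = np.array([[1.2, 0.7, 2.1],
                          [0.3, 0.9, 0.4]])   # Q_online(s', a) for a batch of 2
q_target_next = np.array([[1.0, 0.8, 1.9],
                          [0.2, 1.1, 0.5]])   # Q_target(s', a)
reward = np.array([0.5, 1.0])
done = np.array([0.0, 1.0])                   # 1.0 if s' is terminal

a_star = q_online_next.argmax(axis=1)         # action chosen by the online network
td_target = reward + gamma * (1.0 - done) * q_target_next[np.arange(2), a_star]
print(td_target)                              # regression targets for Q_online(s, a)
```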
{"title":"Deep reinforcement learning with dual-Q and Kolmogorov–Arnold Networks for computation offloading in Industrial IoT","authors":"Jinru Wu,&nbsp;Ruizhong Du,&nbsp;Ziyuan Wang","doi":"10.1016/j.comnet.2024.110987","DOIUrl":"10.1016/j.comnet.2024.110987","url":null,"abstract":"<div><div>In the industrial internet of things, the rapid development of smart mobile devices and 5G network technology has driven the application of mobile edge computing, reducing the delay in task computation offloading to some extent. However, the increasing complexity of the IIoT environment presents challenges for communication management and offloading performance. To achieve efficient computation offloading communication, we designed a cloud–edge-device IIoT system model, utilizing Voronoi diagrams to partition the service areas of edge servers, thereby adapting to the complex IIoT environment and improving communication efficiency. Additionally, considering that different offloading strategies may result in varying levels of offloading security risks, we developed a principal component analysis-based offloading security evaluation model (PCA-OSEM) to analyze potential security risks during the offloading process and identify key factors. Finally, to optimize offloading strategies to reduce offloading delay and security risks, we proposed a dual-Q with Kolmogorov–Arnold networks in deep reinforcement learning computation offloading (D2KCO). This method enhances the neural network’s approximation capability and training stability. Experimental results show that the proposed PCA-OSEM is effective, and the D2KCO method can reduce offloading delay by 13% and 23.52% compared to the D3PG and DDPG algorithms, respectively, while also reducing security risks.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110987"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An effective scheme for classifying imbalanced traffic in SD-IoT, leveraging XGBoost and active learning
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110939
Chandroth Jisi, Byeong-hee Roh, Jehad Ali
The volume and diversity of Internet traffic are constantly growing due to the simplicity of Internet of Things (IoT) technology, making machine learning-powered solutions increasingly essential for efficient network oversight in the future. IoT applications demand stringent yet diverse Quality of Service (QoS). To allocate network resources and provide security according to these QoS requirements, network traffic classification is the foremost solution and a complex part of modern communication. Software Defined Networking (SDN) is combined with machine learning (ML) to automate traffic classification in the IoT network. Nevertheless, the inherent features of Software-Defined IoT (SD-IoT) networks lead to uneven class distributions in traffic classification, which can hinder classification performance, particularly for minority classes. To solve the class imbalance issue in SD-IoT environments, this study introduces a Cost-Sensitive XGBoost with Active Learning (AL-CSXGB) algorithm, which characterizes class distribution from a new point of view. The proposed approach dynamically assigns weights to different applications and iteratively queries labels for new data points to achieve higher accuracy. Experiments on the MOORE_SET and ISCX VPN-nonVPN datasets are used to verify the efficiency of the proposed algorithm. The experimental findings show that AL-CSXGB outperforms other state-of-the-art methods in classification accuracy and computation time and alleviates the imbalance problem in SD-IoT networks. The proposed scheme achieves an accuracy of 98.4% on the MOORE_SET dataset and 98.89% on the ISCX VPN-nonVPN dataset, demonstrating its effectiveness and reliability in diverse scenarios.
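The two ingredients named in the scheme's title can be illustrated together on synthetic data: cost-sensitive XGBoost via class-balanced sample weights, and an active-learning step that queries labels for the flows the current model is least confident about. This is a minimal sketch under those assumptions, not the authors' exact AL-CSXGB weighting or query strategy, and the synthetic features carry no real traffic structure.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(200, 8))                      # labeled flows (features)
y_lab = (rng.random(200) < 0.1).astype(int)            # imbalanced labels (~10% minority)
X_pool = rng.normal(size=(500, 8))                     # unlabeled flows

# Cost-sensitive training: minority-class samples receive larger weights.
clf = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
clf.fit(X_lab, y_lab, sample_weight=compute_sample_weight("balanced", y_lab))

# Active learning: query the 10 least-confident pool samples for labeling.
proba = clf.predict_proba(X_pool)
uncertainty = 1.0 - proba.max(axis=1)
query_idx = np.argsort(uncertainty)[-10:]
print("request labels for pool indices:", query_idx)
```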
{"title":"An effective scheme for classifying imbalanced traffic in SD-IoT, leveraging XGBoost and active learning","authors":"Chandroth Jisi,&nbsp;Byeong-hee Roh,&nbsp;Jehad Ali","doi":"10.1016/j.comnet.2024.110939","DOIUrl":"10.1016/j.comnet.2024.110939","url":null,"abstract":"<div><div>The volume and diversity of Internet traffic are constantly growing due to the simplicity of Internet of Things (IoT) technology, making machine learning-powered solutions increasingly essential for efficient network oversight in the future. The IoT applications prefer stringent but various Quality of Service (QoS). To allocate network resources and offer security based on these QoS, network traffic classification is the foremost solution and a complex part of modern communication. Software Defined Networking (SDN) is combined with machine learning (ML) to automate traffic classification in the IoT network. Nevertheless, uneven class distribution in traffic classification is brought about by the immanent features of Software-Defined IoT (SD-IoT) networks, which could hinder classification performance, particularly for minority classes. In order to solve the issue of class imbalance in SD-IoT environments, this study introduces a Cost-Sensitive XGBoost with Active Learning (AL-CSXGB) algorithm. This unique approach characterizes class distribution from a new point of view. The proposed work dynamically assigns a weight to different applications and actively queries to label new data points iteratively to acquire better accuracy. Experiments on the MOORE_SET and ISCX VPN-nonVPN datasets are used to ensure the efficiency of the algorithm under consideration. The experimental findings show that AL-CSXGB outperforms the other state-of-the-art methods regarding classification accuracy and computation time and alleviates the imbalance problem in SD-IoT networks. The proposed scheme achieves an accuracy of 98.4% on the MOORE_SET dataset and 98.89% on the ISCX VPN-nonVPN dataset, demonstrating its effectiveness and reliability in diverse scenarios.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110939"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sum computation rate maximization for wireless powered OFDMA-based mobile edge computing network
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110961
Guanqun Shen, Xinchen Wei, Kaikai Chi, Fayez Alqahtani, Amr Tolba
Wireless power transfer (WPT) and mobile edge computing (MEC) have been advocated as prospective effective technologies for future wireless networks. This paper introduces a multi-user WPT-MEC system and considers a sum computation rate (SCR) maximization design that jointly optimizes the WPT duration, each user's subcarrier selection indicator, each user's transmit power, and the parameters of each user's offloading mode. In this system, the hybrid access point (AP) broadcasts radio frequency (RF) energy for users to harvest, after which users transmit their computation tasks to the MEC server via the orthogonal frequency division multiple access (OFDMA) protocol. To address this non-convex SCR maximization problem, a decomposition-based optimization is proposed. In the top-level problem, a DRL-based deep neural network (DNN) model is applied to determine the computation selection indicator and subcarrier selection indicator of each user. In the sub-problem, for the binary offloading mode, an efficient two-stage algorithm based on golden section search and intrinsic problem properties is used to determine the optimal values of the remaining parameters; for the partial offloading mode, the problem is reformulated by introducing new variables, and convex optimization techniques are then used to efficiently obtain the corresponding solutions. Simulation results demonstrate that the proposed approach outperforms the benchmark methods in both binary and partial offloading modes.
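The sub-problem solver relies on golden section search, which the sketch below implements for a generic unimodal objective and applies to a hypothetical single-user WPT time split: a fraction tau of the frame is spent harvesting energy and the rest is spent offloading. The rate expression is illustrative only, not the paper's system model.

```python
import math

def golden_section_max(f, lo, hi, tol=1e-6):
    """Return the maximizer of a unimodal function f on [lo, hi]."""
    inv_phi = (math.sqrt(5) - 1) / 2                  # ~0.618
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c                               # keep [a, d]; old c becomes new d
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                               # keep [c, b]; old d becomes new c
            d = a + inv_phi * (b - a)
    return (a + b) / 2

def rate(tau):
    # Harvested energy grows with tau; the remaining 1 - tau carries the offload.
    return (1 - tau) * math.log2(1 + 5.0 * tau / (1 - tau + 1e-9))

tau_star = golden_section_max(rate, 0.0, 0.999)
print(f"best WPT fraction ~ {tau_star:.3f}, rate ~ {rate(tau_star):.3f} bit/s/Hz")
```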
{"title":"Sum computation rate maximization for wireless powered OFDMA-based mobile edge computing network","authors":"Guanqun Shen ,&nbsp;Xinchen Wei ,&nbsp;Kaikai Chi ,&nbsp;Fayez Alqahtani ,&nbsp;Amr Tolba","doi":"10.1016/j.comnet.2024.110961","DOIUrl":"10.1016/j.comnet.2024.110961","url":null,"abstract":"<div><div>The wireless power transfer (WPT) and mobile edge computing (MEC) technologies have been advocated as the prospective effective solution for future wireless networks. This paper introduces a multi-user WPT-MEC system, where a sum computation rate (SCR) maximization design by jointly optimizing the WPT duration, the allocation of the subcarrier selection indicator of each user, each user’s transmit power, and the parameters related to different offload modes at each user is considered. In such a system, the hybrid access point (AP) broadcasts radio frequency (RF) energy intended for users to harvest, subsequently enabling users to transmit their computation tasks to the MEC server via the orthogonal frequency division multiple access (OFDMA) protocol. To address this non-convexity SCR maximization problem, a decomposition optimization is proposed. In the top-problem, the DRL-based deep neural network (DNN) model is applied to realize the computation selection indicator and subcarrier selection indicator among each user. In the sub-problem, for the binary offloading mode, an efficient two-stage algorithm with golden section search and intrinsic properties is utilized to determine the optimal values of remaining parameters. For the partial offloading mode, the problem is reformulated by introducing new variables and then the convex optimization techniques are utilized to efficiently obtain the corresponding solutions. Simulation results demonstrate the proposed approach outperforms the benchmark methods considered in both binary and partial offloading modes.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 110961"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Augmenting channel estimation via loss field: Site-trained Bayesian modeling and comparative analysis
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.110993
Jie Wang, Meles G. Weldegebriel, Neal Patwari
Future wireless networks that share spectrum dynamically among groups of mobile users will require fast and accurate channel estimation in order to guarantee varying signal-to-interference-plus-noise ratio (SINR) requirements for co-channel links. There is a need for channel models with low computational complexity and high accuracy that adapt to the particular area of deployment while preserving explainability. In this work, we propose the Channel Estimation via Loss Field (CELF) model, which augments existing channel models using channel loss measurements from a deployed network and a Bayesian linear regression method to estimate a site-specific loss field for the area. The loss field is explainable as a site map of additional radio 'shadowing' relative to the channel base model, but it requires no site-specific terrain or building information. For an arbitrary pair of transmitter and receiver positions, CELF sums the loss field near the link line to estimate its shadowing loss. We use extensive indoor and outdoor measurements to show that CELF lowers the modeling error variance of the log-distance path loss base model by up to 68% for prediction and outperforms three popular machine learning (ML) methods in variance reduction and training efficiency. To validate CELF's robustness, it is applied to a different channel base model, the terrain-integrated rough earth model (TIREM), and numerical results show that CELF can reduce the test variance by up to 63%. We further discuss two spatial multipath models for the weight matrix in CELF and observe similar accuracy improvements. In summary, CELF offers a new type of explainable learning model for accurate and fast site-specific radio channel loss estimation.
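The Bayesian-linear-regression step can be sketched in a few lines: excess losses (measured link loss minus the base path-loss model's prediction) are modeled as a weighted sum of a per-pixel loss field, and the field's posterior mean has a closed form under a Gaussian prior. The random near-link weight matrix and the noise/prior variances below are illustrative placeholders, not CELF's calibrated link-line weights.

```python
import numpy as np

rng = np.random.default_rng(1)
n_links, n_pixels = 120, 400            # training links, 20x20 grid of loss-field pixels

W = rng.random((n_links, n_pixels))
W *= (W > 0.9)                          # keep ~10% of pixels per link ("near the link line")
true_field = rng.gamma(shape=1.0, scale=1.0, size=n_pixels)
sigma_n, sigma_x = 2.0, 1.0             # noise and prior standard deviations (dB)

y = W @ true_field + sigma_n * rng.normal(size=n_links)   # excess loss per link (dB)

# Posterior mean of the loss field under a zero-mean Gaussian prior.
A = W.T @ W / sigma_n**2 + np.eye(n_pixels) / sigma_x**2
x_map = np.linalg.solve(A, W.T @ y / sigma_n**2)

# Predict extra shadowing for a new link by summing the field along its weight row.
w_new = rng.random(n_pixels) * (rng.random(n_pixels) > 0.9)
print("predicted extra shadowing (dB):", float(w_new @ x_map))
```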
{"title":"Augmenting channel estimation via loss field: Site-trained Bayesian modeling and comparative analysis","authors":"Jie Wang ,&nbsp;Meles G. Weldegebriel ,&nbsp;Neal Patwari","doi":"10.1016/j.comnet.2024.110993","DOIUrl":"10.1016/j.comnet.2024.110993","url":null,"abstract":"<div><div>Future wireless networks that share spectrum dynamically among groups of mobile users will require fast and accurate channel estimation in order to guarantee varying signal-to-interference-plus-noise ratio (SINR) requirements for co-channel links. There is a need for channel models with low computational complexity and high accuracy that adapt to the particular area of deployment while preserving explainability. In this work, we propose the <em>Channel Estimation via Loss Field (CELF)</em> model, which augments existing channel models using channel loss measurements from a deployed network and a Bayesian linear regression method to estimate a site-specific loss field for the area. The loss field is explainable as a site map of additional radio ‘shadowing’, compared to the channel base model, but it requires no site-specific terrain or building information. For an arbitrary pair of transmitter and receiver positions, CELF sums the loss field near the link line to estimate its shadowing loss. We use extensive indoor and outdoor measurements to show that CELF lowers the modeling error variance of the log-distance path loss base model by up to 68% for prediction, and outperforms 3 popular Machine Learning (ML) methods in variance reduction and training efficiency. To validate CELF’s robustness, it is applied to a different channel base model, the terrain-integrated rough earth model (TIREM), and numerical results show that CELF can reduce the test variance by up to 63%. We further discuss two spatial multipath models for a weight matrix in CELF and observe similar accuracy improvement. To summarize CELF offers a new type of explainable learning model for accurate and fast site-specific radio channel loss estimation.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 110993"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143177136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing IoT security: A comprehensive exploration of privacy, security measures, and advanced routing solutions
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2025.111045
Azmera Chandu Naik, Lalit Kumar Awasthi, Priyanka R., T.P. Sharma, Aryan Verma
The Internet of Things (IoT) is an advanced concept in computer networking that enables smooth interconnection and communication across the different types of devices around us. The broad applicability of adaptable IoT device connections is evident in numerous sectors, including smart surveillance, environmental monitoring, infrastructure management, and smart homes. Although IoT has numerous advantages, its fundamental openness and dependence on wireless networks make it vulnerable to several security risks that can cause monetary and privacy losses. This paper offers an extensive examination of privacy concerns, weaknesses in the security of IoT networks, attacks at the several layers of the IoT architecture, and computational steps taken to mitigate these difficulties. Our discussion begins with an analysis of the significant privacy and security issues faced by users of IoT devices, highlighting the need for strict security regulations. Next, we examine a classification of attacks for each layer of the IoT in order to clarify the specific security requirements. We then present a comprehensive review of the literature, organizing current security approaches into four major categories: authentication, encryption, trust management, and secure routing. The review focuses primarily on recent advancements in secure routing, and our analysis of secure routing is centered on clustering techniques and routing protocols that are aware of quality of service (QoS). We conduct a comparative examination of several techniques in each category, examining their contributions and methods and identifying potential weaknesses. Through this review, we offer a comprehensive technical analysis of the difficulties and solutions related to IoT security, with the goal of promoting progress in safeguarding IoT ecosystems.
{"title":"Enhancing IoT security: A comprehensive exploration of privacy, security measures, and advanced routing solutions","authors":"Azmera Chandu Naik ,&nbsp;Lalit Kumar Awasthi ,&nbsp;Priyanka R. ,&nbsp;T.P. Sharma ,&nbsp;Aryan Verma","doi":"10.1016/j.comnet.2025.111045","DOIUrl":"10.1016/j.comnet.2025.111045","url":null,"abstract":"<div><div>The Internet of Things (IoT) is an advanced concept in computer networking that enables smooth interconnection and communication across different types of devices around us. The vast applications of the adaptable connections of IoT devices are evident in numerous sectors, including smart surveillance, environmental monitoring, infrastructure management, and our home. Although IoT has numerous advantages, its fundamental openness and dependence on wireless networks make it vulnerable to several security risks that can cause monetary and privacy losses. This paper offers an extensive examination of privacy concerns, weaknesses in the security of IoT networks, attacks at several layers of the IoT architecture, and steps taken to mitigate these difficulties with computational methods. Our debate begins with an analysis of the significant privacy and security issues faced by users of IoT devices, highlighting the need for strict security regulations. Next, we examine a classification of attacks for each layer of the IoT in order to clarify the specific security requirements. In this study, we present a comprehensive review of the literature, organizing current security approaches into four major categories: authentication, encryption, trust management, and secure routing. The review’s focus is primarily on recent advancements in safe routing. Furthermore, our analysis of secure routing is centered around clustering techniques and routing protocols that are aware of quality of service (QoS). We conduct a comparative examination of several techniques for each category, examining their contributions and methods, and then identifying any potential weaknesses. Further, through this review, we offer a comprehensive technical analysis of the difficulties and solutions related to IoT security, with the goal of promoting progress in safeguarding IoT ecosystems.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111045"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143177533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Concurrent WiFi backscatter communication using a single receiver in IoT networks
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111029
Weiqi Wu, Wei Gong
The development of Internet-of-Things (IoT) networks has transformed our daily lives, enabling users to access a wide range of information. Power consumption in IoT networks is crucial, since excessive power use leads to frequent battery replacements and raises environmental concerns. To address this, ambient backscatter communication has emerged owing to its ultra-low power consumption and battery-free operation. In this paper, we focus on the design of WiFi backscatter communication. Current WiFi backscatter systems either operate in a single-tag mode, encountering challenges when multiple tags transmit concurrently, or require a double-receiver design to enable multi-tag concurrent communication, which relies heavily on a reliable channel between the WiFi transmitter and the WiFi receiver. To address these issues, we introduce ParaFi, a novel WiFi backscatter system that enables concurrent WiFi backscatter communication using a single receiver. ParaFi is designed around a key insight: a group of known pilot signals is inserted into each OFDM WiFi symbol, which provides an opportunity to decode multi-tag data using only a backscatter receiver. To further improve ParaFi's reliability, we design a novel performance enhancement approach. We evaluate ParaFi's performance under various scenarios, and our results demonstrate that ParaFi outperforms the state-of-the-art multi-tag WiFi backscatter communication system.
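The pilot-based insight can be caricatured as follows: because every OFDM symbol carries known pilot values, a receiver can re-estimate the channel on the pilot subcarriers symbol by symbol and treat abrupt deviations as a tag toggling its reflection. The sketch below is a noiseless, single-tag toy with made-up channel gains; it is not ParaFi's multi-tag decoder.

```python
import numpy as np

pilots_tx = np.array([1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j])   # known pilot values
h_direct = 0.8 * np.exp(1j * 0.3)                          # AP -> receiver path
h_tag = 0.2 * np.exp(1j * 1.1)                             # extra path when the tag reflects

tag_bits = [0, 1, 1, 0, 1]                                 # one tag bit per OFDM symbol
rx_pilots = [(h_direct + b * h_tag) * pilots_tx for b in tag_bits]

h_est = np.array([np.mean(rx / pilots_tx) for rx in rx_pilots])   # per-symbol channel estimate
decoded = (np.abs(h_est - h_est[0]) > 0.1).astype(int)            # deviation from the first ("tag off") symbol
print("decoded bits:", decoded.tolist())                          # -> [0, 1, 1, 0, 1]
```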
{"title":"Concurrent WiFi backscatter communication using a single receiver in IoT networks","authors":"Weiqi Wu,&nbsp;Wei Gong","doi":"10.1016/j.comnet.2024.111029","DOIUrl":"10.1016/j.comnet.2024.111029","url":null,"abstract":"<div><div>The development of the Internet-of-Things (IoT) network has transformed our daily lives, enabling users to access various information. Power consumption in the IoT network is crucial since excessive power use can cause frequent battery replacements, raising environmental concerns. To address this, ambient backscatter communication has emerged owing to its ultra-low-power consumption and battery-free operation. In this paper, we focus on the design of WiFi backscatter communication. Current WiFi backscatter communication systems either operate in a single-tag mode, encountering challenges when multiple tags transmit concurrently, or require a double-receiver design to enable multi-tag concurrent communication, causing a significant reliance on a reliable channel between the WiFi transmitter and the WiFi receiver. To address these issues, we introduce ParaFi, a novel WiFi backscatter system that enables concurrent WiFi backscatter communication using a single receiver. ParaFi is designed with a key insight: a group of known pilot signals is inserted into each OFDM WiFi symbol, which provides us with an opportunity to decode multi-tag data using only a backscatter receiver. To further improve ParaFi’s reliability, we design a novel performance enhancement approach. We evaluate ParaFi’s performance under various scenarios, and our results demonstrate that ParaFi outperforms the state-of-the-art multi-tag WiFi backscatter communication system.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111029"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143178147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing DV-Hop localization through topology-based straight-line distance estimation
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111025
Liming Wang, Xuanzhi Zhao, Di Yang, Zengli Liu, Wlodek J. Kulesza, Jingmin Tang, Wen Zhang
Wireless sensor networks often use a distributed configuration and rely on self-organizing mechanisms to integrate local information into a global context. This paper treats the 3-hop path as the basic component of a multi-hop path; a 3-hop path has two types of planar topological structure, 'S'-shaped and 'U'-shaped. The paper deduces all possible topological structures when a 4-hop structure is merged into a 3-hop structure. It also offers an iterative method for determining the overall straight-line distance between the start and end points of an n-hop polyline path, given that each node knows the distances to nearby nodes. Euler's four-point formula is used in the proposed method for two key purposes: identifying whether a 3-hop path is 'U'-shaped or 'S'-shaped, and calculating the straight-line distance within a virtual quadrilateral. This method is combined with the distance vector hop (DV-Hop) algorithm, and the resulting algorithm is called Path's Straight Distance DV-Hop (PSDDV-Hop). PSDDV-Hop significantly improves localization accuracy by eliminating the polyline bending errors in the distance estimation of an n-hop path. Several issues related to the implementation of PSDDV-Hop are analyzed and corresponding solutions are provided, including a method of estimating the straight-line distance within no more than three hops and the replacement of nonlinear distance–area functions with linear fits to reduce complexity and compensate for estimation bias. Two distinct strategies for setting the communication radius are introduced to accommodate diverse scenarios. Finally, experiments confirm that PSDDV-Hop provides greater localization accuracy across diverse network configurations.
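For context, the sketch below implements the classic DV-Hop baseline that PSDDV-Hop refines: each anchor's distance is approximated as hop count times an average hop size, and the unknown node's position is then solved by linearized least squares. Anchor positions, hop counts, and the average hop distance are made-up values.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # known anchor positions (m)
hops = np.array([4, 3, 5])                                     # hop counts to the unknown node
avg_hop = 12.0                                                 # average hop distance (m)
d = hops * avg_hop                                             # DV-Hop distance estimates

# Linearize by subtracting the last anchor's circle equation from the others.
a_n, d_n = anchors[-1], d[-1]
A = 2 * (anchors[:-1] - a_n)
b = (d_n**2 - d[:-1]**2) + np.sum(anchors[:-1]**2, axis=1) - np.sum(a_n**2)
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated node position (m):", pos)
```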
{"title":"Optimizing DV-Hop localization through topology-based straight-line distance estimation","authors":"Liming Wang ,&nbsp;Xuanzhi Zhao ,&nbsp;Di Yang ,&nbsp;Zengli Liu ,&nbsp;Wlodek J. Kulesza ,&nbsp;Jingmin Tang ,&nbsp;Wen Zhang","doi":"10.1016/j.comnet.2024.111025","DOIUrl":"10.1016/j.comnet.2024.111025","url":null,"abstract":"<div><div>Wireless sensor networks often use a distributed configuration and rely on self-organizing mechanisms to integrate local information into a global context. This paper considers the 3-hop path as the basic component of a multi-hop path; the 3-hop path has two types of planar topological structures,‘S’-shaped and ‘U’-shaped. This paper provides a deduction of all possible topological structures when a 4-hop structure is merged into a 3-hop structure. Additionally, it offers an iterative method for determining the overall direct distance between the start and end points of an n-hop path along a polyline, given that each node is aware of the distances to nearby nodes. Euler’s four-point formula is utilized in the proposed method to perform two key functions: identifying whether a 3-hop path is ‘U’-shaped or ‘S’-shaped and calculating the straight-line distance within a virtual quadrilateral. The above method is combined with the distance vector routing (DV-Hop) algorithm, and the resulting algorithm is called Path’s Straight Distance DV-Hop (PSDDV-Hop). PSDDV-Hop significantly increases the accuracy of localization by eliminating the polyline bending errors in the distance estimation for an n-hop path. Several issues related to the implementation of PSDDV-Hop are analyzed, and corresponding solutions are provided, including a method of estimating the straight-line distance within no more than 3-hop and the replacement of nonlinear distance–area functions with linear fitting to reduce complexity and compensate for estimation bias. Two distinct strategies for setting the communication radius are introduced to accommodate diverse scenarios. Ultimately, the experiments confirm that PSDDV-Hop provides greater accuracy in localization across diverse network configurations.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111025"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143177139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Energy-latency tradeoff for task offloading and resource allocation in vehicular edge computing
IF 4.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-02-01 DOI: 10.1016/j.comnet.2024.111026
Yuxuan Long, Zhenyu Wang, Shizhan Lan, Rui Zhang, Kai Xu
Vehicular edge computing (VEC) has emerged as a cutting-edge distributed computing paradigm capable of addressing network congestion and excessive energy use in vehicular systems. To enhance VEC performance, we examine the energy–latency tradeoff for partial task offloading in end–VEC–cloud orchestrated networks. We formulate a joint computation offloading and resource allocation problem aimed at minimizing latency and energy consumption. To address this problem, we propose a collaborative task splitting and resource allocation optimization (CTSRAO) algorithm: we first decouple the problem into two convex sub-problems and then apply the Lagrangian and simplex methods to jointly optimize computation resources and the task splitting ratio. Furthermore, we investigate the criteria for deciding whether a task should be offloaded to the VEC server or the cloud. Simulation results show that our algorithm significantly enhances system performance, achieving lower latency and energy consumption than the benchmark and state-of-the-art methods.
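A toy version of the splitting-ratio decision can be written as a one-dimensional optimization: a fraction rho of a task runs locally while 1 - rho is uploaded and executed at the VEC server, the two stages run in parallel, and the objective is a weighted sum of latency and energy. All constants and the cost model are illustrative assumptions, not the paper's CTSRAO formulation or its Lagrangian/simplex solution.

```python
from scipy.optimize import minimize_scalar

C = 8e8                        # task size (CPU cycles)
D = 2e6                        # bits to upload if fully offloaded
f_loc, f_edge = 1e9, 8e9       # local / edge CPU frequencies (Hz)
R = 10e6                       # uplink rate (bit/s)
kappa, p_tx = 1e-27, 0.5       # local energy coefficient, transmit power (W)
w = 0.5                        # weight trading latency against energy

def cost(rho):
    t_local = rho * C / f_loc                               # local execution time
    t_off = (1 - rho) * D / R + (1 - rho) * C / f_edge      # upload + edge execution time
    energy = kappa * rho * C * f_loc**2 + p_tx * (1 - rho) * D / R
    return w * max(t_local, t_off) + (1 - w) * energy       # latency-energy tradeoff

res = minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded")
print(f"optimal local fraction rho = {res.x:.3f}, cost = {res.fun:.3f}")
```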
{"title":"Energy-latency tradeoff for task offloading and resource allocation in vehicular edge computing","authors":"Yuxuan Long,&nbsp;Zhenyu Wang,&nbsp;Shizhan Lan,&nbsp;Rui Zhang,&nbsp;Kai Xu","doi":"10.1016/j.comnet.2024.111026","DOIUrl":"10.1016/j.comnet.2024.111026","url":null,"abstract":"<div><div>Vehicular edge computing (VEC) has emerged as a cutting-edge distributed computing paradigm capable of addressing network congestion and excessive energy use in vehicular systems. To enhance VEC performance, we examined the energy–latency tradeoff for partial tasks offloading in end-VEC-cloud orchestrated networks. We formulated a joint computation offloading and resource allocation problem aimed at minimizing latency and energy consumption. To address the underlined problem, we proposed a collaborative task splitting and resource allocation optimization (CTSRAO) algorithm. We initially decoupled the problem into two convex sub-problems and then applied the Lagrangian and simplex methods for joint optimization of computation resources and task splitting ratio. Furthermore, we investigated the criteria for determining whether a task should be offloaded to the VEC or cloud. Simulation results showed that our algorithm significantly enhances systems performance, achieving lower latency and energy consumption than the benchmark and state-of-the-art methods.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111026"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143177367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0