A deep reinforcement learning-based D2D spectrum allocation underlaying a cellular network
Pub Date : 2024-05-30. DOI: 10.1007/s11276-024-03766-6
Yao-Jen Liang, Yu-Chan Tseng, Chi-Wen Hsieh
We develop a deep reinforcement learning (DRL)-based spectrum access scheme for device-to-device (D2D) communications in an underlay cellular network. Under this scheme, the base station (BS) learns an optimal spectrum allocation strategy that maximizes the overall system throughput of both the D2D and cellular communications, while D2D pairs dynamically access the time slots (TSs) of a shared spectrum belonging to a dedicated cellular user (CU). In particular, to ensure the quality of service (QoS) requirements of cell-edge CUs, the paper accounts for the varying positions of CUs and D2D pairs by dividing the cellular area into shareable and un-shareable areas. A double deep Q-network (DDQN) is then adopted at the BS to decide whether, and which, D2D pair may access each TS within the shared spectrum. The proposed DDQN spectrum allocation not only enjoys low computational complexity, since only current state information is used as input, but also approaches the throughput of the exhaustive-search method, since received signal-to-noise ratios are used as inputs. Numerical results show that the proposed deep learning-based spectrum access scheme outperforms state-of-the-art algorithms in terms of throughput.
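For readers who want the mechanics, the sketch below shows the core double-DQN update this abstract relies on: the online network selects the greedy action and the target network evaluates it. The state layout (a vector of received SNRs), the action set (grant the TS to one of the D2D pairs or leave it to the CU), the network sizes, and the toy batch are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal double-DQN decision step for TS allocation (sketch, assumptions noted above).
import torch
import torch.nn as nn

N_D2D, STATE_DIM = 4, 8          # assumed: 4 D2D pairs, 8 received-SNR features
N_ACTIONS = N_D2D + 1            # action N_D2D = "no D2D access this TS"

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online, target = make_qnet(), make_qnet()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)
gamma = 0.95

def ddqn_update(s, a, r, s_next):
    """One double-DQN step: online net picks argmax action, target net scores it."""
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1, keepdim=True)   # selection: online net
        q_next = target(s_next).gather(1, a_star).squeeze(1)  # evaluation: target net
        y = r + gamma * q_next
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy batch: random SNR states and throughput-like rewards
s, s2 = torch.randn(32, STATE_DIM), torch.randn(32, STATE_DIM)
a, r = torch.randint(0, N_ACTIONS, (32,)), torch.rand(32)
print(ddqn_update(s, a, r, s2))
```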
{"title":"A deep reinforcement learning-based D2D spectrum allocation underlaying a cellular network","authors":"Yao-Jen Liang, Yu-Chan Tseng, Chi-Wen Hsieh","doi":"10.1007/s11276-024-03766-6","DOIUrl":"https://doi.org/10.1007/s11276-024-03766-6","url":null,"abstract":"<p>We develop a deep reinforcement learning-based (DRL) spectrum access scheme for device-to-device communications in an underlay cellular network. Based on the DRL scheme, the base station aims to maximize the overall system throughput of both the D2D and cellular communications by learning an optimal spectrum allocation strategy. While D2D pairs dynamically access the time slots (TSs) of a shared spectrum belonging to a dedicated cellular user (CU). In particular, to ensure that the quality of service (QoS) requirement of cell-edge CUs, this paper addresses the various positions of CUs and D2D pairs by dividing the cellular area into shareable and un-shareable areas. Then, a double deep Q-network is adopted for the BS to decide whether and which D2D pair can access each TS within a shared spectrum. The proposed DDQN spectrum allocation not only enjoys low computational complexity since just current state information is utilized as input, but also approaches the throughput of exhaustive search method since received signal-to-noise ratios are utilized as inputs. Numerical results show that the proposed deep learning-based spectrum access scheme outperforms the state-of-art algorithms in terms of throughput.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"62 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141195052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel design of non-bending narrow wall slotted waveguide array antenna for X-band wireless network applications
Pub Date : 2024-05-30. DOI: 10.1007/s11276-024-03787-1
Nasrin Amiri, Keyvan Forooraghi, Ensiyeh Ghasemi Mizuji, Mohammad Reza Ghaderi
Slotted waveguide antennas (SWAs) are widely used in various wireless applications such as spacecraft, radars, and aircraft tracking systems. A specific type of SWA, which has slots in its narrow wall, has attracted great interest due to its ability to produce horizontal polarization. However, because the narrow wall is typically smaller than the resonant length of the slot, the slot inevitably extends onto the broad waveguide walls. This bending not only compromises the structural integrity of the waveguide but also complicates precise slot excitation modeling and increases fabrication complexity for planar arrays, often requiring metallic separators. This paper introduces a novel design that prevents edge-slot bending onto the wider waveguide walls by placing a dielectric layer on the slots, effectively halving the slot's resonant length. This ensures that the slot remains fully positioned on the narrow wall without bending onto the broader walls, while also protecting the antenna from extreme heat and humidity. To validate the effectiveness of the proposed design, an array of 12 slots with a Taylor synthesis-based amplitude distribution was designed, tested, and demonstrated to have side lobes below −30 dB. Simulation results were found to be in good agreement with measurements.
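The length-halving claim follows from a resonant slot being roughly half a wavelength long in its effective medium, so raising the effective permittivity shrinks it. The quick sketch below illustrates that scaling at 10 GHz; the effective permittivity of 4 (which halves the length) and the simple square-root scaling are illustrative assumptions, and a real design would come from full-wave simulation rather than this formula.

```python
# Back-of-the-envelope slot length under dielectric loading (sketch, see assumptions above).
import math

C = 299_792_458.0  # speed of light, m/s

def slot_resonant_length(freq_hz: float, eps_eff: float = 1.0) -> float:
    """Approximate resonant slot length: half a wavelength in the effective medium."""
    lam = C / (freq_hz * math.sqrt(eps_eff))
    return lam / 2.0

f = 10e9  # X-band, 10 GHz
L_air = slot_resonant_length(f)                 # uncovered slot
L_diel = slot_resonant_length(f, eps_eff=4.0)   # assumed eps_eff = 4 -> length halves
print(f"air: {L_air*1e3:.2f} mm, dielectric-loaded: {L_diel*1e3:.2f} mm")
```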
{"title":"A novel design of non-bending narrow wall slotted waveguide array antenna for X-band wireless network applications","authors":"Nasrin Amiri, Keyvan Forooraghi, Ensiyeh Ghasemi Mizuji, Mohammad Reza Ghaderi","doi":"10.1007/s11276-024-03787-1","DOIUrl":"https://doi.org/10.1007/s11276-024-03787-1","url":null,"abstract":"<p>Slotted waveguide antennas (SWAs) are widely used in various wireless applications such as space aircrafts, radars, and aircraft tracking systems. A specific type of SWA, which has slots in its narrow wall, has caught great interest due to its ability to produce horizontal polarization. However, due to the typically smaller size of the narrow wall compared to the resonant length of the slot, the slot inevitably extends onto the broad waveguide walls. This bending not only compromises the structural integrity of the waveguide but also complicates precise slot excitation modeling and increases fabrication complexity for planar arrays, often requiring metallic separators. This paper introduces a novel design that prevents edge slot bending on the wider waveguide walls by using a dielectric layer placed on the slots, effectively halving the slot’s resonant length. This ensures that the slot remains fully positioned on the narrow wall without bending onto the broader walls and also it protects the antenna from extreme heat and humidity. To validate the effectiveness of the proposed design, an array consisting of 12 slots with a Taylor synthesis-based amplitude distribution was designed, tested, and demonstrated to have side lobes below − 30 dB. Simulation results were found to be in good agreement with measurements.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"40 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141195094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OR2M: a novel optimized resource rendering methodology for wireless networks based on virtual reality (VR) applications
Pub Date : 2024-05-29. DOI: 10.1007/s11276-024-03781-7
V. Kiruthika, Arun Sekar Rajasekaran, K. B. Gurumoorthy, Anand Nayyar
Virtual Reality (VR) applications that depend on wireless networks demand low-latency rendering for efficient modeling. The primary concern, however, is seamless access to resources for a sustainable VR environment; such applications rely on ease of modeling and uninterrupted resource utilization. This paper proposes an Optimized Resource Rendering Method (OR2M) that accounts for the VR latency and data-rate requirements at the initialization state. The initialization state demands maximum data at high speed and low latency to generate wireless VR, while the representation state demands freely flowing wireless and cloud resources to sustain those initialization demands. The analysis therefore uses classification-tree learning to identify the VR demands in the backbone wireless networks; in each interval, the learning classifies the rendering demands left unsatisfied in the previous interval in order to optimize the representation. Experimental results show that the proposed method reduces failures by 10.61% and latency by 7.28% across varying service providers.
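The classification-tree step can be pictured with a small sketch: each interval is summarized by features and a label for whether its rendering demand was satisfied, and a shallow tree then classifies new intervals. The (latency, data rate) feature pair, the synthetic data, and the labeling rule are illustrative assumptions, not the paper's dataset.

```python
# Toy classification-tree for VR demand satisfaction (sketch, assumptions noted above).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(5, 50, 500),      # latency in ms
                     rng.uniform(50, 500, 500)])   # data rate in Mbps
# assumed rule for the toy labels: demand satisfied if latency low and rate high
y = ((X[:, 0] < 20) & (X[:, 1] > 200)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# classify new intervals so unsatisfied demands can be re-rendered first
print(tree.predict([[12.0, 300.0], [45.0, 80.0]]))  # expected [1 0] on this toy data
```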
{"title":"OR2M: a novel optimized resource rendering methodology for wireless networks based on virtual reality (VR) applications","authors":"V. Kiruthika, Arun Sekar Rajasekaran, K. B. Gurumoorthy, Anand Nayyar","doi":"10.1007/s11276-024-03781-7","DOIUrl":"https://doi.org/10.1007/s11276-024-03781-7","url":null,"abstract":"<p>Virtual Reality (VR) applications depending on wireless networks demand low-latency representations for efficient modeling. However, the primary concern is the seamless accessibility of the resources for a sustainable VR environment. The scope of such applications is valid for its ease of modeling and swift continuity for resource utilization. The research paper proposes an Optimized Resource Rendering Method (OR2M) that accounts for the VR requirements based on latency and data rate at the initialization state. The initialization state demands maximum data at high-speed and low-latency features for generating wireless VR. The representation state demands free flow availability of wireless and cloud resources that sustain the initialization state demands. Therefore, the analysis is performed using classification tree learning to identify the VR demands in the backboned wireless networks. The consecutive learning performs classification from the unsatisfied rendering demand from the previous interval for optimizing the representation. Experimental results state that the proposed method reduces failures by 10.61% and latency by 7.28% under varying service providers.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"2016 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141169253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction and evaluation of wireless network data transmission security risk based on machine learning
Pub Date : 2024-05-28. DOI: 10.1007/s11276-024-03773-7
Bo Huang, Huidong Yao, Qing Bin Wu
The security of data transmitted over wireless networks is an important technical index for ensuring reliable local-area communication: such data carry a great deal of personal private information, and the consequences of leakage are serious. This paper proposes a machine learning-based method for predicting and evaluating the security risk of wireless network data transmission. To address information leakage and privacy protection, it uses improved Naive Bayesian kernel estimation (INBK) to assess wireless network data security and assign a risk level. The results show that the proposed model achieves lower false positive and false negative rates than other methods. In like-for-like comparisons, the false positive and false negative rates of all algorithms rise somewhat as the number of attacking nodes increases, but the proposed method retains its advantage in accuracy, and its recall and F1 scores remain strong. The four compared algorithms perform poorly on the U2R and R2L labels, yet the proposed method stays above 80% overall, the best overall performance. In security risk assessment, the proposed method is correct more than 95% of the time, whereas the other methods reach only about 80%, with the worst at 75%. The overall time consumption across nodes is 18 ms, against average times of up to 35 ms for the other models.
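The INBK idea, replacing the Gaussian class-conditional densities of Naive Bayes with per-feature kernel density estimates, can be sketched as below. The synthetic traffic features, the fixed bandwidth, and the two risk classes are illustrative assumptions, not the paper's data or its exact estimator.

```python
# Naive Bayes with KDE class-conditionals (sketch, assumptions noted above).
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (200, 3))   # class 0: normal traffic features (toy)
X1 = rng.normal(2.0, 1.5, (200, 3))   # class 1: risky traffic features (toy)

def fit_kde_per_feature(X, bandwidth=0.5):
    return [KernelDensity(bandwidth=bandwidth).fit(X[:, [j]]) for j in range(X.shape[1])]

kdes = {0: fit_kde_per_feature(X0), 1: fit_kde_per_feature(X1)}
log_prior = {0: np.log(0.5), 1: np.log(0.5)}

def predict(x):
    """Naive Bayes: sum per-feature KDE log-likelihoods plus log prior, pick argmax."""
    scores = {}
    for c, models in kdes.items():
        scores[c] = log_prior[c] + sum(
            m.score_samples(np.array([[x[j]]]))[0] for j, m in enumerate(models))
    return max(scores, key=scores.get)

print(predict([0.1, -0.2, 0.3]), predict([2.5, 1.8, 2.2]))  # expected: 0 1
```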
{"title":"Prediction and evaluation of wireless network data transmission security risk based on machine learning","authors":"Bo Huang, Huidong Yao, Qing Bin Wu","doi":"10.1007/s11276-024-03773-7","DOIUrl":"https://doi.org/10.1007/s11276-024-03773-7","url":null,"abstract":"<p>The security of wireless network transmission data is an important technical index to ensure the reliable transmission of information in local areas, in this paper, there are a lot of personal privacy in wireless network transmission data, and the consequences of leakage are serious. This paper puts forward the prediction and evaluation of wireless network data transmission security risk based on machine learning, an effective method to solve information leakage and privacy protection uses improved Naive Bayesian kernel estimation (INBK) in machine learning to evaluate wireless network data security and risk level. The results show that the proposed model has lower false positive rate and false positive rate than other methods. In the same type of comparison, as the number of attacking nodes increases, Different algorithms have a certain increase in the false positive rate and the false negative rate. The method proposed in this paper has the advantages of accuracy, the recall rate and F1 algorithm perform well. Four algorithms are on the label U2R, R2L performed poorly, overall, it is over 80%, the overall performance is the best. The risk assessment level shows that the correct rate of the method adopted in this paper is higher than 95% in security risk assessment. Other methods are about 80%, and the worst is only 75%. The overall time consumption of different nodes is 18 ms. The highest average time of other models is 35 ms, and the overall time consumption is more.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"19 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141173349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed file system-based optimization algorithm
Pub Date : 2024-05-28. DOI: 10.1007/s11276-024-03760-y
Uppuluri Lakshmi Soundharya, G Vadivu, Gogineni Krishna Chaitanya
Database engines and file systems have used prefetching and caching technologies for decades to enhance the performance of I/O-intensive applications. When future data accesses need to be accelerated, prefetching often yields gains that depend on whole-system latency by loading elements into primary memory. However, the execution of data-level prefetching rules leaves much room for improvement, as the rules are challenging to optimize, comprehend, and manage. This paper introduces a novel distributed file system (DFS) model based on dynamic prefetching, comprising four processes: (1) identification of popular files, (2) estimation of a support value for each file block, (3) extraction of frequent block access patterns, and (4) a matching algorithm. First, the input files enter the identification phase, where the popular files are found. The support value of the file blocks corresponding to popular files is calculated in the second stage, and the frequent block access patterns are extracted in the third. Finally, in the matching algorithm, the frequent access pattern of a query is identified or predicted by an optimized neural network (NN), whose weights are tuned by the Harmonic Mean based Grey Wolf Optimization (HMGWO) algorithm. The proposed NN + HMGWO model produces reduced FPR values of 70.84%, 73.86%, 70.51%, 62.90%, 55.76%, 78.63%, and 73.86%, respectively, in comparison with the standard models NN + WOA, NN + GWO, NN + PSO, NN + FF, FBAP, NN, and SVM. Lastly, the effectiveness of the chosen scheme is compared with other current methods in terms of delay, latency, and hit-ratio analysis.
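Steps (2) and (3) can be illustrated with a small sketch: estimate each block's support from an access log, then keep the consecutive block sequences whose support clears a threshold as prefetch candidates. The log format, window length, and threshold are illustrative assumptions, not the paper's definitions.

```python
# Support values and frequent block access patterns from an access log (sketch).
from collections import Counter

# assumed access log: sequence of (file_id, block_id) reads for popular files
log = [("f1", 0), ("f1", 1), ("f1", 2), ("f2", 0), ("f1", 0), ("f1", 1),
       ("f1", 2), ("f2", 1), ("f1", 0), ("f1", 1)]

def block_support(log):
    """Support of a block = fraction of all accesses that touch it."""
    counts = Counter(log)
    total = len(log)
    return {blk: n / total for blk, n in counts.items()}

def frequent_patterns(log, length=2, min_support=0.2):
    """Slide a window over the log; keep consecutive block patterns above threshold."""
    windows = [tuple(log[i:i + length]) for i in range(len(log) - length + 1)]
    counts = Counter(windows)
    total = len(windows)
    return {p: n / total for p, n in counts.items() if n / total >= min_support}

print(block_support(log))
print(frequent_patterns(log))  # e.g. ("f1",0) -> ("f1",1) becomes a prefetch candidate
```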
{"title":"Distributed file systembased optimization algorithm","authors":"Uppuluri Lakshmi Soundharya, G Vadivu, Gogineni Krishna Chaitanya","doi":"10.1007/s11276-024-03760-y","DOIUrl":"https://doi.org/10.1007/s11276-024-03760-y","url":null,"abstract":"<p>Database engines and file systems have been using prefetching and caching technologies for decades to enhance the performance of I/O-intensive applications. When future data access needs to be accelerated, prefetching methods often provide gains depending on the latency of the entire system by loading primary memory elements. Its execution time, where the data level prefetching rules are set, has to be much improved, as they are challenging to optimize, comprehend, and manage. This paper aims to introduce a novel distributed file system (DFS) model through dynamic prefetching, that includes four processes such as (1) Identification of popular files, (2) Estimation of support value for a file block, (3) Extraction of frequent block access patterns, and (4) Matching algorithm. At first, the input files are given to the first phase (i.e.), identification of popular sizes, where the popular files are identified. The support value of the file blocks that correspond to popular files is calculated in the second stage. Then, the extraction of frequent block access patterns is done in the third phase. At last, in the matching algorithm, the identification or prediction of frequent access pattern of the query is done by the optimized Neural Network (NN). Here, the weight of NN is optimally tuned by the Harmonic Mean based Grey Wolf Optimization (HMGWO) Algorithm.The proposed NN + HMGWO model produces reduced FPR values with good quality, which are 70.84%, 73.86%, 70.51%, 62.90%, 55.76%, 78.63%, and 73.86%, respectively, in comparison to other standard models like NN + WOA, NN + GWO, NN + PSO, NN + FF, FBAP, NN, and SVM. Lastly, the effectiveness of a chosen scheme is compared to other current methods in terms of delay analysis, latency analysis, hit ratio analysis, and correspondingly.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"14 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141169441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D2PG: deep deterministic policy gradient based for maximizing network throughput in clustered EH-WSN
Pub Date : 2024-05-26. DOI: 10.1007/s11276-024-03767-5
Mojtaba Farmani, Saman Farnam, Razieh Mohammadi, Zahra Shirmohammadi
Wireless sensor networks are among the most effective technologies for monitoring and sensing in a variety of applications. In these networks, sensors are powered by batteries with limited energy capacity, so the required energy is harvested from the surrounding environment. These environmental resources, however, are unpredictable, making power management a critical issue that demands careful consideration. Reinforcement Learning (RL) algorithms offer an efficient solution for throughput management in these networks, enabling the adjustment of data rates for nodes based on the network's energy conditions. Nevertheless, previous RL-based throughput management methods suffer from a key limitation: discretizing the state space does not guarantee the maximum improvement in the network's throughput. This paper therefore proposes Deep Deterministic Policy Gradient-Based for Maximizing Network Throughput (D2PG), a method that applies the Deep Deterministic Policy Gradient deep reinforcement learning algorithm together with a novel reward function. By optimizing over a continuous state space of sensor energy consumption, the method maximizes the data transmission rate and enhances throughput across the entire network. Evaluated against the RL, RL-new, and Deep Q-Network methods, D2PG improves network throughput by 15.3%, 12.9%, and 5.7%, respectively. Additionally, the new reward function demonstrates superior performance in keeping the data rate proportional to the energy level.
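A compact sketch of the DDPG core used here: a deterministic actor maps the continuous energy state to a data-rate action, a critic scores (state, action), and soft-updated target networks stabilize learning. The state and action dimensions, learning rates, and the toy batch are illustrative assumptions, as is the absence of a replay buffer.

```python
# DDPG update step (sketch, assumptions noted above).
import copy
import torch
import torch.nn as nn

S_DIM, A_DIM, TAU, GAMMA = 6, 1, 0.005, 0.99   # assumed: 6 energy features, 1 rate action

actor = nn.Sequential(nn.Linear(S_DIM, 64), nn.ReLU(), nn.Linear(64, A_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(S_DIM + A_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2):
    # critic: fit the Bellman target built from the target actor/critic
    with torch.no_grad():
        y = r + GAMMA * critic_t(torch.cat([s2, actor_t(s2)], 1)).squeeze(1)
    q = critic(torch.cat([s, a], 1)).squeeze(1)
    c_loss = nn.functional.mse_loss(q, y)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    # actor: deterministic policy gradient = ascend the critic's score of actor(s)
    a_loss = -critic(torch.cat([s, actor(s)], 1)).mean()
    opt_a.zero_grad(); a_loss.backward(); opt_a.step()
    # soft-update the target networks
    for tgt, src in [(actor_t, actor), (critic_t, critic)]:
        for pt, p in zip(tgt.parameters(), src.parameters()):
            pt.data.mul_(1 - TAU).add_(TAU * p.data)

s, s2 = torch.randn(32, S_DIM), torch.randn(32, S_DIM)
a, r = torch.rand(32, A_DIM) * 2 - 1, torch.rand(32)   # throughput-like toy rewards
ddpg_update(s, a, r, s2)
```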
{"title":"D2PG: deep deterministic policy gradient based for maximizing network throughput in clustered EH-WSN","authors":"Mojtaba Farmani, Saman Farnam, Razieh Mohammadi, Zahra Shirmohammadi","doi":"10.1007/s11276-024-03767-5","DOIUrl":"https://doi.org/10.1007/s11276-024-03767-5","url":null,"abstract":"<p>Wireless sensor networks are considered one of the effective technologies in various applications, responsible for monitoring and sensing. In these networks, sensors are powered by batteries with limited energy capacity. Consequently, the required energy for the sensors is obtained from the surrounding environment using energy harvesters. However, these environmental resources are unpredictable, making power management a critical issue that demands careful consideration. Reinforcement Learning (RL) algorithms offer an efficient solution for throughput management in these networks, enabling the adjustment of data rates for nodes based on the network’s energy conditions. Nevertheless, previous throughput management methods based on RL algorithms suffer from one of the key challenges: discretizing the state space does not guarantee the maximum improvement in throughput the network. Therefore, this paper proposes a method called Deep Deterministic Policy Gradient-Based for Maximizing Network Throughput (D2PG), which utilizes a Deep Reinforcement Learning algorithm known as Deep Deterministic Policy Gradient and introduces a novel reward function. This method can lead to maximizing the data transmission rate and enhancing network throughput across the entire network through continuous state space optimization among sensor energy consumption. The D2PG method is evaluated and compared with RL, RL-new, and Deep Q-Network methods, resulting in throughput enhancements of 15.3%, 12.9%, and 5.7%, respectively, in the network’s throughput. Additionally, the new reward function demonstrates superior performance in terms of data rate proportionality concerning the energy level.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"43 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141169343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning-inspired hybrid precoding with low-resolution phase shifters for intelligent reflecting surface (IRS) massive MIMO systems with limited RF chains
Pub Date : 2024-05-12. DOI: 10.1007/s11276-024-03748-8
Shabih ul Hassan, Zhongfu Ye, Talha Mir, Usama Mir
The number of bits used by the phase shifters (PSs) in hybrid precoding (HP) has a significant impact on sum-rate, spectral efficiency (SE), and energy efficiency (EE). The space and cost constraints of a realistic massive multiple-input multiple-output (MIMO) system limit the number of antennas at the base station (BS), limiting the throughput gain promised by theoretical analysis. This paper demonstrates the effectiveness of employing an intelligent reflecting surface (IRS) to enhance efficiency, reduce cost, and conserve energy. An IRS consists of a large number of reflecting elements, each applying a distinct phase shift. By adjusting each phase shift and then jointly optimizing the source precoder at the BS and the phase-shift values at the IRS, we can steer signal propagation and improve sum-rate, EE, and SE performance. Furthermore, we propose an energy-efficient HP at the BS in which the analog component is implemented with low-resolution rather than high-resolution PSs; our analysis reveals that performance improves as the number of bits increases. We formulate the joint optimization of the source precoder at the BS and the reflection coefficients at the IRS to improve system performance; however, the formulated problem is non-convex and highly complex. Inspired by the cross-entropy (CE) optimization technique used in machine learning, we therefore propose an adaptive cross-entropy (ACE) 1-3-bit PS-based HP optimization approach for this new architecture. Moreover, our energy-consumption analysis shows that increasing the number of low-resolution bits can significantly reduce power consumption while also improving performance parameters such as SE, EE, and sum-rate. Simulation results validate the proposed algorithm and highlight the efficiency gains the IRS provides in sum-rate, SE, and EE over previously reported methods.
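The CE machinery behind the ACE approach can be sketched as a plain cross-entropy loop over a B-bit phase codebook: sample candidate IRS phase vectors from per-element categorical distributions, keep the elites under a rate surrogate, and refit the distributions with smoothing. The single-user SNR objective, element count, and smoothing factor are illustrative simplifications of the paper's adaptive scheme.

```python
# Cross-entropy optimization of discrete IRS phases (sketch, assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
N, B = 32, 2                          # 32 IRS elements, 2-bit phase shifters
K = 2 ** B
codebook = np.exp(1j * 2 * np.pi * np.arange(K) / K)
h = rng.normal(size=N) + 1j * rng.normal(size=N)   # BS -> IRS channel (toy)
g = rng.normal(size=N) + 1j * rng.normal(size=N)   # IRS -> user channel (toy)

def objective(idx):
    """Surrogate rate: log2(1 + |sum_n g_n * phi_n * h_n|^2)."""
    phi = codebook[idx]
    return np.log2(1.0 + np.abs(np.sum(g * phi * h)) ** 2)

probs = np.full((N, K), 1.0 / K)      # categorical distribution per element
for it in range(50):
    samples = np.array([[rng.choice(K, p=probs[n]) for n in range(N)]
                        for _ in range(200)])
    scores = np.array([objective(s) for s in samples])
    elites = samples[np.argsort(scores)[-20:]]          # keep the top 10%
    for n in range(N):
        freq = np.bincount(elites[:, n], minlength=K) / len(elites)
        probs[n] = 0.7 * probs[n] + 0.3 * freq          # smoothed CE update
best = probs.argmax(axis=1)
print(f"best surrogate rate: {objective(best):.2f} bit/s/Hz")
```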
{"title":"Machine learning-inspired hybrid precoding with low-resolution phase shifters for intelligent reflecting surface (IRS) massive MIMO systems with limited RF chains","authors":"Shabih ul Hassan, Zhongfu Ye, Talha Mir, Usama Mir","doi":"10.1007/s11276-024-03748-8","DOIUrl":"https://doi.org/10.1007/s11276-024-03748-8","url":null,"abstract":"<p>The number of bits required in phase shifters (PS) in hybrid precoding (HP) has a significant impact on sum-rate, spectral efficiency (SE), and energy efficiency (EE). The space and cost constraints of a realistic massive multiple-input multiple-output (MIMO) system limit the number of antennas at the base station (BS), limiting the throughput gain promised by theoretical analysis. This paper demonstrates the effectiveness of employing an intelligent reflecting surface (IRS) to enhance efficiency, reduce costs, and conserve energy. Particularly, an IRS consists of an extensive number of reflecting elements, wherein every individual element has a distinct phase shift. Adjusting each phase shift and then jointly optimizing the source precoder at BS and selecting the optimal phase-shift values at IRS will allow us to modify the direction of signal propagation. Additionally, we can improve sum-rate, EE, and SE performance. Furthermore, we proposed an energy-efficient HP at BS in which the analog component is implemented using a low-resolution PS rather than a high-resolution PS. Our analysis reveals that the performance gets better as the number of bits increases. We formulate the problem of jointly optimizing the source precoder at BS and the reflection coefficient at IRS to improve the system performance. However, because of the non-convexity and high complexity of the formulated problem. Inspired by the cross-entropy (CE) optimization technique used in machine learning, we proposed an adaptive cross-entropy (ACE) 1-3-bit PS-based optimization HP approach for this new architecture. Moreover, our analysis of energy consumption revealed that increasing the low-resolution bits can significantly reduce power consumption while also improving performance parameters such as SE, EE, and sum-rate. The simulation results are presented to validate the proposed algorithm, which highlights the IRS efficiency gains to boost sum-rate, SE, and EE compared to previously reported methods.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"26 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140926851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel model for optimal selection of relay bus with maximum link reliability in VANET using hybrid fuzzy niching grey wolf optimization
Pub Date : 2024-05-11. DOI: 10.1007/s11276-024-03752-y
F. Sangeetha Francelin Vinnarasi, S. P. Karuppiah, J. T. Anita Rose, C. A. Subasini
The routing problem has become a major concern in Vehicular Ad-hoc Networks (VANETs) because of the resource-constrained devices used in wireless networking environments. The traditional store-carry-forward approach delivers highly reliable packet delivery using buses on ordinary routes, but its performance degrades when routes are inconsistent and dynamic. In addition, bandwidth consumption grows large when forwarded packets travel through poorly chosen relay nodes. This paper therefore proposes a novel street-centric routing algorithm built around optimal multiple-route and optimal relay-node selection procedures. Initially, street maps with ten streets and four bus routes are taken as input data. These bus trajectory data are transformed into routing graphs to determine the probability of buses moving through the streets. Subsequently, the optimal multiple shortest routes for forwarding packets to the destination are selected using metrics such as the Probability of Path Consistency (PPC) and the Probability of Street Consistency (PSC). Finally, the optimal relay bus is chosen by the proposed Hybrid Fuzzy Niching Grey Wolf (HFNGW) algorithm. The experimental results show that the HFNGW algorithm achieves a greater packet delivery ratio of about 98.9% with a lower relay-bus selection time of 32 ms than the compared methods.
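The route-scoring step can be sketched under assumed definitions: take the PSC of a street as the probability that at least one bus route covers it (from the routing graph's per-route traversal probabilities) and the PPC of a path as the product of its streets' PSCs, then rank candidate routes by PPC. These definitions, and the street and probability values, are illustrative assumptions rather than the paper's exact formulas.

```python
# PSC/PPC route ranking (sketch, assumed definitions noted above).
from math import prod

# assumed: per street, traversal probability of each bus route that uses it
street_route_probs = {
    "s1": [0.9, 0.6], "s2": [0.7], "s3": [0.8, 0.5], "s4": [0.4],
}

def psc(street):
    """P(at least one bus covers the street) = 1 - prod(1 - p_route)."""
    return 1.0 - prod(1.0 - p for p in street_route_probs[street])

def ppc(path):
    """Path consistency: every street on the path must stay covered."""
    return prod(psc(s) for s in path)

candidates = [["s1", "s2"], ["s3", "s4"], ["s1", "s3"]]
best = max(candidates, key=ppc)
print({tuple(p): round(ppc(p), 3) for p in candidates}, "->", best)
```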
{"title":"A novel model for optimal selection of relay bus with maximum link reliability in VANET using hybrid fuzzy niching grey wolf optimization","authors":"F. Sangeetha Francelin Vinnarasi, S. P. Karuppiah, J. T. Anita Rose, C. A. Subasini","doi":"10.1007/s11276-024-03752-y","DOIUrl":"https://doi.org/10.1007/s11276-024-03752-y","url":null,"abstract":"<p>Nowadays, the routing problem has received major concern in Vehicular Ad-hoc Networks (VANETs) because of the utilization of resource-constrained devices in wireless networking environments. The traditional store-carry-forward approach produced highly reliable packet delivery performance using buses on ordinary routes. However, its performance is induced when dealing with inconsistent and dynamic routes. In addition, there is large bandwidth consumption if the forwarded packets are transmitted through improper relay nodes. Therefore, this paper proposes a novel street-centric routing algorithm with the consideration of optimal multiple routes and optimal relay node selection procedures. Initially, the street maps with ten streets and four bus routes are taken as input data. These bus trajectory data are transformed into routing graphs to determine the probability of buses moving through the streets. Subsequently, the optimal multiple shortest routes for forwarding packets to the destination are selected with the consideration of metrics such as Probability of Path Consistency (PPC) and Probability of Street Consistency (PSC). Finally, the optimal relay bus is chosen by employing the proposed Hybrid Fuzzy Niching Grey Wolf (HFNGW) algorithm. The experimental result inherits that the HFNGW algorithm achieves a greater packet delivery ratio of about 98.9% with less relay bus selection time of 32 ms than other compared methods.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"41 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140926619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection of vulnerabilities in blockchain smart contracts using deep learning
Pub Date : 2024-05-09. DOI: 10.1007/s11276-024-03755-9
Namya Aankur Gupta, Mansi Bansal, Seema Sharma, Deepti Mehrotra, Misha Kakkar
Blockchain helps give a sense of security, since all involved parties see a single shared history of transactions. Smart contracts enable users to manage significant amounts of assets and finances on the blockchain without any intermediaries, and once a contract's conditions and checks have been written and deployed to an application, they cannot be changed. However, these unique features also expose smart contracts to risk. Despite being a developing technology, smart contracts have several flaws in their programming languages and methods of execution. Developers code smart contracts in high-level languages to implement numerous complicated business logics. The smart contract is thus the most important element of any decentralized application, and also the one most at risk of attack, so vulnerabilities must be handled as a priority: they should be detected before a contract is deployed and connected to applications, to ensure the security of funds. The motive of this paper is to discuss how deep learning may be utilized to deliver bug-free, secure smart contracts, with the objective of detecting three kinds of vulnerability: reentrancy, timestamp dependence, and infinite loops. A deep learning model based on graph neural networks has been created for detecting smart contract vulnerabilities. Its performance has been compared with present automated tools and other independent methods, and it has been shown to predict smart contract vulnerabilities with greater accuracy than existing models.
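The graph-neural-network idea can be sketched in a few lines: represent a contract as a graph (nodes as statements or calls, edges as control or data flow), run rounds of neighbor aggregation, and classify the pooled graph embedding as vulnerable or not. The adjacency, feature sizes, and untrained random weights below are illustrative assumptions; this shows the shape of the computation, not the paper's trained model.

```python
# Two-round message passing over a toy contract graph (sketch, assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
n, f_in, f_hid = 5, 8, 16
A = np.eye(n) + np.array([[0, 1, 0, 0, 0],          # control-flow edges plus self-loops
                          [0, 0, 1, 1, 0],
                          [0, 0, 0, 0, 1],
                          [0, 0, 0, 0, 1],
                          [0, 0, 0, 0, 0]])
A_hat = A / A.sum(axis=1, keepdims=True)            # mean-aggregation operator

X = rng.normal(size=(n, f_in))                      # node features (e.g. opcode embeddings)
W1 = rng.normal(size=(f_in, f_hid)) * 0.1
W2 = rng.normal(size=(f_hid, 2)) * 0.1              # 2 classes: safe / vulnerable

H = np.maximum(A_hat @ X @ W1, 0.0)                 # round 1: aggregate neighbors, ReLU
logits = (A_hat @ H @ W2).mean(axis=0)              # round 2, then mean-pool over nodes
p = np.exp(logits) / np.exp(logits).sum()
print(f"P(vulnerable) = {p[1]:.3f}")                # untrained weights: shape demo only
```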
{"title":"Detection of vulnerabilities in blockchain smart contracts using deep learning","authors":"Namya Aankur Gupta, Mansi Bansal, Seema Sharma, Deepti Mehrotra, Misha Kakkar","doi":"10.1007/s11276-024-03755-9","DOIUrl":"https://doi.org/10.1007/s11276-024-03755-9","url":null,"abstract":"<p>Blockchain helps to give a sense of security as there is only one history of transactions visible to all the involved parties. Smart contracts enable users to manage significant asset amounts of finances on the blockchain without the involvement of any intermediaries. The conditions and checks that have been written in smart contract and executed to the application cannot be changed again. However, these unique features pose some other risks to the smart contract. Smart contracts have several flaws in its programmable language and methods of execution, despite being a developing technology. To build smart contracts and implement numerous complicated business logics, high-level languages are used by the developers to code smart contracts. Thus, blockchain smart contract is the most important element of any decentralized application, posing the risk for it to be attacked. So, the presence of vulnerabilities are to be taken care of on a priority basis. It is important for detection of vulnerabilities in a smart contract and only then implement and connect it with applications to ensure security of funds. The motive of the paper is to discuss how deep learning may be utilized to deliver bug-free secure smart contracts. Objective of the paper is to detect three kinds of vulnerabilities- reentrancy, timestamp and infinite loop. A deep learning model has been created for detection of smart contract vulnerabilities using graph neural networks. The performance of this model has been compared to the present automated tools and other independent methods. It has been shown that this model has greater accuracy than other methods while comparing the prediction of smart contract vulnerabilities in existing models.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"11 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140926628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transmit antenna selection for millimeter-wave communications using multi-RIS with imperfect transceiver hardware
Pub Date : 2024-05-07. DOI: 10.1007/s11276-024-03754-w
Nguyen Van Vinh
This article presents a comprehensive exploration of the synergy between transmit antenna selection (TAS) and reconfigurable intelligent surfaces (RISs) in millimeter-wave (MW) communication systems, considering the impact of practical conditions. Notably, it accounts for imperfect transceiver hardware (ITH) at both the transmitter and receiver, and it integrates real-world channel models and receiver noise statistics into the analysis, providing a realistic representation of wireless systems in future networks. Mathematical formulas for the outage probability (OP) and system throughput (ST) of multi-RIS-assisted MW communications with ITH and TAS (shortened to "the considered communications") are derived, enabling a comprehensive examination of system behavior. Through a series of comparative scenarios, including evaluations of OP and ST with and without TAS, with and without RISs, and with and without ITH (where the absence of ITH is denoted perfect transceiver hardware, or PTH), the study confirms the substantial advantages of TAS and RISs while shedding light on the significant influence of ITH. It is demonstrated that even in the presence of ITH, MW communication performance can be dramatically enhanced by optimizing the number of transmit antennas, selecting suitable carrier frequencies and RIS placements, and utilizing appropriate bandwidth. Ultimately, the derived formulas are rigorously validated through Monte-Carlo simulations, reinforcing the credibility of the findings.
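A Monte-Carlo evaluation of OP and ST like the one used for validation can be sketched as below, under a standard impairment model in which transceiver distortion noise scales with signal power (aggregate level kappa) and the RIS phases are ideally aligned over Rayleigh links. All parameter values, and the single-RIS single-user setup, are illustrative assumptions rather than the article's exact system model.

```python
# Monte-Carlo outage probability / throughput with hardware impairments (sketch).
import numpy as np

rng = np.random.default_rng(0)
N_RIS, SNR_LIN, KAPPA, R_TH = 8, 1.0, 0.1, 5.0   # elements, tx SNR, impairment level, bit/s/Hz
TRIALS = 20_000

# per-element Rayleigh magnitudes for the BS->RIS and RIS->user hops
h = np.abs(rng.normal(size=(TRIALS, N_RIS)) + 1j * rng.normal(size=(TRIALS, N_RIS))) / np.sqrt(2)
g = np.abs(rng.normal(size=(TRIALS, N_RIS)) + 1j * rng.normal(size=(TRIALS, N_RIS))) / np.sqrt(2)
gain = (h * g).sum(axis=1) ** 2                   # coherent combining across elements

# SNDR with transceiver distortion: signal / (distortion + thermal noise)
sndr = SNR_LIN * gain / (SNR_LIN * gain * KAPPA**2 + 1.0)
rate = np.log2(1.0 + sndr)
op = np.mean(rate < R_TH)                         # outage probability
st = (1.0 - op) * R_TH                            # throughput of a fixed-rate scheme
print(f"OP = {op:.4f}, ST = {st:.3f} bit/s/Hz")
```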
{"title":"Transmit antenna selection for millimeter-wave communications using multi-RIS with imperfect transceiver hardware","authors":"Nguyen Van Vinh","doi":"10.1007/s11276-024-03754-w","DOIUrl":"https://doi.org/10.1007/s11276-024-03754-w","url":null,"abstract":"<p>This article presents a comprehensive exploration of the synergy between transmit antenna selection (TAS) and reconfigurable intelligent surfaces (RISs) in millimeter-wave (MW) communication systems, considering the impact of practical conditions. Notably, it accounts for imperfect transceiver hardware (ITH) at both the transmitter and receiver. Additionally, real-world channel models and receiver noise statistics are integrated into the analysis, providing a realistic representation of wireless systems in future networks. Mathematical formulas of outage probability (OP) and system throughput (ST) of the multi-RIS-assisted MW communications with ITH and TAS (shortened as the considered communications) are derived for analyzing the system behaviors. These formulas facilitate a comprehensive examination of system behavior. Through a series of comparative scenarios, including evaluations of OP and ST with and without TAS, with and without RISs, and with and without ITH (where the absence of ITH is denoted as perfect transceiver hardware, or PTH), the study substantiates the substantial advantages of TAS and RISs while shedding light on the significant influence of ITH. It is demonstrated that even in the presence of ITH, MW communication performance can be dramatically enhanced by optimizing the number of transmit antennas, selecting suitable carrier frequencies and RIS placements, and utilizing appropriate bandwidth. Ultimately, the derived formulas are rigorously validated through Monte-Carlo simulations, reinforcing the credibility of the findings.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"26 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140882138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}