Open Shortest Path First (OSPF) currently supports multiarea networking with two severe limitations: the multiarea topology is restricted to a two-level hierarchy, and globally optimal routing may not be achieved. An OSPF extension that overcomes these limitations is proposed, introducing a routing overlay for the dissemination of multiarea routing information. It applies to both OSPFv2 (IPv4) and OSPFv3 (IPv6) and is transparent to area-internal routers. The extension was fully implemented and tested, and the results show that the added functionality is fully achieved, at the cost of a small convergence-time penalty in small networks.
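The optimality claim rests on computing shortest paths over the full inter-area topology rather than over a forced two-level hierarchy. A minimal illustration (hypothetical topology, router names, and costs; not the authors' protocol) using Dijkstra over a stitched multiarea graph:

```python
import heapq

def dijkstra(graph, src):
    """Single-source shortest paths over a cost-weighted adjacency dict."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical three-area topology: area border routers ABR1..ABR3 stitch the
# areas together; an overlay lets every ABR learn the full inter-area graph.
graph = {
    "R1":   {"ABR1": 1},
    "ABR1": {"R1": 1, "ABR2": 5, "ABR3": 2},
    "ABR2": {"ABR1": 5, "R2": 1, "ABR3": 1},
    "ABR3": {"ABR1": 2, "ABR2": 1},
    "R2":   {"ABR2": 1},
}
dist = dijkstra(graph, "R1")
# With full topology knowledge, R1 reaches R2 via ABR3 (cost 1+2+1+1 = 5),
# not via the direct ABR1-ABR2 link (cost 1+5+1 = 7).
print(dist["R2"])  # → 5
```

A two-level hierarchy that forced traffic through the summarising ABR1-ABR2 adjacency would pick the cost-7 route; global knowledge recovers the cost-5 optimum.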
Xavier Gomes, João Fonseca, Rui Valadas, "Open Shortest Path First extension for the support of multiarea networks with arbitrary topologies", IET Networks 13(3), 241-248 (2024). https://doi.org/10.1049/ntw2.12112
Cybersecurity events occur frequently. Investigating security threats requires a fully accurate, packet-level network history, which in turn depends on packet capture with high-precision packet timestamping. Many packet capture applications are built on the Data Plane Development Kit (DPDK), a set of libraries and drivers for fast packet processing. However, DPDK cannot give an accurate timestamp for every packet, and it cannot faithfully reflect the order in which packets arrive at the network interface card. In addition, DPDK-based applications cannot achieve zero packet loss for small packets, such as 64 B frames, on networks beyond 10 Gigabit Ethernet. The authors therefore propose a new Field-Programmable Gate Array (FPGA)-based method to solve this problem, and develop a DPDK driver for the FPGA device to make the design compatible with all DPDK-based applications. The proposed method performs line-rate timestamping at 4 ns precision for 10 Gigabit Ethernet traffic and 1 ns precision for 25 Gigabit, which greatly improves the accuracy of retrospective security incident analysis. Furthermore, the design can capture full-size packets of any protocol with zero packet loss and can be applied to 40/100 Gigabit systems as well.
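FPGA capture designs commonly convey the hardware timestamp by appending it to each frame before handing the frame to software. A minimal sketch of decoding such a trailer, assuming a hypothetical 8-byte layout (big-endian seconds then nanoseconds); the paper does not specify its on-wire format:

```python
import struct

def split_hw_trailer(frame: bytes):
    """Split a captured frame from an assumed 8-byte hardware trailer:
    a big-endian 32-bit seconds field followed by a 32-bit nanoseconds
    field, appended to the original frame by the FPGA."""
    payload, trailer = frame[:-8], frame[-8:]
    sec, nsec = struct.unpack("!II", trailer)
    # Keep the two fields as integers: a float64 cannot represent
    # nanosecond offsets at epoch scale without rounding.
    return payload, sec, nsec

# A minimum-size 64 B frame followed by a hypothetical trailer stamped
# at 1700000000 s + 250 ns.
frame = bytes(64) + struct.pack("!II", 1_700_000_000, 250)
payload, sec, nsec = split_hw_trailer(frame)
print(len(payload), sec, nsec)  # → 64 1700000000 250
```

Keeping seconds and nanoseconds separate also preserves the arrival order that DPDK's software timestamps cannot guarantee.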
Xiaoying Huang, "Hardware nanosecond-precision timestamping for line-rate packet capture", IET Networks 13(3), 249-261 (2024). https://doi.org/10.1049/ntw2.12114
Kuan-Chu Lu, I.-Hsien Liu, Keng-Hao Chang, Jung-Shian Li
B5G/6G networks face challenges in deploying additional base stations, so Taiwan's four major operators have launched VoWi-Fi calling services to maintain signal quality and coverage for customers. These services pose potential threats when users connect to untrusted Wi-Fi networks. The authors therefore used commercial equipment to study the security of the VoWi-Fi calling services offered by Taiwan's four major telecom companies. Employing Address Resolution Protocol (ARP) attack methods, they developed two verification attacks that bypass existing security measures: one that drops Session Initiation Protocol (SIP) packets and one that drops voice call packets, both capable of circumventing current security defences. Through real-world experiments, the authors confirmed the attacks' feasibility and assessed their potential harm. Consequently, two defence methods are proposed: an anti-attack algorithm for app and device manufacturers to assess the security of the user's calling environment, and a recommendation for telecom operators to implement new detection mechanisms to safeguard user rights.
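The device-side defence can be pictured as a trusted IP-to-MAC binding check: an ARP reply that rebinds a known address (typically the Wi-Fi gateway) to a new MAC is the classic spoofing signature. This is an illustrative stand-in, not the authors' algorithm, and the addresses are hypothetical:

```python
def check_arp_reply(bindings, ip, mac):
    """Flag a possible ARP-spoofing attempt: an ARP reply that rebinds a
    known IP (e.g. the Wi-Fi gateway) to a different MAC address.
    `bindings` is the trusted IP -> MAC table learnt at association time."""
    known = bindings.get(ip)
    if known is None:
        bindings[ip] = mac     # first sighting: learn the binding
        return True
    return known == mac        # mismatch => treat the reply as suspicious

# Hypothetical gateway binding learnt when the phone joined the network.
bindings = {"192.168.1.1": "aa:bb:cc:dd:ee:ff"}
ok = check_arp_reply(bindings, "192.168.1.1", "11:22:33:44:55:66")
print(ok)  # → False: the gateway MAC changed mid-session
```

A calling app that sees the check fail could refuse to place the VoWi-Fi call, or fall back to the cellular path.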
The cover image is based on the Case Study VoWi-Fi security threats: Address resolution protocol attack and countermeasures by Kuan-Chu Lu et al., https://doi.org/10.1049/ntw2.12113
Kuan-Chu Lu, I.-Hsien Liu, Keng-Hao Chang, Jung-Shian Li, "VoWi-Fi security threats: Address resolution protocol attack and countermeasures", IET Networks 13(2), 129-146 (2024). https://doi.org/10.1049/ntw2.12113
Internet of Things (IoT) and wireless communication technologies are evolving rapidly and changing the way we live and work. IoT devices may need to share personal information over the public network with the help of nearby devices, so the trustworthiness of those devices plays an essential role in providing security and privacy assurance. The authors propose an autonomous, decentralised trust management model for selecting a trustworthy device for a requested service transaction. The proposed scheme builds on the Social Internet of Things: it uses social relationships to find the level of trust among related devices, estimate the overall trust of unknown devices, periodically update the observed trust values, and isolate malicious nodes in the network. The evaluated trust values are propagated through the network frequently so that other devices can reuse the information later; the periodic update improves performance and helps detect trust-related malicious attacks. Simulation results show that the proposed model performs better than existing trust management models, detecting malicious objects at an early stage and handling malicious attacks by isolating malicious devices.
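One way to picture the periodic update is as a weighted blend of trust history, fresh direct observation, and social recommendations, with isolation once the value falls below a threshold. The weights, observations, and threshold below are illustrative assumptions, not the authors' model:

```python
def update_trust(old, direct, recs, alpha=0.5, beta=0.3):
    """One periodic trust update: blend the previous value, the fresh
    direct observation, and the mean recommendation from socially
    related devices (weights alpha/beta are assumed, not from the paper)."""
    indirect = sum(recs) / len(recs) if recs else old
    return (1 - alpha - beta) * old + alpha * direct + beta * indirect

trust = 0.8                              # node starts out well regarded
for _ in range(5):                       # then misbehaves repeatedly
    trust = update_trust(trust, direct=0.1, recs=[0.2, 0.15])
isolated = trust < 0.4                   # hypothetical isolation threshold
print(round(trust, 3), isolated)
```

Because the update keeps only 20% of the old value, repeated bad observations drag even a previously trusted node below the threshold within a few periods, which is the early-detection behaviour the abstract describes.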
Swati Sucharita Roy, B. Sahu, Shatarupa Dash, "Enhanced trust management for building trustworthy social internet of things network", IET Networks (2024). https://doi.org/10.1049/ntw2.12111
Kenneth Nsafoa-Yeboah, Eric Tutu Tchao, Benjamin Kommey, Andrew Selasi Agbemenu, Griffith Selorm Klogo, Nana Kwadwo Akrasi-Mensah
The enhanced capacity of optical networks is a significant advantage within the global telecommunications industry, as optical networks transmit information over large distances with reduced latency. However, the growing intricacy of network topologies poses a significant challenge to network adaptability, resilience, device compatibility, and service quality in the era of 5G networks. In light of these challenges, recent studies leverage disaggregation, in the context of Software-Defined Networking (SDN) and network service orchestrators, as a viable remedy. Disaggregated optical systems offer Software-Defined Optical Networking (SDON) enhanced control options and third-party dynamism, streamlining upgrades and diminishing single-vendor dependency. Although disaggregation improves the network flexibility and vendor neutrality of SDON, this improvement comes at the cost of reduced scalability and network controllability. This paper posits two potential resolutions to this challenge. The authors present recommendations and an enhanced architecture that leverages Open Network Operating System (ONOS) containers and Kubernetes orchestration to improve scalability within the SDON architecture. The proposed design rests on novel flow charts and algorithms that enhance scalability performance by 33% while preserving flexibility and controllability in comparison to pre-existing SDON architectures. The architecture also uses the Mininet-Optical physical-layer framework to simulate a real-time scenario, as well as YANG models from the Open Disaggregated Transport Network (ODTN) working group, the pioneers of SDONs. A detailed analysis of the rules and procedural processes involved in implementing the proposed architecture is provided. To demonstrate the practical application of this framework to a real-world SDON system, the pre-existing SDON ONOS architecture within the ODTN working group was adjusted and refined, illustrating the use of ONOS in conjunction with established optical network systems and highlighting the advantages it offers.
Kenneth Nsafoa-Yeboah, Eric Tutu Tchao, Benjamin Kommey, Andrew Selasi Agbemenu, Griffith Selorm Klogo, Nana Kwadwo Akrasi-Mensah, "Flexible open network operating system architecture for implementing higher scalability using disaggregated software-defined optical networking", IET Networks 13(3), 221-240 (2023). https://doi.org/10.1049/ntw2.12110
In recent years, with the rapid development of the Internet of Things (IoT) and communication technology, network multimedia applications have become increasingly popular, and network multimedia has become a new type of mass media. Its interactivity and wide reach create favourable conditions for disseminating and receiving information visually, aurally, and haptically. At the same time, problems remain: network multimedia lacks sufficient original information, and its information retrieval is not intelligent, which causes considerable trouble for users. To address these problems, the authors combine Big Data (BD) and virtual Artificial Intelligence (AI) technology to realise the intelligent design of network multimedia, and verify its effect on top of a theoretical analysis. The proposed intelligent design method performed well in network multimedia and effectively improved information retrieval speed: its query time was 1.87 s less than the traditional method's for 1000 information items and 18.16 s less for 40,000 items. The method also improves the accuracy of network multimedia information recommendation and makes network multimedia easier to share, providing more services for users. In addition, the discussion of BD and virtual AI technology in intelligent network multimedia design could broaden the application scope of the IoT and promote its development.
Xin Zhang, "Intelligent design of network multimedia using big data and virtual Artificial Intelligence technology", IET Networks (2023). https://doi.org/10.1049/ntw2.12109
A wireless virtual network uses software-defined networking and network function virtualisation technologies to create multiple logically isolated virtual networks on one physical wireless network, improving the utilisation of wireless resources to meet the requirements of different services. Delay is an important performance indicator, with strict requirements for delay-sensitive services such as video conferencing and online games. In this study, a virtual network embedding method based on node delay perception (VNE-NDP) is proposed, which considers the node and link resources as well as the embedding delay requirements of virtual networks. VNE-NDP consists of two phases: virtual node embedding and virtual link embedding. In the virtual node embedding phase, a physical node sorting method based on node delay perception (PNS-NDP) is proposed, which introduces the node deployment delay into the node sorting algorithm for the first time. Candidate physical nodes are selected for each virtual node according to their resource availability and delay performance, which greatly reduces the virtual network (VN) embedding delay without sacrificing much other performance. In the virtual link embedding phase, a shortest path algorithm with bandwidth and link deployment delay constraints finds feasible physical paths for each virtual link. In addition, the VN embedding (VNE) deployment time is introduced as a new evaluation index. Simulation results show that, compared with other VNE methods, VNE-NDP achieves a higher success rate and revenue-to-expenditure ratio and a lower deployment delay.
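The two phases can be sketched as (i) sorting physical nodes by free resources and deployment delay, a simplified stand-in for PNS-NDP, and (ii) a delay-weighted Dijkstra that skips links lacking the requested bandwidth. The topology, resource figures, and tie-breaking rule are hypothetical:

```python
import heapq

def rank_nodes(nodes):
    """Order candidate physical nodes by available CPU (descending) and
    node deployment delay (ascending) -- a simplified PNS-NDP stand-in."""
    return sorted(nodes, key=lambda n: (-nodes[n]["cpu"], nodes[n]["delay"]))

def shortest_path(links, src, dst, bw_needed):
    """Dijkstra on link delay, skipping links with insufficient free bandwidth."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        for v, (delay, bw) in links.get(u, {}).items():
            if bw < bw_needed:          # bandwidth constraint on the link
                continue
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                  # reconstruct the chosen path
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

nodes = {"A": {"cpu": 8, "delay": 2}, "B": {"cpu": 8, "delay": 1},
         "C": {"cpu": 4, "delay": 1}}
links = {"A": {"B": (3, 10), "C": (1, 2)}, "C": {"B": (1, 10)}}
print(rank_nodes(nodes)[0])                     # → B (same CPU as A, lower delay)
print(shortest_path(links, "A", "B", bw_needed=5))
```

With 5 units of bandwidth requested, the low-delay detour via C is excluded (its first hop offers only 2 units), so the direct A-B link is embedded despite its higher delay.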
Yaning Wang, Hui Zhi, "Virtual network embedding method based on node delay perception", IET Networks 13(2), 178-191 (2023). https://doi.org/10.1049/ntw2.12105
A low-complexity resource allocation method is proposed for downlink non-orthogonal multiple access systems assisted by an intelligent reflecting surface (IRS). Firstly, an optimisation problem is formulated to minimise power consumption, with power allocation and IRS phase shifts as variables. The joint optimisation problem is then transformed into an optimisation of the IRS phase shifts alone, which is further decomposed into multiple single-variable sub-problems whose solutions are obtained with the function extremum method. Based on these sub-problems, a one-level iterative algorithm is developed to optimise the IRS phase shifts, and finally the minimum power required by each user is calculated from the phase shifts obtained through iteration. Simulation results show that, under the same rate requirement and scenario, the proposed scheme outperforms existing schemes in computational complexity and power consumption.
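The single-variable sub-problems have a closed-form extremum: with all other elements fixed, the phase that maximises the combined channel magnitude (and hence minimises the transmit power needed for a fixed rate) aligns that element's cascaded channel with the sum of the direct path and the remaining elements. A hedged sketch with made-up channel coefficients, iterated coordinate-wise:

```python
import cmath

def optimise_phases(h_direct, g, n_iters=3):
    """Coordinate-wise IRS phase optimisation (illustrative sketch):
    each reflecting element's phase is set, in turn, to the closed-form
    extremum of its single-variable sub-problem, i.e. aligned with the
    direct channel plus every other element's contribution."""
    theta = [0.0] * len(g)
    for _ in range(n_iters):
        for n in range(len(g)):
            rest = h_direct + sum(
                g[m] * cmath.exp(1j * theta[m])
                for m in range(len(g)) if m != n
            )
            theta[n] = cmath.phase(rest) - cmath.phase(g[n])
    return theta

h_d = 1 + 0.5j                       # made-up direct channel
g = [0.3 - 0.2j, -0.1 + 0.4j]        # made-up cascaded channels per element
theta = optimise_phases(h_d, g)
gain = abs(h_d + sum(gn * cmath.exp(1j * t) for gn, t in zip(g, theta)))
print(round(gain, 4))
```

The iterated gain approaches the coherent upper bound |h_d| + Σ|g_n|, comfortably above the unoptimised (all-zero phase) combination, which is what lets transmit power drop at a fixed rate target.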
Ren Ming, Zhang Rong, "Low complexity resource allocation scheme for IRS-assisted downlink non-orthogonal multiple access systems", IET Networks 13(2), 192-198 (2023). https://doi.org/10.1049/ntw2.12106
The Flying Ad-Hoc Network (FANET) is a promising ad hoc networking paradigm that can offer new value-added services in military and civilian applications. Typically, it comprises a group of Unmanned Aerial Vehicles (UAVs), known as drones, that collaborate and cooperate to accomplish missions without human intervention. However, UAV communications are prone to various attacks, and detecting malicious nodes is essential for efficient FANET operation. Trust management is an effective method that plays a significant role in predicting and recognising intrusions in FANETs, and evaluating node behaviour remains an important issue in this domain. For this purpose, the authors use fuzzy logic, one of the most common methods for trust computation, which classifies nodes based on multiple criteria to handle complex environments. The Received Signal Strength Indication (RSSI) is an important parameter for evaluating a drone's behaviour with fuzzy logic; however, in outdoor flying networks the RSSI can be seriously influenced by air humidity, which can dramatically affect the accuracy of the trust results. FUBA, a fuzzy-based UAV behaviour analytics framework, is therefore presented for trust management in FANETs. By considering humidity as a new parameter, FUBA can identify insider threats and increase the overall network's trustworthiness under bad weather conditions, performing well in outdoor flying networks. Simulation results indicate that the proposed model significantly outperforms FNDN and UNION in average end-to-end delay and false positive ratio.
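A humidity-aware fuzzy evaluation can be sketched with triangular memberships over a humidity-compensated RSSI. The membership ranges, output weights, and linear compensation factor below are illustrative assumptions, not FUBA's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_score(rssi_dbm, humidity_pct):
    """Fuzzy behaviour score (illustrative sketch): compensate the measured
    RSSI for assumed humidity-induced attenuation, fuzzify it into
    weak/medium/strong memberships, then defuzzify to a 0-1 score."""
    rssi = rssi_dbm + 0.05 * humidity_pct        # assumed linear compensation
    weak   = tri(rssi, -100, -85, -70)
    medium = tri(rssi, -85, -70, -55)
    strong = tri(rssi, -70, -55, -40)
    total = weak + medium + strong
    return (0.2 * weak + 0.5 * medium + 0.9 * strong) / total if total else 0.0

dry = trust_score(-72, humidity_pct=10)
wet = trust_score(-72, humidity_pct=90)   # same raw reading, humid air
print(round(dry, 3), round(wet, 3))
```

The same raw reading scores higher in humid air because part of the signal loss is attributed to the weather rather than to the drone, which is the mechanism by which a humidity input suppresses weather-induced false positives.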
{"title":"FUBA: A fuzzy-based unmanned aerial vehicle behaviour analytics for trust management in flying ad-hoc networks","authors":"Sihem Benfriha, Nabila Labraoui, Radjaa Bensaid, Haythem Bany Salameh, Hafida Saidi","doi":"10.1049/ntw2.12108","DOIUrl":"10.1049/ntw2.12108","url":null,"abstract":"<p>Flying Ad-Hoc Network (FANET) is a promising ad hoc networking paradigm that can offer new added value services in military and civilian applications. Typically, it incorporates a group of Unmanned Aerial Vehicles (UAVs), known as drones that collaborate and cooperate to accomplish several missions without human intervention. However, UAV communications are prone to various attacks and detecting malicious nodes is essential for efficient FANET operation. Trust management is an effective method that plays a significant role in the prediction and recognition of intrusions in FANETs. Specifically, evaluating node behaviour remains an important issue in this domain. For this purpose, the authors suggest using fuzzy logic, one of the most commonly used methods for trust computation, which classifies nodes based on multiple criteria to handle complex environments. In addition, the Received Signal Strength Indication (RSSI) is an important parameter that can be used in fuzzy logic to evaluate a drone's behaviour. However, in outdoor flying networks, the RSSI can be seriously influenced by the humidity of the air, which can dramatically impact the accuracy of the trust results. FUBA, a fuzzy-based UAV behaviour analytics is presented for trust management in FANETs. By considering humidity as a new parameter, FUBA can identify insider threats and increase the overall network's trustworthiness under bad weather conditions. It is capable of performing well in outdoor flying networks. The simulation results indicate that the proposed model significantly outperforms FNDN and UNION in terms of the average end-to-end delay and the false positive ratio.</p>","PeriodicalId":46240,"journal":{"name":"IET Networks","volume":"13 3","pages":"208-220"},"PeriodicalIF":1.4,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ntw2.12108","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139250458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
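The multi-criteria fuzzy trust idea in the abstract above — combining link-quality and behavioural inputs, with humidity correcting the RSSI reading — can be sketched as follows. This is a minimal illustration, not the authors' FUBA model: the membership functions, the packet-delivery-ratio criterion, the humidity attenuation factor, and the aggregation weights are all hypothetical assumptions introduced here.

```python
# Minimal sketch of fuzzy-style trust scoring for a UAV node.
# Inputs assumed here: RSSI (dBm), packet delivery ratio, relative
# humidity (%). All constants and weights are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def humidity_corrected_rssi(rssi_dbm, humidity_pct, k=0.05):
    """Compensate RSSI for humidity-induced attenuation.
    k is a hypothetical attenuation factor (dB per % humidity)."""
    return rssi_dbm + k * humidity_pct

def trust_score(rssi_dbm, delivery_ratio, humidity_pct):
    """Aggregate fuzzy memberships into a crisp trust value in [0, 1]."""
    rssi = humidity_corrected_rssi(rssi_dbm, humidity_pct)
    # Membership of the corrected RSSI in a "good link" set (dBm scale).
    good_link = tri(rssi, -95.0, -60.0, -30.0)
    # Membership of the delivery ratio in a "well-behaved" set.
    well_behaved = tri(delivery_ratio, 0.4, 1.0, 1.6)
    # Weighted aggregation; weights are illustrative, not from the paper.
    return 0.4 * good_link + 0.6 * well_behaved

score = trust_score(rssi_dbm=-70.0, delivery_ratio=0.95, humidity_pct=80.0)
print(round(score, 3))
```

A node whose score falls below an operator-chosen threshold would be flagged as potentially malicious; the humidity correction keeps a wet-weather drop in RSSI from being misread as misbehaviour.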
Abdulbasit M. A. Sabaawi, Mohammed R. Almasaoodi, Sara El Gaily, Sándor Imre
Devising efficient optimisation methods has attracted great research attention, since current trends in communication networks, machine learning, and other cutting-edge systems require fast and accurate optimisation models. Classical computers are becoming incapable of handling the new optimisation problems posed by these emerging trends, and quantum optimisation algorithms appear as alternative solutions. The bottleneck that restricts the use of newly developed quantum strategies is the limited qubit count of available quantum computers (the most recent universal quantum computer has 433 qubits). A new quantum genetic algorithm (QGA) is proposed to address this problem. A quantum extreme value searching algorithm and a quantum blind computing framework are utilised to extend the search capabilities of the GA. The quantum genetic strategy is exploited to maximise energy efficiency at full spectral efficiency of massive multiple-input, multiple-output (M-MIMO) technology, as a toy example demonstrating the efficiency of the presented quantum strategy. The authors run extensive simulations and show that the presented quantum method outperforms the existing classical genetic algorithm.
{"title":"Energy efficiency optimisation in massive multiple-input, multiple-output network for 5G applications using new quantum genetic algorithm","authors":"Abdulbasit M. A. Sabaawi, Mohammed R. Almasaoodi, Sara El Gaily, Sándor Imre","doi":"10.1049/ntw2.12104","DOIUrl":"10.1049/ntw2.12104","url":null,"abstract":"<p>Devising efficient optimisation methods has been a subject of great research attention since current evolving trends in communication networks, machine learning, and other cutting-edge systems that need a fast and accurate optimised computational model. Classical computers became incapable of handling new optimisation problems posed by newly emerging trends. Quantum optimisation algorithms appear as alternative solutions. The existing bottleneck that restricts the use of the newly developed quantum strategies is the limited qubit size of the available quantum computers (the size of the most recent universal quantum computer is 433 qubits). A new quantum genetic algorithm (QGA) is proposed that handles the presented problem. A quantum extreme value searching algorithm and quantum blind computing framework are utilised to extend the search capabilities of the GA. The quantum genetic strategy is exploited to maximise energy efficiency at full spectral efficiency of massive multiple-input, multiple-output (M-MIMO) technology as a toy example for pointing out the efficiency of the presented quantum strategy. The authors run extensive simulations and prove how the presented quantum method outperforms the existing classical genetic algorithm.</p>","PeriodicalId":46240,"journal":{"name":"IET Networks","volume":"13 2","pages":"165-177"},"PeriodicalIF":1.4,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ntw2.12104","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134954472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
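The classical genetic-algorithm baseline that the quantum strategy above is compared against can be sketched as follows. This is a hedged illustration only: the energy-efficiency fitness (spectral efficiency divided by total power), the per-antenna circuit-power constant, and the crossover/mutation operators are toy assumptions introduced here, and the quantum extreme value searching component is not reproduced.

```python
# Minimal classical GA sketch for energy-efficiency maximisation in a
# toy M-MIMO model. Fitness = spectral efficiency / total power; the
# 0.2 W per-antenna circuit power is a hypothetical constant.
import math
import random

def energy_efficiency(p_tx, n_antennas):
    """Toy energy efficiency (bit/s/Hz per watt) of one M-MIMO cell."""
    se = math.log2(1.0 + n_antennas * p_tx)   # toy spectral efficiency
    total_power = p_tx + 0.2 * n_antennas     # transmit + circuit power
    return se / total_power

def genetic_search(pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    # Individual = (transmit power in W, number of active antennas).
    pop = [(rng.uniform(0.1, 10.0), rng.randint(1, 64)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: energy_efficiency(*ind), reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            p = 0.5 * (a[0] + b[0])           # crossover: average power
            n = rng.choice((a[1], b[1]))      # crossover: pick antenna count
            if rng.random() < 0.2:            # mutation, clamped to bounds
                p = min(10.0, max(0.1, p + rng.gauss(0.0, 0.5)))
                n = min(64, max(1, n + rng.randint(-4, 4)))
            children.append((p, n))
        pop = parents + children
    best = max(pop, key=lambda ind: energy_efficiency(*ind))
    return best, energy_efficiency(*best)

best, ee = genetic_search()
print(best, round(ee, 3))
```

In this toy model the GA trades transmit power against antenna count, since each extra antenna adds both array gain and circuit power; the quantum variant described above replaces parts of this search with quantum extreme value searching to cope with larger solution spaces.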