Journal of Network and Systems Management

Chaotic Zebra Optimization Algorithm for Increasing the Lifetime of Wireless Sensor Network
Pub Date: 2024-08-29 | DOI: 10.1007/s10922-024-09860-6
Hazem M. El-Hageen, Yousef H. Alfaifi, Hani Albalawi, Ahmed Alzahmi, Aadel M. Alatwi, Ahmed F. Ali, Mohamed A. Mead
A wireless sensor network (WSN) consists of one or more sink nodes, also known as base stations, and spatially dispersed sensors. The sensors monitor physical parameters such as temperature, vibration, and motion in real time and report the resulting sensory data; a sensor node may act as a data router as well as an originator of data. However, these sensors face several challenges, including high energy consumption and a short network lifetime. Clustering is one of the most effective techniques for handling this problem, since selecting the optimal Cluster Heads (CHs) in a WSN reduces energy consumption, and Swarm Intelligence (SI) algorithms can help solve such challenging selection problems. In this paper, we present a novel algorithm, the Chaotic Zebra Optimization Algorithm (CZOA), for selecting the best CHs in a WSN. The CZOA integrates a chaotic map with the zebra optimization algorithm (ZOA); the added diversification helps the algorithm avoid being trapped in local minima. The CZOA is compared with several SI algorithms, and the results show that it consumes less energy and keeps more nodes alive than the other algorithms, demonstrating its superiority in lowering energy consumption and extending network lifetime.
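The abstract does not detail how the chaotic map is coupled with the ZOA update, so the Python sketch below only illustrates the general idea: a logistic map replaces the uniform random draws that steer exploration versus exploitation during cluster-head selection. The fitness function (residual energy weighted against distance to the sink, with made-up weights `w_energy` and `w_dist`), the update rule, and all numbers are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def logistic_map(x, r=4.0):
    """One step of the logistic chaotic map on (0, 1)."""
    return r * x * (1.0 - x)

def fitness(candidate, nodes, sink, w_energy=0.6, w_dist=0.4):
    """Hypothetical CH fitness: favour high residual energy and short distance to the sink."""
    energy = nodes[candidate]["energy"]
    dist = np.linalg.norm(nodes[candidate]["pos"] - sink)
    return w_energy * energy - w_dist * dist

def czoa_select_chs(nodes, sink, n_chs=5, pop_size=20, iters=50, seed=1):
    rng = np.random.default_rng(seed)
    ids = list(nodes.keys())
    # Each individual is a candidate set of cluster heads.
    pop = [rng.choice(ids, size=n_chs, replace=False) for _ in range(pop_size)]
    chaos = rng.uniform(0.1, 0.9)              # chaotic state shared by the swarm
    best = max(pop, key=lambda s: sum(fitness(c, nodes, sink) for c in s))
    for _ in range(iters):
        for i, sol in enumerate(pop):
            chaos = logistic_map(chaos)        # chaotic value instead of rng.random()
            if chaos > 0.5:                    # exploitation: move toward the best set
                new = np.where(rng.random(n_chs) < chaos, best, sol)
            else:                              # diversification: random mutation
                new = sol.copy()
                new[rng.integers(n_chs)] = rng.choice(ids)
            # (a full implementation would also repair duplicate CH entries)
            if sum(fitness(c, nodes, sink) for c in new) > \
               sum(fitness(c, nodes, sink) for c in sol):
                pop[i] = new
        best = max(pop, key=lambda s: sum(fitness(c, nodes, sink) for c in s))
    return set(best)

# Example: 30 random nodes in a 100 x 100 m field, sink at the centre.
rng = np.random.default_rng(0)
nodes = {i: {"energy": rng.uniform(0.5, 1.0), "pos": rng.uniform(0, 100, 2)} for i in range(30)}
print(czoa_select_chs(nodes, sink=np.array([50.0, 50.0])))
```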
{"title":"Chaotic Zebra Optimization Algorithm for Increasing the Lifetime of Wireless Sensor Network","authors":"Hazem M. El-Hageen, Yousef H. Alfaifi, Hani Albalawi, Ahmed Alzahmi, Aadel M. Alatwi, Ahmed F. Ali, Mohamed A. Mead","doi":"10.1007/s10922-024-09860-6","DOIUrl":"https://doi.org/10.1007/s10922-024-09860-6","url":null,"abstract":"<p>A wireless sensor network (WSN) is made up of one or more sink nodes, also known as base stations, and spatially dispersed sensors. Real-time monitoring of physical parameters like temperature, vibration, and motion is done using sensors, which also provide sensory data. A sensor node may act as a data router in addition to an originator of data. However, there are a number of issues with these sensors, including a high rate of energy consumption and a short network lifetime. One of the greatest ways to handle this problem is to use the clustering technique. In the WSN, selecting the optimal Cluster Heads (CHs) helps save energy consumption. Algorithms for Swarm Intelligence (SI) can assist in resolving challenging issues. We present a novel algorithm in this research to choose the top CHs in the WSN. A Chaotic Zebra Optimization Algorithm (CZOA) is the name of the new algorithm. We integrate the chaotic map and the zebra optimization algorithm (ZOA) in the CZOA. By doing so, the suggested algorithm’s processes of diversification can help to prevent the possibility of being trapped in local minima. Different SI algorithms are compared with the CZOA. The suggested algorithm’s results demonstrate that it can use less energy than the other algorithms and that more nodes are still alive for it than for the other algorithms combined. As a result, the CZOA demonstrated its superiority in lowering energy consumption and lengthening network lifetime.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"4 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empowering Microservices: A Deep Dive into Intelligent Application Component Placement for Optimal Response Time
Pub Date: 2024-08-28 | DOI: 10.1007/s10922-024-09855-3
Syed Mohsan Raza, Roberto Minerva, Barbara Martini, Noel Crespi
Microservice architecture decomposes large applications into components, yielding a decentralized structure. This approach can be coupled with Edge computing principles: applications with stringent response-time requirements can benefit from different deployment options. However, it is crucial to understand the correlation between where distributed application components are deployed and the resulting response time, especially from an application perspective. Correct placement decisions require evaluating the impact of placing small functions, and of their interactions, across the Edge–Cloud Continuum. This paper investigates response time from an application perspective, taking into account componentization through microservice architecture. Unlike existing application placement approaches, we present extensive simulation results illustrating the impact of service chains and of the often-overlooked placement of Application Programming Interface (API) Gateways. The numerical evidence shows that the design and placement of microservice-based applications can counter the common perception that Edge resources always improve user-perceived response time. We also present an experiment involving a componentized application and its optimized deployment on a real testbed. Our findings and design guidelines support effective component placement decisions that also respect infrastructure constraints.
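As a rough illustration of the chain-level accounting the paper argues for, the following sketch sums per-hop network latency and per-component processing time along a service chain fronted by an API gateway. The latency table `LINK_MS`, the gateway cost, and the example chain are invented for the example; they are not the paper's model or results.

```python
from typing import Dict, List

# Illustrative round-trip latency (ms) between the tiers where components can be placed.
LINK_MS: Dict[tuple, float] = {
    ("user", "edge"): 5.0, ("user", "cloud"): 40.0,
    ("edge", "edge"): 2.0, ("edge", "cloud"): 35.0,
    ("cloud", "cloud"): 1.0,
}

def link(a: str, b: str) -> float:
    return LINK_MS.get((a, b), LINK_MS.get((b, a), 0.0))

def chain_response_time(chain: List[str],
                        placement: Dict[str, str],
                        processing_ms: Dict[str, float],
                        gateway_tier: str = "edge",
                        gateway_ms: float = 1.0) -> float:
    """Response time of a request traversing gateway -> c1 -> c2 -> ... -> cN."""
    total = link("user", gateway_tier) + gateway_ms
    prev_tier = gateway_tier
    for comp in chain:
        tier = placement[comp]
        total += link(prev_tier, tier) + processing_ms[comp]
        prev_tier = tier
    return total

# A three-component chain: placing everything at the edge is not automatically best
# if one component is compute-heavy and runs faster in the cloud.
chain = ["auth", "catalog", "recommender"]
proc_edge = {"auth": 2.0, "catalog": 3.0, "recommender": 60.0}   # slow at the edge
proc_hybrid = {"auth": 2.0, "catalog": 3.0, "recommender": 15.0}  # recommender in the cloud

all_edge = {c: "edge" for c in chain}
hybrid = {"auth": "edge", "catalog": "edge", "recommender": "cloud"}
print(chain_response_time(chain, all_edge, proc_edge))    # all-edge placement
print(chain_response_time(chain, hybrid, proc_hybrid))    # hybrid placement wins here
```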
{"title":"Empowering Microservices: A Deep Dive into Intelligent Application Component Placement for Optimal Response Time","authors":"Syed Mohsan Raza, Roberto Minerva, Barbara Martini, Noel Crespi","doi":"10.1007/s10922-024-09855-3","DOIUrl":"https://doi.org/10.1007/s10922-024-09855-3","url":null,"abstract":"<p>Microservice architecture offers a decentralized structure using componentization of large applications. This approach can be coupled with Edge computing principles: applications with stringent response time can benefit from different deployment options. However, it is crucial to gain profound insights into correlations between the deployment of distributed application components and the response time, especially from an application perspective. For correct placement decisions, it is important to evaluate the impact of small functions’ placement and their interactions across the Edge–Cloud Continuum. This paper investigates the response time from an application perspective, considering the componentization using microservice architecture. Unlike the existing application placement approaches, we present extensive simulation results, illustrating the impact of service chains and marginally considered Application Programming Interface Gateways placement. Numerical evidence depicts that the design and placement of microservice-based applications could counter the common perception that Edge resources are always suitable for user-perceived response time. Further, we also present an experiment involving a componentized application and its optimized deployment in an actual testbed. Our findings and design guidelines inform effective component placement decisions while considering infrastructure constraints as well.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"29 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ATENA: Adaptive TEchniques for Network Area Coverage and Routing in IoT-Based Edge Computing
Pub Date: 2024-08-27 | DOI: 10.1007/s10922-024-09856-2
Garrik Brel Jagho Mdemaya, Vianney Kengne Tchendji, Mthulisi Velempini, Ariege Atchaze
The Internet of Things (IoT) and Edge Computing (EC) are now pervasive. IoT networks are made up of many objects, deployed in an area of interest (AoI), that can communicate with each other and with a remote computing centre for decision-making; EC reduces latency and data congestion by bringing data processing closer to the source. In this paper, we address the problems of network coverage and data collection in IoT-based EC networks. Several solutions have been designed to solve these problems; unfortunately, they are either not energy-efficient, do not consider connectivity, or fail to cover the AoI, and the routing mechanisms they propose are often unsuited to AoI coverage schemes, leading to poor data-routing delay or high packet losses. To address these shortcomings, we propose ATENA, a periodic, lightweight and energy-efficient protocol that improves network coverage through two new schemes that determine a small number of objects to keep awake in each period, together with an adaptive routing scheme for sending the collected data to the computing centre. The protocol accounts for the limited resources of the objects and ensures a longer network lifetime. A comparison of ATENA's simulation results with recent existing protocols shows that it significantly improves network coverage, network lifetime and end-to-end delay to the computing centre.
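As one plausible reading of the wake-up selection idea (not ATENA's actual schemes), the sketch below greedily keeps awake a small set of sensors whose sensing disks cover a sampled grid of the AoI, preferring higher-energy sensors on ties; the grid spacing and sensing radius are illustrative assumptions.

```python
import math

def covered_points(sensor, grid, radius):
    sx, sy = sensor["pos"]
    return {p for p in grid if math.dist(p, (sx, sy)) <= radius}

def greedy_awake_set(sensors, grid, radius):
    """Keep awake a small subset of sensors that still covers every coverable grid point."""
    uncovered = set(grid)
    awake = []
    # Prefer sensors with more residual energy when coverage gain ties.
    candidates = sorted(sensors, key=lambda s: -s["energy"])
    while uncovered:
        best = max(candidates,
                   key=lambda s: len(covered_points(s, uncovered, radius)),
                   default=None)
        gain = covered_points(best, uncovered, radius) if best else set()
        if not gain:
            break                      # remaining points cannot be covered
        awake.append(best["id"])
        uncovered -= gain
        candidates.remove(best)
    return awake

# 20 x 20 m AoI sampled every 5 m, sensing radius 8 m (all values illustrative).
grid = [(x, y) for x in range(0, 21, 5) for y in range(0, 21, 5)]
sensors = [{"id": i, "pos": ((i * 7) % 20, (i * 13) % 20), "energy": 1.0 - 0.02 * i}
           for i in range(12)]
print(greedy_awake_set(sensors, grid, radius=8))
```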
{"title":"ATENA: Adaptive TEchniques for Network Area Coverage and Routing in IoT-Based Edge Computing","authors":"Garrik Brel Jagho Mdemaya, Vianney Kengne Tchendji, Mthulisi Velempini, Ariege Atchaze","doi":"10.1007/s10922-024-09856-2","DOIUrl":"https://doi.org/10.1007/s10922-024-09856-2","url":null,"abstract":"<p>The Internet of Things (IoT) and Edge Computing (EC) are now pervasive. IoT networks are made up of several objects, deployed in an area of interest (AoI), that can communicate with each other and with a remote computing centre for decision-making. EC reduces latency and data congestion by bringing data processing closer to the source. In this paper, we address the problems of network coverage and data collection in IoT-based EC networks. Several solutions exist designed to solve these problems unfortunately, they are either not energy-efficient or do not consider connectivity and they do not cover AoI. The proposed routing mechanisms are often not suited for AoI coverage schemes and lead to poor data routing delay or high packet losses. To address these shortcomings, we propose ATENA, a periodic, lightweight and energy-efficient protocol that aims to improve network coverage based on the two new schemes used to define a few number of objects to be kept awake at each period it also uses an adaptive routing scheme to send the collected data to the computing centre. This protocol is designed to take into account the limited resources of objects and ensures a longer network lifetime. A comparison of ATENA’s simulation results with recent existing protocols shows that it significantly improves network coverage, network lifetime and end-to-end delay to the computing centre.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"59 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving VANET Data Dissemination Efficiency with Deep Neural Networks
Pub Date: 2024-08-25 | DOI: 10.1007/s10922-024-09858-0
Ameur Bennaoui, Mustapha Guezouri, Mokhtar Keche
Vehicular Ad-hoc Networks (VANETs) play a crucial role in Intelligent Transportation Systems (ITS), but their dynamic nature makes efficient data dissemination challenging. This paper proposes a novel deep learning-based method to optimize data dissemination within VANETs. A realistic dataset is generated through simulations using a modified Breadth-First Search algorithm combined with the Jaccard similarity coefficient to maximize message coverage. A deep neural network (DNN) is trained on this dataset to predict optimal forwarding paths in varying VANET conditions. Integration of this DNN-based protocol into OMNeT++ simulations demonstrates significant improvements in packet delivery ratios, reduced network overhead, and minimized transmission delays compared to existing dissemination protocols.
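The following sketch shows one plausible way to combine a breadth-first traversal with the Jaccard similarity coefficient for forwarder selection, as hinted at above: at each hop, neighbours whose coverage overlaps least with what is already covered are preferred, and a vehicle rebroadcasts only if it adds new coverage. The toy topology and the exact ranking rule are assumptions, not the authors' algorithm.

```python
from collections import deque

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def select_forwarders(graph: dict, source: str) -> list:
    """graph: node -> set of 1-hop neighbours (within radio range)."""
    covered = {source} | graph[source]
    forwarders, queue, visited = [], deque([source]), {source}
    while queue:
        current = queue.popleft()
        # Rank unvisited neighbours by how little their coverage overlaps what is covered.
        candidates = [n for n in graph[current] if n not in visited]
        candidates.sort(key=lambda n: jaccard(graph[n] | {n}, covered))
        for n in candidates:
            new_nodes = (graph[n] | {n}) - covered
            if new_nodes:              # rebroadcast only if it extends coverage
                forwarders.append(n)
                covered |= new_nodes
            visited.add(n)
            queue.append(n)
    return forwarders

# Toy 6-vehicle topology.
graph = {
    "A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B", "E"},
    "D": {"B", "F"}, "E": {"C", "F"}, "F": {"D", "E"},
}
print(select_forwarders(graph, "A"))
```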
{"title":"Improving VANET Data Dissemination Efficiency with Deep Neural Networks","authors":"Ameur Bennaoui, Mustapha Guezouri, Mokhtar Keche","doi":"10.1007/s10922-024-09858-0","DOIUrl":"https://doi.org/10.1007/s10922-024-09858-0","url":null,"abstract":"<p>Vehicular Ad-hoc Networks (VANETs) play a crucial role in Intelligent Transportation Systems (ITS), but their dynamic nature makes efficient data dissemination challenging. This paper proposes a novel deep learning-based method to optimize data dissemination within VANETs. A realistic dataset is generated through simulations using a modified Breadth-First Search algorithm combined with the Jaccard similarity coefficient to maximize message coverage. A deep neural network (DNN) is trained on this dataset to predict optimal forwarding paths in varying VANET conditions. Integration of this DNN-based protocol into OMNeT++ simulations demonstrates significant improvements in packet delivery ratios, reduced network overhead, and minimized transmission delays compared to existing dissemination protocols.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"8 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing the Security of Software-Defined Networking through Forensic Memory Analysis
Pub Date: 2024-08-25 | DOI: 10.1007/s10922-024-09862-4
Filipe Augusto da Luz Lemos, Thiago dos Santos Cavali, Keiko Verônica Ono Fonseca, Mauro Sergio Pereira Fonseca, Rubens Alexandre de Faria
The increasing complexity and dynamic nature of software-defined networking (SDN) environments pose significant challenges for network security. We propose a methodology for enhancing the security of SDN systems that applies a well-established forensic-science technique, memory analysis, combined with techniques for identifying memory modifications, such as signature validation and novelty detection. A proof of concept in a test environment consisting of hosts and virtual switches connected in a ring topology validated the proposed methodology. The results demonstrate its ability to detect and mitigate unauthorized changes in network equipment, highlighting its potential to improve the security of SDN networks and its possible integration with other methodologies to strengthen SDN environments further. Overall, the proposed methodology provides a valuable new tool for securing SDN networks and opens research opportunities on the scalability and adaptability of the solution.
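A minimal sketch of the signature-validation step, assuming a known-good hash per critical memory region of a switch dump is compared against a freshly acquired dump; the region layout, offsets, and acquisition method are placeholders rather than the paper's setup.

```python
import hashlib

def region_signature(dump: bytes, offset: int, length: int) -> str:
    """SHA-256 over one memory region of a raw dump."""
    return hashlib.sha256(dump[offset:offset + length]).hexdigest()

def check_switch(dump: bytes, baseline: dict, regions: dict) -> list:
    """Return the names of regions whose signature no longer matches the baseline."""
    tampered = []
    for name, (offset, length) in regions.items():
        if region_signature(dump, offset, length) != baseline.get(name):
            tampered.append(name)
    return tampered

# Hypothetical layout: two regions of a switch memory dump.
regions = {"flow_table": (0x0000, 64), "config": (0x0040, 32)}
clean_dump = bytes(96)
baseline = {name: region_signature(clean_dump, off, ln)
            for name, (off, ln) in regions.items()}

suspect_dump = bytearray(clean_dump)
suspect_dump[0x0005] = 0xFF            # simulate an unauthorized flow-table change
print(check_switch(bytes(suspect_dump), baseline, regions))   # -> ['flow_table']
```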
{"title":"Enhancing the Security of Software-Defined Networking through Forensic Memory Analysis","authors":"Filipe Augusto da Luz Lemos, Thiago dos Santos Cavali, Keiko Verônica Ono Fonseca, Mauro Sergio Pereira Fonseca, Rubens Alexandre de Faria","doi":"10.1007/s10922-024-09862-4","DOIUrl":"https://doi.org/10.1007/s10922-024-09862-4","url":null,"abstract":"<p>The increasing complexity and dynamic nature of software-defined networking (SDN) environments pose significant challenges for network security. We propose a methodology for enhancing the security of SDN systems through the use of a well established technique in forensic sciences, the memory analysis, combined with techniques to identify memory modifications, such as signature validation and novelty detection. A proof of concept using a test environment consisting of virtual switches, connected in a ring topology, and hosts validated the proposed methodology. The results were able to demonstrate the capability of the proposed methodology to detect and mitigate unauthorized changes in network equipment, highlighting its potential to improve the security of SDN networks, and possible integration with other methodologies to further improve the security of SDN environments. Overall, the proposed methodology provides a new valuable tool for securing SDN networks, and brings research opportunities on the scalability and adaptability of the proposed solution.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"7 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Cloud Gaming Experience through Optimized Virtual Machine Placement: A Comprehensive Review
Pub Date: 2024-08-24 | DOI: 10.1007/s10922-024-09864-2
Sawsan Ali Hamid, Yassine Boujelben, Faouzi Zarai
Cloud computing is profoundly transforming the way IT services are implemented and provided to end-users. Gaming services are not exempt from this new trend. In fact, with cloud gaming or Gaming as a Service, players can enjoy a high-quality gaming experience even while using low-end devices with limited processing capabilities. However, achieving the delicate balance between gamers’ quality of experience and the provider’s net profit poses challenges in optimizing the cloud gaming experience. Implementing procedures and strategies to use the most suitable cloud servers and efficiently utilize their resources can help achieve such a balance. This entails allocating each player’s virtual machine to an appropriate physical server, simultaneously optimizing objectives like reducing resource waste, minimizing power consumption, and decreasing network transmission delays. In this article, we offer a thorough review of recent research on the Virtual Machine Placement (VMP) problem in the context of cloud gaming. We explore various facets, encompassing the architecture of cloud gaming, optimization methodologies, and cloud gaming services. Readers will acquire a comprehensive understanding of the challenges in cloud gaming research, with a specific focus on how optimizing the VMP problem can contribute to resolving associated issues.
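To make the optimization objectives concrete, the sketch below scores a candidate host for a gaming VM with a weighted sum of resource waste, a linear power model, and network delay to the player, then picks the lowest-scoring feasible host. The weights, the power model, and the host data are invented for illustration and are not drawn from any particular work in the review.

```python
def placement_score(vm, host, w_waste=0.4, w_power=0.3, w_delay=0.3):
    free_cpu = host["cpu_cap"] - host["cpu_used"]
    free_mem = host["mem_cap"] - host["mem_used"]
    if vm["cpu"] > free_cpu or vm["mem"] > free_mem:
        return None                               # infeasible placement
    # Resource waste: leftover capacity imbalance after placing the VM.
    waste = abs((free_cpu - vm["cpu"]) / host["cpu_cap"] -
                (free_mem - vm["mem"]) / host["mem_cap"])
    # Power: linear model between idle and peak draw, driven by CPU utilisation.
    util_after = (host["cpu_used"] + vm["cpu"]) / host["cpu_cap"]
    power = host["p_idle"] + (host["p_peak"] - host["p_idle"]) * util_after
    delay = host["rtt_ms"]                        # network delay to the player
    return w_waste * waste + w_power * power / host["p_peak"] + w_delay * delay / 100.0

def place(vm, hosts):
    """Pick the feasible host with the lowest combined score."""
    scored = [(placement_score(vm, h), h["name"]) for h in hosts]
    scored = [(s, n) for s, n in scored if s is not None]
    return min(scored)[1] if scored else None

hosts = [
    {"name": "dc-eu", "cpu_cap": 32, "cpu_used": 20, "mem_cap": 128, "mem_used": 64,
     "p_idle": 120, "p_peak": 300, "rtt_ms": 15},
    {"name": "dc-us", "cpu_cap": 32, "cpu_used": 4, "mem_cap": 128, "mem_used": 16,
     "p_idle": 120, "p_peak": 300, "rtt_ms": 90},
]
print(place({"cpu": 8, "mem": 16}, hosts))        # a gaming VM for a European player
```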
{"title":"Enhancing Cloud Gaming Experience through Optimized Virtual Machine Placement: A Comprehensive Review","authors":"Sawsan Ali Hamid, Yassine Boujelben, Faouzi Zarai","doi":"10.1007/s10922-024-09864-2","DOIUrl":"https://doi.org/10.1007/s10922-024-09864-2","url":null,"abstract":"<p>Cloud computing is profoundly transforming the way IT services are implemented and provided to end-users. Gaming services are not exempt from this new trend. In fact, with cloud gaming or Gaming as a Service, players can enjoy a high-quality gaming experience even while using low-end devices with limited processing capabilities. However, achieving the delicate balance between gamers’ quality of experience and the provider’s net profit poses challenges in optimizing the cloud gaming experience. Implementing procedures and strategies to use the most suitable cloud servers and efficiently utilize their resources can help achieve such a balance. This entails allocating each player’s virtual machine to an appropriate physical server, simultaneously optimizing objectives like reducing resource waste, minimizing power consumption, and decreasing network transmission delays. In this article, we offer a thorough review of recent research on the Virtual Machine Placement (VMP) problem in the context of cloud gaming. We explore various facets, encompassing the architecture of cloud gaming, optimization methodologies, and cloud gaming services. Readers will acquire a comprehensive understanding of the challenges in cloud gaming research, with a specific focus on how optimizing the VMP problem can contribute to resolving associated issues.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"57 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Preprocessing-Based Approach for Prompt Intrusion Detection in SDN Networks
Pub Date: 2024-08-16 | DOI: 10.1007/s10922-024-09841-9
Madjed Bencheikh Lehocine, Hacene Belhadef
Software Defined Networking (SDN) has emerged as a network platform that enables centralized network management, giving network operators the ability to manage the entire network uniformly and comprehensively, regardless of the complexity of the underlying infrastructure devices. Nevertheless, it remains vulnerable to emerging security threats that attackers can maliciously exploit; if the SDN controller is compromised, the entire system is exposed to severe risks. Previous research has focused on flow-based IDSs that use Machine-Learning/Deep-Learning models to distinguish benign traffic from attacks. However, these solutions require periodic request/response message exchanges between the control plane and the data plane. Once the required flow features are extracted from the responses transmitted by the OpenFlow switches, they must be preprocessed before being fed to a classifier. This feature-retrieval and preprocessing stage consumes a significant amount of time and resources, making it unsuitable for early intrusion detection. This paper introduces an efficient classification solution based essentially on preprocessing raw input data, eliminating the need to retrieve flow information from the OpenFlow switches. We evaluated our approach on the public InSDN dataset, achieving an accuracy of 99.91% and 99.99% for multiclass and binary classification, respectively.
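A hedged sketch of the general approach: preprocess raw records and feed them to a classifier, rather than polling OpenFlow switches for flow statistics. The file name `insdn_raw.csv`, the column names, and the RandomForest choice are placeholders for illustration only, not the paper's pipeline or feature set.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("insdn_raw.csv")                 # hypothetical export of raw records
numeric = ["pkt_len", "duration", "src_port", "dst_port"]   # assumed column names
categorical = ["protocol"]
X, y = df[numeric + categorical], df["label"]

# Preprocess raw fields directly: scale numeric values, one-hot encode categories.
pre = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
model = Pipeline([("pre", pre),
                  ("clf", RandomForestClassifier(n_estimators=100, random_state=0))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```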
{"title":"Preprocessing-Based Approach for Prompt Intrusion Detection in SDN Networks","authors":"Madjed Bencheikh Lehocine, Hacene Belhadef","doi":"10.1007/s10922-024-09841-9","DOIUrl":"https://doi.org/10.1007/s10922-024-09841-9","url":null,"abstract":"<p>Software Defined Networking (SDN) has emerged as a network platform that enables centralized network management, providing network operators with the ability to manage the entire network uniformly and comprehensively, regardless of the complexity of the underlying infrastructure devices. Nevertheless, it remains vulnerable to emerging security threats that can be maliciously exploited by attackers. If the SDN controller is compromised, the entire system becomes susceptible to severe risks. Previous research has focused on proposing flow-based IDSs using Machine-Learning/Deep-Learning models distinguishing between benign traffic and attacks. However, these solutions require periodic message exchanges, containing requests and responses, between the control plane and the data plane. Once the required flow features are extracted from the responses transmitted by the OpenFlow switches, these features undergo preprocessing before being fed to a classifier. This pre-training process consumes a significant amount of time and resources, which is inadequate for early intrusion detection. The study presented in this paper introduces an efficient classification solution based essentially on preprocessing raw input data, eliminating the need for retrieving flow information from the OpenFlow switches. We evaluated our approach on the public InSDN dataset, achieving an accuracy of 99.91% and 99.99% for multiclass and binary classification respectively.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"284 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context-Aware and Reliable Transport Layer Framework for Interactive Immersive Media Delivery Over Millimeter Wave
Pub Date: 2024-08-15 | DOI: 10.1007/s10922-024-09845-5
Hemanth Kumar Ravuri, Jakob Struye, Jeroen van der Hooft, Tim Wauters, Filip De Turck, Jeroen Famaey, Maria Torres Vega
In order to achieve truly immersive multimedia experiences, full freedom of movement has to be supported, and high-quality, interactive video delivery to the head-mounted device is vital. In wireless environments, this is very challenging due to the massive bandwidth and ultra-low delay requirements of such applications. Millimeter wave (mmWave) networks promise ultra-high speeds thanks to high-capacity bands in the 30 GHz to 300 GHz frequency range. However, they are prone to signal attenuation due to blockage and to beam misalignment due to mobility, leading to packet loss and retransmissions. This can cause head-of-line blocking at the transport layer, which results in playout stalls and the delivery of lower-quality data, both highly detrimental to the user's quality of experience (QoE). Complementary to research efforts that try to make mmWave networks more resilient through lower-layer enhancements, this paper presents a transport-layer solution that provides adaptive and reliable transmission over mmWave networks based on partially reliable QUIC. Using context information retrieved periodically from the client to adapt to the network conditions induced by mobility and obstacles, the essential part of the video content (i.e., within the end user's viewport) is transmitted reliably, while less important content (i.e., outside the viewport) is sent unreliably. Our decision-making logic reliably delivers 22.5% more content in the viewport, without additional playout interruptions or quality changes, in high-bitrate volumetric video streaming scenarios evaluated over realistic mmWave network traces. When the server can perfectly predict the network bandwidth, playout interruptions can be avoided altogether.
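A library-agnostic sketch of the reliability decision described above, assuming the content is split into angular tiles, the client periodically reports a predicted viewport (a yaw interval), and each tile is mapped either to a reliable stream or to an unreliable datagram; the 110-degree viewport, the tile layout, and all names are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Transport(Enum):
    RELIABLE_STREAM = "reliable stream"          # e.g. an ordinary QUIC stream
    UNRELIABLE_DATAGRAM = "unreliable datagram"  # e.g. a QUIC DATAGRAM frame

@dataclass
class Tile:
    tile_id: int
    yaw_start: float    # degrees
    yaw_end: float

def in_viewport(tile: Tile, viewport_center: float, viewport_width: float) -> bool:
    half = viewport_width / 2.0
    lo, hi = (viewport_center - half) % 360, (viewport_center + half) % 360
    center = ((tile.yaw_start + tile.yaw_end) / 2.0) % 360
    return lo <= center <= hi if lo <= hi else (center >= lo or center <= hi)

def schedule(tiles, viewport_center, viewport_width=110.0):
    """Map each tile to a transmission mode based on the client-reported viewport."""
    return {t.tile_id: (Transport.RELIABLE_STREAM
                        if in_viewport(t, viewport_center, viewport_width)
                        else Transport.UNRELIABLE_DATAGRAM)
            for t in tiles}

tiles = [Tile(i, i * 45.0, (i + 1) * 45.0) for i in range(8)]   # eight 45-degree tiles
print(schedule(tiles, viewport_center=20.0))    # user looking roughly "forward"
```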
{"title":"Context-Aware and Reliable Transport Layer Framework for Interactive Immersive Media Delivery Over Millimeter Wave","authors":"Hemanth Kumar Ravuri, Jakob Struye, Jeroen van der Hooft, Tim Wauters, Filip De Turck, Jeroen Famaey, Maria Torres Vega","doi":"10.1007/s10922-024-09845-5","DOIUrl":"https://doi.org/10.1007/s10922-024-09845-5","url":null,"abstract":"<p>In order to achieve truly immersive multimedia experiences, full freedom of movement has to be supported, and high-quality, interactive video delivery to the head-mounted device is vital. In wireless environments, this is very challenging due to the massive bandwidth and ultra-low delay requirements of such applications. Millimeter wave (mmWave) networks promise ultra-high speed owing to the availability of high-capacity bands at a frequency range of 30 GHz to 300 GHz. However, they are prone to signal attenuation due to blockage and beam misalignment due to mobility, leading to packet loss and retransmissions. This can lead to the head-of-line blocking problem on the transport layer which results in playout stalls and delivery of lower quality data that can be highly detrimental to a user’s quality of experience (QoE). Complementary to research efforts trying to make mmWave networks more resilient through lower-layer enhancements, this paper presents a transport layer solution that provides an adaptive and reliable transmission over mmWave networks-based on partially reliable QUIC. Using context information retrieved periodically from the client to adapt according to the networking conditions induced due to mobility and obstacles, the essential part of the video content (i.e., in the viewport of the end user) is transmitted reliably, while less important content (i.e., outside of the viewport of the end user) is sent unreliably. Our decision-making logic is able to effectively deliver 22.5% more content in the viewport reliably. This is achieved without additional playout interruptions or quality changes for scenarios with high-bitrate volumetric video streaming evaluated over realistic mmWave network traces. In case the server can perfectly predict the network bandwidth, playout interruptions can be avoided altogether.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"1 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey on the Optimization of Security Components Placement in Internet of Things
Pub Date: 2024-08-11 | DOI: 10.1007/s10922-024-09852-6
Sana Said, Jalel Eddine Hajlaoui, Mohamed Nazih Omri
The Internet of Things (IoT) environment has become a primary channel for the propagation of Distributed Denial of Service (DDoS) and malware intrusions. Cyber threats in the IoT require new mechanisms and strategies to secure devices throughout their life cycle, and these threats are very real for operators and manufacturers of connected objects. Poorly secured and unpatched devices are prime targets for botnet operators looking for devices to capture and repurpose. In this article, we review the problem of optimizing the placement of security components in the IoT. We provide a generic view of the placement of security components at the different levels of the IoT. First, we present an overview of the IoT. Then, we give a thematic classification of numerous solutions for the placement of security components in the IoT. Finally, we discuss open research questions and propose several directions for future work in this domain.
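As a concrete instance of the kind of placement formulation the survey covers, the sketch below greedily chooses a budget-limited set of nodes to host a security component (e.g., an IDS probe) so that as many end-to-end flows as possible traverse at least one monitored node; the heuristic and the toy flows are illustrative, not results of the survey.

```python
def greedy_ids_placement(flows, budget):
    """flows: list of node paths, e.g. ['gw1', 'edge1', 'cloud']; returns chosen nodes."""
    uncovered = list(range(len(flows)))
    chosen = []
    for _ in range(budget):
        # Count, for every candidate node, how many still-uncovered flows it sees.
        gain = {}
        for idx in uncovered:
            for node in flows[idx]:
                gain[node] = gain.get(node, 0) + 1
        if not gain:
            break
        best = max(gain, key=gain.get)
        chosen.append(best)
        uncovered = [i for i in uncovered if best not in flows[i]]
    return chosen, len(uncovered)

flows = [
    ["sensor1", "gw1", "cloud"], ["sensor2", "gw1", "cloud"],
    ["sensor3", "gw2", "cloud"], ["cam1", "gw2", "edge1"],
    ["cam2", "edge1", "cloud"],
]
print(greedy_ids_placement(flows, budget=2))   # two monitored nodes cover all five flows
```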
{"title":"A Survey on the Optimization of Security Components Placement in Internet of Things","authors":"Sana Said, Jalel Eddine Hajlaoui, Mohamed Nazih Omri","doi":"10.1007/s10922-024-09852-6","DOIUrl":"https://doi.org/10.1007/s10922-024-09852-6","url":null,"abstract":"<p>The Internet of Things (IoT) environment has become the basic channel for the propagation of Distributed Denial of Service (DDoS) and malware intrusions. Cyber threats in IoT require new mechanisms and strategies to secure devices during their life cycle. These threats are real for operators and manufacturers of connected objects. Less and uncorrected secure devices are priority objectives for botnet operators to capture and obtain testing devices. In this article, we reviewed the problem of optimizing the placement of security components in IoT. We provide a generic view of the placement of security components at different levels in the IoT. First, we present an overview of IoT. Then, we conduct a demonstration and a thematic classification of numerous solutions for the placement of security components in the IoT. Thus, this presentation will be followed by a discussion of diverse research questions and a set of proposals for various future orientations of this domain to advance the problem resolution in this research article.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"5 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overcoming Real-World IoT Deployment Challenges with Enhanced Fuzzy Logic Decision Algorithms
Pub Date: 2024-08-07 | DOI: 10.1007/s10922-024-09851-7
Amir Bannoura, Hamid Chekenbah, Frank Meyer, Suhail Odeh, Rafik Lasri
The rapid growth of Internet of Things (IoT) technologies has led to an increase in the demand for connected devices around the world. Today, these devices are integrated into numerous applications and solutions, so providing a smooth and simple integration approach is essential to extend their popularity and adoption. However, a high level of complexity has to be considered, especially for wireless communication and field deployment. These devices are deployed as part of wireless sensor networks, which increases the complexity of their operation and integration. In this paper, we focus on the integration of IoT devices into wireless networks for smart home applications. We present an overview of the challenges in designing and developing the firmware, and we suggest several testing use-cases to verify that the firmware being deployed is stable and ready for the market. We also consider diagnostic metrics to identify issues that could degrade device functionality. Finally, we propose an algorithmic approach based on Fuzzy Logic that uses intelligent decision-making to improve the integration of IoT devices into wireless sensor networks and to mitigate the deployment challenges and limitations.
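A minimal sketch of a fuzzy decision rule of the kind the paper proposes, assuming link quality is judged from RSSI and packet loss with triangular membership functions and defuzzified into an accept/reprovision score; the membership breakpoints and the rule set are invented for illustration, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def link_quality_score(rssi_dbm, loss_pct):
    # Fuzzify the inputs.
    rssi_good = tri(rssi_dbm, -75, -55, -30)
    rssi_poor = tri(rssi_dbm, -100, -85, -70)
    loss_low = tri(loss_pct, -1, 0, 10)
    loss_high = tri(loss_pct, 5, 30, 101)
    # Two simple rules: (good RSSI AND low loss) -> accept; (poor RSSI OR high loss) -> reprovision.
    accept = min(rssi_good, loss_low)
    reprovision = max(rssi_poor, loss_high)
    # Defuzzify with a weighted average of the rule strengths (accept=1, reprovision=0).
    total = accept + reprovision
    return 0.5 if total == 0 else accept / total

for rssi, loss in [(-50, 1), (-80, 12), (-92, 35)]:
    score = link_quality_score(rssi, loss)
    print(rssi, loss, round(score, 2), "accept" if score >= 0.5 else "reprovision")
```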
{"title":"Overcoming Real-World IoT Deployment Challenges with Enhanced Fuzzy Logic Decision Algorithms","authors":"Amir Bannoura, Hamid Chekenbah, Frank Meyer, Suhail Odeh, Rafik Lasri","doi":"10.1007/s10922-024-09851-7","DOIUrl":"https://doi.org/10.1007/s10922-024-09851-7","url":null,"abstract":"<p>The rapid growth of Internet of Things (IoT) technologies led to an increase in the demand of connected devices around the world. Today, these devices are integrated in several applications and solutions. Therefore, providing a smooth and simple approach for their integration is essential to extend their popularity and wide adaptation. However, there is a high level of complexity to be considered especially when it comes to wireless communication and deployment to the field. These devices are deployed part of wireless sensor networks, which increase the complexity of their operation and integration. In this paper, we focus on the integration of IoT devices in wireless networks for smart home applications. We present an overview of the challenges in designing and developing the firmware. As well, we suggest several testing use-cases to verify that the firmware we are deploying is stable and ready to be introduced to the market. Also, we consider some diagnostic metrics to identify the issues that could degrade the functionality of the devices. Finally, we propose an algorithmic approach based on Fuzzy Logic to improve the integration of IoT devices into the wireless sensor networks using intelligent decision-making techniques to mitigate the challenges and deployment limitations.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"10 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}