Pub Date: 2024-08-30 | DOI: 10.1109/TNSM.2024.3452485
Huanzhuo Wu;Jia He;Jiakang Weng;Giang T. Nguyen;Martin Reisslein;Frank H. P. Fitzek
Computing in the Network (COIN) has the potential to reduce the data traffic and thus the end-to-end latencies for data-rich services. Existing COIN studies have neglected the impact of the size of the data unit that the network nodes compute on. However, similar to the impact of the protocol data unit (packet) size in conventional store-and-forward packet-switching networks, the Computing Data Unit (CDU) size is an elementary parameter that strongly influences the COIN dynamics. We model the end-to-end service time as consisting of the network transport delays (for data transmission and link propagation), the loading delays of the data into the computing units, and the computing delays in the network nodes. Using gradient descent, we derive the optimal CDU size that minimizes the end-to-end service time. We evaluate the impact of the CDU sizing on the amount of data transmitted over the network links and on the end-to-end service time for computing the convolutional neural network (CNN) based Yoho and a Deep Neural Network (DNN) based Multi-Layer Perceptron (MLP). We distribute the Yoho and MLP neural modules over up to five network nodes. Our emulation evaluations indicate that COIN strongly reduces the amount of network traffic after the first few computing nodes. Also, the CDU size optimization has a strong impact on the end-to-end service time: CDU sizes that are too small or too large can double the service time. Our emulations validate that our gradient descent minimization correctly identifies the optimal CDU size.
"OptCDU: Optimizing the Computing Data Unit Size for COIN," IEEE Transactions on Network and Service Management, vol. 21, no. 6, pp. 6095-6111.
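The service-time trade-off the abstract describes (many small CDUs pay repeated per-CDU overhead; large CDUs inflate store-and-forward style delays) can be illustrated with a minimal gradient-descent sketch. The cost model and all parameters (`D`, `a`, `b`) below are hypothetical stand-ins, not the paper's actual formulation:

```python
import math

# Toy model of end-to-end service time as a function of the CDU size s
# (hypothetical parameters, not the paper's exact model): sending D bytes as
# D/s CDUs costs a fixed per-CDU overhead `a` (packetization + loading delay),
# while larger CDUs add a per-byte store-and-forward penalty `b * s`.
D, a, b = 1e6, 1e-3, 1e-6

def service_time(s):
    return (D / s) * a + b * s

def grad_wrt_log_s(s):
    # d/du T(e^u) with u = log(s); descending in log-space keeps the step
    # size well scaled across the wide range of candidate CDU sizes.
    return -(D * a) / s + b * s

u = math.log(1000.0)              # start from a 1 kB CDU
for _ in range(1000):
    u -= 0.5 * grad_wrt_log_s(math.exp(u))

s_opt = math.exp(u)
s_closed_form = math.sqrt(D * a / b)  # analytic minimizer of this toy model
print(f"gradient descent: {s_opt:.1f} B, closed form: {s_closed_form:.1f} B")
```

For this convex toy cost, the descent recovers the same minimizer as the closed form, mirroring the paper's finding that deviating far from the optimum in either direction inflates the service time.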
Pub Date: 2024-08-30 | DOI: 10.1109/TNSM.2024.3452150
Daniele Bringhenti;Fulvio Valenza
Sustainability is an increasingly critical design feature for modern computer networks. However, green objectives related to energy savings are affected by the application of approximate cybersecurity management techniques. In particular, their impact is evident in distributed firewall configuration, where traditional manual approaches create redundant architectures, leading to avoidable power consumption. This issue has not been addressed by the approaches proposed in the literature to automate firewall configuration, because their optimization is not focused on network sustainability. Therefore, this paper presents GreenShield as a possible solution that combines security with green-oriented optimization of the firewall configuration. Specifically, GreenShield minimizes both the power consumed by the firewalls activated in the network, while ensuring that the security requested by the network administrator is guaranteed, and the power due to traffic processing, by making firewalls block undesired traffic as near as possible to its sources. The framework implementing GreenShield has been experimentally evaluated to assess the achieved optimization and its scalability.
"GreenShield: Optimizing Firewall Configuration for Sustainable Networks," IEEE Transactions on Network and Service Management, vol. 21, no. 6, pp. 6909-6923. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10660559
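The two power terms the abstract couples (activation power of the firewalls versus processing power wasted by traffic that travels before being dropped) can be sketched as a tiny placement search. This is an illustrative brute-force toy, not GreenShield's actual algorithm; the flows, node names, and power figures are all invented:

```python
from itertools import combinations

# Hypothetical toy instance: each undesired flow lists the candidate nodes
# that can block it, ordered from source to destination. Activating a
# firewall at a node costs its base power; every hop a blocked flow travels
# before being dropped costs additional processing power.
flows = {"f1": ["A", "B", "D"], "f2": ["A", "C", "D"], "f3": ["B", "C", "D"]}
base_power = {"A": 5.0, "B": 3.0, "C": 3.0, "D": 2.0}
hop_power = 2.0

def total_power(active):
    cost = sum(base_power[n] for n in active)
    for path in flows.values():
        hops = next(i for i, n in enumerate(path) if n in active)  # first blocker
        cost += hop_power * hops
    return cost

# Enumerate every firewall subset that blocks all undesired flows (the
# security constraint) and keep the one with minimum total power.
best = min(
    (set(s) for r in range(1, len(base_power) + 1)
     for s in combinations(base_power, r)
     if all(any(n in s for n in path) for path in flows.values())),
    key=total_power,
)
print(best, total_power(best))
```

In this instance, blocking everything at the shared downstream node D activates the cheapest firewall but pays for the extra hops each flow travels, so the optimum instead places firewalls at the sources, matching the near-the-source intuition in the abstract.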
Pub Date: 2024-08-30 | DOI: 10.1109/TNSM.2024.3452326
Angela Sara Cacciapuoti;Jessica Illiano;Michele Viscardi;Marcello Caleffi
Multipartite entanglement distribution is a key functionality of the Quantum Internet. However, quantum entanglement is very fragile, easily degraded by decoherence, which strictly constrains the time horizon within which the distribution has to be completed. This, coupled with the quantum noise irremediably impinging on the channels utilized for entanglement distribution, may imply the need to attempt the distribution process multiple times before the targeted network nodes successfully share the desired entangled state. There is no guarantee that this is accomplished within the time horizon dictated by the coherence times. As a consequence, in noisy scenarios requiring multiple distribution attempts, it may be convenient to stop the distribution process early. In this paper, we take steps toward knowing when to stop the entanglement distribution by developing a theoretical framework able to capture the quantum noise effects. Specifically, we first prove that the entanglement distribution process can be modeled as a Markov decision process. Then, we prove that the optimal decision policy exhibits attractive features, which we exploit to reduce the computational complexity. The developed framework provides quantum network designers with flexible tools to optimally engineer the design parameters of the entanglement distribution process.
"Multipartite Entanglement Distribution in the Quantum Internet: Knowing When to Stop!," IEEE Transactions on Network and Service Management, vol. 21, no. 6, pp. 6041-6058. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10660502
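The "knowing when to stop" question has the shape of a finite-horizon optimal-stopping MDP, which can be sketched with backward value iteration. The model below is a deliberately simplified stand-in for the paper's framework: each attempt costs `c`, succeeds with probability `p`, and the value of a successful distribution decays by `gamma` per elapsed attempt (a crude proxy for decoherence); all numbers are illustrative:

```python
# Minimal finite-horizon optimal-stopping MDP (illustrative parameters, not
# the paper's model): states are elapsed attempts t, actions are
# {continue, stop}; stopping yields 0, continuing costs c and succeeds with
# probability p, paying a decoherence-discounted reward R * gamma**t.
p, c, R, gamma, T = 0.3, 0.1, 1.0, 0.9, 20

V = [0.0] * (T + 1)           # V[t]: optimal value with t attempts elapsed
policy = ["stop"] * (T + 1)
for t in range(T - 1, -1, -1):                 # backward value iteration
    reward_now = R * gamma ** t                # payoff if attempt t succeeds
    cont = -c + p * reward_now + (1 - p) * V[t + 1]
    if cont > 0.0:                             # continuing beats stopping (0)
        V[t], policy[t] = cont, "continue"

stop_at = policy.index("stop")
print(f"continue through attempt {stop_at - 1}, then stop (V[0]={V[0]:.3f})")
```

The resulting policy is a threshold rule (continue while young, stop for good once the decayed expected gain drops below the attempt cost), which is the kind of structural property the paper exploits to reduce computational complexity.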
Satellite Edge Computing (SEC) is seen as a promising solution for deploying network functions in orbit to provide ubiquitous services with low latency and bandwidth. Software Defined Networking (SDN) and Network Function Virtualization (NFV) enable SEC to manage and deploy services more flexibly. In this paper, we study a dynamic and topology-aware VNF mapping and scheduling strategy within an SDN/NFV-enabled SEC infrastructure. Our focus is on meeting the stringent requirements of mission-critical (MC) applications, recognizing their significance in both satellite-to-satellite and edge-to-satellite communications, while ensuring service delay margin fairness across various time-sensitive service requests. We formulate the VNF mapping and scheduling problem as an Integer Nonlinear Programming problem (INLP