A quantum network distributes quantum entanglement between remote nodes and is key to many applications in secure communication, quantum sensing, and distributed quantum computing. This paper explores the fundamental trade-off between the throughput and the quality of entanglement distribution in a multi-hop quantum repeater network. In contrast to existing work that heuristically maximizes the entanglement distribution rate (EDR) and/or the entanglement fidelity, our goal is to characterize the maximum achievable worst-case fidelity while satisfying a bound on the maximum achievable expected EDR between an arbitrary pair of quantum nodes. This characterization provides fundamental bounds on the achievable performance region of a quantum network, which can assist with the design of quantum network topology, protocols, and applications. The task, however, is highly non-trivial and, as we prove, NP-hard. Our main contribution is a fully polynomial-time approximation scheme for approximating the achievable worst-case fidelity subject to a strict expected EDR bound, combining an optimal fidelity-agnostic EDR-maximizing formulation with a worst-case isotropic noise model. The EDR and fidelity guarantees can be implemented by a post-selection-and-storage protocol with quantum memories. Using a discrete-time quantum network simulator that we developed, we conduct simulations to show the characterized performance region (the approximate Pareto frontier) of a network, and demonstrate that the designed protocol achieves this region while existing protocols exhibit a substantial gap.
"FENDI: Toward High-Fidelity Entanglement Distribution in the Quantum Internet," Huayue Gu;Zhouyu Li;Ruozhou Yu;Xiaojian Wang;Fangtong Zhou;Jianqing Liu;Guoliang Xue. DOI: 10.1109/TNET.2024.3450271. IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 5033-5048. Published 2024-09-13.
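As a side note on the worst-case isotropic noise model mentioned in the abstract above: for isotropic (Werner) states, entanglement swapping composes the Werner parameters multiplicatively, so the end-to-end fidelity of a repeater chain has a closed form. The sketch below uses this standard textbook formula; it is an illustration, not the paper's FPTAS.

```python
# Hedged sketch: end-to-end fidelity of a chain of isotropic (Werner)
# links after entanglement swapping. Each link fidelity F maps to a
# Werner parameter w = (4F - 1)/3; swapping multiplies the parameters,
# and the resulting fidelity is F' = (3w + 1)/4. Link values below are
# illustrative, not from the paper.

def swapped_fidelity(link_fidelities):
    """End-to-end fidelity after swapping a chain of isotropic links."""
    w = 1.0
    for f in link_fidelities:
        w *= (4.0 * f - 1.0) / 3.0
    return (3.0 * w + 1.0) / 4.0
```

Note how fidelity decays toward the fully mixed value 0.25 as hops accumulate, which is why a worst-case fidelity guarantee constrains path length.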
Pub Date: 2024-09-12. DOI: 10.1109/TNET.2024.3454478
Viviana Arrigoni;Matteo Prata;Novella Bartolini
Massive failures in communication networks result from natural disasters, heavy blackouts, and military and cyber attacks. After such events, an adequate network recovery plan is key to ensuring emergency-critical service restoration and preventing intolerable downtime and performance degradation. We tackle the problem of minimizing the time and number of interventions needed to restore the communication network sufficiently to support emergency services after large-scale failures. We propose Proton (Progressive RecOvery and Tomography-based mONitoring), an efficient algorithm for the progressive recovery of emergency services. Unlike previous work, which assumes centralized routing and complete network observability, Proton addresses the more realistic scenario in which the network relies on existing routing protocols and knowledge of the network state is partial and uncertain. Proton relies on network tomography for monitoring and acquiring information about the state of nodes and links. Simulation results on real topologies show that our algorithm outperforms previous solutions in terms of cumulative routed flow, repair costs, and recovery time in both static and dynamic failure scenarios.
"Recovering Critical Service After Large-Scale Failures With Bayesian Network Tomography," Viviana Arrigoni;Matteo Prata;Novella Bartolini. DOI: 10.1109/TNET.2024.3454478. IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 5216-5231. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10679612
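To illustrate the Boolean network-tomography idea behind Proton's monitoring: end-to-end probe outcomes over known paths constrain which links may have failed. A path that succeeds certifies every link it traverses; a failed path must contain at least one uncertified link. The topology and path names in this sketch are illustrative assumptions, not the paper's Bayesian machinery.

```python
# Hedged sketch of Boolean tomography deduction: successful probes
# certify all links on their path as working; the remaining links on
# failed paths become suspects. This is the simplest consistent-state
# inference, not Proton's full Bayesian algorithm.

def infer_link_states(paths, outcomes):
    """paths: {name: set of links on the path};
    outcomes: {name: True if the end-to-end probe succeeded}.
    Returns (working_links, suspect_links)."""
    working = set()
    for name, ok in outcomes.items():
        if ok:
            working |= paths[name]  # every link on a good path works
    suspects = set()
    for name, ok in outcomes.items():
        if not ok:
            suspects |= paths[name] - working  # failure hides here
    return working, suspects
```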
Pub Date: 2024-09-12. DOI: 10.1109/TNET.2024.3445274
Julia Khamis;Arad Kotzer;Ori Rottenstreich
Payment channel networks (PCNs), also known as off-chain networks, implement a common approach to the scalability problem of blockchain networks. They enable users to execute payments without committing them to the blockchain, relying instead on predefined payment channels. A pair of users can execute a payment even without a direct channel between them by routing it via payment channels involving other intermediate users. The users, together with the channels, form a graph known as the off-chain network topology. The off-chain topology and the payment characteristics affect network performance, such as the average number of intermediate users a payment is routed through and the transaction fees incurred. In this paper, we study two basic problems in payment channel network design: first, efficiently mapping users to an off-chain topology with a known structure; second, constructing a topology with a bounded number of channels that can serve users and their associated payments well. We design algorithms for both problems while considering several fundamental topologies. We study topology-related statistics from real data of Raiden, the off-chain extension for Ethereum, as well as of Lightning, the equivalent off-chain layer of Bitcoin. We conduct experiments to demonstrate the effectiveness of the algorithms for these networks.
"Topologies for Blockchain Payment Channel Networks: Models and Constructions," Julia Khamis;Arad Kotzer;Ori Rottenstreich. DOI: 10.1109/TNET.2024.3445274. IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 4781-4797.
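One of the performance metrics the abstract names, the average number of intermediate users a payment is routed through, can be computed with plain breadth-first search over the channel graph. The sketch below is an illustration of that metric only; the channel endpoints are made up, and no fee model or capacity constraint from the paper is included.

```python
# Hedged sketch: minimum hop count between two users in a payment
# channel graph (undirected). Intermediate users on the route equal
# hops - 1. This ignores channel capacities and fees.
from collections import deque

def min_hops(channels, src, dst):
    """channels: iterable of (u, v) undirected payment channels."""
    adj = {}
    for u, v in channels:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None  # no off-chain route exists
```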
Pub Date: 2024-09-11. DOI: 10.1109/TNET.2024.3453056
Qingyong Deng;Qinghua Zuo;Zhetao Li;Haolin Liu;Yong Xie
Mobile Crowdsourcing (MCS) has become a novel paradigm for enabling data collection through worker recruitment, in which reputation plays a crucial role in obtaining high-quality data. Although identity, data, and bid privacy preservation have been thoroughly investigated with the advance of blockchain technology, existing literature barely addresses reputation privacy, which prevents malicious workers from submitting false data that could affect truth discovery for the data requester. Therefore, we propose a Blockchain-Based Reputation Privacy Preserving for Quality-Aware Worker Recruitment Scheme (BRPP-QWR). First, we design a lightweight privacy preserving scheme for the whole life cycle of the worker’s reputation, which adopts a sub-address retrieval technique combined with Pedersen Commitment and the Compact Linkable Spontaneous Anonymous Group (CLSAG) signature to enable fast and anonymous verification of the reputation update process. Subsequently, to tackle the unknown worker recruitment problem, we propose a Reputation, Selfishness, and Quality-based Multi-Armed Bandit (RSQ-MAB) learning algorithm to select reliable and high-quality workers. Lastly, we implement a prototype system on Hyperledger Fabric to evaluate the performance of the reputation management scheme. The results indicate that the execution latency for reputation score verification and the retrieval latency can be reduced by an average of 6.30%–56.90% compared with ARMS-MCS. In addition, experimental results on both real and synthetic datasets show that the proposed RSQ-MAB algorithm increases the data requester’s total revenue by at least 20.05% and decreases regret and Multi-round Average Error (MAE) by at least 48.55% and 3.18%, respectively, compared with other benchmark methods.
"Blockchain-Based Reputation Privacy Preserving for Quality-Aware Worker Recruitment Scheme in MCS," Qingyong Deng;Qinghua Zuo;Zhetao Li;Haolin Liu;Yong Xie. DOI: 10.1109/TNET.2024.3453056. IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 5188-5203.
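The unknown-worker recruitment problem the abstract mentions is a classic multi-armed bandit setting. The sketch below uses a generic UCB1 index over observed data quality as a stand-in; the actual RSQ-MAB index combining reputation, selfishness, and quality is not given in the abstract, so everything here is an illustrative assumption.

```python
# Hedged sketch: UCB1-style worker selection. Each worker is an arm;
# the score balances the empirical quality mean against an exploration
# bonus that shrinks as the worker is recruited more often. This is a
# textbook bandit index, not the paper's RSQ-MAB rule.
import math

def ucb_select(counts, means, t):
    """counts/means: per-worker recruitment counts and empirical
    quality means; t: current round (t >= 1)."""
    best, best_score = None, float("-inf")
    for w in counts:
        if counts[w] == 0:
            return w  # recruit every worker once before scoring
        score = means[w] + math.sqrt(2.0 * math.log(t) / counts[w])
        if score > best_score:
            best, best_score = w, score
    return best
```

An under-explored worker can win despite a lower quality mean, which is exactly the exploration behavior needed when worker quality is initially unknown.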
Pub Date: 2024-09-10. DOI: 10.1109/TNET.2024.3452780
Zhengyu Liao;Shiyou Qian;Zhonglong Zheng;Jiange Zhang;Jian Cao;Guangtao Xue;Minglu Li
Packet classification, as a crucial function of networks, has been extensively investigated. In recent years, the rapid advancement of software-defined networking (SDN) has introduced new demands for packet classification, particularly support for dynamic rule updates and fast lookup. This paper presents a novel structure called DBTable for efficient packet classification with high overall performance. DBTable integrates the strengths of conventional packet classification methods with neural network concepts. Within DBTable, a straightforward indexing scheme eliminates rule replication, thereby ensuring high update performance. Additionally, we propose an iterative method for generating a discriminative bitset (DBS) that partitions rules evenly. By keying on the DBS, rules can be efficiently mapped into a hash table, achieving exceptional lookup performance. Moreover, DBTable incorporates a hybrid structure to further optimize the worst-case lookup performance, which is primarily degraded by data skewness. Experimental results on twelve 256k rulesets show that, compared to seven state-of-the-art schemes, DBTable improves overall lookup speed by 1.53x to 7.29x while maintaining the fastest update speed.
"DBTable: Leveraging Discriminative Bitsets for High-Performance Packet Classification," Zhengyu Liao;Shiyou Qian;Zhonglong Zheng;Jiange Zhang;Jian Cao;Guangtao Xue;Minglu Li. DOI: 10.1109/TNET.2024.3452780. IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 5232-5246.
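The core mapping step the abstract describes, keying a hash table on the value of a few discriminative bit positions of a packet header, can be sketched in a few lines. The bit positions and rule patterns below are invented for illustration; the paper's iterative DBS selection and hybrid fallback structure are not shown.

```python
# Hedged sketch: project the discriminative bit positions of an integer
# header into a compact key, and use that key to index a hash table of
# rule IDs. Assumes exact-match patterns for simplicity.

def project_bits(value, positions):
    """Extract the given bit positions of an integer into a packed key."""
    key = 0
    for i, p in enumerate(positions):
        key |= ((value >> p) & 1) << i
    return key

def build_table(rules, positions):
    """rules: iterable of (rule_id, header_pattern)."""
    table = {}
    for rule_id, pattern in rules:
        table.setdefault(project_bits(pattern, positions), []).append(rule_id)
    return table

def lookup(table, positions, header):
    """Candidate rules for a header: one hash probe, no replication."""
    return table.get(project_bits(header, positions), [])
```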
Pub Date: 2024-09-09. DOI: 10.1109/TNET.2024.3453067
Soohyun Park;Hankyul Baek;Joongheon Kim
Given the limited availability of quantum bits (qubits) in the era of noisy intermediate-scale quantum (NISQ) computing, immediately replacing all components of existing network architectures with quantum computing devices may not be practical. As a result, implementing a hybrid quantum-classical system is regarded as one of the most effective strategies. In hybrid quantum-classical systems, quantum computing devices can be used for computation-intensive applications, such as massive scheduling in dynamic environments. Furthermore, one of the most popular network applications is advanced social media services such as the metaverse. Accordingly, this paper proposes an advanced multi-metaverse dynamic streaming algorithm for hybrid quantum-classical systems. The proposed algorithm consists of three stages. In the first stage, three-dimensional (3D) point cloud data are gathered by spatially scheduled observing devices in physical spaces to construct multiple virtual meta-spaces on the metaverse server; for massive scheduling in dynamic situations, quantum multi-agent reinforcement learning-based scheduling reduces the scheduling dimension to a logarithmic scale. In the second stage, a low-delay processor scheduler for the metaverse server performs region-popularity-aware allocation of rendering contents across the multiple virtual meta-spaces via modified bin-packing with hard real-time constraints. Lastly, a novel dynamic streaming algorithm based on Lyapunov optimization theory delivers high-quality, differentiated, and stabilized meta-space rendering contents to individual users. Our performance evaluation results verify that the proposed spatio-temporal algorithm outperforms benchmarks in various aspects over hybrid quantum-classical systems.
"Spatio-Temporal Multi-Metaverse Dynamic Streaming for Hybrid Quantum-Classical Systems," Soohyun Park;Hankyul Baek;Joongheon Kim. DOI: 10.1109/TNET.2024.3453067. IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 5279-5294.
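The second-stage allocation the abstract casts as modified bin-packing can be illustrated with the classic first-fit-decreasing heuristic; the capacities, loads, and the handling of hard real-time constraints below are illustrative assumptions, not the paper's scheduler.

```python
# Hedged sketch: first-fit-decreasing bin-packing as a stand-in for the
# rendering-content allocation stage. Loads are rendering demands and
# each bin is a processor with equal capacity.

def first_fit_decreasing(loads, capacity):
    """Assign loads to as few equal-capacity processors as first-fit
    allows, returning (processor_count, [(load, processor_index)])."""
    bins = []  # remaining capacity per processor
    assignment = []
    for load in sorted(loads, reverse=True):
        for i, free in enumerate(bins):
            if load <= free:
                bins[i] -= load
                assignment.append((load, i))
                break
        else:
            bins.append(capacity - load)  # open a new processor
            assignment.append((load, len(bins) - 1))
    return len(bins), assignment
```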
Pub Date: 2024-09-09. DOI: 10.1109/TNET.2024.3436712
Haitham H. Esmat;Xiaohao Xia;Yinxuan Wu;Beatriz Lorenzo;Linke Guo
Heterogeneous Internet of Things (IoT) networks, which operate using various protocols and spectrum bands such as WiFi, Bluetooth, Zigbee, and LoRa, bring many opportunities to collaborate and achieve timely data collection. However, several challenges must be addressed due to heterogeneous data patterns, coverage, spectrum bands, and mobility. This paper introduces a cross-technology IoT network architecture design that facilitates collaboration between service providers (SPs) to share their spectrum bands and offload computing tasks from heterogeneous IoT devices using multi-protocol mobile gateways (M-MGs). The objective is to minimize the age of information (AoI) and energy consumption by jointly optimizing collaboration between M-MGs and SPs for bandwidth allocation, relaying, and cross-technology data scheduling. A pricing mechanism is presented to incentivize different levels of collaboration and matching between M-MGs and SPs. Given the uncertainty due to mobility and task requests, we design a cross-technology federated matching algorithm (CT-Fed-Match) based on a multi-agent actor-critic approach in which M-MGs and SPs learn their strategies in a distributed manner. Furthermore, we incorporate federated learning to enhance the convergence of the learning process. The numerical results demonstrate that our CT-Fed-Match-RC algorithm, with cross-technology and relaying collaboration, reduces the AoI by a factor of 30 and collects 8 times more packets than existing approaches.
"Cross-Technology Federated Matching for Age of Information Minimization in Heterogeneous IoT," Haitham H. Esmat;Xiaohao Xia;Yinxuan Wu;Beatriz Lorenzo;Linke Guo. DOI: 10.1109/TNET.2024.3436712. IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 4901-4916.
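For readers unfamiliar with the age-of-information metric this abstract minimizes: in the usual discrete-time model, age grows by one each slot and resets when a fresher update is delivered. The sketch below computes the time-average AoI for a given delivery schedule; the slot model and the initial age of zero are simplifying assumptions, not the paper's system model.

```python
# Hedged sketch: time-average age of information over a horizon of
# slots. deliveries maps a slot to the generation slot of the update
# delivered then; age resets to the delivered sample's staleness.

def average_aoi(deliveries, horizon):
    """deliveries: {delivery_slot: generation_slot}. Age starts at 0."""
    age, total = 0, 0
    for t in range(horizon):
        if t in deliveries:
            age = t - deliveries[t]  # reset to the fresh sample's age
        else:
            age += 1  # information grows stale by one slot
        total += age
    return total / horizon
```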
Pub Date: 2024-09-05. DOI: 10.1109/TNET.2024.3451231
Kunpeng Ding;Jiayu Yang;Kaiping Xue;Jiangping Han;Jian Li;Qibin Sun;Jun Lu
Named Data Networking (NDN) stands out as a promising Information-Centric Networking architecture capable of facilitating large-scale content distribution through in-network caching and location-independent data access. However, attackers can easily inject poisoned content into the network (content poisoning attacks), which substantially degrades user experience and transmission efficiency. In existing schemes, routers cannot determine the contamination source of received poisoned content and thus cannot accurately identify attacker nodes. Besides, attackers’ dynamic behaviors and network instability can disrupt identification results. In this paper, we propose a Secure and Lightweight scheme against content poisoning attacks based on Probing (SLP), in which a proactive and reliable probing protocol is designed to identify adversaries quickly and precisely. In SLP, a router sends specifically chosen interest packets to probe a suspicious node, so that the corresponding returned content directly reflects that node’s trustworthiness without interference from other nodes. In addition, a hypothesis testing algorithm is developed to analyze the returned content, excluding the impact of transmission errors and adapting to dynamic attackers. Moreover, we utilize users’ feedback to avoid unnecessary probing costs on unaffected routers, with its reliability guaranteed by an efficient cuckoo-filter-based feedback validation mechanism. Security analysis shows that SLP resists content poisoning attacks and malicious feedback. The experimental results demonstrate that SLP leaves users barely affected by attacks and introduces only slight overhead.
"SLP: A Secure and Lightweight Scheme Against Content Poisoning Attacks in Named Data Networking Based on Probing," Kunpeng Ding;Jiayu Yang;Kaiping Xue;Jiangping Han;Jian Li;Qibin Sun;Jun Lu. DOI: 10.1109/TNET.2024.3451231. IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 5128-5143.
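The hypothesis-testing step the abstract describes, separating genuine poisoning from transmission errors, can be illustrated with a one-sided binomial test: flag a probed node only when the observed fraction of poisoned replies is statistically incompatible with the benign error rate. The error rate p0 and significance level alpha below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: after n probes of a suspicious node return k poisoned
# contents, flag the node as malicious when the one-sided binomial tail
# P[X >= k] under the benign hypothesis (errors only, rate p0) falls
# below the significance level alpha.
import math

def binom_tail(n, k, p):
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

def flag_malicious(n, k, p0=0.01, alpha=0.01):
    return binom_tail(n, k, p0) < alpha
```

This is why such a test can "exclude the impact of transmission errors": a handful of corrupted replies out of many probes stays above alpha, while a systematically poisoning node does not.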
Pub Date: 2024-09-05, DOI: 10.1109/TNET.2024.3452006
Vishrant Tripathi;Eytan Modiano
We consider a setting where multiple active sources send real-time updates over a single-hop wireless broadcast network to a monitoring station. Our goal is to design a scheduling policy that minimizes the time-average of general non-decreasing cost functions of Age of Information. We use a Whittle-index-based approach to find low-complexity scheduling policies with good performance. We prove that for a system with two sources, having possibly different cost functions and reliable channels, the Whittle index policy is exactly optimal. We derive structural properties of an optimal policy that suggest the performance of the Whittle index policy may be close to optimal in general. These results might also be of independent interest in the study of restless multi-armed bandit problems with similar underlying structure. We further establish that minimizing monitoring error for linear time-invariant systems and symmetric Markov chains is equivalent to minimizing appropriately chosen monotone functions of Age of Information. Finally, we provide simulations comparing the Whittle index policy with optimal scheduling policies found using dynamic programming, which support our results.
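The scheduling setup above can be sketched as a discrete-time simulation. The `myopic_index` policy below is a hypothetical greedy stand-in for the paper's Whittle index (the actual index is derived in the paper), and the model assumes reliable channels, so a scheduled source's age simply resets to 1.

```python
def simulate(horizon, cost_fns, pick):
    """Single-hop broadcast: one source transmits per slot; its age resets
    to 1 while every other source's age grows by 1. Returns the
    time-average of the summed per-source age costs."""
    ages = [1] * len(cost_fns)
    total = 0.0
    for _ in range(horizon):
        i = pick(ages)
        ages = [1 if j == i else a + 1 for j, a in enumerate(ages)]
        total += sum(f(a) for f, a in zip(cost_fns, ages))
    return total / horizon

def myopic_index(cost_fns):
    """Hypothetical myopic stand-in for the paper's Whittle index: serve
    the source whose fresh update undoes the largest one-slot cost growth."""
    def pick(ages):
        gains = [f(a + 1) - f(1) for f, a in zip(cost_fns, ages)]
        return max(range(len(ages)), key=gains.__getitem__)
    return pick

def round_robin(n):
    """Baseline that cycles through the n sources in fixed order."""
    state = {"t": -1}
    def pick(ages):
        state["t"] += 1
        return state["t"] % n
    return pick
```

For two identical linear-cost sources both policies settle into the alternating schedule with time-average cost 3.0; for asymmetric or convex costs their averages diverge, which is where the paper's Whittle index analysis matters.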
{"title":"A Whittle Index Approach to Minimizing Functions of Age of Information","authors":"Vishrant Tripathi;Eytan Modiano","doi":"10.1109/TNET.2024.3452006","DOIUrl":"10.1109/TNET.2024.3452006","url":null,"abstract":"We consider a setting where multiple active sources send real-time updates over a single-hop wireless broadcast network to a monitoring station. Our goal is to design a scheduling policy that minimizes the time-average of general non-decreasing cost functions of Age of Information. We use a Whittle index based approach to find low complexity scheduling policies that have good performance. We prove that for a system with two sources, having possibly different cost functions and reliable channels, the Whittle index policy is exactly optimal. We derive structural properties of an optimal policy, that suggest that the performance of the Whittle index policy may be close to optimal in general. These results might also be of independent interest in the study of restless multi-armed bandit problems with similar underlying structure. We further establish that minimizing monitoring error for linear time-invariant systems and symmetric Markov chains is equivalent to minimizing appropriately chosen monotone functions of Age of Information. 
Finally, we provide simulations comparing the Whittle index policy with optimal scheduling policies found using dynamic programming, which support our results.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 6","pages":"5144-5158"},"PeriodicalIF":3.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-04, DOI: 10.1109/TNET.2024.3450098
Jianzhi Shi;Bo Yi;Xingwei Wang;Min Huang;Yang Song;Qiang He;Chao Zeng;Keqin Li
The current global economy is undergoing a transformative phase, emphasizing collaboration among multiple competing entities rather than monopolization. Economic globalization is accelerating the adoption of globalized cloud services, and in line with this trend, Cloud 2.0 introduces the concept of “cloud cooperation”. JointCloud, as a novel computing model for Cloud 2.0, advocates for the establishment of an evolving cloud ecosystem. However, a critical challenge arises from the lack of direct incentives for a cloud to join the JointCloud ecosystem, casting doubt on the rationale for the ecosystem's existence. To address this ambiguity, we draw inspiration from supply chain competition and formulate the market dynamics of resources within the JointCloud ecosystem. Our focus is particularly on the analysis of data resource trade within the JointCloud market. To comprehensively analyze the JointCloud market, we propose a market game that examines the competition among clouds within the ecosystem. We theoretically prove that a Nash Equilibrium always exists in the JointCloud market. Subsequently, we conduct an in-depth analysis of the profits of cloud resource manufacturers and cloud resource retailers as the number of clouds varies within the JointCloud ecosystem. Based on our analysis, we further explore the incentives for a cloud to participate in the JointCloud ecosystem. We then evaluate the performance of the proposed market game through extensive experiments, illustrating how process variables and profits change with the market size. The experiments demonstrate that the trends of various variables align with our analysis obtained from the market game. Compared with the Cournot model, our proposed model captures the market power of both manufacturers and retailers, resulting in a model that closely mirrors real market dynamics.
Our findings provide valuable insights into the cloud market within Cloud 2.0, offering guidance for stakeholders navigating the evolving landscape of cloud cooperation and competition.
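As a point of reference for the comparison with the Cournot model mentioned above, the classic symmetric n-firm Cournot game can be sketched as follows. This is a minimal illustration under assumed linear inverse demand P(Q) = a - bQ and a common marginal cost c, not the paper's manufacturer-retailer game.

```python
def best_response(a, b, c, rivals_total):
    """A firm's profit-maximizing output given its rivals' total output."""
    return max(0.0, (a - c - b * rivals_total) / (2.0 * b))

def cournot_equilibrium(a, b, c, n, iters=500, step=0.5):
    """Damped simultaneous best-response dynamics; for linear demand this
    converges to the symmetric Nash equilibrium q* = (a - c) / (b * (n + 1))."""
    q = [0.0] * n
    for _ in range(iters):
        br = [best_response(a, b, c, sum(q) - q[i]) for i in range(n)]
        q = [qi + step * (bi - qi) for qi, bi in zip(q, br)]
    return q

def firm_profit(a, b, c, q_i, rivals_total):
    """Profit at market price P = a - b * Q with marginal cost c."""
    return (a - b * (q_i + rivals_total) - c) * q_i
```

At the symmetric equilibrium each firm earns (a - c)^2 / (b (n + 1)^2), which shrinks as the number of firms n grows; the abstract's analysis of how cloud profits change with the number of clouds plays the analogous role in the richer JointCloud game.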
{"title":"JointCloud Resource Market Competition: A Game-Theoretic Approach","authors":"Jianzhi Shi;Bo Yi;Xingwei Wang;Min Huang;Yang Song;Qiang He;Chao Zeng;Keqin Li","doi":"10.1109/TNET.2024.3450098","DOIUrl":"10.1109/TNET.2024.3450098","url":null,"abstract":"The current global economy is undergoing a transformative phase, emphasizing collaboration among multiple competing entities rather than monopolization. Economic globalization is accelerating the adoption of globalized cloud services, and in line with this trend, cloud 2.0 introduces the concept of “cloud cooperation”. JointCloud, as a novel computing model for Cloud 2.0, advocates for the establishment of an evolving cloud ecosystem. However, a critical challenge arises due to the lack of direct incentives for a cloud to join the JointCloud ecosystem, leading to uncertainty regarding the rationale for the existence of the JointCloud ecosystem. To address this ambiguity, we draw inspiration from supply chain competition and formulate the market dynamics of resources within the JointCloud ecosystem. Our focus is particularly on the analysis of data resource trade within the JointCloud market. To comprehensively analyze the JointCloud market, we propose a market game that examines the competition among clouds within the ecosystem. We theoretically prove that a Nash Equilibrium always exists under the JointCloud market. Subsequently, we conduct an in-depth analysis of the profits of cloud resource manufacturers and cloud resource retailers as the number of clouds varies within the JointCloud ecosystem. Based on our analysis, we further explore the incentives for a cloud to participate in the JointCloud ecosystem. We then evaluate the performance of the proposed market game through extensive experiments, illustrating how process variables and profits change with the market size. The experiments demonstrate that the trends of various variables are aligned with our analysis obtained from the market game. 
Compared with the Cournot model, our proposed model captures the market power of both manufacturers and retailers, resulting in a model that closely mirrors real market dynamics. Our findings provide valuable insights into the cloud market within Cloud 2.0, offering guidance for stakeholders navigating the evolving landscape of cloud cooperation and competition.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 6","pages":"5112-5127"},"PeriodicalIF":3.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}