Pub Date: 2024-11-01 DOI: 10.1109/TMC.2024.3489722
Hao Wang;Huijuan Zheng;Guangjie Han;Dong Tang
Source location privacy (SLP) has become a major research interest in the network security of Underwater Acoustic Sensor Networks (UASNs), yet existing schemes are mostly designed for a single given scenario. Introducing SLP technologies inevitably increases the energy consumption of nodes, and because available studies deploy these technologies network-wide, they incur massive energy wastage. Therefore, an adaptive scheme for protecting source location privacy (APSLP) in UASNs is proposed. The APSLP scheme first analyzes the possible locations of the adversary with a trust method. Then, considering the lagging nature of the trust method (the adversary may no longer be at the locations it reports), a hidden Markov-based backtracking method is proposed, and location privacy methods are applied according to the backtracking result. Simulations show that although the security level of the APSLP scheme is not the highest, its efficiency is, with improvements of approximately 69.1% and 10.3% over the two comparison algorithms, respectively.
{"title":"An Adaptive Scheme for Protecting Source Location Privacy in Underwater Acoustic Sensor Networks","authors":"Hao Wang;Huijuan Zheng;Guangjie Han;Dong Tang","doi":"10.1109/TMC.2024.3489722","DOIUrl":"https://doi.org/10.1109/TMC.2024.3489722","url":null,"abstract":"Currently, the source location privacy (SLP) becomes a hot research interest in network security of Underwater Acoustic Sensor Networks (UASNs), and existing schemes are mostly proposed for a given scenario. Introducing source location privacy technologies inevitably increase the energy consumption of nodes, while they are widely deployed in available studies, resulting in massive energy wastage. Therefore, an adaptive scheme for protecting source location privacy (APSLP) in UASNs is proposed. The APSLP scheme first analyzes the possible locations of the adversary by trust method. Then, considering the lagging nature of the trust method, which means that the adversary may not stay in locations given by trust method, a hidden Markov-based backtracking method is proposed and location privacy methods are functioned according to the backtracking result. 
The simulation shows that even though the security level of the APSLP scheme is not the largest, the efficiency is the highest, approximately an increase of 69.1% and 10.3% compared with two comparison algorithms, respectively.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2193-2202"},"PeriodicalIF":7.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
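The hidden Markov-based backtracking in the APSLP abstract is, at its core, most-likely-state-sequence inference. The following is a minimal Viterbi-decoding sketch of that idea: hidden states are candidate adversary zones, observations are trust reports, and all probabilities are illustrative values, not parameters from the paper.

```python
# Illustrative Viterbi decoding for backtracking a hidden adversary location.
# States, transition, and emission probabilities are hypothetical numbers.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for observations `obs`."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(V[-1], key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("near_source", "mid_route", "near_sink")
start_p = {"near_source": 0.2, "mid_route": 0.3, "near_sink": 0.5}
trans_p = {
    "near_source": {"near_source": 0.6, "mid_route": 0.3, "near_sink": 0.1},
    "mid_route":   {"near_source": 0.3, "mid_route": 0.4, "near_sink": 0.3},
    "near_sink":   {"near_source": 0.1, "mid_route": 0.3, "near_sink": 0.6},
}
emit_p = {  # probability of a trust-report observation given the true zone
    "near_source": {"low_trust": 0.7, "high_trust": 0.3},
    "mid_route":   {"low_trust": 0.5, "high_trust": 0.5},
    "near_sink":   {"low_trust": 0.2, "high_trust": 0.8},
}
track = viterbi(["high_trust", "high_trust", "low_trust"],
                states, start_p, trans_p, emit_p)
```

The decoded `track` gives a zone estimate per time step, which is the kind of backtracking output a scheme could use to decide where to activate privacy countermeasures.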
Pub Date: 2024-11-01 DOI: 10.1109/TMC.2024.3490545
Shengchao Zhu;Guangjie Han;Chuan Lin;Yu Zhang
With the rapid development of underwater materials and underwater robot technology, human exploitation of marine resources has advanced considerably, giving rise to various application scenarios for Autonomous Underwater Vehicle (AUV) cluster networks, such as cooperative data collection and target tracking. In this paper, we explore how to utilize networking and swarm intelligence to improve the AUV cluster network’s target tracking performance in a time-saving manner. Specifically, building on our previous work, we introduce an underwater interrupted mechanism and propose an Interrupted Software-Defined Multi-AUV Reinforcement Learning (ISD-MARL) architecture. For the MARL algorithm in ISD-MARL, we propose a time-saving algorithm, S-MADDPG, which integrates our proposed action optimization model and action network loss function to accelerate convergence. Furthermore, to further improve the AUV cluster network’s path planning performance during target tracking, we propose an Interrupted Tracking Path Planning Scheme (ITPPS) for the AUV cluster network based on ISD-MARL and S-MADDPG. The evaluation results show that our scheme can effectively plan the underwater target tracking path for the AUV cluster network in a shorter time and outperforms various mainstream strategies in convergence speed and training time.
{"title":"Underwater Target Tracking Based on Interrupted Software-Defined Multi-AUV Reinforcement Learning: A Multi-AUV Time-Saving MARL Approach","authors":"Shengchao Zhu;Guangjie Han;Chuan Lin;Yu Zhang","doi":"10.1109/TMC.2024.3490545","DOIUrl":"https://doi.org/10.1109/TMC.2024.3490545","url":null,"abstract":"With the rapid development of underwater materials technology and underwater robot technology, human exploitation of marine resources has been increasingly advanced, which has given rise to various application scenarios for Autonomous Underwater Vehicle (AUV) cluster networks, such as cooperative data collection and target tracking. In this paper, we aim to explore how to utilize networking and swarm intelligence to improve the AUV cluster network’s target tracking performance in a time-saving manner. Specifically, on account of our previous work, we introduce an underwater interrupted mechanism and propose an Interrupted Software-Defined Multi-AUV Reinforcement Learning (ISD-MARL) architecture. For MARL algorithm in ISD-MARL, we propose a time-saving MARL algorithm, S-MADDPG, integrating our proposed action optimization model and action network loss function, to accelerate the convergence of the MARL algorithm. Furthermore, to further improve the AUV cluster network’s path planning performance during the target tracking, we propose an Interrupted Tracking Path Planning Scheme (ITPPS) for the AUV cluster network based on the proposed ISD-MARL and S-MADDPG. 
The evaluation results showcase that our proposed scheme can effectively plan the underwater target tracking path for the AUV cluster network in a shorter time and outperform various mainstream strategies in terms of convergence speed and training time, etc.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2124-2136"},"PeriodicalIF":7.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
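S-MADDPG belongs to the MADDPG family, whose defining feature is a centralized critic that scores the joint observations and actions of all agents. The toy sketch below shows only that target computation (a linear "critic" standing in for a neural network, with a standard TD(0) target); every number is illustrative and none of it is the paper's actual model.

```python
# Toy sketch of a MADDPG-style centralized-critic target. A linear function
# stands in for the critic network; weights and inputs are made-up values.

def td_target(reward, next_q, done, gamma=0.95):
    """Standard TD(0) target: r + gamma * Q'(s', a') for non-terminal steps."""
    return reward + gamma * next_q * (0.0 if done else 1.0)

def joint_critic(obs_all, act_all, w):
    """A linear 'critic' over the concatenated joint observation-action input."""
    x = obs_all + act_all
    return sum(wi * xi for wi, xi in zip(w, x))

w = [0.5, -0.2, 0.1, 0.3]                     # toy critic weights
next_q = joint_critic([1.0, 0.5], [0.2, -0.1], w)
y = td_target(reward=1.0, next_q=next_q, done=False)
```

In a full implementation each agent's critic would be trained to regress onto `y` while its actor is updated through the critic; the point here is only the shape of the centralized target.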
With the rapid development of sixth-generation (6G) communication technology, global communication networks are moving towards the goal of comprehensive and seamless coverage. In particular, low earth orbit (LEO) satellites have become a critical component of satellite communication networks. The emergence of LEO satellites has brought about new computational resources known as the LEO satellite edge, enabling ground users (GUs) to offload computing tasks to the resource-rich LEO satellite edge. However, existing LEO satellite computational offloading solutions primarily focus on optimizing system performance, neglecting the potential issue of malicious satellite attacks during task offloading. In this paper, we propose deploying the LEO satellite edge in an integrated satellite-terrestrial network (ISTN) structure to support security-sensitive computing task offloading. We model the task allocation and offloading order problem as a joint optimization problem that minimizes task offloading delay, energy consumption, and the number of attacks while satisfying reliability constraints. To achieve this objective, we model the task offloading process as a Markov decision process (MDP) and propose a security-sensitive task offloading strategy optimization algorithm based on proximal policy optimization (PPO). Experimental results demonstrate that our algorithm significantly outperforms the benchmark methods.
{"title":"Security-Sensitive Task Offloading in Integrated Satellite-Terrestrial Networks","authors":"Wenjun Lan;Kongyang Chen;Jiannong Cao;Yikai Li;Ning Li;Qi Chen;Yuvraj Sahni","doi":"10.1109/TMC.2024.3489619","DOIUrl":"https://doi.org/10.1109/TMC.2024.3489619","url":null,"abstract":"With the rapid development of sixth-generation (6G) communication technology, global communication networks are moving towards the goal of comprehensive and seamless coverage. In particular, low earth orbit (LEO) satellites have become a critical component of satellite communication networks. The emergence of LEO satellites has brought about new computational resources known as the <italic>LEO satellite edge</i>, enabling ground users (GU) to offload computing tasks to the resource-rich LEO satellite edge. However, existing LEO satellite computational offloading solutions primarily focus on optimizing system performance, neglecting the potential issue of malicious satellite attacks during task offloading. In this paper, we propose the deployment of LEO satellite edge in an integrated satellite-terrestrial networks (ISTN) structure to support <italic>security-sensitive computing task offloading</i>. We model the task allocation and offloading order problem as a joint optimization problem to minimize task offloading delay, energy consumption, and the number of attacks while satisfying reliability constraints. To achieve this objective, we model the task offloading process as a Markov decision process (MDP) and propose a security-sensitive task offloading strategy optimization algorithm based on proximal policy optimization (PPO). 
Experimental results demonstrate that our algorithm significantly outperforms other benchmark methods in terms of performance.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2220-2233"},"PeriodicalIF":7.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
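The PPO optimizer named in the abstract is built around a clipped surrogate objective. Here is a dependency-free sketch of that core computation with toy batch values (the ratios and advantages are illustrative, not the paper's offloading setup).

```python
# Minimal sketch of PPO's clipped surrogate loss (to be minimized).
# ratio = pi_new(a|s) / pi_old(a|s); advantages e.g. from GAE.

def ppo_clip_loss(ratios, advantages, eps=0.2):
    """Average clipped surrogate loss over a batch."""
    total = 0.0
    for r, a in zip(ratios, advantages):
        unclipped = r * a
        clipped = max(min(r, 1 + eps), 1 - eps) * a
        total += -min(unclipped, clipped)   # pessimistic bound, negated
    return total / len(ratios)

loss = ppo_clip_loss([1.5, 0.5, 1.0], [1.0, -1.0, 2.0])
```

The clipping keeps each policy update close to the old policy, which is what makes PPO a stable choice for sequential decisions such as offloading order.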
Split Federated Learning (SFL) improves the scalability of Split Learning (SL) by enabling parallel computing of the learning tasks on multiple clients. However, state-of-the-art SFL schemes neglect the effects of heterogeneity in the clients’ computation and communication performance, as well as the computation time of the tasks offloaded to the cloud server. In this paper, we propose a fine-grained parallelization framework, called PipeSFL, to accelerate SFL on heterogeneous clients. PipeSFL is based on two novel ideas. First, we design a server-side priority scheduling mechanism to minimize per-iteration time. Second, we propose a hybrid training mode to reduce per-round time, which employs asynchronous training within rounds and synchronous training between rounds. We theoretically prove the optimality of the proposed priority scheduling mechanism within one round and analyze the total time per round for PipeSFL, SFL, and SL. We implement PipeSFL on PyTorch. Extensive experiments on seven 64-client clusters with different degrees of heterogeneity demonstrate that, in terms of training speed, PipeSFL achieves up to 1.65x and 1.93x speedup compared to EPSL and SFL, respectively. In terms of energy consumption, PipeSFL saves up to 30.8% and 43.4% of the energy consumed within each training round compared to EPSL and SFL, respectively.
{"title":"PipeSFL: A Fine-Grained Parallelization Framework for Split Federated Learning on Heterogeneous Clients","authors":"Yunqi Gao;Bing Hu;Mahdi Boloursaz Mashhadi;Wei Wang;Mehdi Bennis","doi":"10.1109/TMC.2024.3489642","DOIUrl":"https://doi.org/10.1109/TMC.2024.3489642","url":null,"abstract":"Split Federated Learning (SFL) improves scalability of Split Learning (SL) by enabling parallel computing of the learning tasks on multiple clients. However, state-of-the-art SFL schemes neglect the effects of heterogeneity in the clients’ computation and communication performance as well as the computation time for the tasks offloaded to the cloud server. In this paper, we propose a fine-grained parallelization framework, called PipeSFL, to accelerate SFL on heterogeneous clients. PipeSFL is based on two key novel ideas. First, we design a server-side priority scheduling mechanism to minimize per-iteration time. Second, we propose a hybrid training mode to reduce per-round time, which employs asynchronous training within rounds and synchronous training between rounds. We theoretically prove the optimality of the proposed priority scheduling mechanism within one round and analyze the total time per round for PipeSFL, SFL and SL. We implement PipeSFL on PyTorch. Extensive experiments on seven 64-client clusters with different heterogeneity demonstrate that at training speed, PipeSFL achieves up to 1.65x and 1.93x speedup compared to EPSL and SFL, respectively. 
At energy consumption, PipeSFL saves up to 30.8% and 43.4% of the energy consumed within each training round compared to EPSL and SFL, respectively.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1774-1791"},"PeriodicalIF":7.7,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
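To make the server-side priority scheduling idea concrete, here is a small heap-based sketch: the server handles one offloaded sub-task at a time and serves the client with the most remaining local work first, shrinking the iteration makespan. The largest-remaining-first rule is a plausible illustration of the idea, not necessarily PipeSFL's exact (provably optimal) policy, and all timings are made up.

```python
import heapq

# Hypothetical server-side priority scheduler for split-learning sub-tasks.

def schedule(tasks, server_time):
    """tasks: list of (client_id, remaining_client_time).
    The server processes one sub-task at a time, each taking `server_time`;
    each client then resumes its remaining local work. Returns the
    processing order and the iteration makespan."""
    heap = [(-remaining, cid) for cid, remaining in tasks]
    heapq.heapify(heap)                      # max-heap via negated keys
    order, makespan, t = [], 0.0, 0.0
    while heap:
        neg_rem, cid = heapq.heappop(heap)
        t += server_time                     # server finishes this sub-task
        order.append(cid)
        makespan = max(makespan, t + (-neg_rem))
    return order, makespan

order, makespan = schedule([("c1", 5.0), ("c2", 1.0), ("c3", 3.0)], 2.0)
```

With these numbers, serving the slowest client first yields a makespan of 7.0, whereas the reverse order (c2, c3, c1) would give 11.0, which is why ordering matters per iteration.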
Pub Date: 2024-10-31 DOI: 10.1109/TMC.2024.3489724
Zhonghui Mei
Minimizing the decoding delay, the completion time, or the delivery time of instantly decodable network coding (IDNC) can each be approximated as a maximum weight clique (MWC) problem, which is well known to be NP-hard. Owing to its good tradeoff between performance and computational complexity, a heuristic approach named maximum weight vertex (MWV) search is widely employed to select the MWC for IDNC. However, in MWV, when there are few coding connection edges among the adjacent vertices of a vertex, its modified vertex weight cannot reflect the weight of the MWC containing that vertex, which leads to incorrect selection of the MWC. This paper proposes a new method that calculates the modified weight of a vertex by summing the weights of the vertices in the approximate maximum weight path (A-MWP) generated by this vertex. Since the vertices in an A-MWP form a maximal clique, the proposed modified vertex weight can better indicate the weight of the MWC containing the vertex. The proposed algorithm has the same computational complexity as the MWV algorithm. Simulation results show that under any of the three performance metrics of IDNC, the proposed algorithm achieves better system performance than the MWV algorithm.
{"title":"A Novel Method to Solve the Maximum Weight Clique Problem for Instantly Decodable Network Coding","authors":"Zhonghui Mei","doi":"10.1109/TMC.2024.3489724","DOIUrl":"https://doi.org/10.1109/TMC.2024.3489724","url":null,"abstract":"Minimizing the decoding delay, the completion time, or the delivery time of instantly decodable network coding (IDNC) can all be approximated to a maximum weight clique (MWC) problem, which is well known to be NP hard. Due to its good tradeoff between performance and computational complexity, a heuristic approach named as maximum weight vertex (MWV) search is widely employed to select MWC for IDNC. However, in MWV, when there are few coding connection edges among the adjacent vertices of a vertex, its modified vertex weight cannot well reflect the weight of the MWC containing the vertex, which leads to incorrect selection of MWC. This paper proposes a new method to calculate the modified weight of a vertex by summing the weights of the vertices in the approximate maximum weight path (A-MWP) generated by this vertex. Since the vertices in an A-MWP can form a maximal clique, the proposed modified vertex weight may well indicate the weight of the MWC containing the vertex. The proposed algorithm has the same computational complexity as the MWV algorithm. 
Simulation results show that when employing any of the three performance metrics of IDNC, our proposed algorithm can achieve better system performance than the MWV algorithm.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2181-2192"},"PeriodicalIF":7.7,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
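A generic greedy sketch, in the spirit of MWV search, helps fix ideas: repeatedly pick the vertex with the largest modified weight (own weight plus the weights of its neighbors still in the candidate set), then restrict the candidates to that vertex's neighbors so the result stays a clique. This is the plain heuristic family the abstract critiques, not the paper's A-MWP refinement; the graph is a made-up example.

```python
# Greedy maximum-weight-clique heuristic (MWV-style), illustrative only.

def greedy_weight_clique(weights, adj):
    """weights: {v: w}; adj: {v: set of neighbors}. Returns a maximal clique."""
    clique, cand = [], set(weights)
    while cand:
        def modified(v):
            # own weight plus weights of neighbors still eligible
            return weights[v] + sum(weights[u] for u in adj[v] & cand)
        best = max(cand, key=modified)
        clique.append(best)
        cand = cand & adj[best]      # keep only vertices adjacent to all picks
    return clique

weights = {"a": 3, "b": 2, "c": 2, "e": 1}
adj = {"a": {"b", "c", "e"}, "b": {"a", "c"}, "c": {"a", "b"}, "e": {"a"}}
clique = greedy_weight_clique(weights, adj)
```

Here the pendant vertex `e` inflates `a`'s modified weight but never joins the clique, which illustrates exactly the kind of mismatch between a vertex's modified weight and the weight of the clique it actually belongs to.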
Recent advancements have showcased the potential of handheld millimeter-wave (mmWave) imaging, which applies synthetic aperture radar (SAR) principles in portable settings. However, existing studies addressing handheld motion errors either rely on costly tracking devices or employ simplified imaging models, leading to impractical deployment or limited performance. In this paper, we present IFNet, a novel deep unfolding network that combines the strengths of signal processing models and deep neural networks to achieve robust imaging and focusing for handheld mmWave systems. We first formulate the handheld imaging model by integrating multiple priors about mmWave images and handheld phase errors. Furthermore, we transform the optimization processes into an iterative network structure for improved and efficient imaging performance. Extensive experiments demonstrate that IFNet effectively compensates for handheld phase errors and recovers high-fidelity images from severely distorted signals. In comparison with existing methods, IFNet can achieve at least 11.89 dB improvement in average peak signal-to-noise ratio (PSNR) and 64.91% improvement in average structural similarity index measure (SSIM) on a real-world dataset.
{"title":"IFNet: Deep Imaging and Focusing for Handheld SAR With Millimeter-Wave Signals","authors":"Yadong Li;Dongheng Zhang;Ruixu Geng;Jincheng Wu;Yang Hu;Qibin Sun;Yan Chen","doi":"10.1109/TMC.2024.3489641","DOIUrl":"https://doi.org/10.1109/TMC.2024.3489641","url":null,"abstract":"Recent advancements have showcased the potential of handheld millimeter-wave (mmWave) imaging, which applies synthetic aperture radar (SAR) principles in portable settings. However, existing studies addressing handheld motion errors either rely on costly tracking devices or employ simplified imaging models, leading to impractical deployment or limited performance. In this paper, we present IFNet, a novel deep unfolding network that combines the strengths of signal processing models and deep neural networks to achieve robust imaging and focusing for handheld mmWave systems. We first formulate the handheld imaging model by integrating multiple priors about mmWave images and handheld phase errors. Furthermore, we transform the optimization processes into an iterative network structure for improved and efficient imaging performance. Extensive experiments demonstrate that IFNet effectively compensates for handheld phase errors and recovers high-fidelity images from severely distorted signals. 
In comparison with existing methods, IFNet can achieve at least 11.89 dB improvement in average peak signal-to-noise ratio (PSNR) and 64.91% improvement in average structural similarity index measure (SSIM) on a real-world dataset.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2166-2180"},"PeriodicalIF":7.7,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
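"Deep unfolding", the technique IFNet builds on, maps the iterations of a model-based optimization algorithm onto network layers. The sketch below unrolls a few ISTA iterations (gradient step plus soft-thresholding) for a tiny sparse-recovery problem; in a learned unfolding network the per-layer step size and threshold would be trainable, whereas here they are fixed, illustrative values and the example is generic rather than IFNet itself.

```python
# Generic deep-unfolding sketch: ISTA iterations unrolled as "layers".

def soft_threshold(x, t):
    return [max(abs(v) - t, 0.0) * (1 if v >= 0 else -1) for v in x]

def unfolded_ista(y, A, layers=10, step=0.1, thresh=0.05):
    """Recover sparse x from y = A x using `layers` unrolled iterations."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(layers):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient step: x - step * A^T r, then shrinkage
        g = [x[j] - step * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold(g, thresh)
    return x

A = [[1.0, 0.0], [0.0, 1.0]]   # identity "measurement" for a tiny demo
x_hat = unfolded_ista([1.0, 0.0], A, layers=50)
```

The appeal for imaging is that each unrolled layer keeps the physics of the forward model while learned parameters absorb nuisances such as phase errors.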
Pub Date: 2024-10-31 DOI: 10.1109/TMC.2024.3489028
Yu Luo;Lina Pu;Chun-Hung Liu
The integration of energy harvesting capabilities into mobile edge computing (MEC) edge servers enables their deployment beyond the reach of electrical grids, expanding MEC services to isolated regions and geographically challenging terrains. However, the fluctuating nature of renewable energy sources, such as solar and wind, necessitates dynamic management of server computing power in response to variable energy harvesting rates. Unlike conventional models that assume predetermined amounts of harvested energy per time period, this study illustrates the complex interdependencies between server power consumption and variable energy harvesting rates due to battery charging characteristics. To address this, we introduce a novel energy harvesting model that comprehensively accounts for the interaction between computing power management and energy harvesting rates. We develop both offline and online optimal computing power management strategies aimed at maximizing the average computational capacity of edge servers. An analytical solution to the resulting nonlinear optimization problem is provided to determine the optimal computing power configurations.
{"title":"Computing Power and Battery Charging Management for Solar Energy Powered Edge Computing","authors":"Yu Luo;Lina Pu;Chun-Hung Liu","doi":"10.1109/TMC.2024.3489028","DOIUrl":"https://doi.org/10.1109/TMC.2024.3489028","url":null,"abstract":"The integration of energy harvesting capabilities into mobile edge computing (MEC) edge servers enables their deployment beyond the reach of electrical grids, expanding MEC services to isolated regions and geographically challenging terrains. However, the fluctuating nature of renewable energy sources, such as solar and wind, necessitates dynamic management of server computing power in response to variable energy harvesting rates. Unlike conventional models that assume predetermined amounts of harvested energy per time period, this study illustrates the complex interdependencies between server power consumption and variable energy harvesting rates due to battery charging characteristics. To address this, we introduce a novel energy harvesting model that comprehensively accounts for the interaction between computing power management and energy harvesting rates. We develop both offline and online offline optimal computing power management strategies aimed at maximizing the average computational capacity of edge servers. An analytical solution to the resulting nonlinear optimization problem is provided to determine the optimal computing power configurations. 
Simulation results indicate that the proposed strategy effectively balances energy harvesting rates and energy utilization, thereby enhancing computational performance in dynamic energy environments.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1913-1927"},"PeriodicalIF":7.7,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
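As a toy illustration of online computing-power management under a variable harvesting rate, the sketch below has the server pick, each slot, the largest compute power its battery plus current harvest can sustain, with the battery capped at its capacity. The policy and all numbers are hypothetical; the paper's model additionally captures battery charging characteristics that this sketch omits.

```python
# Toy online power-management policy for an energy-harvesting edge server.

def run_policy(harvest, battery0, capacity, p_max):
    """harvest: energy harvested per slot. Returns (total compute energy
    delivered, per-slot power trace)."""
    battery, total, trace = battery0, 0.0, []
    for h in harvest:
        p = min(p_max, battery + h)               # spend no more than available
        battery = min(battery + h - p, capacity)  # store the rest, capped
        total += p
        trace.append(p)
    return total, trace

total, trace = run_policy([2.0, 0.0, 3.0], battery0=1.0,
                          capacity=5.0, p_max=2.0)
```

Even this crude policy shows the coupling the abstract describes: what the server can compute in a slot depends jointly on the harvest rate, the battery state, and the cap on stored energy.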
Pub Date: 2024-10-31 DOI: 10.1109/TMC.2024.3488746
Handi Chen;Rui Zhou;Yun-Hin Chan;Zhihan Jiang;Xianhao Chen;Edith C. H. Ngai
Leveraging blockchain in Federated Learning (FL) has emerged as a new paradigm for secure collaborative learning on Massive Edge Networks (MENs). As the scale of MENs increases, it becomes more difficult to implement and manage a blockchain among edge devices due to complex communication topologies, heterogeneous computation capabilities, and limited storage capacities. Moreover, the lack of a standard metric for blockchain security is a significant issue. To address these challenges, we propose a lightweight blockchain for verifiable and scalable FL, named LiteChain, to provide efficient and secure services in MENs. Specifically, we develop a distributed clustering algorithm that reorganizes MENs into a two-level structure to improve communication and computing efficiency under security requirements. Moreover, we introduce a Comprehensive Byzantine Fault Tolerance (CBFT) consensus mechanism and a secure update mechanism to ensure the security of model transactions through LiteChain. Our experiments based on Hyperledger Fabric demonstrate that LiteChain achieves the lowest end-to-end latency and on-chain storage overheads across various network scales, outperforming the two benchmarks.
{"title":"LiteChain: A Lightweight Blockchain for Verifiable and Scalable Federated Learning in Massive Edge Networks","authors":"Handi Chen;Rui Zhou;Yun-Hin Chan;Zhihan Jiang;Xianhao Chen;Edith C. H. Ngai","doi":"10.1109/TMC.2024.3488746","DOIUrl":"https://doi.org/10.1109/TMC.2024.3488746","url":null,"abstract":"Leveraging blockchain in Federated Learning (FL) emerges as a new paradigm for secure collaborative learning on Massive Edge Networks (MENs). As the scale of MENs increases, it becomes more difficult to implement and manage a blockchain among edge devices due to complex communication topologies, heterogeneous computation capabilities, and limited storage capacities. Moreover, the lack of a standard metric for blockchain security becomes a significant issue. To address these challenges, we propose a lightweight blockchain for verifiable and scalable FL, namely LiteChain, to provide efficient and secure services in MENs. Specifically, we develop a distributed clustering algorithm to reorganize MENs into a two-level structure to improve communication and computing efficiency under security requirements. Moreover, we introduce a Comprehensive Byzantine Fault Tolerance (CBFT) consensus mechanism and a secure update mechanism to ensure the security of model transactions through LiteChain. Our experiments based on Hyperledger Fabric demonstrate that LiteChain presents the lowest end-to-end latency and on-chain storage overheads across various network scales, outperforming the other two benchmarks. 
In addition, LiteChain exhibits a high level of robustness against replay and data poisoning attacks.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1928-1944"},"PeriodicalIF":7.7,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
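To illustrate the two-level reorganization idea, here is a hypothetical centralized sketch: the k most capable nodes become cluster heads (the upper level, e.g. a consensus committee) and every other node attaches to its nearest head. LiteChain's actual algorithm is distributed and security-aware; this is only the shape of the outcome, with made-up node data.

```python
# Hypothetical two-level clustering of edge nodes: capability picks the
# heads, proximity assigns the members. Illustrative, not LiteChain's
# distributed algorithm.

def two_level_clusters(nodes, k):
    """nodes: {name: (capability, (x, y))}. Returns {head: [members]}."""
    heads = sorted(nodes, key=lambda n: nodes[n][0], reverse=True)[:k]
    clusters = {h: [] for h in heads}
    for name, (_, (x, y)) in nodes.items():
        if name in heads:
            continue
        def dist2(h):
            hx, hy = nodes[h][1]
            return (x - hx) ** 2 + (y - hy) ** 2
        clusters[min(heads, key=dist2)].append(name)
    return clusters

nodes = {
    "a": (10, (0, 0)), "b": (9, (10, 10)),   # strong nodes -> heads
    "c": (1, (1, 1)),  "d": (2, (9, 9)),     # weak nodes -> members
}
clusters = two_level_clusters(nodes, 2)
```

Confining consensus traffic to the small upper level is what makes such a structure cheaper than running consensus across every edge device.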
Pub Date: 2024-10-31 DOI: 10.1109/TMC.2024.3489717
Jing Bai;Jinsong Gui;Tian Wang;Houbing Song;Anfeng Liu;Neal N. Xiong
Mobile Crowdsensing (MCS) has emerged as a promising sensing paradigm for accomplishing large-scale tasks by leveraging ubiquitously distributed mobile workers. Due to the variability in sensory data provided by different workers, identifying truth values from them has garnered wide attention. However, existing truth discovery schemes either offer limited privacy protection or incur high participation costs and lower data aggregation quality due to malicious workers. In this paper, we propose an Efficient and Trusted Bilateral Privacy-preserving Truth Discovery scheme (ETBP-TD) to obtain high-quality truth values while preventing privacy leakage from both workers and the data requester. Specifically, a matrix encryption-based protocol is introduced into the whole truth discovery process, which keeps locations and data related to tasks and workers secret from other entities. Additionally, trust-based worker recruitment and trust update mechanisms are for the first time integrated within a privacy-preserving truth discovery scheme to enhance truth value accuracy and reduce unnecessary participation costs. Our theoretical analyses of the security and regret of ETBP-TD, along with extensive simulations on real-world datasets, demonstrate that ETBP-TD effectively preserves workers’ and tasks’ privacy while reducing the estimation error by up to 84.40% and participation cost by 54.72%.
{"title":"ETBP-TD: An Efficient and Trusted Bilateral Privacy-Preserving Truth Discovery Scheme for Mobile Crowdsensing","authors":"Jing Bai;Jinsong Gui;Tian Wang;Houbing Song;Anfeng Liu;Neal N. Xiong","doi":"10.1109/TMC.2024.3489717","DOIUrl":"https://doi.org/10.1109/TMC.2024.3489717","url":null,"abstract":"Mobile Crowdsensing (MCS) has emerged as a promising sensing paradigm for accomplishing large-scale tasks by leveraging ubiquitously distributed mobile workers. Due to the variability in sensory data provided by different workers, identifying truth values from them has garnered wide attention. However, existing truth discovery schemes either offer limited privacy protection or incur high participation costs and lower data aggregation quality due to malicious workers. In this paper, we propose an Efficient and Trusted Bilateral Privacy-preserving Truth Discovery scheme (ETBP-TD) to obtain high-quality truth values while preventing privacy leakage from both workers and the data requester. Specifically, a matrix encryption-based protocol is introduced to the whole truth discovery process, which keeps locations and data related to tasks and workers secret from other entries. Additionally, trust-based worker recruitment and trust update mechanisms are first integrated within a privacy-preserving truth discovery scheme to enhance truth value accuracy and reduce unnecessary participation costs. 
Our theoretical analyses on the security and regret of ETBP-TD, along with extensive simulations on real-world datasets, demonstrate that ETBP-TD effectively preserves workers’ and tasks’ privacy while reducing the estimated error by up to 84.40% and participation cost by 54.72%.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2203-2219"},"PeriodicalIF":7.7,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
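The aggregation step that such schemes protect is classical iterative truth discovery: alternately estimate truths as weight-averaged observations and update each worker's weight from its distance to the current truths. Below is a minimal CRH-style sketch of that plaintext loop (the reports are made-up; ETBP-TD's contribution is running this kind of computation under matrix encryption with trust-based recruitment, which the sketch does not attempt).

```python
import math

# Minimal CRH-style iterative truth discovery, illustrative only.

def truth_discovery(reports, iters=10):
    """reports: {worker: {task: value}}. Returns (truths, weights)."""
    workers = list(reports)
    weights = {w: 1.0 for w in workers}
    tasks = {t for obs in reports.values() for t in obs}
    truths = {}
    for _ in range(iters):
        # truths: weight-averaged observations per task
        for t in tasks:
            num = sum(weights[w] * reports[w][t] for w in workers if t in reports[w])
            den = sum(weights[w] for w in workers if t in reports[w])
            truths[t] = num / den
        # weights: workers far from the truths get logarithmically less weight
        dists = {w: sum((reports[w][t] - truths[t]) ** 2 for t in reports[w])
                 for w in workers}
        total = sum(dists.values()) or 1e-12
        weights = {w: math.log(total / max(d, 1e-12)) for w, d in dists.items()}
    return truths, weights

reports = {
    "w1": {"temp": 20.0, "hum": 0.50},
    "w2": {"temp": 20.2, "hum": 0.52},
    "w3": {"temp": 35.0, "hum": 0.90},   # unreliable worker
}
truths, weights = truth_discovery(reports)
```

After a few iterations the outlier worker's weight collapses and the truths settle near the consensus of the reliable workers, which is exactly the behavior a privacy-preserving variant must reproduce over encrypted data.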
Pub Date : 2024-10-30 DOI: 10.1109/TMC.2024.3487967
Tianya Zhao;Junqing Zhang;Shiwen Mao;Xuyu Wang
Despite the proven capabilities of deep neural networks (DNNs) in identifying devices through radio frequency (RF) fingerprinting, the security vulnerabilities of these deep learning models have been largely overlooked. While the threat of backdoor attacks is well studied in the image domain, few works have explored it in the context of RF signals. In this paper, we thoroughly analyze the susceptibility of DNN-based RF fingerprinting to backdoor attacks, focusing on the more practical scenario in which attackers cannot access model gradients or control the training process. We propose leveraging explainable machine learning techniques and autoencoders to guide the selection of trigger positions and values, allowing effective backdoor triggers to be crafted in a model-agnostic manner. To comprehensively evaluate this backdoor attack, we employ four diverse datasets spanning two protocols (Wi-Fi and LoRa) and various DNN architectures. Given that RF signals are often transformed into the frequency or time-frequency domain, this study also assesses attack efficacy in the time-frequency domain. Furthermore, we experiment with potential detection and defense methods, demonstrating the difficulty of fully safeguarding against our proposed backdoor attack. We also evaluate the attack's performance under domain shift.
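The attack surface the abstract describes can be pictured with a small sketch: a trigger is a perturbation applied at a few chosen sample positions of an IQ recording. The selection heuristic below (`select_positions`, picking the highest-energy samples) is a stand-in assumption for illustration only; the paper's actual selection is guided by explainable-ML scores and autoencoders.

```python
# Hypothetical sketch of a position-selected backdoor trigger on IQ samples.
# select_positions is a magnitude-based stand-in for the paper's
# explanation-guided selection; inject_trigger adds a small complex offset.
import numpy as np

def select_positions(iq, k=8):
    """Stand-in for explanation-guided selection: k highest-energy samples."""
    return np.argsort(np.abs(iq))[-k:]

def inject_trigger(iq, positions, amplitude=0.05):
    """Return a poisoned copy with a fixed complex offset at the positions."""
    poisoned = iq.copy()
    poisoned[positions] += amplitude * (1 + 1j)
    return poisoned

# Synthetic complex baseband "recording" standing in for a real capture.
rng = np.random.default_rng(0)
clean = (rng.standard_normal(256) + 1j * rng.standard_normal(256)) / np.sqrt(2)
pos = select_positions(clean)
poisoned = inject_trigger(clean, pos)
```

In a poisoning attack of this shape, a fraction of training captures would be replaced by such poisoned copies with a relabeled device identity; the small amplitude keeps the trigger inconspicuous relative to the signal energy.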
{"title":"Explanation-Guided Backdoor Attacks Against Model-Agnostic RF Fingerprinting Systems","authors":"Tianya Zhao;Junqing Zhang;Shiwen Mao;Xuyu Wang","doi":"10.1109/TMC.2024.3487967","DOIUrl":"https://doi.org/10.1109/TMC.2024.3487967","url":null,"abstract":"Despite the proven capabilities of deep neural networks (DNNs) in identifying devices through radio frequency (RF) fingerprinting, the security vulnerabilities of these deep learning models have been largely overlooked. While the threat of backdoor attacks is well studied in the image domain, few works have explored it in the context of RF signals. In this paper, we thoroughly analyze the susceptibility of DNN-based RF fingerprinting to backdoor attacks, focusing on the more practical scenario in which attackers cannot access model gradients or control the training process. We propose leveraging explainable machine learning techniques and autoencoders to guide the selection of trigger positions and values, allowing effective backdoor triggers to be crafted in a model-agnostic manner. To comprehensively evaluate this backdoor attack, we employ four diverse datasets spanning two protocols (Wi-Fi and LoRa) and various DNN architectures. Given that RF signals are often transformed into the frequency or time-frequency domain, this study also assesses attack efficacy in the time-frequency domain. Furthermore, we experiment with potential detection and defense methods, demonstrating the difficulty of fully safeguarding against our proposed backdoor attack. 
We also evaluate the attack's performance under domain shift.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2029-2042"},"PeriodicalIF":7.7,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143361006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}