Large-Scale Mechanism Design for Networks: Superimposability and Dynamic Implementation
Pub Date: 2024-12-02 | DOI: 10.1109/TMC.2024.3499958
Meng Zhang; Deepanshu Vasal
Network utility maximization (NUM) is a fundamental framework for optimizing next-generation networks. However, self-interested agents with private information pose challenges due to potential system manipulation. To address these challenges, the literature on economic mechanism design has emerged. Existing mechanisms are not suited for large-scale networks due to their complexity, high implementation costs, and difficulty adapting to dynamic settings. This paper proposes a large-scale mechanism design framework that mitigates these limitations. As the number of agents $I$ approaches infinity, their incentive to misreport decreases rapidly at a rate of $\mathcal{O}(1/I^{2})$. We introduce a superimposable framework applicable to any NUM algorithm without modifications, reducing implementation costs. In the dynamic setting, the large-scale mechanism design framework introduces the decomposability of the problem, enabling agents to align their own interests with the objectives of the dynamic NUM problem. This alignment helps overcome the additional, more stringent incentive constraints encountered in dynamic settings. Extending our results to dynamic settings, we present the design of a Dynamic Large-Scale mechanism with desirable properties and the corresponding Dynamic Superimposable Large-Scale mechanism. Our numerical experiments confirm that our proposed schemes are approximately $I$ times faster than the seminal VCG mechanism.
{"title":"Large-Scale Mechanism Design for Networks: Superimposability and Dynamic Implementation","authors":"Meng Zhang;Deepanshu Vasal","doi":"10.1109/TMC.2024.3499958","DOIUrl":"https://doi.org/10.1109/TMC.2024.3499958","url":null,"abstract":"Network utility maximization (NUM) is a fundamental framework for optimizing next-generation networks. However, self-interested agents with private information pose challenges due to potential system manipulation. To address these challenges, the literature on economic mechanism design has emerged. Existing mechanisms are not suited for large-scale networks due to their complexity, high implementation costs, and difficulty to adapt to dynamic settings. This paper proposes a large-scale mechanism design framework that mitigates these limitations. As the number of agents <inline-formula><tex-math>$I$</tex-math></inline-formula> approaches infinity, their incentive to misreport decreases rapidly at a rate of <inline-formula><tex-math>$mathcal {O}(1/I^{2})$</tex-math></inline-formula>. We introduce a superimposable framework applicable to any NUM algorithm without modifications, reducing implementation costs. In the dynamic setting, the large-scale mechanism design framework introduces the decomposability of the problem, enabling agents to align their own interests with the objectives of the dynamic NUM problem. This alignment helps overcome the additional, more stringent incentive constraints encountered in dynamic settings. Extending our results to dynamic settings, we present the design of a Dynamic Large-Scale mechanism with desirable properties and the corresponding Dynamic Superimposable Large-Scale mechanism. Our numerical experiments validate the fact that our proposed schemes are approximately <inline-formula><tex-math>$I$</tex-math></inline-formula> times faster than the seminal VCG mechanism.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"1278-1292"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143184148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of Models and Strategies for Computation Offloading in the Internet of Vehicles: Efficiency and Trust
Pub Date: 2024-12-02 | DOI: 10.1109/TMC.2024.3509542
Qinghang Gao; Jianmao Xiao; Zhiyong Feng; Jingyu Li; Yang Yu; Hongqi Chen; Qiaoyun Yin
With the rapid development of the Internet of Vehicles (IoV), vehicles will generate massive data and computation demands, necessitating computation offloading at the edge. However, existing research faces challenges in efficiency and trust. In this paper, we explore IoV computation offloading from both the user and edge facility provider perspectives, aiming to optimize quality of experience (QoE), load balancing, and success rate in the face of these efficiency and trust challenges. First, two vehicle interconnection models are constructed to extend the linkable range of intra-road and inter-road vehicles while respecting the maximum link time constraint. Then, a dynamic planning method combining reputation and feedback mechanisms is proposed, which can schedule edge resources online based on each service side's cumulative computation latency, reliability value, and historical behavior. These two phases further improve the efficiency of edge services. Subsequently, blockchain is incorporated to address the trust problem of edge collaboration, and an edge-limited Byzantine fault-tolerant local consensus mechanism is proposed to improve consensus efficiency and ensure the reliability of edge services. Finally, this paper conducts dynamic experiments on real-world datasets, verifying the effectiveness of the proposed algorithm and models across multiple vehicle-density datasets and experimental scenarios.
IEEE Transactions on Mobile Computing, vol. 24, no. 4, pp. 3372-3389.
How to Collaborate: Towards Maximizing the Generalization Performance in Cross-Silo Federated Learning
Pub Date: 2024-12-02 | DOI: 10.1109/TMC.2024.3509852
Yuchang Sun; Marios Kountouris; Jun Zhang
Federated learning (FL) has attracted considerable attention as a privacy-preserving distributed learning framework. In this work, we focus on cross-silo FL, where clients become the model owners after training and are only concerned about the model's generalization performance on their local data. Due to data heterogeneity, asking all the clients to join a single FL training process may result in model performance degradation. To investigate the effectiveness of collaboration, we first derive a generalization bound for each client when collaborating with others or when training independently. We show that the generalization performance of a client can be improved by collaborating with other clients that have more training data and similar data distributions. Our analysis allows us to formulate a client utility maximization problem by partitioning clients into multiple collaborating groups. A hierarchical clustering-based collaborative training (HCCT) scheme is then proposed, which does not require fixing the number of groups in advance. We further analyze the convergence of HCCT for general non-convex loss functions, which reveals the effect of data similarity among clients. Extensive simulations show that HCCT achieves better generalization performance than baseline schemes, and that it degenerates to independent training and conventional FL in specific scenarios.
{"title":"How to Collaborate: Towards Maximizing the Generalization Performance in Cross-Silo Federated Learning","authors":"Yuchang Sun;Marios Kountouris;Jun Zhang","doi":"10.1109/TMC.2024.3509852","DOIUrl":"https://doi.org/10.1109/TMC.2024.3509852","url":null,"abstract":"Federated learning (FL) has attracted vivid attention as a privacy-preserving distributed learning framework. In this work, we focus on cross-silo FL, where clients become the model owners after training and are only concerned about the model's generalization performance on their local data. Due to the data heterogeneity issue, asking all the clients to join a single FL training process may result in model performance degradation. To investigate the effectiveness of collaboration, we first derive a generalization bound for each client when collaborating with others or when training independently. We show that the generalization performance of a client can be improved by collaborating with other clients that have more training data and similar data distributions. Our analysis allows us to formulate a client utility maximization problem by partitioning clients into multiple collaborating groups. A <underline>h</u>ierarchical <underline>c</u>lustering-based <underline>c</u>ollaborative <underline>t</u>raining (HCCT) scheme is then proposed, which does not need to fix in advance the number of groups. We further analyze the convergence of HCCT for general non-convex loss functions which unveils the effect of data similarity among clients. Extensive simulations show that HCCT achieves better generalization performance than baseline schemes, whereas it degenerates to independent training and conventional FL in specific scenarios.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3211-3222"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CSTAR-FL: Stochastic Client Selection for Tree All-Reduce Federated Learning
Pub Date: 2024-12-02 | DOI: 10.1109/TMC.2024.3507381
Zimu Xu; Antonio Di Maio; Eric Samikwa; Torsten Braun
Federated Learning (FL) is widely applied in privacy-sensitive domains, such as healthcare, finance, and education, due to its privacy-preserving properties. However, implementing FL in dynamic wireless networks poses substantial communication challenges. Central to these challenges is the need for efficient communication strategies that can adapt to fluctuating network conditions and the growing number of participating devices, which can otherwise lead to unacceptable communication delays. In this article, we propose Stochastic Client Selection for Tree All-Reduce Federated Learning (CSTAR-FL), a novel approach that combines a probabilistic User Device (UD) selection strategy with a tree-based communication architecture to enhance communication efficiency in FL within densely populated wireless networks. By optimizing UD selection for effective model aggregation and employing an efficient data transmission structure, CSTAR-FL significantly reduces communication time and improves FL efficiency. Additionally, our approach maintains high global model accuracy in scenarios where data distributions across UDs are heterogeneous. Extensive simulations in dynamic wireless network scenarios demonstrate that CSTAR-FL outperforms existing state-of-the-art methods, reducing model convergence time by up to 40% without sacrificing global model accuracy. This makes CSTAR-FL a robust solution for efficient and scalable FL deployments in high-density environments.
{"title":"CSTAR-FL: Stochastic Client Selection for Tree All-Reduce Federated Learning","authors":"Zimu Xu;Antonio Di Maio;Eric Samikwa;Torsten Braun","doi":"10.1109/TMC.2024.3507381","DOIUrl":"https://doi.org/10.1109/TMC.2024.3507381","url":null,"abstract":"Federated Learning (FL) is widely applied in privacy-sensitive domains, such as healthcare, finance, and education, due to its privacy-preserving properties. However, implementing FL in dynamic wireless networks poses substantial communication challenges. Central to these challenges is the need for efficient communication strategies that can adapt to fluctuating network conditions and the growing number of participating devices, which can lead to unacceptable communication delays. In this article, we propose Stochastic Client Selection for Tree All-Reduce Federated Learning (<monospace>CSTAR-FL</monospace>), a novel approach that combines a probabilistic User Device (UD) selection strategy with a tree-based communication architecture to enhance communication efficiency in FL within densely populated wireless networks. By optimizing UD selection for effective model aggregation and employing an efficient data transmission structure, <monospace>CSTAR-FL</monospace> significantly reduces communication time and improves FL efficiency. Additionally, our approach ensures high global model accuracy under scenarios where data distribution is heterogeneous from User Device (UD)s. Extensive simulations in dynamic wireless network scenarios demonstrate that <monospace>CSTAR-FL</monospace> outperforms existing state-of-the-art methods, reducing model convergence time by up to 40% without losing the global model accuracy. This makes <monospace>CSTAR-FL</monospace> a robust solution for efficient and scalable FL deployments in high-density environments.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3110-3129"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ASMAFL: Adaptive Staleness-Aware Momentum Asynchronous Federated Learning in Edge Computing
Pub Date: 2024-12-02 | DOI: 10.1109/TMC.2024.3510135
Dewen Qiao; Songtao Guo; Jun Zhao; Junqing Le; Pengzhan Zhou; Mingyan Li; Xuetao Chen
Compared with synchronous federated learning (FL), asynchronous FL (AFL) has attracted increasing attention in edge computing (EC) fields because of its strong adaptability to heterogeneous application scenarios. However, the non-independent and identically distributed (Non-IID) data across devices, together with the staleness introduced by unreliable wireless connections and limited edge resources, makes it much harder to build effective AFL applications. To handle this problem, we propose an Adaptive Staleness-aware Momentum Accelerated AFL (ASMAFL) algorithm to reduce the resource consumption of heterogeneous wireless communication EC (WCEC) scenarios, as well as to decrease the negative impact of Non-IID data on model training. Specifically, we first introduce the staleness-aware parameter and a unified momentum gradient descent (GD) framework to reformulate AFL. Then, we establish global convergence properties of AFL, derive an upper bound on the AFL convergence rate, and find that the bound is related to the staleness-aware parameter and the degree of Non-IIDness. Next, we cast the bound as a resource-consumption minimization problem under a given model accuracy, and the staleness-aware parameters of devices are recomputed after each asynchronous aggregation to equalize the local models' contributions to global model aggregation. Finally, extensive experiments validate the superiority of ASMAFL in terms of model accuracy, convergence rate, resource consumption, and robustness to Non-IID data.
IEEE Transactions on Mobile Computing, vol. 24, no. 4, pp. 3390-3406.
Joint Trajectory Planning and Task Offloading for MIMO AAV-Aided Mobile Edge Computing
Pub Date: 2024-12-02 | DOI: 10.1109/TMC.2024.3510272
Xuewen Dong; Shuangrui Zhao; Ximeng Liu; Zijie Di; Yuzhen Zhang; Yulong Shen
Edge computing reduces service response time and improves service quality by pushing cloud functions to the network's edge. Most existing works in edge computing focus on utility maximization of task offloading on static edges with a single antenna. Trajectory planning of mobile edges, e.g., autonomous aerial vehicles (AAVs), is also rarely discussed. In this paper, we are the first to jointly study the deadline-aware task offloading and AAV trajectory planning problem in a multi-input multi-output (MIMO) AAV-aided mobile edge computing system. Due to discrete variables and highly coupled nonconvex constraints, we equivalently convert the original problem into a more tractable form by introducing auxiliary variables. Next, a penalty dual decomposition-based algorithm is developed to achieve a globally optimal solution to the problem. In addition, we propose a profit-based fireworks algorithm to reduce execution time for large-scale networks. Extensive evaluation results reveal that our proposed algorithms significantly outperform static offloading and other baseline algorithms, by 25% on average.
IEEE Transactions on Mobile Computing, vol. 24, no. 4, pp. 3196-3210.
Edge Assisted Low-Latency Cooperative BEV Perception With Progressive State Estimation
Pub Date: 2024-12-02 | DOI: 10.1109/TMC.2024.3509716
Yuhan Lin; Haoran Xu; Zhimeng Yin; Guang Tan
Modern intelligent vehicles (IVs) are equipped with a variety of sensors and communication modules, empowering Advanced Driver Assistance Systems (ADAS) and enabling inter-vehicle connectivity. This paper focuses on multi-vehicle cooperative perception, with the primary objective of achieving low latency. The task involves nearby cooperative vehicles sending their camera data to an edge server, which then merges the local views to create a global traffic view. While multi-camera perception has been actively researched, existing solutions often rely on deep learning models, resulting in excessive processing latency. In contrast, we propose leveraging state estimation techniques from robotics for this task. We explicitly model and solve for the system state, addressing additional challenges brought by object mobility and vision obstruction. Furthermore, we introduce a progressive state estimation pipeline to further accelerate system state notifications, supported by a motion prediction method that optimizes position accuracy and perception smoothness. Experimental results demonstrate the superiority of our approach over the deep learning method, with 12.0× to 27.4× reductions in server processing delay while maintaining mean absolute errors below 1 m.
{"title":"Edge Assisted Low-Latency Cooperative BEV Perception With Progressive State Estimation","authors":"Yuhan Lin;Haoran Xu;Zhimeng Yin;Guang Tan","doi":"10.1109/TMC.2024.3509716","DOIUrl":"https://doi.org/10.1109/TMC.2024.3509716","url":null,"abstract":"Modern intelligent vehicles (IVs) are equipped with a variety of sensors and communication modules, empowering Advanced Driver Assistance Systems (ADAS) and enabling inter-vehicle connectivity. This paper focuses on multi-vehicle cooperative perception, with a primary objective of achieving low latency. The task involves nearby cooperative vehicles sending their camera data to an edge server, which then merges the local views to create a global traffic view. While multi-camera perception has been actively researched, existing solutions often rely on deep learning models, resulting in excessive processing latency. In contrast, we propose leveraging the <italic>state estimation</i> technique from the robotics field for this task. We explicitly model and solve for the system state, addressing additional challenges brought by object mobility and vision obstruction. Furthermore, we introduce a <italic>progressive state estimation</i> pipeline to further accelerate system state notifications, supported by a motion prediction method that optimizes position accuracy and perception smoothness. Experimental results demonstrate the superiority of our approach over the deep learning method, with 12.0 × to 27.4 × reductions in server processing delay, while maintaining mean absolute errors below 1 m.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3346-3358"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resource Allocation for Metaverse Experience Optimization: A Multi-Objective Multi-Agent Evolutionary Reinforcement Learning Approach
Pub Date: 2024-12-02 | DOI: 10.1109/TMC.2024.3509680
Lei Feng; Xiaoyi Jiang; Yao Sun; Dusit Niyato; Yu Zhou; Shiyi Gu; Zhixiang Yang; Yang Yang; Fanqin Zhou
In the Metaverse, real-time, concurrent services such as virtual classrooms and immersive gaming require local graphic rendering to maintain low latency. However, the limited processing power and battery capacity of user devices make it challenging to balance Quality of Experience (QoE) and terminal energy consumption. In this paper, we investigate power control and rendering capacity allocation by formulating it as a multi-objective optimization problem (MOP) that aims to minimize energy consumption while maximizing Meta-Immersion (MI), a metric that integrates objective network performance with subjective user perception. To solve this problem, we propose a Multi-Objective Multi-Agent Evolutionary Reinforcement Learning with User-Object-Attention (M2ERL-UOA) algorithm. The algorithm employs a prediction-driven evolutionary learning mechanism for multiple agents, coupled with optimized rendering capacity decisions for virtual objects, and can yield a superior Pareto front that attains a Nash equilibrium. Simulation results demonstrate that the proposed algorithm generates Pareto fronts, effectively adapts to dynamic user preferences, and significantly reduces decision-making time compared to several benchmarks.
{"title":"Resource Allocation for Metaverse Experience Optimization: A Multi-Objective Multi-Agent Evolutionary Reinforcement Learning Approach","authors":"Lei Feng;Xiaoyi Jiang;Yao Sun;Dusit Niyato;Yu Zhou;Shiyi Gu;Zhixiang Yang;Yang Yang;Fanqin Zhou","doi":"10.1109/TMC.2024.3509680","DOIUrl":"https://doi.org/10.1109/TMC.2024.3509680","url":null,"abstract":"In the Metaverse, real-time, concurrent services such as virtual classrooms and immersive gaming require local graphic rendering to maintain low latency. However, the limited processing power and battery capacity of user devices make it challenging to balance Quality of Experience (QoE) and terminal energy consumption. In this paper, we investigate a multi-objective optimization problem (MOP) regarding power control and rendering capacity allocation by formulating it as a multi-objective optimization problem. This problem aims to minimize energy consumption while maximizing Meta-Immersion (MI), a metric that integrates objective network performance with subjective user perception. To solve this problem, we propose a Multi-Objective Multi-Agent Evolutionary Reinforcement Learning with User-Object-Attention (M2ERL-UOA) algorithm. The algorithm employs a prediction-driven evolutionary learning mechanism for multi-agents, coupled with optimized rendering capacity decisions for virtual objects. The algorithm can yield a superior Pareto front that attains the Nash equilibrium. Simulation results demonstrate that the proposed algorithm can generate Pareto fronts, effectively adapts to dynamic user preferences, and significantly reduces decision-making time compared to several benchmarks.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3473-3488"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel Insights From a Cross-Layer Analysis of TCP and UDP Traffic Over Full-Duplex WLANs
Pub Date: 2024-12-02 | DOI: 10.1109/TMC.2024.3510099
Vinay U. Pai; Neelesh B. Mehta; Chandramani Singh
Full-duplex (FD) communication is a promising technology that enables simultaneous transmission and reception in wireless local area networks (WLANs). The benefits of FD for medium access control (MAC) layer throughput in IEEE 802.11 WLANs are well documented. However, cross-layer interactions between the FD MAC protocol and transport layer protocols such as the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are less explored. We consider a WLAN with uplink and downlink TCP flows as well as UDP flows between stations (STAs) and a server via an FD access point (AP). We study an STA-initiated FD MAC protocol in which the AP can transmit on the downlink while receiving on the uplink. Using a novel FD-specific STA saturation approximation, Markov renewal theory, and fixed-point analysis, we derive novel expressions for the uplink and downlink TCP and UDP saturation throughputs. Our analysis shows that the AP is no longer a bottleneck and, unlike in conventional half-duplex (HD) WLANs, may be unsaturated. Despite greater contention and cross-link interference between STAs, FD achieves a higher TCP throughput than HD, but it significantly degrades UDP throughput. In the unsaturated regime, FD achieves a lower average downlink TCP packet delay than HD.
{"title":"Novel Insights From a Cross-Layer Analysis of TCP and UDP Traffic Over Full-Duplex WLANs","authors":"Vinay U. Pai;Neelesh B. Mehta;Chandramani Singh","doi":"10.1109/TMC.2024.3510099","DOIUrl":"https://doi.org/10.1109/TMC.2024.3510099","url":null,"abstract":"Full-duplex (FD) communication is a promising new technology that enables simultaneous transmission and reception in wireless local area networks (WLANs). The benefits of FD on the medium access control (MAC) layer throughput in IEEE 802.11 WLANs are well-documented. However, cross-layer interactions between the FD MAC protocol and transport layer protocols such as Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are less explored. We consider a WLAN with uplink and downlink TCP flows as well as UDP flows between stations (STAs) and a server via an FD access point (AP). We study an STA-initiated FD MAC protocol in which the AP can transmit on the downlink while receiving on the uplink. Using a novel FD-specific STA saturation approximation, Markov renewal theory, and fixed-point analysis, we derive novel expressions for the uplink and downlink TCP and UDP saturation throughputs. Our analysis shows that the AP is no longer a bottleneck and may be unsaturated unlike in conventional half-duplex (HD) WLANs. Despite greater contention and cross-link interference between STAs, FD achieves a higher TCP throughput than HD. FD causes a significant degradation in the UDP throughput. In the unsaturated regime, FD achieves a lower average downlink TCP packet delay than HD.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3288-3301"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Do as the Romans Do: Location Imitation-Based Edge Task Offloading for Privacy Protection
Pub Date: 2024-11-29 | DOI: 10.1109/TMC.2024.3509418
Jiahao Zhu; Lu Zhao; Jian Zhou; Hui Cai; Fu Xiao
In edge computing, a user prefers offloading his/her task to nearby edge servers to maximize the offloading utility. However, this inevitably exposes the user's location when the user suffers side-channel attacks based on offloading decision behaviors and Received Signal Strength Indicators (RSSI). Existing works only consider the scenario with one untrusted edge server or defend against only one of the attacks. In this paper, we present the first study of the edge task offloading problem with comprehensive privacy protection against these side-channel attacks from multiple edge servers. To address this problem while ensuring satisfactory offloading utility, we develop a Location Imitation-based edge Task Offloading approach, LITO. Specifically, we first determine a suitable perturbation region centered at the user's real location to balance offloading utility and privacy protection, and then propose a modified Laplace mechanism to generate a fake location satisfying geo-indistinguishability within the region. Subsequently, to mislead the side-channel attacks toward the fake location, we design an approximate algorithm and a transmit power control strategy to imitate the offloading decisions and RSSIs at the fake location, respectively. Theoretical analysis and experimental evaluations demonstrate the performance of LITO in improving privacy protection and guaranteeing offloading utility.
{"title":"Do as the Romans Do: Location Imitation-Based Edge Task Offloading for Privacy Protection","authors":"Jiahao Zhu;Lu Zhao;Jian Zhou;Hui Cai;Fu Xiao","doi":"10.1109/TMC.2024.3509418","DOIUrl":"https://doi.org/10.1109/TMC.2024.3509418","url":null,"abstract":"In edge computing, a user prefers offloading his/her task to nearby edge servers to maximize the offloading utility. However, this inevitably exposes the user's location privacy information when suffering from the side-channel attacks based on offloading decision behaviors and Received Signal Strength Indicators (RSSI). Existing works only consider the scenario with one untrusted edge server or defend only against one of the attacks. In this paper, we first study the edge task offloading problem with comprehensive privacy protection against these side-channel attacks from multiple edge servers. To address this problem while ensuring satisfactory offloading utility, we develop a <underline>L</u>ocation <underline>I</u>mitation-based Edge <underline>T</u>ask <underline>O</u>ffloading approach <italic>LITO</i>. Specifically, we first determine a suitable perturbation region centered at the user's real location for a balance between offloading utility and privacy protection, and then propose a modified Laplace mechanism to generate a fake location meeting geo-indistinguishability within the region. Subsequently, to mislead the side-channel attacks to the fake location, we design an approximate algorithm and a transmit power control strategy to imitate the offloading decisions and RSSIs at the fake location, respectively. Theoretical analysis and experimental evaluations demonstrate the performance of <italic>LITO</i> in improving privacy protection and guaranteeing offloading utility.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3456-3472"},"PeriodicalIF":7.7,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}