
Latest Publications in IEEE Transactions on Mobile Computing

Large-Scale Mechanism Design for Networks: Superimposability and Dynamic Implementation
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-02 · DOI: 10.1109/TMC.2024.3499958
Meng Zhang;Deepanshu Vasal
Network utility maximization (NUM) is a fundamental framework for optimizing next-generation networks. However, self-interested agents with private information pose challenges due to potential system manipulation. To address these challenges, the literature on economic mechanism design has emerged. Existing mechanisms are not suited for large-scale networks due to their complexity, high implementation costs, and difficulty adapting to dynamic settings. This paper proposes a large-scale mechanism design framework that mitigates these limitations. As the number of agents $I$ approaches infinity, their incentive to misreport decreases rapidly at a rate of $\mathcal{O}(1/I^{2})$. We introduce a superimposable framework applicable to any NUM algorithm without modifications, reducing implementation costs. In the dynamic setting, the large-scale mechanism design framework introduces the decomposability of the problem, enabling agents to align their own interests with the objectives of the dynamic NUM problem. This alignment helps overcome the additional, more stringent incentive constraints encountered in dynamic settings. Extending our results to dynamic settings, we present the design of a Dynamic Large-Scale mechanism with desirable properties and the corresponding Dynamic Superimposable Large-Scale mechanism. Our numerical experiments confirm that our proposed schemes are approximately $I$ times faster than the seminal VCG mechanism.
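As a rough numerical illustration of the scaling claim (not the paper's mechanism): in a NUM problem with utilities $w_i \log x_i$ and a capacity that grows with the population, the market-clearing price is $\lambda = \sum_i w_i / C$, so a single agent's misreport shifts the shared price by $\mathcal{O}(1/I)$; since the misreporting agent's own gain is second order in that price shift, it shrinks roughly as $\mathcal{O}(1/I^{2})$. The sketch below assumes per-capita capacity $C = I$ and made-up reports.

```python
# Toy illustration only: one agent's influence on the shared NUM price
# vanishes as the population grows (capacity scales with I).
import numpy as np

def clearing_price(reports, capacity):
    # Optimal dual price for sum_i w_i*log(x_i) subject to sum_i x_i <= capacity.
    return np.sum(reports) / capacity

for I in [10, 100, 1000, 10000]:
    honest = np.ones(I)            # every agent truthfully reports w_i = 1
    lying = honest.copy()
    lying[0] = 2.0                 # agent 0 doubles its report
    C = float(I)                   # per-capita capacity (assumption)
    shift = clearing_price(lying, C) - clearing_price(honest, C)
    print(f"I={I:6d}  price shift={shift:.6f}  (compare 1/I={1.0/I:.6f})")
```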
Citations: 0
Optimization of Models and Strategies for Computation Offloading in the Internet of Vehicles: Efficiency and Trust
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-02 · DOI: 10.1109/TMC.2024.3509542
Qinghang Gao;Jianmao Xiao;Zhiyong Feng;Jingyu Li;Yang Yu;Hongqi Chen;Qiaoyun Yin
With the rapid development of the Internet of Vehicles (IoV), vehicles will generate massive data and computation demands, necessitating computation offloading at the edge. However, existing research faces challenges in efficiency and trust. In this paper, we explore IoV computation offloading from both user and edge facility provider perspectives, working to optimize the quality of experience (QoE), load balancing, and success rate based on challenges to efficiency and trust. First, two vehicle interconnection models are constructed to extend the linkable range of intra-road and inter-road vehicles while considering the maximum link time constraint. Then, a dynamic planning method is proposed, combining reputation and feedback mechanisms, which can schedule edge resources online based on the cumulative computation latency of each service side, reliability value, and historical behavior. These two phases further improve the efficiency of edge services. Subsequently, blockchain is incorporated to address the trust problem of edge collaboration, and an edge-limited Byzantine fault-tolerant local consensus mechanism is proposed to optimize consensus efficiency and ensure the reliability of edge services. Finally, this paper conducts dynamic experiments on real-world datasets, verifying the effectiveness of the proposed algorithm and models across multiple vehicle-density datasets and experimental scenarios.
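A hypothetical sketch of the reputation-and-feedback style of online edge scheduling described above; the score form, weights, and fields are illustrative assumptions rather than the paper's exact method.

```python
# Hypothetical reputation-weighted dispatch: each edge server tracks its
# cumulative computation latency, a reliability value, and a feedback-based
# reputation; a task goes to the server with the best combined score.
from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    cum_latency: float = 0.0   # accumulated computation latency (s)
    reliability: float = 1.0   # in [0, 1], from past link/compute failures
    reputation: float = 1.0    # in [0, 1], from user feedback

    def score(self, load_weight=1.0, trust_weight=2.0):
        # Higher trust is better, higher accumulated latency is worse.
        return trust_weight * self.reliability * self.reputation - load_weight * self.cum_latency

def dispatch(task_latency_est, servers):
    """Pick the best-scoring server, then charge it the task's estimated latency."""
    best = max(servers, key=lambda s: s.score())
    best.cum_latency += task_latency_est
    return best

servers = [EdgeServer("rsu-1"), EdgeServer("rsu-2", reliability=0.8)]
for t in [0.2, 0.3, 0.1]:
    chosen = dispatch(t, servers)
    print(chosen.name, round(chosen.cum_latency, 2))
```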
{"title":"Optimization of Models and Strategies for Computation Offloading in the Internet of Vehicles: Efficiency and Trust","authors":"Qinghang Gao;Jianmao Xiao;Zhiyong Feng;Jingyu Li;Yang Yu;Hongqi Chen;Qiaoyun Yin","doi":"10.1109/TMC.2024.3509542","DOIUrl":"https://doi.org/10.1109/TMC.2024.3509542","url":null,"abstract":"With the rapid development of the Internet of Vehicles (IoV), vehicles will generate massive data and computation demands, necessitating computation offloading at the edge. However, existing research faces challenges in efficiency and trust. In this paper, we explore the IoV computation offloading from both user and edge facility provider perspectives, working to optimize the quality of experience (QoE), load balancing, and success rate based on challenges to efficiency and trust. First, two vehicle interconnection models are constructed to extend the linkable range of intra-road and inter-road vehicles while considering the maximum link time constraint. Then, a dynamic planning method is proposed, combining the reputation and feedback mechanisms, which can schedule edge resources online based on the cumulative computation latency of each service side, reliability value, and historical behavior. These two phases further improve the efficiency of edge services. Subsequently, blockchain is combined to optimize the trust problem of edge collaboration, and an edge-limited Byzantine fault tolerance local consensus mechanism is proposed to optimize consensus efficiency and ensure the reliability of edge services. Finally, this paper conducts dynamic experiments on real-world datasets, verifying the effectiveness of the proposed algorithm and models in multiple vehicle density datasets and experimental scenarios.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3372-3389"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How to Collaborate: Towards Maximizing the Generalization Performance in Cross-Silo Federated Learning
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-02 · DOI: 10.1109/TMC.2024.3509852
Yuchang Sun;Marios Kountouris;Jun Zhang
Federated learning (FL) has attracted vivid attention as a privacy-preserving distributed learning framework. In this work, we focus on cross-silo FL, where clients become the model owners after training and are only concerned about the model's generalization performance on their local data. Due to the data heterogeneity issue, asking all the clients to join a single FL training process may result in model performance degradation. To investigate the effectiveness of collaboration, we first derive a generalization bound for each client when collaborating with others or when training independently. We show that the generalization performance of a client can be improved by collaborating with other clients that have more training data and similar data distributions. Our analysis allows us to formulate a client utility maximization problem by partitioning clients into multiple collaborating groups. A hierarchical clustering-based collaborative training (HCCT) scheme is then proposed, which does not need to fix in advance the number of groups. We further analyze the convergence of HCCT for general non-convex loss functions which unveils the effect of data similarity among clients. Extensive simulations show that HCCT achieves better generalization performance than baseline schemes, whereas it degenerates to independent training and conventional FL in specific scenarios.
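A minimal sketch of grouping clients by the similarity of their local label distributions with hierarchical clustering; the Jensen-Shannon metric, the average linkage, and the distance threshold are assumptions, but cutting the dendrogram by a threshold rather than a preset k mirrors the abstract's point that the number of collaborating groups need not be fixed in advance.

```python
# Cluster FL clients by label-distribution similarity; the cut uses a
# distance threshold, so the number of groups emerges from the data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Each row: one client's empirical label distribution over 4 classes (made up).
client_dists = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.60, 0.20, 0.10, 0.10],
    [0.10, 0.10, 0.40, 0.40],
    [0.10, 0.20, 0.30, 0.40],
    [0.25, 0.25, 0.25, 0.25],
])

d = pdist(client_dists, metric="jensenshannon")    # pairwise distribution distance
Z = linkage(d, method="average")
groups = fcluster(Z, t=0.2, criterion="distance")  # threshold, not a preset k
print(groups)   # e.g. [1 1 2 2 3]: three collaboration groups
```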
Citations: 0
CSTAR-FL: Stochastic Client Selection for Tree All-Reduce Federated Learning
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-02 · DOI: 10.1109/TMC.2024.3507381
Zimu Xu;Antonio Di Maio;Eric Samikwa;Torsten Braun
Federated Learning (FL) is widely applied in privacy-sensitive domains, such as healthcare, finance, and education, due to its privacy-preserving properties. However, implementing FL in dynamic wireless networks poses substantial communication challenges. Central to these challenges is the need for efficient communication strategies that can adapt to fluctuating network conditions and the growing number of participating devices, which can lead to unacceptable communication delays. In this article, we propose Stochastic Client Selection for Tree All-Reduce Federated Learning (CSTAR-FL), a novel approach that combines a probabilistic User Device (UD) selection strategy with a tree-based communication architecture to enhance communication efficiency in FL within densely populated wireless networks. By optimizing UD selection for effective model aggregation and employing an efficient data transmission structure, CSTAR-FL significantly reduces communication time and improves FL efficiency. Additionally, our approach ensures high global model accuracy in scenarios where the data distribution is heterogeneous across user devices (UDs). Extensive simulations in dynamic wireless network scenarios demonstrate that CSTAR-FL outperforms existing state-of-the-art methods, reducing model convergence time by up to 40% without losing global model accuracy. This makes CSTAR-FL a robust solution for efficient and scalable FL deployments in high-density environments.
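An illustrative sketch (assumed structure, not CSTAR-FL itself) of the two ingredients named above: stochastic selection of user devices with non-uniform probabilities, and aggregating the selected models level by level along a binary tree so the aggregation depth grows logarithmically in the number of participants.

```python
# Weighted client sampling followed by a pairwise (tree) reduction of
# the selected model vectors; values and probabilities are made up.
import numpy as np

rng = np.random.default_rng(0)

def select_clients(probs, k):
    """Sample k distinct clients, weighted by their selection probabilities."""
    probs = np.asarray(probs, dtype=float)
    return rng.choice(len(probs), size=k, replace=False, p=probs / probs.sum())

def tree_aggregate(models):
    """Sum model vectors level by level: O(log n) tree depth."""
    models = [np.asarray(m, dtype=float) for m in models]
    while len(models) > 1:
        nxt = [models[i] + models[i + 1] for i in range(0, len(models) - 1, 2)]
        if len(models) % 2:            # an odd leftover is carried to the next level
            nxt.append(models[-1])
        models = nxt
    return models[0]

local_models = [rng.normal(size=4) for _ in range(8)]
probs = [0.3, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
chosen = select_clients(probs, k=4)
global_update = tree_aggregate([local_models[i] for i in chosen]) / len(chosen)
print(chosen, np.round(global_update, 3))
```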
Citations: 0
ASMAFL: Adaptive Staleness-Aware Momentum Asynchronous Federated Learning in Edge Computing
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-02 · DOI: 10.1109/TMC.2024.3510135
Dewen Qiao;Songtao Guo;Jun Zhao;Junqing Le;Pengzhan Zhou;Mingyan Li;Xuetao Chen
Compared with synchronous federated learning (FL), asynchronous FL (AFL) has attracted increasing attention in edge computing (EC) fields because of its strong adaptability to heterogeneous application scenarios. However, the non-independent and identically distributed (Non-IID) data across devices and the staleness-aware estimation of unreliable wireless connections and limited edge resources make it much more difficult to achieve better AFL-related applications. To handle this problem, we propose an Adaptive Staleness-aware Momentum Accelerated AFL (ASMAFL) algorithm to reduce the resource consumption of heterogeneous wireless communication EC (WCEC) scenarios, as well as decrease the negative impact of Non-IID data on model training. Specifically, we first introduce the staleness-aware parameter and a unified momentum gradient descent (GD) framework to reformulate AFL. Then, we establish global convergence properties of AFL, derive an upper bound on the AFL convergence rate, and find that the bound is related to the staleness-aware parameter and Non-IIDness. Next, we formulate the bound into a minimization problem of resource consumption under given model accuracy, and the corresponding staleness-aware parameter of devices will be recomputed after each asynchronous aggregation to eliminate the differences in local models' contributions to global model aggregation. Finally, extensive experiments are carried out to validate the superiority of ASMAFL in terms of model accuracy, convergence rate, resource consumption, and handling of Non-IID data.
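A toy server-side sketch of staleness-aware momentum aggregation; the $1/(1+\tau)$ decay and the constants are illustrative assumptions, not ASMAFL's derived update, but they show how an update computed against an old global model is down-weighted before it enters the momentum buffer and the global model.

```python
# Asynchronous server that discounts a client's update by its staleness
# (global version minus the version the client started from).
import numpy as np

class AsyncServer:
    def __init__(self, dim, lr=0.1, beta=0.9):
        self.w = np.zeros(dim)       # global model
        self.m = np.zeros(dim)       # momentum buffer
        self.version = 0             # global model version counter
        self.lr, self.beta = lr, beta

    def staleness_weight(self, tau):
        return 1.0 / (1.0 + tau)     # assumed decay rule

    def apply(self, grad, client_version):
        tau = self.version - client_version               # staleness
        g = self.staleness_weight(tau) * np.asarray(grad, dtype=float)
        self.m = self.beta * self.m + (1.0 - self.beta) * g
        self.w -= self.lr * self.m
        self.version += 1
        return self.w

srv = AsyncServer(dim=3)
srv.apply(grad=[1.0, 0.0, -1.0], client_version=0)          # fresh update
print(srv.apply(grad=[1.0, 0.0, -1.0], client_version=0))   # same update, now stale by 1
```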
{"title":"ASMAFL: Adaptive Staleness-Aware Momentum Asynchronous Federated Learning in Edge Computing","authors":"Dewen Qiao;Songtao Guo;Jun Zhao;Junqing Le;Pengzhan Zhou;Mingyan Li;Xuetao Chen","doi":"10.1109/TMC.2024.3510135","DOIUrl":"https://doi.org/10.1109/TMC.2024.3510135","url":null,"abstract":"Compared with synchronous federated learning (FL), asynchronous FL (AFL) has attracted more and more attention in edge computing (EC) fields because of its strong adaptability to heterogeneous application scenarios. However, the non-independent and identically distributed (Non-IID) data across devices and the staleness-aware estimation of unreliable wireless connections and limited edge resources make it much more difficult to achieve better AFL-related applications. To handle this problem, we propose an <bold><u>A</u></b>daptive <bold><u>S</u></b>taleness-aware <bold><u>M</u></b>omentum <bold><u>A</u></b>ccelerated <bold><u>AFL</u></b> (ASMAFL) algorithm to reduce the resources consumption of heterogeneous wireless communication EC (WCEC) scenarios, as well as decrease the negative impact of Non-IID data for model training. Specifically, we first introduce the staleness-aware parameter and a unified momentum gradient descent (GD) framework to reformulate AFL. Then, we establish global convergence properties of AFL, derive an upper bound on AFL convergence rate, and find that the bound is related to the staleness-aware parameter and Non-IIDness. Next, we formulate the bound into a minimization problem of resource consumption under given model accuracy, and the corresponding staleness-aware parameter of devices will be recomputed after each asynchronous aggregation to eliminate the differences of local models’ contribution to global model aggregation. Finally, extensive experiments are carried out to validate the superiority of ASMAFL in model accuracy, convergence rate, resources consumption, Non-IID issue, etc.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3390-3406"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Joint Trajectory Planning and Task Offloading for MIMO AAV-Aided Mobile Edge Computing
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-02 · DOI: 10.1109/TMC.2024.3510272
Xuewen Dong;Shuangrui Zhao;Ximeng Liu;Zijie Di;Yuzhen Zhang;Yulong Shen
Edge computing is conducive to reducing service response time and improving service quality by pushing cloud functions to the network edge. Most existing works in edge computing focus on utility maximization of task offloading on static edges with a single antenna. Besides, trajectory planning of mobile edges, e.g., autonomous aerial vehicles (AAVs), is also rarely discussed. In this paper, we are the first to jointly discuss the deadline-aware task offloading and AAV trajectory planning problem in a multi-input multi-output (MIMO) AAV-aided mobile edge computing system. Due to discrete variables and highly coupled nonconvex constraints, we equivalently convert the original problem into a more solvable form by introducing auxiliary variables. Next, a penalty dual decomposition-based algorithm is developed to achieve a globally optimal solution to the problem. Besides, we propose a profit-based fireworks algorithm with much lower complexity to reduce the execution time for large-scale networks. Extensive evaluation results reveal that our proposed optimal algorithms significantly outperform static offloading and other baseline algorithms by 25% on average.
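For readers unfamiliar with the penalty dual decomposition style mentioned above, the sketch below runs a textbook augmented-Lagrangian loop on a toy coupled problem (generic, not the paper's algorithm): the coupling constraint is penalized, the two blocks are updated alternately, and a dual variable plus a growing penalty enforce the constraint at convergence.

```python
# Toy problem: minimize x^2 + (z - 3)^2 subject to the coupling constraint x = z.
# Augmented Lagrangian: L = x^2 + (z-3)^2 + lam*(x - z) + (rho/2)*(x - z)^2.
rho, lam = 1.0, 0.0          # penalty weight and dual variable
x, z = 0.0, 0.0
for it in range(200):
    x = (rho * z - lam) / (2.0 + rho)        # closed-form argmin over x
    z = (6.0 + rho * x + lam) / (2.0 + rho)  # closed-form argmin over z
    lam += rho * (x - z)                     # dual ascent on the coupling constraint
    if it % 20 == 19:
        rho *= 1.5                           # gradually tighten the penalty
print(round(x, 3), round(z, 3))              # both approach 1.5 with x == z
```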
{"title":"Joint Trajectory Planning and Task Offloading for MIMO AAV-Aided Mobile Edge Computing","authors":"Xuewen Dong;Shuangrui Zhao;Ximeng Liu;Zijie Di;Yuzhen Zhang;Yulong Shen","doi":"10.1109/TMC.2024.3510272","DOIUrl":"https://doi.org/10.1109/TMC.2024.3510272","url":null,"abstract":"Edge computing is conducive to reducing service response time and improving service quality by pushing cloud functions to a network's edges. Most existing works in edge computing focus on utility maximization of task offloading on static edges with a single antenna. Besides, trajectory planning of mobile edges, e.g., autonomous aerial vehicles (AAVs) is also rarely discussed. In this paper, we are the first to jointly discuss the deadline-ware task offloading and AAV trajectory planning problem in a multi-input multi-output (MIMO) AAV-aided mobile edge computing system. Due to discrete variables and highly coupling nonconvex constraints, we equivalently convert the original problem into a more solvable form by introducing auxiliary variables. Next, a penalty dual decomposition-based algorithm is developed to achieve a global optimal solution to the problem. Besides, we proposed a profit-based fireworks algorithm in a relatively lower time to reduce the execution time for large-scale networks. Extensive evaluation results reveal that our proposed optimal algorithms could significantly outperform static offloading algorithms and other algorithms by 25% on average.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3196-3210"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Edge Assisted Low-Latency Cooperative BEV Perception With Progressive State Estimation
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-02 · DOI: 10.1109/TMC.2024.3509716
Yuhan Lin;Haoran Xu;Zhimeng Yin;Guang Tan
Modern intelligent vehicles (IVs) are equipped with a variety of sensors and communication modules, empowering Advanced Driver Assistance Systems (ADAS) and enabling inter-vehicle connectivity. This paper focuses on multi-vehicle cooperative perception, with a primary objective of achieving low latency. The task involves nearby cooperative vehicles sending their camera data to an edge server, which then merges the local views to create a global traffic view. While multi-camera perception has been actively researched, existing solutions often rely on deep learning models, resulting in excessive processing latency. In contrast, we propose leveraging the state estimation technique from the robotics field for this task. We explicitly model and solve for the system state, addressing additional challenges brought by object mobility and vision obstruction. Furthermore, we introduce a progressive state estimation pipeline to further accelerate system state notifications, supported by a motion prediction method that optimizes position accuracy and perception smoothness. Experimental results demonstrate the superiority of our approach over the deep learning method, with 12.0× to 27.4× reductions in server processing delay, while maintaining mean absolute errors below 1 m.
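A minimal constant-velocity Kalman filter for a tracked object's 2-D position, the standard state-estimation building block the abstract contrasts with deep-learning pipelines; the motion model and noise settings are illustrative assumptions, and the predict step is what lets a position be reported ahead of the next camera frame.

```python
# Constant-velocity Kalman filter; state is [x, y, vx, vy], only position is observed.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)             # process noise
R = 0.25 * np.eye(2)             # measurement noise

x, P = np.zeros(4), np.eye(4)

def step(x, P, z):
    x = F @ x                            # predict (usable before the next frame arrives)
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                  # update with measurement z = [x_obs, y_obs]
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 0.5]), np.array([1.2, 0.6]), np.array([1.4, 0.7])]:
    x, P = step(x, P, z)
print(np.round(x, 3))                    # estimated position and velocity
```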
{"title":"Edge Assisted Low-Latency Cooperative BEV Perception With Progressive State Estimation","authors":"Yuhan Lin;Haoran Xu;Zhimeng Yin;Guang Tan","doi":"10.1109/TMC.2024.3509716","DOIUrl":"https://doi.org/10.1109/TMC.2024.3509716","url":null,"abstract":"Modern intelligent vehicles (IVs) are equipped with a variety of sensors and communication modules, empowering Advanced Driver Assistance Systems (ADAS) and enabling inter-vehicle connectivity. This paper focuses on multi-vehicle cooperative perception, with a primary objective of achieving low latency. The task involves nearby cooperative vehicles sending their camera data to an edge server, which then merges the local views to create a global traffic view. While multi-camera perception has been actively researched, existing solutions often rely on deep learning models, resulting in excessive processing latency. In contrast, we propose leveraging the <italic>state estimation</i> technique from the robotics field for this task. We explicitly model and solve for the system state, addressing additional challenges brought by object mobility and vision obstruction. Furthermore, we introduce a <italic>progressive state estimation</i> pipeline to further accelerate system state notifications, supported by a motion prediction method that optimizes position accuracy and perception smoothness. Experimental results demonstrate the superiority of our approach over the deep learning method, with 12.0 × to 27.4 × reductions in server processing delay, while maintaining mean absolute errors below 1 m.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3346-3358"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Resource Allocation for Metaverse Experience Optimization: A Multi-Objective Multi-Agent Evolutionary Reinforcement Learning Approach
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-02 · DOI: 10.1109/TMC.2024.3509680
Lei Feng;Xiaoyi Jiang;Yao Sun;Dusit Niyato;Yu Zhou;Shiyi Gu;Zhixiang Yang;Yang Yang;Fanqin Zhou
In the Metaverse, real-time, concurrent services such as virtual classrooms and immersive gaming require local graphic rendering to maintain low latency. However, the limited processing power and battery capacity of user devices make it challenging to balance Quality of Experience (QoE) and terminal energy consumption. In this paper, we investigate power control and rendering capacity allocation by formulating them as a multi-objective optimization problem (MOP). This problem aims to minimize energy consumption while maximizing Meta-Immersion (MI), a metric that integrates objective network performance with subjective user perception. To solve this problem, we propose a Multi-Objective Multi-Agent Evolutionary Reinforcement Learning with User-Object-Attention (M2ERL-UOA) algorithm. The algorithm employs a prediction-driven evolutionary learning mechanism for the agents, coupled with optimized rendering capacity decisions for virtual objects. The algorithm can yield a superior Pareto front that attains the Nash equilibrium. Simulation results demonstrate that the proposed algorithm generates Pareto fronts, adapts effectively to dynamic user preferences, and significantly reduces decision-making time compared to several benchmarks.
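To make the two competing objectives concrete, the helper below performs plain non-dominated filtering over made-up (energy, Meta-Immersion) candidates; this only illustrates what a Pareto front is, not the M2ERL-UOA algorithm.

```python
# Keep the candidates that no other candidate beats on both objectives
# (lower energy and higher Meta-Immersion).
def pareto_front(points):
    """points: list of (energy, mi) pairs; returns the non-dominated ones."""
    front = []
    for e, m in points:
        dominated = any(e2 <= e and m2 >= m and (e2, m2) != (e, m)
                        for e2, m2 in points)
        if not dominated:
            front.append((e, m))
    return sorted(front)

candidates = [(1.0, 0.40), (1.2, 0.55), (1.5, 0.54), (2.0, 0.70), (2.5, 0.69)]
print(pareto_front(candidates))   # [(1.0, 0.4), (1.2, 0.55), (2.0, 0.7)]
```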
{"title":"Resource Allocation for Metaverse Experience Optimization: A Multi-Objective Multi-Agent Evolutionary Reinforcement Learning Approach","authors":"Lei Feng;Xiaoyi Jiang;Yao Sun;Dusit Niyato;Yu Zhou;Shiyi Gu;Zhixiang Yang;Yang Yang;Fanqin Zhou","doi":"10.1109/TMC.2024.3509680","DOIUrl":"https://doi.org/10.1109/TMC.2024.3509680","url":null,"abstract":"In the Metaverse, real-time, concurrent services such as virtual classrooms and immersive gaming require local graphic rendering to maintain low latency. However, the limited processing power and battery capacity of user devices make it challenging to balance Quality of Experience (QoE) and terminal energy consumption. In this paper, we investigate a multi-objective optimization problem (MOP) regarding power control and rendering capacity allocation by formulating it as a multi-objective optimization problem. This problem aims to minimize energy consumption while maximizing Meta-Immersion (MI), a metric that integrates objective network performance with subjective user perception. To solve this problem, we propose a Multi-Objective Multi-Agent Evolutionary Reinforcement Learning with User-Object-Attention (M2ERL-UOA) algorithm. The algorithm employs a prediction-driven evolutionary learning mechanism for multi-agents, coupled with optimized rendering capacity decisions for virtual objects. The algorithm can yield a superior Pareto front that attains the Nash equilibrium. Simulation results demonstrate that the proposed algorithm can generate Pareto fronts, effectively adapts to dynamic user preferences, and significantly reduces decision-making time compared to several benchmarks.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3473-3488"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Novel Insights From a Cross-Layer Analysis of TCP and UDP Traffic Over Full-Duplex WLANs
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-02 · DOI: 10.1109/TMC.2024.3510099
Vinay U. Pai;Neelesh B. Mehta;Chandramani Singh
Full-duplex (FD) communication is a promising new technology that enables simultaneous transmission and reception in wireless local area networks (WLANs). The benefits of FD on the medium access control (MAC) layer throughput in IEEE 802.11 WLANs are well-documented. However, cross-layer interactions between the FD MAC protocol and transport layer protocols such as Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are less explored. We consider a WLAN with uplink and downlink TCP flows as well as UDP flows between stations (STAs) and a server via an FD access point (AP). We study an STA-initiated FD MAC protocol in which the AP can transmit on the downlink while receiving on the uplink. Using a novel FD-specific STA saturation approximation, Markov renewal theory, and fixed-point analysis, we derive novel expressions for the uplink and downlink TCP and UDP saturation throughputs. Our analysis shows that the AP is no longer a bottleneck and may be unsaturated unlike in conventional half-duplex (HD) WLANs. Despite greater contention and cross-link interference between STAs, FD achieves a higher TCP throughput than HD. FD causes a significant degradation in the UDP throughput. In the unsaturated regime, FD achieves a lower average downlink TCP packet delay than HD.
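The fixed-point analysis mentioned above follows the classic Bianchi-style template sketched below for a plain half-duplex, saturated WLAN (no TCP dynamics and no FD cross-link interference, so far simpler than the paper's model): a station's attempt probability depends on the collision probability, which in turn depends on every station's attempt probability, and the pair is iterated to a fixed point.

```python
# Bianchi-style saturation fixed point for n contending stations.
def attempt_prob(p, W=16, m=6):
    """Per-slot transmission probability given conditional collision probability p."""
    num = 2.0 * (1.0 - 2.0 * p)
    den = (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
    return num / den

def solve_fixed_point(n_stations, iters=500):
    p = 0.1
    for _ in range(iters):
        tau = attempt_prob(p)
        p_new = 1.0 - (1.0 - tau) ** (n_stations - 1)   # collision seen by one STA
        p = 0.5 * p + 0.5 * p_new                       # damped update for stability
    return tau, p

tau, p = solve_fixed_point(n_stations=10)
print(round(tau, 4), round(p, 4))
```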
{"title":"Novel Insights From a Cross-Layer Analysis of TCP and UDP Traffic Over Full-Duplex WLANs","authors":"Vinay U. Pai;Neelesh B. Mehta;Chandramani Singh","doi":"10.1109/TMC.2024.3510099","DOIUrl":"https://doi.org/10.1109/TMC.2024.3510099","url":null,"abstract":"Full-duplex (FD) communication is a promising new technology that enables simultaneous transmission and reception in wireless local area networks (WLANs). The benefits of FD on the medium access control (MAC) layer throughput in IEEE 802.11 WLANs are well-documented. However, cross-layer interactions between the FD MAC protocol and transport layer protocols such as Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are less explored. We consider a WLAN with uplink and downlink TCP flows as well as UDP flows between stations (STAs) and a server via an FD access point (AP). We study an STA-initiated FD MAC protocol in which the AP can transmit on the downlink while receiving on the uplink. Using a novel FD-specific STA saturation approximation, Markov renewal theory, and fixed-point analysis, we derive novel expressions for the uplink and downlink TCP and UDP saturation throughputs. Our analysis shows that the AP is no longer a bottleneck and may be unsaturated unlike in conventional half-duplex (HD) WLANs. Despite greater contention and cross-link interference between STAs, FD achieves a higher TCP throughput than HD. FD causes a significant degradation in the UDP throughput. In the unsaturated regime, FD achieves a lower average downlink TCP packet delay than HD.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3288-3301"},"PeriodicalIF":7.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Do as the Romans Do: Location Imitation-Based Edge Task Offloading for Privacy Protection
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-11-29 · DOI: 10.1109/TMC.2024.3509418
Jiahao Zhu;Lu Zhao;Jian Zhou;Hui Cai;Fu Xiao
In edge computing, a user prefers offloading his/her task to nearby edge servers to maximize the offloading utility. However, this inevitably exposes the user's location privacy information when suffering from the side-channel attacks based on offloading decision behaviors and Received Signal Strength Indicators (RSSI). Existing works only consider the scenario with one untrusted edge server or defend only against one of the attacks. In this paper, we first study the edge task offloading problem with comprehensive privacy protection against these side-channel attacks from multiple edge servers. To address this problem while ensuring satisfactory offloading utility, we develop a Location Imitation-based Edge Task Offloading approach LITO. Specifically, we first determine a suitable perturbation region centered at the user's real location for a balance between offloading utility and privacy protection, and then propose a modified Laplace mechanism to generate a fake location meeting geo-indistinguishability within the region. Subsequently, to mislead the side-channel attacks to the fake location, we design an approximate algorithm and a transmit power control strategy to imitate the offloading decisions and RSSIs at the fake location, respectively. Theoretical analysis and experimental evaluations demonstrate the performance of LITO in improving privacy protection and guaranteeing offloading utility.
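LITO's modified Laplace mechanism and its perturbation-region constraint are not reproduced here, but the standard planar-Laplace sampler commonly used for geo-indistinguishability can be sketched as follows; coordinates are treated as planar metres, and the epsilon value and example location are illustrative.

```python
# Sample a fake location around a true planar location so that
# epsilon-geo-indistinguishability holds; expected displacement is 2/epsilon.
import numpy as np
from scipy.special import lambertw

def planar_laplace(location_xy, epsilon, rng=np.random.default_rng()):
    theta = rng.uniform(0.0, 2.0 * np.pi)        # random direction
    p = rng.uniform(0.0, 1.0)
    # Inverse CDF of the planar-Laplace radius, C(r) = 1 - (1 + eps*r)*exp(-eps*r).
    r = -(lambertw((p - 1.0) / np.e, k=-1).real + 1.0) / epsilon
    return np.asarray(location_xy, dtype=float) + r * np.array([np.cos(theta), np.sin(theta)])

# With epsilon = 0.01 per metre, the fake location lies about 200 m away on average.
print(planar_laplace([500.0, 300.0], epsilon=0.01))
```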
{"title":"Do as the Romans Do: Location Imitation-Based Edge Task Offloading for Privacy Protection","authors":"Jiahao Zhu;Lu Zhao;Jian Zhou;Hui Cai;Fu Xiao","doi":"10.1109/TMC.2024.3509418","DOIUrl":"https://doi.org/10.1109/TMC.2024.3509418","url":null,"abstract":"In edge computing, a user prefers offloading his/her task to nearby edge servers to maximize the offloading utility. However, this inevitably exposes the user's location privacy information when suffering from the side-channel attacks based on offloading decision behaviors and Received Signal Strength Indicators (RSSI). Existing works only consider the scenario with one untrusted edge server or defend only against one of the attacks. In this paper, we first study the edge task offloading problem with comprehensive privacy protection against these side-channel attacks from multiple edge servers. To address this problem while ensuring satisfactory offloading utility, we develop a <underline>L</u>ocation <underline>I</u>mitation-based Edge <underline>T</u>ask <underline>O</u>ffloading approach <italic>LITO</i>. Specifically, we first determine a suitable perturbation region centered at the user's real location for a balance between offloading utility and privacy protection, and then propose a modified Laplace mechanism to generate a fake location meeting geo-indistinguishability within the region. Subsequently, to mislead the side-channel attacks to the fake location, we design an approximate algorithm and a transmit power control strategy to imitate the offloading decisions and RSSIs at the fake location, respectively. Theoretical analysis and experimental evaluations demonstrate the performance of <italic>LITO</i> in improving privacy protection and guaranteeing offloading utility.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"3456-3472"},"PeriodicalIF":7.7,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143583273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0