A Survey of Recent Advances in Optimization Methods for Wireless Communications
Pub Date: 2024-08-14 | DOI: 10.1109/JSAC.2024.3443759
Ya-Feng Liu;Tsung-Hui Chang;Mingyi Hong;Zheyu Wu;Anthony Man-Cho So;Eduard A. Jorswieck;Wei Yu
Mathematical optimization is now widely regarded as an indispensable modeling and solution tool for the design of wireless communication systems. While optimization has played a significant role in the revolutionary progress of wireless communication and networking technologies from 1G to 5G and onto the future 6G, innovations in wireless technologies have also substantially transformed the nature of the underlying mathematical optimization problems upon which system designs are based, and have sparked significant innovations in the development of methodologies to understand, analyze, and solve those problems. In this paper, we provide a comprehensive survey of recent advances in mathematical optimization theory and algorithms for wireless communication system design. We begin by illustrating common features of mathematical optimization problems arising in wireless communication system design. We discuss various scenarios and use cases and their associated mathematical structures from an optimization perspective. We then provide an overview of recently developed optimization techniques in areas ranging from nonconvex optimization, global optimization, and integer programming to distributed optimization and learning-based optimization. The key to successfully solving mathematical optimization problems lies in carefully choosing or developing suitable algorithms (or neural network architectures) that can exploit the underlying problem structure. We conclude the paper by identifying several open research challenges and outlining future research directions.
{"title":"A Survey of Recent Advances in Optimization Methods for Wireless Communications","authors":"Ya-Feng Liu;Tsung-Hui Chang;Mingyi Hong;Zheyu Wu;Anthony Man-Cho So;Eduard A. Jorswieck;Wei Yu","doi":"10.1109/JSAC.2024.3443759","DOIUrl":"10.1109/JSAC.2024.3443759","url":null,"abstract":"Mathematical optimization is now widely regarded as an indispensable modeling and solution tool for the design of wireless communications systems. While optimization has played a significant role in the revolutionary progress in wireless communication and networking technologies from 1G to 5G and onto the future 6G, the innovations in wireless technologies have also substantially transformed the nature of the underlying mathematical optimization problems upon which the system designs are based and have sparked significant innovations in the development of methodologies to understand, to analyze, and to solve those problems. In this paper, we provide a comprehensive survey of recent advances in mathematical optimization theory and algorithms for wireless communication system design. We begin by illustrating common features of mathematical optimization problems arising in wireless communication system design. We discuss various scenarios and use cases and their associated mathematical structures from an optimization perspective. We then provide an overview of recently developed optimization techniques in areas ranging from nonconvex optimization, global optimization, and integer programming, to distributed optimization and learning-based optimization. The key to successful solution of mathematical optimization problems is in carefully choosing or developing suitable algorithms (or neural network architectures) that can exploit the underlying problem structure. We conclude the paper by identifying several open research challenges and outlining future research directions.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 11","pages":"2992-3031"},"PeriodicalIF":0.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141986283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint Localization and Communication Enhancement in Uplink Integrated Sensing and Communications System With Clock Asynchronism
Pub Date: 2024-08-06 | DOI: 10.1109/JSAC.2024.3414625
Xu Chen;Xinxin He;Zhiyong Feng;Zhiqing Wei;Qixun Zhang;Xin Yuan;Ping Zhang
In this paper, we propose a joint single-base localization and communication enhancement scheme for the uplink (UL) integrated sensing and communications (ISAC) system with asynchronism, which can achieve accurate single-base localization of user equipment (UE) and significantly improve communication reliability despite the timing offset (TO) caused by the clock asynchronism between the UE and the base station (BS). Our proposed scheme integrates CSI enhancement into the multiple signal classification (MUSIC)-based AoA estimation and thus imposes no extra complexity on the ISAC system. We further exploit a MUSIC-based range estimation method and prove that it can suppress the time-varying TO-related phase terms. Exploiting the AoA and range estimates of the UE, we can estimate its location. Finally, we propose a joint CSI and data signal-based localization scheme that coherently exploits the data and CSI signals to improve the AoA and range estimation, which further enhances the single-base localization of the UE. Extensive simulation results show that the enhanced CSI achieves bit error rate performance equivalent to that of the minimum mean square error (MMSE) CSI estimator. The proposed joint CSI and data signal-based localization scheme can achieve decimeter-level localization accuracy despite the clock asynchronism and improves the localization root mean square error (RMSE) by about 6 dB compared with the maximum likelihood estimation (MLE)-based benchmark method.
{"title":"Joint Localization and Communication Enhancement in Uplink Integrated Sensing and Communications System With Clock Asynchronism","authors":"Xu Chen;Xinxin He;Zhiyong Feng;Zhiqing Wei;Qixun Zhang;Xin Yuan;Ping Zhang","doi":"10.1109/JSAC.2024.3414625","DOIUrl":"10.1109/JSAC.2024.3414625","url":null,"abstract":"In this paper, we propose a joint single-base localization and communication enhancement scheme for the uplink (UL) integrated sensing and communications (ISAC) system with asynchronism, which can achieve accurate single-base localization of user equipment (UE) and significantly improve the communication reliability despite the existence of timing offset (TO) due to the clock asynchronism between UE and base station (BS). Our proposed scheme integrates the CSI enhancement into the multiple signal classification (MUSIC)-based AoA estimation and thus imposes no extra complexity on the ISAC system. We further exploit a MUSIC-based range estimation method and prove that it can suppress the time-varying TO-related phase terms. Exploiting the AoA and range estimation of UE, we can estimate the location of UE. Finally, we propose a joint CSI and data signals-based localization scheme that can coherently exploit the data and the CSI signals to improve the AoA and range estimation, which further enhances the single-base localization of UE. The extensive simulation results show that the enhanced CSI can achieve equivalent bit error rate performance to the minimum mean square error (MMSE) CSI estimator. The proposed joint CSI and data signals-based localization scheme can achieve decimeter-level localization accuracy despite the existing clock asynchronism and improve the localization root mean square error (RMSE) by about 6 dB compared with the maximum likelihood esimation (MLE)-based benchmark method.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 10","pages":"2659-2673"},"PeriodicalIF":0.0,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141899816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Floor-Plan-Aided Indoor Localization: Zero-Shot Learning Framework, Data Sets, and Prototype
Pub Date: 2024-08-01 | DOI: 10.1109/JSAC.2024.3413994
Haiyao Yu;Changyang She;Yunkai Hu;Geng Wang;Rui Wang;Branka Vucetic;Yonghui Li
Machine learning has been considered a promising approach for indoor localization. Nevertheless, sample efficiency, scalability, and generalization ability remain open issues when implementing learning-based algorithms in practical systems. In this paper, we establish a zero-shot learning framework that does not need real-world measurements in a new communication environment. Specifically, a graph neural network that is scalable to the number of access points (APs) and mobile devices (MDs) is used to obtain coarse locations of MDs. Based on the coarse locations, the floor-plan image between an MD and an AP is exploited to improve localization accuracy in a floor-plan-aided deep neural network. To further improve the generalization ability, we develop a synthetic data generator that provides synthetic data samples in different scenarios where real-world samples are not available. We implement the framework in a prototype that estimates the locations of MDs. Experimental results show that our zero-shot learning method reduces localization errors by around 30% to 55% compared with three baselines from the existing literature.
{"title":"Floor-Plan-Aided Indoor Localization: Zero-Shot Learning Framework, Data Sets, and Prototype","authors":"Haiyao Yu;Changyang She;Yunkai Hu;Geng Wang;Rui Wang;Branka Vucetic;Yonghui Li","doi":"10.1109/JSAC.2024.3413994","DOIUrl":"10.1109/JSAC.2024.3413994","url":null,"abstract":"Machine learning has been considered a promising approach for indoor localization. Nevertheless, the sample efficiency, scalability, and generalization ability remain open issues of implementing learning-based algorithms in practical systems. In this paper, we establish a zero-shot learning framework that does not need real-world measurements in a new communication environment. Specifically, a graph neural network that is scalable to the number of access points (APs) and mobile devices (MDs) is used for obtaining coarse locations of MDs. Based on the coarse locations, the floor-plan image between an MD and an AP is exploited to improve localization accuracy in a floor-plan-aided deep neural network. To further improve the generalization ability, we develop a synthetic data generator that provides synthetic data samples in different scenarios, where real-world samples are not available. We implement the framework in a prototype that estimates the locations of MDs. Experimental results show that our zero-shot learning method can reduce localization errors by around 30% to 55% compared with three baselines from the existing literature.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 9","pages":"2472-2486"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141877368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Data and Model-Driven Deep Learning Approach to Robust Downlink Beamforming Optimization
Pub Date: 2024-07-31 | DOI: 10.1109/JSAC.2024.3431583
Kai Liang;Gan Zheng;Zan Li;Kai-Kit Wong;Chan-Byoung Chae
This paper investigates the optimization of the probabilistically robust transmit beamforming problem with channel uncertainties in the multiuser multiple-input single-output (MISO) downlink transmission. This problem poses significant analytical and computational challenges. Currently, the state-of-the-art optimization method relies on convex restrictions as tractable approximations to ensure robustness against Gaussian channel uncertainties. However, this method not only exhibits high computational complexity and suffers from the rank relaxation issue but also yields conservative solutions. In this paper, we propose an unsupervised deep learning-based approach that incorporates the sampling of channel uncertainties in the training process to optimize the probabilistic system performance. We introduce a model-driven learning approach that defines a new beamforming structure with trainable parameters to account for channel uncertainties. Additionally, we employ a graph neural network to efficiently infer the key beamforming parameters. We successfully apply this approach to the minimum rate quantile maximization problem subject to outage and total power constraints. Furthermore, we propose a bisection search method to address the more challenging power minimization problem with probabilistic rate constraints by leveraging the aforementioned approach. Numerical results confirm that our approach achieves non-conservative robust performance, higher data rates, greater power efficiency, and faster execution compared to state-of-the-art optimization methods.
{"title":"A Data and Model-Driven Deep Learning Approach to Robust Downlink Beamforming Optimization","authors":"Kai Liang;Gan Zheng;Zan Li;Kai-Kit Wong;Chan-Byoung Chae","doi":"10.1109/JSAC.2024.3431583","DOIUrl":"10.1109/JSAC.2024.3431583","url":null,"abstract":"This paper investigates the optimization of the probabilistically robust transmit beamforming problem with channel uncertainties in the multiuser multiple-input single-output (MISO) downlink transmission. This problem poses significant analytical and computational challenges. Currently, the state-of-the-art optimization method relies on convex restrictions as tractable approximations to ensure robustness against Gaussian channel uncertainties. However, this method not only exhibits high computational complexity and suffers from the rank relaxation issue but also yields conservative solutions. In this paper, we propose an unsupervised deep learning-based approach that incorporates the sampling of channel uncertainties in the training process to optimize the probabilistic system performance. We introduce a model-driven learning approach that defines a new beamforming structure with trainable parameters to account for channel uncertainties. Additionally, we employ a graph neural network to efficiently infer the key beamforming parameters. We successfully apply this approach to the minimum rate quantile maximization problem subject to outage and total power constraints. Furthermore, we propose a bisection search method to address the more challenging power minimization problem with probabilistic rate constraints by leveraging the aforementioned approach. Numerical results confirm that our approach achieves non-conservative robust performance, higher data rates, greater power efficiency, and faster execution compared to state-of-the-art optimization methods.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 11","pages":"3278-3292"},"PeriodicalIF":0.0,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141862201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stochastic Long-Term Energy Optimization in Digital Twin-Assisted Heterogeneous Edge Networks
Pub Date: 2024-07-22 | DOI: 10.1109/JSAC.2024.3431581
Yingsheng Peng;Jingpu Duan;Jinbei Zhang;Weichao Li;Yong Liu;Fuli Jiang
Mobile edge computing (MEC) and digital twin (DT) technologies have been recognized as key enablers for the next generation of industrial Internet of Things (IoT) applications. Existing DT-assisted edge network resource optimization solutions mostly focus on short-term performance, while long-term resource optimization has not been well studied. Thus, this paper introduces a digital twin-assisted heterogeneous edge network (DTHEN), aiming to minimize long-term energy consumption by jointly optimizing transmit power and computing resources. To solve the stochastic optimization problem, we propose a long-term queue-aware energy minimization (LQEM) scheme for joint communication and computing resource management. The proposed scheme uses Lyapunov optimization to transform the original problem with long-term time constraints into a deterministic upper-bound problem for each time slot, decouples it into three independent sub-problems, and solves each sub-problem separately. We then theoretically prove the asymptotic optimality of the LQEM scheme and characterize the tradeoff between system energy consumption and task queue backlog. Finally, experimental results verify the performance analysis of the LQEM scheme, demonstrate its superiority over several benchmark schemes, and reveal the impact of various parameters on the system.
{"title":"Stochastic Long-Term Energy Optimization in Digital Twin-Assisted Heterogeneous Edge Networks","authors":"Yingsheng Peng;Jingpu Duan;Jinbei Zhang;Weichao Li;Yong Liu;Fuli Jiang","doi":"10.1109/JSAC.2024.3431581","DOIUrl":"10.1109/JSAC.2024.3431581","url":null,"abstract":"Mobile edge computing (MEC) and digital twin (DT) technologies have been recognized as key enabling factors for the next generation of industrial Internet of Things (IoT) applications. In existing works, DT-assisted edge network resource optimization solutions mostly focus on short-term performance optimization, and long-term resource optimization has not been well studied. Thus, this paper introduces a digital twin-assisted heterogeneous edge network (DTHEN), aiming to minimize long-term energy consumption by jointly optimizing transmit power and computing resource. To solve the stochastic optimization problem, we propose a long-term queue-aware energy minimization (LQEM) scheme for joint communication and computing resource management. The proposed scheme uses Lyapunov optimization to transform the original problem with long-term time constraints into a deterministic upper bound problem for each time slot, decouples it into three independent sub-problems, and solves each sub-problem separately. We then theoretically prove the asymptotic optimality of the LQEM scheme and the tradeoff between system energy consumption and task queue backlog. Finally, experimental results verify the performance analysis of the LQEM scheme, demonstrating its superiority over several benchmark schemes, and reveal the impact of various parameters on the system.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 11","pages":"3157-3171"},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141754872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Twin-Assisted Data-Driven Optimization for Reliable Edge Caching in Wireless Networks
Pub Date: 2024-07-22 | DOI: 10.1109/JSAC.2024.3431575
Zifan Zhang;Yuchen Liu;Zhiyuan Peng;Mingzhe Chen;Dongkuan Xu;Shuguang Cui
Optimizing edge caching is crucial for the advancement of next-generation (nextG) wireless networks, ensuring high-speed and low-latency services for mobile users. Existing data-driven optimization approaches often lack awareness of the distribution of random data variables and focus solely on optimizing cache hit rates, neglecting potential reliability concerns, such as base station overload and unbalanced cache issues. This oversight can result in system crashes and degraded user experience. To bridge this gap, we introduce a novel digital twin-assisted optimization framework, called D-REC, which integrates reinforcement learning (RL) with diverse intervention modules to ensure reliable caching in nextG wireless networks. We first develop a joint vertical and horizontal twinning approach to efficiently create network digital twins, which are then employed by D-REC as RL optimizers and safeguards, providing ample datasets for training and predictive evaluation of our cache replacement policy. By incorporating reliability modules into a constrained Markov decision process, D-REC can adaptively adjust actions, rewards, and states to comply with advantageous constraints, minimizing the risk of network failures. Theoretical analysis demonstrates comparable convergence rates between D-REC and vanilla data-driven methods without compromising caching performance. Extensive experiments validate that D-REC outperforms conventional approaches in cache hit rate and load balancing while effectively enforcing predetermined reliability intervention modules.
{"title":"Digital Twin-Assisted Data-Driven Optimization for Reliable Edge Caching in Wireless Networks","authors":"Zifan Zhang;Yuchen Liu;Zhiyuan Peng;Mingzhe Chen;Dongkuan Xu;Shuguang Cui","doi":"10.1109/JSAC.2024.3431575","DOIUrl":"10.1109/JSAC.2024.3431575","url":null,"abstract":"Optimizing edge caching is crucial for the advancement of next-generation (nextG) wireless networks, ensuring high-speed and low-latency services for mobile users. Existing data-driven optimization approaches often lack awareness of the distribution of random data variables and focus solely on optimizing cache hit rates, neglecting potential reliability concerns, such as base station overload and unbalanced cache issues. This oversight can result in system crashes and degraded user experience. To bridge this gap, we introduce a novel digital twin-assisted optimization framework, called D-REC, which integrates reinforcement learning (RL) with diverse intervention modules to ensure reliable caching in nextG wireless networks. We first develop a joint vertical and horizontal twinning approach to efficiently create network digital twins, which are then employed by D-REC as RL optimizers and safeguards, providing ample datasets for training and predictive evaluation of our cache replacement policy. By incorporating reliability modules into a constrained Markov decision process, D-REC can adaptively adjust actions, rewards, and states to comply with advantageous constraints, minimizing the risk of network failures. Theoretical analysis demonstrates comparable convergence rates between D-REC and vanilla data-driven methods without compromising caching performance. Extensive experiments validate that D-REC outperforms conventional approaches in cache hit rate and load balancing while effectively enforcing predetermined reliability intervention modules.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 11","pages":"3306-3320"},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141754869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure Cell-Free Integrated Sensing and Communication in the Presence of Information and Sensing Eavesdroppers
Pub Date: 2024-07-22 | DOI: 10.1109/JSAC.2024.3431582
Zixiang Ren;Jie Xu;Ling Qiu;Derrick Wing Kwan Ng
This paper studies a secure cell-free integrated sensing and communication (ISAC) system, in which multiple ISAC transmitters collaboratively send confidential information to multiple communication users (CUs) and concurrently conduct target detection. Different from prior works investigating communication security against potential information eavesdropping, we consider the security of both communication and sensing in the presence of information and sensing eavesdroppers that aim to intercept confidential communication information and extract target information, respectively. To this end, we optimize the joint information and sensing transmit beamforming at these ISAC transmitters for secure cell-free ISAC. Our objective is to maximize the detection probability over a designated sensing area while ensuring the minimum signal-to-interference-plus-noise ratio (SINR) requirements at the CUs. Our formulation also takes into account the maximum tolerable signal-to-noise ratio (SNR) constraints at information eavesdroppers for ensuring the confidentiality of information transmission, and the maximum detection probability constraints at sensing eavesdroppers for preserving sensing privacy. The formulated secure joint transmit beamforming problem is highly non-convex due to the intricate interplay between the detection probabilities, beamforming vectors, and SINR constraints. Fortunately, through strategic manipulation and application of the semidefinite relaxation (SDR) technique, we obtain the globally optimal solution to the design problem by rigorously verifying the tightness of the SDR. Furthermore, we present two alternative joint beamforming designs based on sensing SNR maximization over the specific sensing area and on coordinated beamforming, respectively. Numerical results reveal the benefits of our proposed design over these alternative benchmarks.
{"title":"Secure Cell-Free Integrated Sensing and Communication in the Presence of Information and Sensing Eavesdroppers","authors":"Zixiang Ren;Jie Xu;Ling Qiu;Derrick Wing Kwan Ng","doi":"10.1109/JSAC.2024.3431582","DOIUrl":"10.1109/JSAC.2024.3431582","url":null,"abstract":"This paper studies a secure cell-free integrated sensing and communication (ISAC) system, in which multiple ISAC transmitters collaboratively send confidential information to multiple communication users (CUs) and concurrently conduct target detection. Different from prior works investigating communication security against potential information eavesdropping, we consider the security of both communication and sensing in the presence of information and sensing eavesdroppers that aim to intercept confidential communication information and extract target information, respectively. Towards this end, we optimize the joint information and sensing transmit beamforming at these ISAC transmitters for secure cell-free ISAC. Our objective is to maximize the detection probability over a designated sensing area while ensuring the minimum signal-to-interference-plus-noise-ratio (SINR) requirements at CUs. Our formulation also takes into account the maximum tolerable signal-to-noise ratio (SNR) constraints at information eavesdroppers for ensuring the confidentiality of information transmission, and the maximum detection probability constraints at sensing eavesdroppers for preserving sensing privacy. The formulated secure joint transmit beamforming problem is highly non-convex due to the intricate interplay between the detection probabilities, beamforming vectors, and SINR constraints. Fortunately, through strategic manipulation and via applying the semidefinite relaxation (SDR) technique, we successfully obtain the globally optimal solution to the design problem by rigorously verifying the tightness of SDR. Furthermore, we present two alternative joint beamforming designs based on the sensing SNR maximization over the specific sensing area and the coordinated beamforming, respectively. Numerical results reveal the benefits of our proposed design over these alternative benchmarks.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 11","pages":"3217-3231"},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141754868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated Learning (FL) has emerged as a fundamental learning paradigm to harness massive data scattered across geo-distributed edge devices in a privacy-preserving way. Given the heterogeneous deployment of edge devices, however, their data are usually non-IID, introducing significant challenges to FL, including degraded training accuracy, intensive communication costs, and high computing complexity. To cope with these challenges, traditional approaches typically utilize adaptive mechanisms, which may suffer from scalability issues, increased computational overhead, and limited adaptability to diverse edge environments. To address this, this paper instead leverages the observation that computation offloading involves inherent functionalities such as node matching and service correlation to achieve data reshaping, and proposes F