Pub Date : 2026-03-09 DOI: 10.1109/TMC.2026.3659976
Correction to “PrivGuardInfer: Channel-Level End-Edge Collaborative Inference Strategy Protecting Original Inputs and Sensitive Attributes”
Yunhao Yao;Zhiqiang Wang;Puhan Luo;Yihang Cheng;Jiahui Hou;Xiang-Yang Li
IEEE Transactions on Mobile Computing, vol. 25, no. 4, pp. 5952-5952
In the above article [1], there are several errors. The corrections are listed below: • Equation [page 9, left column]:
Pub Date : 2026-02-04 DOI: 10.1109/TMC.2026.3653591
2025 Reviewers List*
IEEE Transactions on Mobile Computing, vol. 25, no. 3, pp. 4455-4477
Pub Date : 2025-12-23 DOI: 10.1109/TMC.2025.3647110
Fall Risk Prediction Method Based on Human Electrostatic Field and Stacking Ensemble Learning Algorithm
Sichao Qin;Jiaao Yan;Ziyi Jiao;Weijie Yuan;Xi Chen
IEEE Transactions on Mobile Computing, vol. 25, no. 3, pp. 4443-4454
Accurate fall risk prediction is crucial for early intervention and prevention, effectively reducing the incidence of falls and the associated harm. This paper proposes a non-contact gait detection and fall risk prediction method based on the human electrostatic field and a Stacking ensemble learning algorithm. A theoretical model for gait detection based on the human electrostatic field is established, and an experimental scheme is designed. The electrostatic gait measurement system is used to collect electrostatic gait signals from healthy young individuals, healthy elderly individuals, and elderly individuals with a history of falls. Gait features, including 28-dimensional quantifiable characteristics, are proposed for evaluating human balance and motor abilities, covering four aspects: gait time parameters, gait symmetry based on ratios and signal similarity, gait stability based on the maximum Lyapunov exponent and entropy information, and gait time parameter variability. A hybrid feature reduction method based on Particle Swarm Optimization (PSO) is used to obtain the optimal feature subset. Fall risk prediction models based on single classifiers (DT, SVM, KNN, and NB) are constructed using both the original feature set and the optimal feature subset. The single classifier based on the optimal feature subset achieves better classification performance. Furthermore, a Stacking ensemble learning model using LightGBM as the meta-learner is developed, achieving an accuracy of 97.78%. This study provides a novel approach for fall risk prediction that can predict the likelihood of falls and reduce the probability of their occurrence.
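The two-level Stacking pipeline described above — base classifiers whose predictions become the input features of a meta-learner — can be sketched in miniature. In this illustrative stand-in, three simple threshold rules play the role of the DT/SVM/KNN/NB base learners and a hand-rolled perceptron plays the role of the LightGBM meta-learner; the "gait feature" data and all names are invented for the example, not taken from the paper.

```python
def threshold_learner(X, y, feat):
    """Base learner: predict 1 when feature `feat` exceeds its training mean."""
    t = sum(x[feat] for x in X) / len(X)
    return lambda x: 1 if x[feat] > t else 0

def train_perceptron(Z, y, epochs=50, lr=1.0):
    """Level-1 meta-learner trained on the base learners' predictions."""
    w, b = [0.0] * len(Z[0]), 0.0
    for _ in range(epochs):
        for zi, yi in zip(Z, y):
            pred = 1 if sum(wj * zj for wj, zj in zip(w, zi)) + b > 0 else 0
            err = yi - pred
            w = [wj + lr * err * zj for wj, zj in zip(w, zi)]
            b += lr * err
    return lambda z: 1 if sum(wj * zj for wj, zj in zip(w, z)) + b > 0 else 0

# Toy, linearly separable data: the label follows feature 0.
X = [[0.9, 0.2, 0.5], [0.8, 0.9, 0.1], [0.7, 0.4, 0.8], [0.6, 0.1, 0.3],
     [0.4, 0.8, 0.6], [0.3, 0.3, 0.9], [0.2, 0.7, 0.2], [0.1, 0.5, 0.7]]
y = [1, 1, 1, 1, 0, 0, 0, 0]

bases = [threshold_learner(X, y, f) for f in range(3)]     # level-0 models
Z = [[clf(x) for clf in bases] for x in X]                 # meta-features
meta = train_perceptron(Z, y)                              # level-1 model
acc = sum(meta([clf(x) for clf in bases]) == yi for x, yi in zip(X, y)) / len(y)
```

Note that a production Stacking setup builds `Z` from out-of-fold base predictions to avoid leaking training labels into the meta-learner; that cross-validation plumbing is omitted here for brevity.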
Pub Date : 2025-12-16 DOI: 10.1109/TMC.2025.3645025
EdgeBatch: Efficient Decentralized Batch Verification for Edge Data Integrity via Reputation-Aware Combination Selection
Jian Li;Yibo Chen;Qinglin Zhao;Jincheng Cai;Shaohua Teng;Naiqi Wu
IEEE Transactions on Mobile Computing, vol. 25, no. 3, pp. 4425-4442
Data integrity verification in geographically distributed edge systems remains a critical unsolved challenge. While centralized verification introduces bottlenecks and single points of failure, existing decentralized alternatives suffer from inefficiency due to their lack of batch verification capabilities. This limitation leads to prohibitive communication and computational overheads that scale poorly as data volume grows. This paper introduces EdgeBatch, the first decentralized protocol designed for efficient batch integrity verification, reducing communication rounds from $\mathcal{O}(n)$ to $\mathcal{O}(1)$, i.e., a small, constant number. At its core is a reputation-aware Combination Selection Algorithm (CSA), a polynomial-time heuristic that identifies near-optimal peer server combinations, balancing verifier group size against servers’ historical trustworthiness through intelligent pruning strategies. This process is orchestrated through distributed ledger technology and smart contracts, ensuring a secure, transparent, and trustless verification environment. The protocol’s design is underpinned by rigorous theoretical analysis, including formal proofs of security and correctness, and a probabilistic model for optimizing key system parameters. Extensive simulations show that EdgeBatch drastically outperforms state-of-the-art methods; it improves computational efficiency by an average of 518.60× over EdgeWatch and 1030.93× over CooperEDI, while also reducing communication overhead by 296.68× and 62.66×, respectively. A concluding ablation study confirms the vital role of our reputation mechanism, demonstrating it reduces the required verification rounds by 73% and is the key to the protocol’s efficiency.
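The round reduction the abstract claims comes from checking many blocks with a single aggregated response rather than one challenge-response round per block. A minimal sketch of that aggregation idea using plain SHA-256 folding — this deliberately ignores EdgeBatch's ledger, smart contracts, and reputation machinery, and the function names are illustrative:

```python
import hashlib

def block_fingerprint(block: bytes) -> bytes:
    """Digest the verifier stores locally for each data block."""
    return hashlib.sha256(block).digest()

def batch_proof(nonce: bytes, blocks: list) -> str:
    """Server side: fold a fresh challenge nonce and every block's
    fingerprint into one digest, so all n blocks are attested in a
    single response (O(1) rounds) instead of n responses (O(n) rounds)."""
    h = hashlib.sha256(nonce)
    for blk in blocks:
        h.update(block_fingerprint(blk))
    return h.hexdigest()

def verify_batch(nonce: bytes, fingerprints: list, claimed: str) -> bool:
    """Verifier side: recompute the same folding from stored fingerprints."""
    h = hashlib.sha256(nonce)
    for fp in fingerprints:
        h.update(fp)
    return h.hexdigest() == claimed
```

Any tampered block (or a replayed proof under a stale nonce) changes the aggregate digest, so the single round still detects per-block corruption.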
Pub Date : 2025-11-18 DOI: 10.1109/TMC.2025.3634199
FedeCouple: Fine-Grained Balancing of Global-Generalization and Local-Adaptability in Federated Learning
Ming Yang;Dongrun Li;Xin Wang;Feng Li;Lisheng Fan;Chunxiao Wang;Xiaoming Wu;Peng Cheng
IEEE Transactions on Mobile Computing, vol. 25, no. 4, pp. 5855-5871
In privacy-preserving mobile network transmission scenarios with heterogeneous client data, personalized federated learning methods that decouple feature extractors and classifiers have demonstrated notable advantages in enhancing learning capability. However, many existing approaches primarily focus on feature space consistency and classification personalization during local training, often neglecting the local adaptability of the extractor and the global generalization of the classifier. This oversight results in insufficient coordination and weak coupling between the components, ultimately degrading the overall model performance. To address this challenge, we propose FedeCouple, a federated learning method that balances global generalization and local adaptability at a fine-grained level. Our approach jointly learns global and local feature representations while employing dynamic knowledge distillation to enhance the generalization of personalized classifiers. We further introduce anchors to refine the feature space; their strict locality and non-transmission inherently preserve privacy and reduce communication overhead. Furthermore, we provide a theoretical analysis proving that FedeCouple converges for nonconvex objectives, with iterates approaching a stationary point as the number of communication rounds increases. Extensive experiments conducted on five image-classification datasets demonstrate that FedeCouple consistently outperforms nine baseline methods in effectiveness, stability, scalability, and security. Notably, in experiments evaluating effectiveness, FedeCouple surpasses the best baseline by a significant margin of 4.3%.
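The knowledge-distillation step such methods use to transfer generalization into a personalized classifier is, at its core, a temperature-softened KL-divergence term between teacher and student logits. A minimal sketch of that primitive (the logit values are illustrative, and FedeCouple's exact dynamic weighting of this loss is not reproduced here):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    m = max(logits)                          # shift for numerical stability
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2 as is
    standard in knowledge distillation so gradients keep their magnitude."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T * T) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when student and teacher agree exactly and grows as their softened predictions diverge, which is what drives the personalized classifier toward the global model's generalization behavior.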
Pub Date : 2025-11-18 DOI: 10.1109/TMC.2025.3634372
Double Media-Based Modulation Scheme for High-Rate Wireless Communication Systems
Burak Ahmet Ozden;Erdogan Aydin;Fatih Cogen
IEEE Transactions on Mobile Computing, vol. 25, no. 4, pp. 5730-5741
The growing demands in wireless communication technologies necessitate the development of more advanced and efficient systems. Therefore, this paper introduces a high-performance, high-data-rate index modulation technique called the double media-based modulation (DMBM) system. The proposed system enhances the conventional media-based modulation (MBM) system by selecting two mirror activation patterns (MAPs) and transmitting two symbols within the same transmission duration. Consequently, the DMBM system achieves double the spectral efficiency of MBM while improving error performance through an increased number of bits encoded in the indices. The proposed DMBM scheme undergoes performance evaluation using $M$-ary quadrature amplitude modulation ($M$-QAM) over Rayleigh, Rician, and Nakagami-$m$ fading channels. Its error performance is compared with alternative techniques such as spatial modulation (SM), quadrature SM (QSM), MBM, and double SM (DSM) over the Rayleigh channel. Also, to further improve reliability, especially in high-mobility scenarios envisioned for sixth-generation (6G) networks, orthogonal time frequency space (OTFS) modulation is integrated into the proposed DMBM system. The proposed OTFS-based DMBM (OTFS-DMBM) system is compared with the conventional OTFS system and offers better error performance at the same spectral efficiency. Furthermore, comprehensive analyses of throughput, complexity, energy efficiency, spectral efficiency, and capacity are conducted for the DMBM system alongside the benchmark systems. The impact of imperfect channel state information (CSI) for the proposed DMBM system is also analyzed, and performance comparisons are presented for both perfect and imperfect CSI conditions. The findings demonstrate that the DMBM system outperforms its counterparts, highlighting its potential as a strong solution for modern wireless communication networks’ demands.
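Under the usual MBM bit accounting — one MAP index carrying one bit per RF mirror plus one $M$-QAM symbol carrying $\log_2 M$ bits per transmission — the claimed doubling of spectral efficiency is simple arithmetic, since DMBM sends two MAPs and two symbols in the same slot. The sketch below assumes that accounting; the paper's own parameterization may differ in detail:

```python
import math

def mbm_bits(M: int, n_mirrors: int) -> float:
    """Bits per transmission in conventional MBM: one mirror activation
    pattern (n_mirrors index bits) plus one M-QAM symbol (log2(M) bits)."""
    return math.log2(M) + n_mirrors

def dmbm_bits(M: int, n_mirrors: int) -> float:
    """DMBM sends two MAPs and two symbols in the same transmission
    duration, doubling the per-slot bit count."""
    return 2 * mbm_bits(M, n_mirrors)
```

For example, with 16-QAM and 4 RF mirrors, MBM carries 8 bits per transmission and DMBM carries 16 — the source of the "double the spectral efficiency" claim.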
Pub Date : 2025-11-18 DOI: 10.1109/TMC.2025.3634361
Personalized Location Privacy-Aware Task Offloading: A Dual-Agent DRL Approach
Minghui Min;Peng Zhang;Yue Zhang;Wenmin Kuang;Hongliang Zhang;Shiyin Li;Dusit Niyato;Zhu Han
IEEE Transactions on Mobile Computing, vol. 25, no. 4, pp. 5824-5838
Multi-access Edge Computing (MEC) enables users to handle resource-intensive and latency-sensitive tasks. However, the offloading behaviors, which are closely correlated with wireless channel conditions, can inadvertently reveal users’ location information to untrustworthy MEC servers. Existing location privacy-aware task offloading (LPTO) mechanisms have not fully considered and comprehensively analyzed personalized location privacy protection requirements. To address this gap, this paper proposes a differential privacy (DP)-based personalized LPTO mechanism for MEC environments that jointly optimizes the perturbation region, privacy budget, and offloading rate while maximizing the offloading utility. We quantify personalized privacy requirements by incorporating task sensitivity, user privacy preference, and task priority. Then, we propose a two-timescale (2Ts) optimization framework to solve the complex personalized location privacy-aware task offloading optimization problem. Specifically, we optimize the perturbation region on a long timescale to align with long-term privacy requirements. In contrast, the offloading ratio and privacy budget are dynamically optimized on a short timescale based on instantaneous channel states and offloading workloads. Furthermore, we model the privacy-aware offloading problem as a Markov decision process (MDP) and develop a dual-agent deep reinforcement learning (DRL)-based personalized LPTO mechanism (DDPLM) to optimize strategies under dynamic MEC systems. Simulation results validate that the proposed DDPLM achieves personalized location privacy protection while reducing computational costs.
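The DP primitive underneath location perturbation of this kind is Laplace noise with scale sensitivity/ε, where a smaller privacy budget ε forces larger perturbations. A per-coordinate sketch of that primitive — a simplification, since geo-indistinguishability mechanisms typically draw planar Laplace noise, and the paper additionally optimizes the perturbation region and budget jointly with offloading; the function names are illustrative:

```python
import random

def laplace_sample(scale: float) -> float:
    """A Laplace(0, scale) variate, generated as the difference of two
    independent exponentials with mean `scale` (an exact construction)."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def perturb_location(x: float, y: float, eps: float, sensitivity: float = 1.0):
    """Report a noisy location under per-report privacy budget eps.
    Smaller eps (stronger privacy) yields a larger noise scale b."""
    b = sensitivity / eps
    return x + laplace_sample(b), y + laplace_sample(b)
```

The resulting noise has variance 2·(sensitivity/ε)², which makes the privacy-utility trade-off the paper optimizes explicit: halving ε quadruples the expected squared displacement of the reported location.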
Pub Date : 2025-11-18 DOI: 10.1109/TMC.2025.3634217
Sculpting Resource Efficiency: Diffusion Model-Aided Dynamic Multi-Job Scheduling With Topology Awareness in AI Clusters
Meng Yuan;Songjing Tao;Qiang Wu;Xiangbin Wang;Ran Wang;Jie Hao;Dusit Niyato
IEEE Transactions on Mobile Computing, vol. 25, no. 4, pp. 5872-5889
The growing adoption of AI-Generated Content (AIGC) has made large-scale processing of multiple Generative AI (GAI) training jobs a key strategy for improving cost-efficiency in computing clusters. However, the distributed nature of GAI models, together with inherent network bottlenecks, imposes significant challenges on system performance. Moreover, differences in training purposes, variations in model sizes, and asynchronous lifecycles create a dynamic environment. As a result, the coexistence of multiple GAI training jobs in a computing cluster exacerbates problems such as resource misallocation, fragmentation, and network contention, leading to low resource utilization and inefficient training performance. These motivate us to explore an efficient resource scheduling approach for completing multiple GAI training jobs. Accordingly, we introduce an intrinsic topology-aware scheduling framework designed to ensure flexible scheduling and efficient distributed training of GAI models. To address the trade-off between the number of concurrent jobs and the communication contention they generate, we formulate a multi-objective optimization problem with two objectives: maximizing the utility of GAI jobs and minimizing communication bandwidth. We then propose the Diffusion Model-based AI-Generated Resources Scheduling (DARS) algorithm, designed to capture dynamic, high-dimensional environments and generate optimal resource scheduling decisions. DARS employs a denoising diffusion process to iteratively refine noisy resource allocations into optimized scheduling decisions. Subsequently, we replace the policy network of Deep Reinforcement Learning (DRL) with DARS to address environmental uncertainty and enhance efficiency. Finally, the simulation results confirm that the proposed algorithm outperforms existing approaches.
Pub Date : 2025-11-18 DOI: 10.1109/TMC.2025.3634127
Hybrid Access MAC Protocol in Wi-Fi: Analysis and Optimal Resource Allocation Policy Design
S. Arthi;Neelesh B. Mehta;Chandramani Singh
IEEE Transactions on Mobile Computing, vol. 25, no. 4, pp. 5742-5757
The hybrid medium access control (MAC) protocol, which was first adopted in the IEEE 802.11ax standard, combines contention-based random access (UORA) and contention-free scheduled access (SA) transmissions over orthogonal resource units (RUs). We present a novel fixed-point analysis of saturation throughput and average access delay of hybrid access that accounts for discrete rate adaptation, packet decoding errors, and scheduling. Using this analysis and Markov decision process (MDP) theory, we design a novel dynamic RU allocation policy (ODRAP) for hybrid access. Our analysis and policy design are the first to capture the dynamic flow of users between UORA and SA, and its dependence on the RU allocation. The existing literature has modeled UORA or SA, but not both, or has assumed a fixed number of SA users. We first develop the analysis when the number of packets reported in the buffer status report (BSR) of a user is a geometric random variable. We then present an iterative approach to handle application-specific general distributions. Our numerical results verify the accuracy of the analysis despite its simplicity. Furthermore, they highlight the impact of the number of allocated RUs on the scheduler. ODRAP optimally trades off the throughput with the access delay compared to several benchmark policies.
Pub Date : 2025-11-18DOI: 10.1109/TMC.2025.3634221
Senhao Gao;Junqing Zhang;Luoyu Mei;Shuai Wang;Xuyu Wang
Human activity recognition (HAR) requires extracting accurate spatial-temporal features of human movements. mmWave radar point cloud-based HAR systems suffer from sparsity and variable point-cloud size due to the physical characteristics of the mmWave signal. Existing works usually borrow the preprocessing algorithms of vision-based systems with dense point clouds, which may not be optimal for mmWave radar systems. In this work, we proposed a graph representation together with a discrete dynamic graph neural network (DDGNN) to explore the spatial-temporal representation of human movement-related features. Specifically, we designed a star graph to describe the high-dimensional relative relationship between a manually added static center point and the dynamic mmWave radar points in the same and consecutive frames. We then adopted the DDGNN to learn the features residing in star graphs of variable size. Experimental results demonstrated that our approach outperformed other baseline methods on real-world HAR datasets. Our system achieved an overall classification accuracy of 94.27%, close to the 97.25% accuracy achieved with vision-based skeleton data. We also conducted an inference test on a Raspberry Pi 4 to demonstrate its effectiveness on resource-constrained platforms. We provided a comprehensive ablation study of variable DDGNN structures to validate our model design. Our system also outperformed three recent radar-specific methods without requiring resampling or frame aggregators.
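One possible sketch of the star-graph construction described in the abstract (the function name, the single static center at the origin, and the exact temporal wiring are assumptions for illustration, not the authors' code): each frame gets a copy of the static center node, whose star edges reach the radar points of that frame and of the next one, so variable-size point clouds need no resampling.

```python
import numpy as np

def build_star_graph(frames, center=(0.0, 0.0, 0.0)):
    """frames: list of (k_t, 3) arrays of radar points, one per time frame.
    Returns node features (relative coordinates w.r.t. the static center)
    and a (2, num_edges) edge index in the style used by graph libraries."""
    center = np.asarray(center, dtype=float)
    node_feats, centers, points = [], [], []
    idx = 0
    # First pass: assign node indices (one center copy per frame, then points).
    for pts in frames:
        pts = np.asarray(pts, dtype=float)
        centers.append(idx)
        node_feats.append(center[None, :])
        idx += 1
        points.append(list(range(idx, idx + len(pts))))
        node_feats.append(pts - center)  # relative coordinates as features
        idx += len(pts)
    # Second pass: star edges from each frame's center to its own points
    # and to the next frame's points (intra-frame and temporal links).
    edges = []
    for t, c in enumerate(centers):
        for j in points[t]:
            edges.append((c, j))
        if t + 1 < len(frames):
            for j in points[t + 1]:
                edges.append((c, j))
    return np.concatenate(node_feats, axis=0), np.array(edges, dtype=int).T
```

Because every point attaches only to the center, the graph size grows linearly with the number of detected points per frame, which is what makes the representation tolerant of the sparse, variable-size clouds a mmWave radar produces.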
{"title":"Exploring Spatial-Temporal Representation via Star Graph for mmWave Radar-Based Human Activity Recognition","authors":"Senhao Gao;Junqing Zhang;Luoyu Mei;Shuai Wang;Xuyu Wang","doi":"10.1109/TMC.2025.3634221","DOIUrl":"https://doi.org/10.1109/TMC.2025.3634221","url":null,"abstract":"Human activity recognition (HAR) requires extracting accurate spatial-temporal features with human movements. A mmWave radar point cloud-based HAR system suffers from sparsity and variable-size problems due to the physical features of the mmWave signal. Existing works usually borrow the preprocessing algorithms for the vision-based systems with dense point clouds, which may not be optimal for mmWave radar systems. In this work, we proposed a graph representation with a discrete dynamic graph neural network (DDGNN) to explore the spatial-temporal representation of human movement-related features. Specifically, we designed a star graph to describe the high-dimensional relative relationship between a manually added static center point and the dynamic mmWave radar points in the same and consecutive frames. We then adopted DDGNN to learn the features residing in the star graph with variable sizes. Experimental results demonstrated that our approach outperformed other baseline methods using real-world HAR datasets. Our system achieved an overall classification accuracy of 94.27%, which gets the near-optimal performance with a vision-based skeleton data accuracy of 97.25%. We also conducted an inference test on Raspberry Pi 4 to demonstrate its effectiveness on resource-constraint platforms. We provided a comprehensive ablation study for variable DDGNN structures to validate our model design. 
Our system also outperformed three recent radar-specific methods without requiring resampling or frame aggregators.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"25 4","pages":"5700-5715"},"PeriodicalIF":9.2,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}