Pub Date: 2025-07-07 | DOI: 10.1109/TMC.2025.3586615
Yiyi Zhang;Peng Guo;Xuefeng Liu;Chao Cai;Kui Zhang;Jiang Liu
As a classic data processing tool, Principal Component Analysis (PCA) has been widely applied in various data analysis applications. To mitigate the high computational complexity of PCA on big data, distributed PCA methods have been extensively studied; they disperse the computational tasks across multiple computation units while guaranteeing accuracy. For distributed PCA in wireless networks, where the data is originally dispersed across different locations, it is further necessary to reduce the communication cost of the distributed computation, which has seldom been studied. Reducing this communication cost requires not only appropriately partitioning the computation of PCA while ensuring accuracy, but also effectively assigning the partitioned computations and routing strategies to the nodes. In this paper, we propose CD-PCA, a communication-efficient distributed PCA scheme. CD-PCA implements a transmission-benefit equipartition strategy for the network to facilitate high-accuracy distributed computation and designs novel routing strategies for nodes to execute the distributed PCA within each partitioned region. Extensive simulation results demonstrate that CD-PCA reduces transmission costs by over 30% on average compared to related methods and baseline approaches.
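For context, the accuracy guarantee in distributed PCA is often achieved with a sufficient-statistics construction: each node summarizes its local data and the summaries merge into the exact global covariance before eigendecomposition. The sketch below shows this generic pattern only; it is not the CD-PCA partitioning or routing strategy itself.

```python
import numpy as np

def local_stats(X):
    """Statistics each node can compute and transmit:
    sample count, per-feature sum, and the local Gram matrix X^T X."""
    return X.shape[0], X.sum(axis=0), X.T @ X

def aggregate_pca(stats, k):
    """Merge per-node statistics into the exact global covariance,
    then return the top-k principal directions."""
    n = sum(s[0] for s in stats)
    mean = sum(s[1] for s in stats) / n
    gram = sum(s[2] for s in stats)
    cov = gram / n - np.outer(mean, mean)   # E[xx^T] - mu mu^T
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]          # eigenvalues in descending order
    return vecs[:, order[:k]]
```

Because the merged covariance equals the centralized one exactly, the recovered subspace matches centralized PCA; the communication question the paper studies is how to route and partition these exchanges cheaply.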
{"title":"Reducing Transmission Cost of Distributed Principal Components Analysis in Wireless Networks With Accuracy Guaranteed","authors":"Yiyi Zhang;Peng Guo;Xuefeng Liu;Chao Cai;Kui Zhang;Jiang Liu","doi":"10.1109/TMC.2025.3586615","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586615","url":null,"abstract":"As a classic data processing tool, Principal Component Analysis (PCA) has been widely applied in various data analysis applications. To mitigate the high computational complexity of PCA on Big Data, distributed PCA methods have been extensively studied, which disperse the computational tasks across multiple computation units while guaranteeing the accuracy. For the scenarios of distributed PCA in wireless networks, as the data is originally dispersed across different locations, it is further required to reduce the communication cost of distributed PCA in networks, which however has been seldom studied. Reducing the communication cost of distributed PCA in wireless networks requires not only appropriately partitioning the computation of PCA, ensuring accuracy, but also effectively assigning the partitioned computations and routing strategies to the nodes. In this paper, we propose CD-PCA, a communication-efficient distributed PCA (CD-PCA) scheme. This scheme implements a transmission-benefit equipartition strategy for the network to facilitate high-accuracy distributed computation and designs novel routing strategies for nodes to execute the distributed PCA within each partitioned region. 
Extensive simulation results demonstrate that the proposed CD-PCA scheme can reduce transmission costs by over 30% on average compared to related methods and baseline approaches.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12711-12725"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-07 | DOI: 10.1109/TMC.2025.3586623
Haixing Wu;Jiameng Zheng;Shunfu Jin
Mobile edge computing (MEC) has become an effective paradigm for supporting computation-intensive applications by providing services in close proximity to user devices (UDs). In MEC networks, computation offloading balances system load and prolongs UDs’ battery life. However, most existing studies on computation offloading rely on the impractical assumption of a MEC scenario with homogeneous users, ignoring the security requirements of certain users. Moreover, under user mobility and correlated task arrivals, most existing computation offloading approaches suffer from inefficient or suboptimal decision making in practical MEC environments. To tackle these issues, by jointly modeling task-arrival correlation within a time slot and environment dynamics between time slots, we propose an adaptive computation offloading scheme based on a collaborative architecture with heterogeneous MEC nodes. First, considering the additional security requirements of very important people (VIP) users, we present a novel collaborative architecture that separates edge/cloud servers into public and private nodes. Then, on top of this architecture, we develop a dynamic computation offloading (DCO) algorithm to realize the adaptive offloading scheme in MEC environments with mobile users. The algorithm involves three stages. 1) By extending the Poisson process to a Markovian arrival process (MAP), we construct a MAP-based system model that captures time-dependent task arrivals and analyze it to derive the steady-state system delay. 2) To minimize the system delay in each time slot, we formulate a computation offloading problem for MEC environments with mobile users. 3) Under a deep reinforcement learning (DRL) framework, taking the system delay as environmental feedback, we solve the formulated problem and produce offloading decisions in each time slot.
We evaluate the performance of the DCO algorithm by comparing it with benchmark algorithms in various application scenarios. The results demonstrate that the proposed DCO algorithm outperforms the compared algorithms in response performance.
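A minimal illustration of the kind of correlated arrivals a Markovian arrival process captures is a two-state Markov-modulated Poisson process: a hidden state selects the per-slot arrival rate and switches between slots, so arrival counts are correlated over time, unlike a plain Poisson stream. The rates and switching probability below are illustrative, not values from the paper.

```python
import math
import random

def simulate_mmpp(rates, switch_p, slots, seed=0):
    """Two-state Markov-modulated Poisson process: a hidden state picks the
    per-slot arrival rate and switches with probability switch_p, producing
    arrival counts that are correlated across time slots."""
    rng = random.Random(seed)
    state, counts = 0, []
    for _ in range(slots):
        # Poisson draw at the current state's rate (Knuth's multiplication method)
        L, k, p = math.exp(-rates[state]), 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        counts.append(k - 1)
        if rng.random() < switch_p:
            state = 1 - state   # hidden-state transition between slots
    return counts
```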
{"title":"Adaptive Computation Offloading Scheme Based on a Collaborative Architecture With Heterogeneous MEC Nodes: A DRL Approach","authors":"Haixing Wu;Jiameng Zheng;Shunfu Jin","doi":"10.1109/TMC.2025.3586623","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586623","url":null,"abstract":"Mobile edge computing (MEC) has become an effective paradigm to support computation-intensive applications by providing services in close proximity to user devices (UDs). In MEC networks, computation offloading technology is devoted to balancing system load and prolonging UDs’ battery life. However, most existing studies on computation offloading take the impractical assumption of the MEC scenario with homogeneous users, ignoring security requirement from certain users. Moreover, with users mobility and task arrivals correlation, most existing computing offloading approaches suffer from inefficient or suboptimal decision making in practical MEC environments. To tackle these issues, by integrating task arrivals correlation within a time slot and environment dynamics between time slots, we propose an adaptive computation offloading scheme based on a collaborative architecture with heterogeneous MEC nodes. First, considering additional security requirement from very important people (VIP) users, we present a novel collaborative architecture by separating edge/cloud servers into public and private nodes. Then, with the architecture, we develop a dynamic computation offloading (DCO) algorithm to realize adaptive computation offloading scheme in MEC environment with mobile users. Particularly, the algorithm involves three stages. 1) By extending Poisson process into Markovian arrival process (MAP), we construct an MAP-based system model to capture the behavior of time-dependent task arrivals and then analyze the system model to derive the system delay in steady state. 
2) For the purpose of minimizing the system delay in each time slot, we formulate a computation offloading problem in MEC environment with mobile users. 3) Under a deep reinforcement learning (DRL) framework, by taking the system delay as environmental feedback, we solve the formulated problem and provide offloading decisions in each time slot. We evaluate the performance of DCO algorithm by comparing it with other benchmark algorithms in various application scenarios. Results demonstrate that the proposed DCO algorithm outperforms the compared algorithms in response performance.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12692-12710"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depression detection via wearable electroencephalogram (EEG) sensor-assisted diagnosis systems demands computationally efficient models compatible with resource-constrained edge devices. Spiking Neural Networks (SNNs) offer inherent advantages for processing the spatio-temporal patterns of EEG through event-driven neuromorphic computing. In this study, we present LSNNet, a lightweight SNN model specifically designed for wearable EEG sensors. The model exhibits low computational complexity, with 7.18 K parameters and 67.68 M Floating-Point Operations (FLOPs). It requires only 246.88 KB of Random Access Memory (RAM) and 57.33 KB of Read-Only Memory (ROM) for on-board execution, and has been validated on both the single-core STM32U535CET6 and the multi-core GAP8 microcontrollers. Despite its minimal computational and memory requirements, LSNNet achieves a classification accuracy of 89.2%, specificity of 92.4%, and sensitivity of 86.4% in independent tests on EEG data collected from 73 depressed patients and 108 healthy controls using our three-lead EEG sensor. Notably, when running on the GAP8 microcontroller, LSNNet consumes only 21.43 mW with a satisfactory inference time of 0.63 s while maintaining a classification accuracy of 87.5% (a reduction of only 1.98%). These results underscore the potential of integrating wearable EEG sensors with the LSNNet model for depression detection in the Internet of Things (IoT) era.
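The event-driven efficiency of SNNs comes from neurons that compute only when spikes occur. A leaky integrate-and-fire (LIF) neuron, the common building block of such models, can be sketched as follows; the threshold and decay values are illustrative and this is not LSNNet's specific neuron model.

```python
def lif_forward(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks by
    `decay` each step, integrates the input current, and emits a spike
    (then resets) when it crosses `threshold`."""
    v, spikes = 0.0, []
    for x in inputs:
        v = decay * v + x            # leak, then integrate input current
        if v >= threshold:
            spikes.append(1)         # event-driven binary output: a spike
            v = 0.0                  # hard reset after firing
        else:
            spikes.append(0)
    return spikes
```

Because downstream computation is triggered only by the sparse 0/1 spike train, multiply-accumulate counts (and hence FLOPs and power) stay far below those of a comparable dense network.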
{"title":"LSNN Model: A Lightweight Spiking Neural Network-Based Depression Classification Model for Wearable EEG Sensors","authors":"Qinglin Zhao;Lixin Zhang;Haojie Zhang;Hua Jiang;Kunbo Cui;Zhongqing Wu;Jingyu Liu;Mingqi Zhao;Fuze Tian;Bin Hu","doi":"10.1109/TMC.2025.3586591","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586591","url":null,"abstract":"Depression detection via wearable Electroencephalogram (EEG) sensor-assisted diagnosis system demands computationally efficient models compatible with resource-constrained edge devices. Spiking Neural Networks (SNNs) offer inherent advantages for processing the spatio-temporal patterns of EEG through event-driven neuromorphic computing. In this study, we innovatively present LSNNet, a lightweight SNN model specifically designed for wearable EEG sensors. The model exhibits low computational complexity with 7.18 K parameters and 67.68 M Floating-Point Operations (FLOPs). It requires only 246.88 KB of Random Access Memory (RAM) and 57.33 KB of Read-Only Memory (ROM) for on-board execution, and has been validated on both the single-core STM32U535CET6 and the multi-core GAP8 microcontrollers. Despite its minimal computational and memory requirements, LSNNet achieves impressive performance metrics, with a classification accuracy of 89.2%, specificity of 92.4%, and sensitivity of 86.4% in independent tests conducted on EEG data collected from 73 depressed patients and 108 healthy controls using our three-lead EEG sensor. Especially, when running on the GAP8 microcontroller, the LSNNet model has a low power consumption of 21.43 mW and a satisfactory inference time of 0.63 s while maintaining a classification accuracy of 87.5% (only with a reduction of 1.98% ). 
These results underscore the potential of integrating wearable EEG sensors with the LSNNet model for depression detection in the Internet of Things (IoT) era.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12640-12654"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality (AR) offers users immersive experiences for interacting with digital content in their physical space. However, practical AR applications are challenged by the tight coupling of algorithm and engineering during the development and deployment phases, as well as by the execution requirements of hybrid AR subtasks on heterogeneous, resource-constrained mobile devices. In this work, we build an end-to-end, cross-platform, and efficient AR system called ARSys. The infrastructure of ARSys adopts a new integrated-design principle, unifies and refines fundamental AR capabilities, supports streaming media processing, model inference, and real-time rendering by exposing a high-performance tensor compute engine to the upper layers, and constructs a Python multi-instance virtual machine as the cross-platform AR task execution container. The runtime mechanism of ARSys schedules AR tasks with pipeline parallelism and allocates subtasks to hardware backends by optimizing the slowest node. The development workbench and deployment platform in ARSys decouple algorithms written in Python from engineering components in C/C++ and further support remote debugging and quick validation of AR algorithms. We extensively evaluate ARSys in practical AR applications across high-end, mid-range, and low-end Android and iOS devices, demonstrating higher development, deployment, and runtime efficiency than the existing MediaPipe-oriented framework. ARSys has been integrated into Mobile Taobao for production use.
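"Optimizing the slowest node" in a pipeline-parallel runtime is a makespan-minimization problem: the stage that finishes last bounds throughput. A classic greedy heuristic for this, longest-processing-time (LPT) assignment, is sketched below as a toy stand-in for ARSys's allocator, which the abstract does not detail; the costs and backend count are illustrative.

```python
def lpt_assign(costs, n_backends):
    """Longest-processing-time greedy scheduling: sort subtasks by cost and
    always hand the next one to the least-loaded backend, approximately
    minimizing the slowest backend's finish time (the pipeline bottleneck)."""
    loads = [0.0] * n_backends
    assign = [[] for _ in range(n_backends)]
    for c in sorted(costs, reverse=True):
        i = loads.index(min(loads))   # current least-loaded backend
        loads[i] += c
        assign[i].append(c)
    return assign, max(loads)         # max load = pipeline makespan
```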
{"title":"ARSys: An Efficient and Cross-Platform Development, Deployment, and Runtime System for Mobile Augmented Reality","authors":"Chengfei Lv;Chaoyue Niu;Yu Cai;Xiaotang Jiang;Fan Wu;Guihai Chen","doi":"10.1109/TMC.2025.3586797","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586797","url":null,"abstract":"Augmented reality (AR) offers users immersive experiences to interact with digital contents in their physical space. However, practical AR applications are challenged by the tight coupling of algorithm and engineering during the development and deployment phases as well as the execution requirements of hybrid AR subtasks on heterogeneous and resource-constraint mobile devices. In this work, we build an end-to-end, cross-platform, and efficient AR system, called ARSys. The infrastructure in ARSys adopts the new principle of integrated design, unifies and refines AR fundamental capabilities, supports streaming media processing, model inference, and real-time rendering by exposing high-performance tensor compute engine to top, and constructs a Python multi-instance virtual machine as the cross-platform AR task execution container. The runtime mechanism of ARSys schedules AR tasks in a pipeline parallelism way and allocates subtasks to hardware backends by optimizing the slowest node. The development workbench and the deployment platform in ARSys allow the decoupling of algorithms written in Python from engineering components in C/C++ and further support remote debugging and quick validation of AR algorithms. We extensively evaluate ARSys in practical AR applications across high-end, mid-end, and low-end Android and iOS devices, demonstrating higher development, deployment, and runtime efficiency than existing MediaPipe-oriented framework. 
ARSys has been integrated into Mobile Taobao for production use.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12655-12671"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-07 | DOI: 10.1109/TMC.2025.3586636
Haibin Sun;Yongzheng Zhang
Range-free localization algorithms have attracted considerable attention for outdoor wireless sensor network (WSN) positioning because they are less susceptible to environmental factors when estimating inter-node distances and require only a few beacon nodes with known locations to rapidly determine all node positions. Among these, the connectivity-based DV-Hop algorithm is widely used due to its simplicity and ease of implementation. However, its localization accuracy is limited and easily degraded by non-uniform node distributions and obstacle environments. To address these shortcomings, this paper proposes a novel range-free localization algorithm (RF-DEGO). First, a new distance estimation formula is derived from node connectivity and the probability distribution of distances. Next, the estimated distances are corrected using the local node density along communication paths, and paths identified as detouring around obstacles receive a further correction. Finally, an enhanced hierarchical Grey Wolf Optimization algorithm computes the node positions.
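For readers unfamiliar with the baseline RF-DEGO improves upon, classic DV-Hop estimates distances from connectivity alone: each anchor's average hop length is its total distance to the other anchors divided by the total hop count, and an unknown node multiplies its hop count by that average. The sketch below is the textbook DV-Hop step, not RF-DEGO's refined formula.

```python
import math

def hop_sizes(anchors, hops):
    """Classic DV-Hop correction: each anchor's average hop length is its total
    Euclidean distance to the other anchors divided by the total hop count."""
    sizes = []
    for i, a in enumerate(anchors):
        d = sum(math.dist(a, b) for j, b in enumerate(anchors) if j != i)
        h = sum(hops[i][j] for j in range(len(anchors)) if j != i)
        sizes.append(d / h)
    return sizes

def estimate_distances(node_hops, sizes):
    """An unknown node's range estimate to each anchor: hop count times that
    anchor's average hop size (connectivity only, no ranging hardware)."""
    return [h * s for h, s in zip(node_hops, sizes)]
```

It is exactly this hop-size averaging that breaks down under non-uniform density and obstacle detours, which motivates the paper's density- and detour-based corrections.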
{"title":"RF-DEGO: A Range Free Localization Algorithm for Non Uniform Node Distributions and Obstacle Environments","authors":"Haibin Sun;Yongzheng Zhang","doi":"10.1109/TMC.2025.3586636","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586636","url":null,"abstract":"Range-free localization algorithms have attracted considerable attention for outdoor wireless sensor network (WSN) positioning because they are less susceptible to environmental factors when estimating inter node distances and require only a few beacon nodes with known locations to rapidly determine all node positions. Among these, the connectivity based DV Hop algorithm has become widely used due to its simplicity and ease of implementation. However, its localization accuracy is limited and it is easily degraded by non uniform node distributions and obstacle environments. To address these shortcomings, this paper proposes a novel range free localization algorithm (RF-DEGO). First, a new distance estimation formula is derived from node connectivity and the probability distribution of distances. Next, the estimated distances are corrected using the local node density along communication paths, and paths identified as detouring around obstacles receive a further correction. Finally, an enhanced hierarchical Grey Wolf Optimization algorithm computes the node positions. 
Extensive simulation experiments under various network scenarios and parameter settings show that the proposed algorithm outperforms several existing localization methods in both accuracy and computation time, demonstrating superior overall performance and strong competitiveness.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12517-12532"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The evolution towards the Internet of Things (IoT) in the forthcoming sixth generation (6G) faces massive amounts of transmitted data and harsh wireless transmission environments, which severely degrade communication quality. To overcome these difficulties, this article proposes a novel network framework in which multiple intelligent reflecting surfaces (IRSs) assist an autonomous aerial vehicle (AAV) employing non-orthogonal multiple access (NOMA): the AAV uses the NOMA scheme to deliver information to ground users with the aid of multiple IRSs. We aim to maximize the achievable rate of the considered network while guaranteeing each user's minimum communication rate by jointly optimizing the multi-IRS phase shifts, AAV transmit power, AAV trajectory, and NOMA decoding order. To handle the coupled variables and integer constraints, we decompose the original problem into three subproblems under the block coordinate descent (BCD) framework. Specifically, we first obtain the multi-IRS phase shifts by applying the semidefinite relaxation (SDR) technique. Next, the AAV transmit power allocation is derived via the concave-convex procedure (CCCP). The AAV trajectory and NOMA decoding order are finally obtained by invoking a penalty-based method and the successive convex approximation (SCA) technique. Based on these solutions, an alternating optimization algorithm is proposed.
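The BCD pattern underlying the three subproblems can be shown on a toy objective: fix one block of variables, solve the other in closed form, and alternate until the iterates settle. The objective below is a hypothetical jointly convex function chosen so each block update is exact; it is not the paper's rate-maximization problem.

```python
def bcd_minimize(iters=60):
    """Block coordinate descent on the toy jointly convex objective
    f(x, y) = (x - 1)^2 + (y + 2)^2 + x*y: fix one block, minimize over the
    other in closed form, and alternate -- the same pattern used to split a
    coupled problem into per-block subproblems."""
    x = y = 0.0
    for _ in range(iters):
        x = 1.0 - y / 2.0    # argmin over x with y fixed: 2(x-1) + y = 0
        y = -2.0 - x / 2.0   # argmin over y with x fixed: 2(y+2) + x = 0
    return x, y
```

Each update can only decrease f, so the iterates converge to the joint minimizer (8/3, -10/3); in the paper the same alternation cycles through SDR, CCCP, and penalty/SCA solvers instead of closed forms.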
{"title":"Achievable Rate Maximization for Multi-IRS Assisted AAV-NOMA Networks","authors":"Dingcheng Yang;Kangqing Wu;Yu Xu;Fahui Wu;Tiankui Zhang","doi":"10.1109/TMC.2025.3586768","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586768","url":null,"abstract":"The evolution towards Internet of Things (IoT) in the forthcoming sixth generation (6G) is facing massive amounts of transmitted data and harsh wireless transmission environment, which severely degrade the quality of communication. To overcome these difficulties, a novel multiple intelligent reflecting surfaces (IRSs) assisted autonomous aerial vehicle (AAV) network framework with non-orthogonal multiple access (NOMA) is proposed in this article, where the AAV applies the NOMA scheme to deliver the information to the ground users assisted by multiple IRSs. We aim to maximize the achievable rate of the considered network while guaranteeing the minimum communication rate of each user, by jointly optimizing the multi-IRS phase shifts, AAV transmit power, AAV trajectory, and NOMA decoding order. To handle the coupled variables and integer constraints, we decompose the original problem into three subproblems based on the block coordinate descent (BCD) framework. Specifically, we first obtain the multi-IRS phase shifts by applying the semidefinite relaxation (SDR) technique. Next, the AAV transmit power allocation is derived by exploiting the concave convex procedure (CCCP) method. The AAV trajectory and NOMA decoding order are finally obtained by invoking the penalty-based method and the successive convex approximation (SCA) technique. Based on these, an alternating optimization algorithm is proposed. 
The numerical results show that: 1) the NOMA scheme enhances the utilization of the spectrum and enhances the access capacity of the communication system; 2) the multi-IRS cooperative structure increases the reflective channels and effectively improves the air-ground transmission environment, thus enhancing the system achievable rate; 3) the proposed multi-IRS assisted AAV NOMA algorithm achieves a significant network rate improvement compared to other benchmark schemes.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12580-12594"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile devices increasingly integrate numerous deep learning-based visual applications, such as object classification and recognition models. While these models perform well in controlled environments, their effectiveness declines in real-world environments due to out-of-distribution (OOD) data not seen during training. Existing methods for detecting OOD data often compromise normal data recognition and require extensive training on unattainable OOD data. To address these issues, we propose $\mathtt{POD}$, a framework designed to enhance mobile visual applications by providing high-precision OOD detection without affecting the original model's performance. In the offline phase, $\mathtt{POD}$ generates OOD detectors from any classification model by analyzing the model's neuron responses to various data types. In the online phase, it continuously adjusts decision boundaries by integrating results from both the original model and the detector. Evaluated on two public datasets and one self-collected dataset across various popular classification models, $\mathtt{POD}$ significantly improves OOD detection performance while maintaining the accuracy of the original models.
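A common plug-and-play OOD signal of the same flavor (derived from a trained model's responses, requiring no OOD training data) is the free-energy score over the classifier's logits. The sketch below is this generic baseline, not the POD detector, and the threshold is illustrative.

```python
import math

def energy_score(logits, T=1.0):
    """Free-energy OOD score, -T * logsumexp(logits / T): lower energy means
    the classifier sees the input as more in-distribution. Computed from the
    existing model's outputs, so the original classifier is left untouched."""
    m = max(z / T for z in logits)   # subtract the max for numerical stability
    return -T * (m + math.log(sum(math.exp(z / T - m) for z in logits)))

def is_ood(logits, threshold):
    """Flag an input as out-of-distribution when its energy exceeds a threshold
    calibrated on held-out in-distribution data (threshold is illustrative)."""
    return energy_score(logits) > threshold
```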
{"title":"Enabling Effective OOD Detection via Plug-and-Play Network for Mobile Visual Applications","authors":"Zixiao Wang;Qi Dong;Tianzhang Xing;Zhidan Liu;Zhenjiang Li;Xiaojiang Chen","doi":"10.1109/TMC.2025.3586625","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586625","url":null,"abstract":"Mobile devices have increasingly integrated with numerous deep learning-based visual applications, such as object classification and recognition models. While these models perform well in controlled environments, their effectiveness declines in real-world environment due to out-of-distribution (OOD) data not seen during training. Existing methods for detecting OOD data often compromise normal data recognition and require extensive training on unattainable OOD data. To address these issues, we propose <inline-formula><tex-math>$mathtt {POD}$</tex-math></inline-formula>, a framework designed to enhance mobile visual applications by providing high-precision OOD detection without affecting original model performance. In the offline phase, <inline-formula><tex-math>$mathtt {POD}$</tex-math></inline-formula> generates OOD detectors from any classification model by analyzing model’s neuron responses to various data types. In the online phase, it continuously adjusts decision boundaries by integrating results from both the original model and the detector. 
Evaluated on two public datasets and one self-collected dataset across various popular classification models, <inline-formula><tex-math>$mathtt {POD}$</tex-math></inline-formula> significantly improves OOD detection performance while maintaining the accuracy of original models.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12471-12486"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-07 | DOI: 10.1109/TMC.2025.3586618
Tianlang He;Zhiqiu Xia;S.-H. Gary Chan
Knowing a pedestrian’s conveyor state of “elevator,” “escalator,” or “neither” is fundamental to many applications such as indoor navigation and people flow management. Previous studies on classifying the conveyor state often rely on specially designed body-worn sensors or make strong assumptions about pedestrian behavior, which greatly limits their deployability. To overcome this, we study the classification problem under arbitrary pedestrian behaviors using the inertial navigation system (INS) of commonly available smartphones (including accelerometer, gyroscope, and magnetometer). The problem is challenging because the INS signals of the conveyor states are entangled with arbitrary and diverse pedestrian behaviors. We propose ELESON, a novel, lightweight deep-learning approach that uses phone INS to classify a pedestrian as taking an elevator, escalator, or neither. Using causal decomposition and adversarial learning, ELESON extracts motion and magnetic features of the conveyor state independent of pedestrian behavior, from which it estimates the state confidence by means of an evidential classifier. We curate a large and diverse dataset with 36,420 instances of pedestrians randomly taking elevators and escalators under arbitrary, unknown behaviors.
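The evidential-classifier idea behind the confidence estimate can be sketched in subjective-logic terms: non-negative per-class evidence parameterizes a Dirichlet distribution, and the leftover "vacuity" mass quantifies how much the model does not know. This is a generic sketch of evidential classification, not ELESON's exact head.

```python
def evidential_confidence(logits):
    """Subjective-logic view of an evidential classifier: non-negative per-class
    evidence gives Dirichlet parameters alpha = evidence + 1; the expected class
    probabilities are alpha / sum(alpha), and the vacuity mass K / sum(alpha)
    is high exactly when total evidence is low."""
    evidence = [max(0.0, z) for z in logits]   # ReLU keeps evidence >= 0
    alpha = [e + 1.0 for e in evidence]
    s = sum(alpha)
    probs = [a / s for a in alpha]             # expected class probabilities
    uncertainty = len(alpha) / s               # vacuity: total ignorance mass
    return probs, uncertainty
```

Unlike a softmax, which always sums to one even on nonsense input, the vacuity term lets the classifier say "none of the above with confidence x", which is what a discriminability metric like AUROC evaluates.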
{"title":"Elevator, Escalator, or Neither? Classifying Conveyor State Using Smartphone Under Arbitrary Pedestrian Behavior","authors":"Tianlang He;Zhiqiu Xia;S.-H. Gary Chan","doi":"10.1109/TMC.2025.3586618","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586618","url":null,"abstract":"Knowing a pedestrian’s <italic>conveyor state</i> of “elevator,” “escalator,” or “neither” is fundamental to many applications such as indoor navigation and people flow management. Previous studies on classifying the conveyor state often rely on specially designed body-worn sensors or make strong assumptions on pedestrian behaviors, which greatly strangles their deployability. To overcome this, we study the classification problem under arbitrary pedestrian behaviors using the inertial navigation system (INS) of the commonly available smartphones (including accelerometer, gyroscope, and magnetometer). This problem is challenging, because the INS signals of the conveyor states are entangled by the arbitrary and diverse pedestrian behaviors. We propose ELESON, a novel and lightweight deep-learning approach that uses phone INS to classify a pedestrian to <bold>el</b>evator, <bold>es</b>calator, <bold>o</b>r <bold>n</b>either. Using causal decomposition and adversarial learning, ELESON extracts the motion and magnetic features of conveyor state independent of pedestrian behavior, based on which it estimates the state confidence by means of an evidential classifier. We curate a large and diverse dataset with 36,420 instances of pedestrians randomly taking elevators and escalators under arbitrary unknown behaviors. 
Our extensive experiments show that ELESON is robust against pedestrian behavior, achieving a high accuracy of over 0.9 in F1 score, strong confidence discriminability of 0.81 in AUROC (Area Under the Receiver Operating Characteristics), and low computational and memory requirements fit for common smartphone deployment.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12626-12639"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
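The evidential classifier that ELESON uses to estimate state confidence follows the general subjective-logic recipe: non-negative per-class evidence parameterizes a Dirichlet distribution, from which belief and uncertainty masses fall out. A minimal sketch of that standard recipe, assuming the common formulation (Sensoy et al.) rather than ELESON's exact head:

```python
def evidential_confidence(evidence):
    """Subjective-logic style evidential output: per-class evidence e_k >= 0
    parameterizes a Dirichlet(alpha) with alpha_k = e_k + 1. Belief masses and
    a vacuity (uncertainty) mass sum to 1. Illustrative only; ELESON's exact
    formulation may differ."""
    k = len(evidence)                      # number of classes (3: elevator / escalator / neither)
    alpha = [e + 1.0 for e in evidence]    # Dirichlet parameters
    s = sum(alpha)                         # Dirichlet strength
    belief = [e / s for e in evidence]     # per-class belief mass
    uncertainty = k / s                    # vacuity: high when total evidence is low
    probs = [a / s for a in alpha]         # expected class probabilities
    return belief, uncertainty, probs

# strong "elevator" evidence -> low vacuity; zero evidence -> vacuity 1
belief, u, probs = evidential_confidence([8.0, 1.0, 0.0])
```

With evidence [8, 1, 0] the vacuity mass is 3/12 = 0.25, and with no evidence at all it is 1; this is what lets an evidential classifier report low confidence instead of being forced to commit to a class.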
Pub Date: 2025-07-07 | DOI: 10.1109/TMC.2025.3586447
Linfeng Liu;Wenzhe Zhang;Xingyu Li;Jia Xu
At present, Unmanned Aerial Vehicle (UAV) swarms have been extensively applied in various fields. In the detection and localization of electronic signals, some UAVs may become disabled by abnormal events (e.g., electromagnetic interference or battery exhaustion), impairing the topology connectivity of the UAV swarm, i.e., partitioning its topology. For this topology recovery issue, we first propose the Robust Topology Recovery Algorithm of UAV swarm (RTRA), which recovers the topology connectivity of the UAV swarm and enhances the topology robustness (reducing the number of potential future topology recoveries) by relocating some UAVs to new positions over the shortest flight distance. Furthermore, we note that the relocated UAVs are prone to exhausting their batteries and failing because of the extra flight movements required for the topology recoveries, which degrades the topology robustness. To this end, we present the Cascading Robust Recovery Topology Algorithm of UAV swarm (CRTRA), which adopts a cascading movement strategy to share the flight movements among multiple relocated UAVs, thus avoiding battery exhaustion of the relocated UAVs. Extensive simulations and comparisons demonstrate that the proposed CRTRA can effectively recover the topology connectivity of the UAV swarm while enhancing the topology robustness and shortening the flight distance of relocated UAVs; CRTRA is especially suitable for missions such as the detection and localization of electronic signals, where UAVs are prone to failure.
{"title":"On the Robust Topology Recovery of UAV Swarm for Detection and Localization of Electronic Signals","authors":"Linfeng Liu;Wenzhe Zhang;Xingyu Li;Jia Xu","doi":"10.1109/TMC.2025.3586447","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586447","url":null,"abstract":"At present, Unmanned Aerial Vehicle (UAV) swarms have been extensively applied in various fields. In the detection and localization of electronic signals, some UAVs may become disabled by abnormal events (e.g., electromagnetic interference or battery exhaustion), impairing the topology connectivity of the UAV swarm, i.e., partitioning its topology. For this topology recovery issue, we first propose the Robust Topology Recovery Algorithm of UAV swarm (RTRA), which recovers the topology connectivity of the UAV swarm and enhances the topology robustness (reducing the number of potential future topology recoveries) by relocating some UAVs to new positions over the shortest flight distance. Furthermore, we note that the relocated UAVs are prone to exhausting their batteries and failing because of the extra flight movements required for the topology recoveries, which degrades the topology robustness. To this end, we present the Cascading Robust Recovery Topology Algorithm of UAV swarm (CRTRA), which adopts a cascading movement strategy to share the flight movements among multiple relocated UAVs, thus avoiding battery exhaustion of the relocated UAVs. Extensive simulations and comparisons demonstrate that the proposed CRTRA can effectively recover the topology connectivity of the UAV swarm while enhancing the topology robustness and shortening the flight distance of relocated UAVs; CRTRA is especially suitable for missions such as the detection and localization of electronic signals, where UAVs are prone to failure.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12595-12610"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
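The benefit of the cascading movement strategy over a direct relocation can already be seen in one dimension: rather than one UAV flying the whole distance to fill a gap, every UAV behind the gap shifts one slot, so no single battery pays for the full flight. A toy sketch of that idea; the function and scenario are hypothetical, and the paper's CRTRA operates on a real swarm topology, not a line:

```python
def cascade_relocation(positions, gap_index):
    """Toy 1-D comparison of relocation strategies after the UAV at slot
    `gap_index` fails. Returns (longest single flight under the direct
    strategy, longest single flight under the cascading strategy).
    Purely illustrative of why cascading avoids exhausting one battery."""
    # direct strategy: the last UAV flies all the way to the gap
    direct_max = abs(positions[-1] - positions[gap_index])
    # cascading strategy: each UAV beyond the gap moves one slot forward
    moves = [abs(positions[i + 1] - positions[i])
             for i in range(gap_index, len(positions) - 1)]
    cascade_max = max(moves) if moves else 0.0
    return direct_max, cascade_max
```

For UAV slots at 0, 10, 20, 30, 40 with the UAV at slot 1 failed, the direct strategy costs one UAV a 30-unit flight, while cascading caps every individual flight at 10 units.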
Pub Date: 2025-07-07 | DOI: 10.1109/TMC.2025.3586457
Yundi Wang;Xiaoyu Wang;He Huang;Haipeng Dai
Unmanned Aerial Vehicles (UAVs) can be easily deployed as auxiliary base stations thanks to their convenience and flexibility, but limited battery capacity is a bottleneck. Promising wireless power transfer (WPT) technologies can provide a continuous power supply for UAVs. However, many recent works treat the UAV battery capacity merely as a constraint, which cannot guarantee continuous UAV operation, and most studies employ intelligent path-planning algorithms that lack explicit performance guarantees. In this paper, we study the problem of Practical Optimizing UAV Trajectory in Wireless Charging Networks (POTWCN), which plans the trajectory of a wireless-powered UAV in a practical environment with obstacles by selecting candidate passing positions and determining the access order in the charging network. The goal is to maximize the benefit, i.e., to balance the total task completion time against the number of charging stations visited, so as to minimize path length and flight time while satisfying the energy constraints with a performance bound. To solve this problem, we first formalize it and prove its submodularity. Then, we propose the obstacle-aware weighted graph generation algorithm (OWGGA), which handles obstacles in the environment by forming an obstacle-avoidance path of tangents and arcs between two hovering positions around the blocking obstacles. Next, we propose a dynamic charging station selection algorithm (ACSA), which maximizes the UAV's energy utilization by limiting the number of charging stations that can be included. Within this algorithm, we introduce the Christofides algorithm and use the path lengths computed by OWGGA as the edge weights of the graph. Subsequently, considering the UAV's energy constraints, we iteratively solve the trajectory planning problem by adding the charging station with the maximal marginal benefit to the path. 
We prove that the proposed algorithm achieves an approximation ratio of $1 - 1/e$ and that the resulting path length is at most $3\pi/4$ times the optimal solution. Simulation results show that our algorithm reduces the flight distance by 38.01% and the task completion time by 34.00% on average.
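The $1 - 1/e$ ratio is the classic guarantee for greedy maximization of a monotone submodular function under a cardinality constraint (Nemhauser, Wolsey, and Fisher). A generic sketch of that greedy loop with a placeholder benefit function, shown only to illustrate the style of guarantee the paper proves, not its actual benefit function:

```python
def greedy_max(candidates, benefit, k):
    """Greedy selection for a monotone submodular set function `benefit`:
    repeatedly add the element with the largest marginal gain, up to k picks.
    For such functions this greedy loop achieves a (1 - 1/e) approximation
    of the best size-k subset."""
    chosen = []
    remaining = set(candidates)
    for _ in range(k):
        if not remaining:
            break
        best = max(remaining,
                   key=lambda x: benefit(chosen + [x]) - benefit(chosen))
        if benefit(chosen + [best]) - benefit(chosen) <= 0:
            break  # no positive marginal gain left
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

With a set-coverage benefit, for example, each round picks the candidate covering the most not-yet-covered elements, which is exactly the "maximal marginal benefit" pattern the abstract describes for adding charging stations.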
{"title":"Practical Optimizing UAV Trajectory in Wireless Charging Networks: An Approximated Approach","authors":"Yundi Wang;Xiaoyu Wang;He Huang;Haipeng Dai","doi":"10.1109/TMC.2025.3586457","DOIUrl":"https://doi.org/10.1109/TMC.2025.3586457","url":null,"abstract":"Unmanned Aerial Vehicles (UAVs) can be easily deployed as auxiliary base stations thanks to their convenience and flexibility, but limited battery capacity is a bottleneck. Promising wireless power transfer (WPT) technologies can provide a continuous power supply for UAVs. However, many recent works treat the UAV battery capacity merely as a constraint, which cannot guarantee continuous UAV operation, and most studies employ intelligent path-planning algorithms that lack explicit performance guarantees. In this paper, we study the problem of <u>P</u>ractical <u>O</u>ptimizing UAV <u>T</u>rajectory in <u>W</u>ireless <u>C</u>harging <u>N</u>etworks (POTWCN), which plans the trajectory of a wireless-powered UAV in a practical environment with obstacles by selecting candidate passing positions and determining the access order in the charging network. The goal is to maximize the benefit, i.e., to balance the total task completion time against the number of charging stations visited, so as to minimize path length and flight time while satisfying the energy constraints with a performance bound. To solve this problem, we first formalize it and prove its submodularity. Then, we propose the obstacle-aware weighted graph generation algorithm (OWGGA), which handles obstacles in the environment by forming an obstacle-avoidance path of tangents and arcs between two hovering positions around the blocking obstacles. Next, we propose a dynamic charging station selection algorithm (ACSA), which maximizes the UAV’s energy utilization by limiting the number of charging stations that can be included. Within this algorithm, we introduce the Christofides algorithm and use the path lengths computed by OWGGA as the edge weights of the graph. Subsequently, considering the UAV’s energy constraints, we iteratively solve the trajectory planning problem by adding the charging station with the maximal marginal benefit to the path. We prove that the proposed algorithm achieves an approximation ratio of <inline-formula><tex-math>$1 - 1/e$</tex-math></inline-formula> and that the resulting path length is at most <inline-formula><tex-math>$3\pi /4$</tex-math></inline-formula> times the optimal solution. Simulation results show that our algorithm reduces the flight distance by 38.01% and the task completion time by 34.00% on average.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 11","pages":"12550-12566"},"PeriodicalIF":9.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
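The tangent-and-arc construction between two hovering positions can be illustrated for a single circular obstacle: the shortest obstacle-avoiding path is two tangent segments joined by the arc they touch. A geometric sketch under that single-circle, both-points-outside assumption (not the paper's full OWGGA algorithm):

```python
import math

def detour_length(a, b, c, r):
    """Shortest path from a to b around one circular obstacle (center c,
    radius r), assuming both endpoints lie outside the circle: two tangent
    segments plus the arc between the tangent points. Falls back to the
    straight segment when it clears the obstacle. Illustrative sketch only."""
    da, db = math.dist(a, c), math.dist(b, c)
    # angle subtended at the obstacle center between a and b
    ang_a = math.atan2(a[1] - c[1], a[0] - c[0])
    ang_b = math.atan2(b[1] - c[1], b[0] - c[0])
    phi = abs(ang_a - ang_b)
    phi = min(phi, 2 * math.pi - phi)
    # angle from each center ray to its tangent point
    ta, tb = math.acos(r / da), math.acos(r / db)
    arc = phi - ta - tb
    if arc <= 0:  # straight segment clears the obstacle
        return math.dist(a, b)
    tangents = math.sqrt(da * da - r * r) + math.sqrt(db * db - r * r)
    return tangents + r * arc

# around a unit circle centered between the endpoints:
# two sqrt(3) tangents plus a pi/3 arc
length = detour_length((-2, 0), (2, 0), (0, 0), 1)
```

For endpoints at (-2, 0) and (2, 0) around a unit circle at the origin, the path is 2√3 + π/3 ≈ 4.51, versus a straight-line distance of 4; summing such detours along the tour is how arc-based edge weights exceed Euclidean ones by a bounded factor.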