Pub Date: 2025-02-05 | DOI: 10.1109/TMC.2025.3527174
{"title":"2024 Reviewers List","authors":"","doi":"10.1109/TMC.2025.3527174","DOIUrl":"https://doi.org/10.1109/TMC.2025.3527174","url":null,"abstract":"","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2470-2484"},"PeriodicalIF":7.7,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10874877","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143361325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deterministic network services play a vital role in supporting emerging real-time applications with bounded low latency, low jitter, and high reliability. Deterministic guarantees have permeated various types of networks, such as 5G, WiFi, satellite, and edge computing networks. From the user’s perspective, real-time applications require an end-to-end deterministic guarantee across the converged network. In this paper, we investigate the end-to-end deterministic guarantee problem across the whole converged network, aiming to provide a scalable method for different kinds of converged networks to meet the bounded end-to-end latency, jitter, and high reliability demands of each flow, while improving the network scheduling QoS. In particular, we set up a global end-to-end control plane to abstract the deterministic-related resources from the converged network, and model deterministic flow transmission using the abstracted resources. With this resource abstraction, our model works well across different underlying technologies. Given the large amount of abstracted resources in our model, it is difficult for traditional algorithms to fully utilize them. We therefore propose a deep reinforcement learning based end-to-end deterministic-related resource scheduling (E2eDRS) algorithm to schedule the network resources from end to end. By setting action groups, E2eDRS can support varying network dimensions in both horizontal and vertical end-to-end deterministic-related network architectures. Experimental results show that, for horizontal scheduling, E2eDRS increases the number of schedulable flows by an average of 1.33x and 6.01x compared with the MultiDRS and MultiNaive algorithms, respectively, and improves server load balance by 2.65x and 3.87x over the same baselines. For vertical scheduling, E2eDRS still performs better on both schedulable flow number and server load balance.
{"title":"Intelligent End-to-End Deterministic Scheduling Across Converged Networks","authors":"Zongrong Cheng;Weiting Zhang;Dong Yang;Chuan Huang;Hongke Zhang;Xuemin Sherman Shen","doi":"10.1109/TMC.2025.3530486","DOIUrl":"https://doi.org/10.1109/TMC.2025.3530486","url":null,"abstract":"Deterministic network services play a vital role for supporting emerging real-time applications with bounded low latency, jitter, and high reliability. The deterministic guarantee is penetrated into various types of networks, such as 5G, WiFi, satellite, and edge computing networks. From the user’s perspective, the real-time applications require end-to-end deterministic guarantee across the converged network. In this paper, we investigate the end-to-end deterministic guarantee problem across the whole converged network, aiming to provide a scalable method for different kinds of converged networks to meet the bounded end-to-end latency, jitter, and high reliability demands of each flow, while improving the network scheduling QoS. Particularly, we set up the global end-to-end control plane to abstract the deterministic-related resources from converged network, and model the deterministic flow transmission by using the abstracted resources. With the resource abstraction, our model can work well for different underlying technologies. Given large amounts of abstracted resources in our model, it is difficult for traditional algorithms to fully utilize the resources. Thus, we propose a deep reinforcement learning based end-to-end deterministic-related resource scheduling (E2eDRS) algorithm to schedule the network resources from end to end. By setting the action groups, the E2eDRS can support varying network dimensions both in horizontal and vertical end-to-end deterministic-related network architectures. 
Experimental results show that E2eDRS can averagely increase 1.33x and 6.01x schedulable flow number for horizontal scheduling compared with MultiDRS and MultiNaive algorithms, respectively. The E2eDRS can also optimize 2.65x and 3.87x server load balance than MultiDRS and MultiNaive algorithms, respectively. For vertical scheduling, the E2eDRS can still perform better on schedulable flow number and server load balance.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2504-2518"},"PeriodicalIF":7.7,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
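The reinforcement-learning scheduling idea can be sketched in miniature. The following is a toy illustration only, not the paper's E2eDRS: a tabular (bandit-style) Q-learner picks, for each flow, one of several abstracted end-to-end resource "slots," with a reward that favors slots whose latency stays within the flow's bound. The slot latencies and bound are hypothetical values.

```python
import random

# Toy sketch of learning-based end-to-end resource scheduling (hypothetical,
# not the paper's E2eDRS): a single-state Q-table over candidate abstracted
# resource slots; reward is positive when the slot meets the latency bound.

SLOT_LATENCY = [4, 7, 2, 9]   # abstracted latency of each candidate slot (ms)
LATENCY_BOUND = 5             # per-flow end-to-end latency bound (ms)

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0] * len(SLOT_LATENCY)          # Q-value per action (slot choice)
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = (rng.randrange(len(q)) if rng.random() < eps
             else max(range(len(q)), key=q.__getitem__))
        reward = 1.0 if SLOT_LATENCY[a] <= LATENCY_BOUND else -1.0
        q[a] += alpha * (reward - q[a])    # bandit-style Q update
    return q

q = train()
best = max(range(len(q)), key=q.__getitem__)
```

After training, the greedy choice lands on a slot that satisfies the latency bound; the real algorithm operates over action groups spanning many flows and resource dimensions.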
Pub Date: 2025-01-14 | DOI: 10.1109/TMC.2024.3521245
Jiahui Gong;Yu Liu;Tong Li;Jingtao Ding;Zhaocheng Wang;Depeng Jin
Accurately predicting mobile traffic and the number of connected users is of great importance for network resource allocation, energy saving, and related operations. However, due to complicated environmental contexts and the complex interaction between mobile traffic and connected users, mobile network prediction remains challenging. Moreover, existing approaches cannot be applied to large-scale networks because of limited hardware resources and unacceptable time costs. In this work, we propose a spatiotemporal transformer framework for multi-task mobile network prediction. Our proposed model contains three key parts. First, to capture the complex interaction between mobile traffic and connected users, we propose a temporal cross-attention encoder. Second, to identify and extract the most relevant information from various semantic relationships, we propose a hierarchical spatial encoder; this information is then used to create a more comprehensive representation of the network. Finally, a subgraph sampling method significantly reduces the required computing power while achieving performance comparable to methods that take the whole network as input, enabling the model for real-world applications. Extensive experiments demonstrate that our proposed model significantly outperforms state-of-the-art models by over 17% in both mobile traffic prediction and connected user prediction.
{"title":"STTF: A Spatiotemporal Transformer Framework for Multi-task Mobile Network Prediction","authors":"Jiahui Gong;Yu Liu;Tong Li;Jingtao Ding;Zhaocheng Wang;Depeng Jin","doi":"10.1109/TMC.2024.3521245","DOIUrl":"https://doi.org/10.1109/TMC.2024.3521245","url":null,"abstract":"Accurately predicting mobile traffic and accessed user amount is of great importance to network resource allocation, energy saving, etc. However, due to the complicated environmental contexts and complex interaction between mobile traffic and connected users, mobile network prediction is still challenging. Besides, the existing works could not be applied to large-scale networks because of the limited hardware resources and unacceptable time cost. In this work, we propose the spatiotemporal transformer framework for the multi-task mobile network prediction. Our proposed model contains three key parts. First, to capture the complex interaction between mobile traffic and connected users, we propose the temporal cross-attention encoder. Then, to identify and extract the most relevant information from various semantic relationships, we propose the hierarchical spatial encoder. This information is then used to create a more comprehensive representation of the network. Finally, the subgraph sampling method could significantly reduce the amount of computing power required and have comparable performance to the methods that input the whole network, enabling the model for real-world applications. 
Extensive experiments demonstrate that our proposed model significantly outperforms the state-of-the-art models by over 17% in both mobile traffic prediction and connected user prediction.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 5","pages":"4072-4085"},"PeriodicalIF":7.7,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143783300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
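The cross-attention mechanism at the heart of the first component can be sketched as follows. This is an illustrative pure-Python reduction, not the paper's encoder: queries come from the mobile-traffic series and keys/values from the connected-user series, so each traffic step attends over user dynamics. Scalar features are used so the feature dimension is 1.

```python
import math

# Minimal scaled dot-product cross-attention over toy scalar sequences
# (illustrative only, not the STTF temporal cross-attention encoder).

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(traffic, users):
    """Each traffic step (query) attends over all user steps (keys=values)."""
    d = 1.0  # feature dimension of the scalar toy features
    out = []
    for q in traffic:
        scores = softmax([q * k / math.sqrt(d) for k in users])
        out.append(sum(w * v for w, v in zip(scores, users)))
    return out

fused = cross_attention([0.2, 0.9, 0.4], [0.1, 0.8, 0.3])
```

Each output is a convex combination of the user-series values, weighted by query-key similarity; in the full model this runs per attention head over learned projections.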
Pub Date: 2025-01-07 | DOI: 10.1109/TMC.2025.3526185
Hao Pan;Yongjian Fu;Ye Qi;Yi-Chao Chen;Ju Ren
Air writing technology enhances text input for IoT, VR, and AR devices, offering a spatially flexible alternative to physical keyboards. Addressing the demand for such innovation, this paper presents MagicWrite, a novel system based on acoustic 1D tracking that suits mobile devices with existing speaker and microphone hardware. Compared to 2D or 3D finger tracking, 1D tracking eliminates the need for multiple microphones and/or speakers and is more universally applicable. However, recognizing handwritten letters from 1D tracking is challenging due to trajectory information loss and inter-user writing variability. To address this, we develop a general conversion technique that transforms image-based text datasets (e.g., MNIST) into 1D tracking trajectory data, generating artificial datasets of tracking traces (referred to as TrackMNISTs) to bolster system robustness and scalability. These tracking datasets facilitate the creation of personalized user databases that align with individual writing habits. Combined with a kNN classifier, MagicWrite ensures high accuracy and robustness in text input recognition while reducing computational load and energy consumption. Extensive experiments validate that MagicWrite achieves exceptional classification accuracy for unseen users and inputs in five languages, marking it as a robust solution for air writing.
{"title":"MagicWrite: One-Dimensional Acoustic Tracking-Based Air Writing System","authors":"Hao Pan;Yongjian Fu;Ye Qi;Yi-Chao Chen;Ju Ren","doi":"10.1109/TMC.2025.3526185","DOIUrl":"https://doi.org/10.1109/TMC.2025.3526185","url":null,"abstract":"Air writing technology enhances text input for IoT, VR, and AR devices, offering a spatially flexible alternative to physical keyboards. Addressing the demand for such innovation, this paper presents MagicWrite, a novel system utilizing acoustic-based 1D tracking, which is suitable for mobile devices with existing speaker and microphone infrastructure. Compared to 2D or 3D tracking of the finger, 1D tracking eliminates the need for multiple microphones and/or speakers and is more universally applicable. However, challenges emerge when using 1D tracking for recognizing handwritten letters due to trajectory loss and inter-user writing variability. To address this, we develop a general conversion technique that transforms image-based text datasets (<italic>e.g.</i>, MNIST) into 1D tracking trajectory data, generating artificial datasets of tracking traces (referred to as <italic>Track</i>MNISTs) to bolster system robustness and scalability. These tracking datasets facilitate the creation of personalized user databases that align with individual writing habits. Combined with a kNN classifier, our proposed MagicWrite ensures high accuracy and robustness in text input recognition while simultaneously reducing computational load and energy consumption. 
Extensive experiments validate that our proposed MagicWrite achieves exceptional classification accuracy for unseen users and inputs in five languages, marking it as a robust solution for air writing.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 5","pages":"4403-4418"},"PeriodicalIF":7.7,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143786334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
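The 1D-tracking-plus-kNN idea can be sketched in a few lines. This is a hypothetical reduction, not the MagicWrite pipeline: a 2D writing trajectory collapses to the sequence of finger-to-speaker distances over time, and a small 1-NN matcher compares it against per-user templates. All coordinates and templates below are made up for illustration.

```python
# Hypothetical sketch: project a 2-D stroke to a 1-D range profile (distance
# to the speaker over time), then classify with 1-NN against user templates.

def to_1d(stroke, speaker=(0.0, 0.0)):
    """Collapse a 2-D stroke to distances from the speaker position."""
    sx, sy = speaker
    return [((x - sx) ** 2 + (y - sy) ** 2) ** 0.5 for x, y in stroke]

def classify(trace, templates):
    """1-NN over equal-length 1-D traces; templates maps label -> trace."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(templates, key=lambda lbl: dist(trace, templates[lbl]))

# toy per-user templates of 1-D range profiles (illustrative values)
templates = {"L": [1.0, 1.2, 1.4, 1.8], "O": [1.0, 1.3, 1.0, 1.3]}
query = to_1d([(0.6, 0.8), (0.5, 1.1), (0.3, 1.4), (0.2, 1.8)])
label = classify(query, templates)
```

Note how two different 2D strokes can share a 1D range profile; that information loss is exactly why the paper synthesizes large TrackMNIST-style datasets to disambiguate letters.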
Pub Date: 2025-01-07 | DOI: 10.1109/TMC.2025.3526143
Cheng-Wei Ching;Xin Chen;Chaeeun Kim;Tongze Wang;Dong Chen;Dilma Da Silva;Liting Hu
Edge applications generate massive influxes of sensor data, and these data streams must be processed promptly to derive actionable intelligence. However, traditional data processing systems are not well suited to these edge applications: they often do not scale to large numbers of concurrent stream queries, do not support low-latency processing under limited edge computing resources, and do not adapt to the heterogeneity and dynamicity commonly present in edge computing environments. We therefore present AgileDart, an agile and scalable edge stream processing engine that enables fast stream processing of many concurrently running low-latency edge applications’ queries at scale in dynamic, heterogeneous edge environments. The novelty of our work lies in two parts: a dynamic dataflow abstraction that leverages distributed hash table-based peer-to-peer overlay networks to autonomously place, chain, and scale stream operators, reducing query latencies, adapting to workload variations, and recovering from failures; and a bandit-based path-planning model that re-plans data shuffling paths to adapt to unreliable and heterogeneous edge networks. We show that AgileDart outperforms Storm and EdgeWise on query latency and significantly improves scalability and adaptability when processing many real-world edge stream applications’ queries.
{"title":"AgileDART: An Agile and Scalable Edge Stream Processing Engine","authors":"Cheng-Wei Ching;Xin Chen;Chaeeun Kim;Tongze Wang;Dong Chen;Dilma Da Silva;Liting Hu","doi":"10.1109/TMC.2025.3526143","DOIUrl":"https://doi.org/10.1109/TMC.2025.3526143","url":null,"abstract":"Edge applications generate a large influx of sensor data on massive scales, and these massive data streams must be processed shortly to derive actionable intelligence. However, traditional data processing systems are not well-suited for these edge applications as they often do not scale well with a large number of concurrent stream queries, do not support low-latency processing under limited edge computing resources, and do not adapt to the level of heterogeneity and dynamicity commonly present in edge computing environments. As such, we present AgileDart, an agile and scalable edge stream processing engine that enables fast stream processing of many concurrently running low-latency edge applications’ queries at scale in dynamic, heterogeneous edge environments. The novelty of our work lies in a dynamic dataflow abstraction that leverages distributed hash table-based peer-to-peer overlay networks to autonomously place, chain, and scale stream operators to reduce query latencies, adapt to workload variations, and recover from failures and a bandit-based path planning model that re-plans the data shuffling paths to adapt to unreliable and heterogeneous edge networks. 
We show that AgileDart outperforms Storm and EdgeWise on query latency and significantly improves scalability and adaptability when processing many real-world edge stream applications’ queries.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 5","pages":"4510-4528"},"PeriodicalIF":7.7,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143786344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
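The decentralized-placement idea behind the dataflow abstraction can be sketched with plain consistent hashing. This is an assumption-laden stand-in for AgileDart's DHT overlay, not its actual protocol: each operator hashes onto a ring and is owned by the first edge node clockwise from it, so any node can compute placements without a central coordinator.

```python
import hashlib

# Sketch of DHT-style operator placement via consistent hashing (a simple
# stand-in for a peer-to-peer overlay; node/operator names are made up).

def h(key):
    """Hash a string onto a 32-bit ring position."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % 2**32

def place(operators, nodes):
    ring = sorted((h(n), n) for n in nodes)   # nodes ordered on the ring
    assign = {}
    for op in operators:
        pos = h(op)
        # first node clockwise from the operator's position (wrap to start)
        assign[op] = next((n for p, n in ring if p >= pos), ring[0][1])
    return assign

assign = place(["map-1", "filter-2", "join-3"], ["edge-a", "edge-b", "edge-c"])
```

A virtue of this scheme is incremental rebalancing: adding or removing a node only moves the operators in its ring segment, which is what makes autonomous scaling and failure recovery cheap.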
Pub Date: 2025-01-07 | DOI: 10.1109/TMC.2025.3526232
Yongliang Xu;Hang Cheng;Ximeng Liu;Changsong Jiang;Xinpeng Zhang;Meiqing Wang
Collaborative searchable encryption for group data sharing enables a consortium of authorized users to collectively generate trapdoors and decrypt search results. However, existing countermeasures may be vulnerable to a keyword guessing attack (KGA) initiated by malicious insiders, compromising the confidentiality of keywords. Moreover, these solutions often fail to guard against hostile manufacturers embedding backdoors, leading to potential information leakage. To address these challenges, we propose a novel privacy-preserving collaborative searchable encryption (PCSE) scheme tailored for group data sharing. The scheme introduces a dedicated keyword server to export server-derived keywords, thereby withstanding KGA attempts. On this basis, PCSE deploys cryptographic reverse firewalls to thwart subversion attacks. To overcome the single point of failure inherent in a single keyword server, the export of server-derived keywords is performed collaboratively by multiple keyword servers. Furthermore, PCSE supports efficient multi-keyword searches and result verification, and incorporates a rate-limiting mechanism to effectively slow down adversaries’ online KGA attempts. Security analysis demonstrates that our scheme resists both KGA and subversion attacks. Theoretical analyses and experimental results show that PCSE is significantly more practical for group data sharing systems than state-of-the-art works.
{"title":"PCSE: Privacy-Preserving Collaborative Searchable Encryption for Group Data Sharing in Cloud Computing","authors":"Yongliang Xu;Hang Cheng;Ximeng Liu;Changsong Jiang;Xinpeng Zhang;Meiqing Wang","doi":"10.1109/TMC.2025.3526232","DOIUrl":"https://doi.org/10.1109/TMC.2025.3526232","url":null,"abstract":"Collaborative searchable encryption for group data sharing enables a consortium of authorized users to collectively generate trapdoors and decrypt search results. However, existing countermeasures may be vulnerable to a keyword guessing attack (KGA) initiated by malicious insiders, compromising the confidentiality of keywords. Simultaneously, these solutions often fail to guard against hostile manufacturers embedding backdoors, leading to potential information leakage. To address these challenges, we propose a novel privacy-preserving collaborative searchable encryption (PCSE) scheme tailored for group data sharing. This scheme introduces a dedicated keyword server to export server-derived keywords, thereby withstanding KGA attempts. Based on this, PCSE deploys cryptographic reverse firewalls to thwart subversion attacks. To overcome the single point of failure inherent in a single keyword server, the export of server-derived keywords is collaboratively performed by multiple keyword servers. Furthermore, PCSE extends its capabilities to support efficient multi-keyword searches and result verification and incorporates a rate-limiting mechanism to effectively slow down adversaries’ online KGA attempts. Security analysis demonstrates that our scheme can resist KGA and subversion attack. 
Theoretical analyses and experimental results show that PCSE is significantly more practical for group data sharing systems compared with state-of-the-art works.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 5","pages":"4558-4572"},"PeriodicalIF":7.7,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143786352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
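The server-derived-keyword defense can be illustrated with a keyed hash. This is a simplified sketch, not PCSE's construction: a keyword server holding a secret key turns raw keywords into HMAC digests, so an insider without that key cannot precompute trapdoors for candidate keywords offline. All key material and function names below are hypothetical.

```python
import hashlib
import hmac

# Illustrative sketch of server-derived keywords blocking offline KGA
# (not the PCSE scheme; in PCSE the derivation is split across servers).

SERVER_KEY = b"keyword-server-secret"   # hypothetical keyword-server secret

def server_derived_keyword(keyword: str) -> bytes:
    """Keyword server binds its secret to the raw keyword."""
    return hmac.new(SERVER_KEY, keyword.encode(), hashlib.sha256).digest()

def trapdoor(keyword: str, user_secret: bytes) -> bytes:
    """User binds the derived keyword to its own secret before searching."""
    return hmac.new(user_secret, server_derived_keyword(keyword),
                    hashlib.sha256).digest()

t1 = trapdoor("diagnosis", b"user-key")
t2 = trapdoor("diagnosis", b"user-key")
t3 = trapdoor("treatment", b"user-key")
```

Determinism (same keyword, same trapdoor) is what makes server-side matching possible, while the server key forces every guess through an online, rate-limitable interaction.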
Due to the limitations of computing resources and battery capacity, the computation tasks of ground devices can be offloaded to edge servers for processing. Moreover, with the development of low Earth orbit (LEO) satellite technology, LEO satellite-terrestrial edge computing can realize a globally covering network that provides seamless computing services free of the regional restrictions of conventional terrestrial edge computing networks. In this paper, we study the computation offloading problem in LEO satellite-terrestrial edge computing systems. Ground devices can offload their computation tasks to terrestrial base stations (BSs) or LEO satellites equipped with edge servers for remote processing. We formulate the computation offloading problem to minimize the cost of devices while satisfying resource and LEO satellite communication time constraints. Since each ground device competes for transmission and computing resources to reduce its own offloading cost, we reformulate this problem as the LEO satellite-terrestrial computation offloading game (LSTCO-Game). We derive an upper bound on transmission interference and computing resource competition among devices, and then theoretically prove that at least one Nash equilibrium (NE) offloading strategy exists in the LSTCO-Game. We propose the game-theoretical distributed computation offloading (GDCO) algorithm to find the NE offloading strategy, and analyze the cost of GDCO's NE offloading strategy in the worst case. Experiments comparing the proposed GDCO algorithm with other computation offloading methods show that GDCO effectively reduces the offloading cost.
{"title":"A Game-Theoretical Approach for Distributed Computation Offloading in LEO Satellite-Terrestrial Edge Computing Systems","authors":"Ying Chen;Yaozong Yang;Jintao Hu;Yuan Wu;Jiwei Huang","doi":"10.1109/TMC.2025.3526200","DOIUrl":"https://doi.org/10.1109/TMC.2025.3526200","url":null,"abstract":"Due to the limitations of computing resources and battery capacity, the computation tasks of ground devices can be offloaded to edge servers for processing. Moreover, with the development of the low earth orbit (LEO) satellite technology, LEO satellite-terrestrial edge computing can realize a global coverage network to provide seamless computing services beyond the regional restrictions compared to the conventional terrestrial edge computing networks. In this paper, we study the computation offloading problem in the LEO satellite-terrestrial edge computing systems. Ground devices can offload their computation tasks to terrestrial base stations (BSs) or LEO satellites deployed on edge servers for remote processing. We formulate the computation offloading problem to minimize the cost of devices while satisfying resource and LEO satellite communication time constraints. Since each ground device competes for transmission and computing resources to reduce its own offloading cost, we reformulate this problem as the LEO satellite-terrestrial computation offloading game (LSTCO-Game). It is derived that there is an upper bound on transmission interference and computing resource competition among devices. Then, we theoretically prove that at least one Nash equilibrium (NE) offloading strategy exists in the LSTCO-Game. We propose the game-theoretical distributed computation offloading (GDCO) algorithm to find the NE offloading strategy. Next, we analyze the cost obtained by GDCO's NE offloading strategy in the worst case. Experiments are conducted by comparing the proposed GDCO algorithm with other computation offloading methods. 
The results show that the GDCO algorithm can effectively reduce the offloading cost.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 5","pages":"4389-4402"},"PeriodicalIF":7.7,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143786386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
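The game-theoretic mechanism can be sketched with best-response dynamics on a toy congestion game. The cost model below is entirely hypothetical (not the paper's), but it captures the structure: each device picks local processing, a base station, or a LEO satellite, a shared server's cost grows with its load, and iterating best responses converges to a Nash equilibrium where no device can lower its own cost by deviating.

```python
# Toy best-response dynamics for a congestion-style offloading game
# (hypothetical costs; not the paper's LSTCO-Game model or GDCO algorithm).

LOCAL_COST = 5.0
BASE, PER_USER = 1.0, 2.0   # shared-server cost = BASE + PER_USER * load

def best_response(choices, i):
    """Cheapest option for device i given everyone else's current choice."""
    costs = {"local": LOCAL_COST}
    for s in ("bs", "leo"):
        load = sum(1 for j, c in enumerate(choices) if c == s and j != i)
        costs[s] = BASE + PER_USER * (load + 1)
    return min(costs, key=costs.get)

def find_ne(n=4, rounds=20):
    choices = ["local"] * n
    for _ in range(rounds):
        stable = True
        for i in range(n):
            br = best_response(choices, i)
            if br != choices[i]:
                choices[i], stable = br, False
        if stable:            # a full pass with no deviation: NE reached
            break
    return choices

ne = find_ne()
```

At the fixed point every device's current choice is already its best response, which is the defining property of the NE strategy the GDCO algorithm searches for in a distributed fashion.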
Pub Date: 2025-01-06 | DOI: 10.1109/TMC.2025.3526196
Shiyu Bai;Weisong Wen;Dongzhe Su;Li-Ta Hsu
Pedestrian location tracking for emergency responses and environmental surveys in indoor scenarios tends to rely only on the user's own mobile device, reducing dependence on external services. Low-cost, small-sized inertial measurement units (IMUs) are widely deployed in mobile devices; however, they suffer from high noise levels, leading to drift in position estimation over time. In this work, we present a graph-based indoor 3D pedestrian location tracking method with inertial-only perception. The proposed method uses the onboard inertial sensors of a mobile device alone for pedestrian state estimation in a simultaneous localization and mapping (SLAM) mode. It starts with deep vertical odometry-aided 3D pedestrian dead reckoning (PDR) to predict the position in 3D space. Environment-induced behaviors, such as corner-turning and stair-taking, are regarded as landmarks, and multi-hypothesis loop closures are formed using statistical methods to handle ambiguous data association. A factor graph optimization fuses 3D PDR and behavior loop closures for state estimation. Experiments in different scenarios, performed using a smartphone, show that the proposed method achieves better location tracking than current learning-based and filtering-based methods. We also discuss the proposed method from several aspects, including the accuracy of offline optimization and of the proposed height regression, and the reliability of the multi-hypothesis behavior loop closures. A video (YouTube or BiliBili) is also shared to demonstrate our research.
{"title":"Graph-Based Indoor 3D Pedestrian Location Tracking With Inertial-Only Perception","authors":"Shiyu Bai;Weisong Wen;Dongzhe Su;Li-Ta Hsu","doi":"10.1109/TMC.2025.3526196","DOIUrl":"https://doi.org/10.1109/TMC.2025.3526196","url":null,"abstract":"Pedestrian location tracking in emergency responses and environmental surveys of indoor scenarios tend to rely only on their own mobile devices, reducing the usage of external services. Low-cost and small-sized inertial measurement units (IMU) have been widely distributed in mobile devices. However, they suffer from high-level noises, leading to drift in position estimation over time. In this work, we present a graph-based indoor 3D pedestrian location tracking with inertial-only perception. The proposed method uses onboard inertial sensors in mobile devices alone for pedestrian state estimation in a simultaneous localization and mapping (SLAM) mode. It starts with a deep vertical odometry-aided 3D pedestrian dead reckoning (PDR) to predict the position in 3D space. Environment-induced behaviors, such as corner-turning and stair-taking, are regarded as landmarks. Multi-hypothesis loop closures are formed using statistical methods to handle ambiguous data association. A factor graph optimization fuses 3D PDR and behavior loop closures for state estimation. Experiments in different scenarios are performed using a smartphone to evaluate the performance of the proposed method, which can achieve better location tracking than current learning-based and filtering-based methods. Moreover, the proposed method is also discussed in different aspects, including the accuracy of offline optimization and proposed height regression, and the reliability of the multi-hypothesis behavior loop closures. 
The video (<uri>YouTube</uri>) or (<uri>BiliBili</uri>) is also shared to display our research.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 5","pages":"4481-4495"},"PeriodicalIF":7.7,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143786333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
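The PDR prediction stage the method builds on can be sketched as plain step-and-heading dead reckoning. This is a minimal 2D illustration (the paper works in 3D with learned vertical odometry): each detected step advances the position by a step length along the current heading, so heading and step-length errors accumulate, which is exactly why behavior landmarks and loop closures are needed.

```python
import math

# Minimal step-and-heading pedestrian dead reckoning (2-D sketch only).

def pdr(steps, start=(0.0, 0.0)):
    """steps: list of (step_length_m, heading_rad); returns the 2-D track."""
    x, y = start
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        track.append((x, y))
    return track

# two steps east, then a corner turn and two steps north; the sharp heading
# change is the kind of "corner-turning" behavior the paper uses as a landmark
track = pdr([(0.7, 0.0), (0.7, 0.0), (0.7, math.pi / 2), (0.7, math.pi / 2)])
```

In the full system each such behavior becomes a factor-graph landmark, and revisiting the same corner lets the optimizer pull the drifted track back into shape.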
Pub Date: 2025-01-06 | DOI: 10.1109/TMC.2025.3526158
Cangzhu Xu;Shanshan Song;Xiujuan Wu;Guangjie Han;Miao Pan;Gaochao Xu;Jun-Hong Cui
Increasing demands for versatile applications have spurred the rapid development of unmanned underwater vehicle (UUV) networks. Nevertheless, multi-UUV movements exacerbate spatial-temporal variability, leading to serious intermittent connectivity of the underwater acoustic channel. Such phenomena challenge the identification of reliable paths for highly dynamic network routing. Existing routing protocols overlook the effects of UUV movements on the forwarding path, typically selecting forwarders based solely on the current network state, which leads to instability in packet transmission. To address these challenges, we propose a routing protocol based on a Spatial-Temporal Graph model with Q-learning for multi-UUV networks (STGR), achieving highly reliable and energy-efficient transmission. Specifically, a distributed Spatial-Temporal Graph model (STG) is proposed to depict the evolving variation characteristics (neighbor relationships, link quality, and connectivity duration) among underwater nodes over periodic intervals. We then design a Q-learning-based forwarder selection algorithm that integrates the STG into the reward function, ensuring adaptability to ever-changing conditions. We have performed extensive simulations of STGR on the Aqua-Sim-tg platform and compared it with state-of-the-art routing protocols in terms of Packet Delivery Rate (PDR), latency, energy consumption, and energy balance under different network settings. The results show that STGR yields a 24.32 percent higher PDR on average than these protocols in multi-UUV networks.
{"title":"A High Reliable Routing Protocol Based on Spatial-Temporal Graph Model for Multiple Unmanned Underwater Vehicles Network","authors":"Cangzhu Xu;Shanshan Song;Xiujuan Wu;Guangjie Han;Miao Pan;Gaochao Xu;Jun-Hong Cui","doi":"10.1109/TMC.2025.3526158","DOIUrl":"https://doi.org/10.1109/TMC.2025.3526158","url":null,"abstract":"Increasing demands for versatile applications have spurred the rapid development of Unmanned Underwater Vehicle (UUV) networks. Nevertheless, multi-UUV movements exacerbates the spatial-temporal variability, leading to serious intermittent connectivity of underwater acoustic channel. Such phenomena challenge the identification of reliable paths for high-dynamic network routing. Existing routing protocols overlook the effects of UUV movements on forwarding path, typically selecting forwarders based solely on the current network state, which lead to instability in packet transmission. To address these challenges, we propose a Routing protocol based on Spatial-Temporal Graph model with Q-learning for multi-UUV networks (STGR), achieving high reliable and energy effective transmission. Specifically, a distributed Spatial-Temporal Graph model (STG) is proposed to depict the evolving variation characteristics (neighbor relationships, link quality, and connectivity duration) among underwater nodes over periodic intervals. Then we design a Q-learning-based forwarder selection algorithm integrated with STG to calculate reward function, ensuring adaptability to the ever-changing conditions. We have performed extensive simulations of STGR on the Aqua-Sim-tg platform and compared with the state-of-the-art routing protocols in terms of Packet Delivery Rate (PDR), latency, energy consumption and energy balance with different network settings. 
The results show that STGR yields a 24.32 percent higher PDR on average than these protocols in multi-UUV networks.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 5","pages":"4434-4450"},"PeriodicalIF":7.7,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143786330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-06DOI: 10.1109/TMC.2025.3526519
Dake Zeng;Akhtar Badshah;Shanshan Tu;Muhammad Waqas;Zhu Han
The surge in smartphone and wearable device usage has propelled the advancement of Internet of Things (IoT) applications. Among these, e-healthcare stands out as a fundamental service, enabling the remote access and storage of patient-related data on a centralized medical server (MS) and facilitating connections between authorized individuals such as doctors, patients, and nurses over the public Internet. However, the inherent vulnerability of the public Internet to diverse security threats underscores the critical need for a robust and secure user authentication protocol to safeguard these essential services. This research presents a novel, resource-efficient user authentication protocol specifically designed for healthcare systems. The proposed protocol leverages the lightweight authenticated encryption with associated data (AEAD) primitive Ascon, combined with hash functions and XOR operations, and is tailored for encrypted communication among resource-constrained IoT devices. Additionally, the protocol establishes secure session keys between users and the MS, facilitating future encrypted communications and preventing unauthorized attackers from illegally obtaining users’ private data. Furthermore, comprehensive security validation, including informal security analyses, demonstrates the protocol's resilience against a spectrum of security threats. Extensive analysis reveals that our proposed protocol significantly reduces computational and communication resource requirements during the authentication phase in comparison to similar authentication protocols, underscoring its efficiency and suitability for deployment in healthcare systems.
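The hash-and-XOR masking pattern this abstract alludes to is common to lightweight authentication protocols and can be sketched with standard-library primitives. This is a simplified illustration, not the paper's protocol: Ascon is not available in the Python standard library, so the sketch uses SHA-256 as a generic one-way hash and omits the AEAD encryption step; all message fields and names are hypothetical.

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """Generic one-way hash over concatenated inputs (SHA-256 stand-in)."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR, truncated to the shorter input."""
    return bytes(x ^ y for x, y in zip(a, b))

def user_request(uid: bytes, shared_key: bytes):
    """User side: mask the identity with a hash of the pre-shared key and a
    fresh nonce, so the identity never travels in the clear (anonymity) and
    the nonce defeats replay. Assumes uid fits in 32 bytes with no trailing
    zero bytes."""
    nonce = os.urandom(16)
    masked_uid = xor(h(shared_key, nonce), uid.ljust(32, b'\0'))
    return nonce, masked_uid

def server_respond(nonce, masked_uid, shared_key, known_uids):
    """Server side: unmask the identity, check it, and derive a session key
    from the shared secret plus the per-session nonce."""
    uid = xor(h(shared_key, nonce), masked_uid).rstrip(b'\0')
    if uid not in known_uids:
        raise PermissionError("unknown user")
    session_key = h(shared_key, nonce, uid)
    return uid, session_key
```

Both ends can derive the same session key independently (the user computes `h(shared_key, nonce, uid)` locally), after which an AEAD cipher such as Ascon would protect the actual payload traffic.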
{"title":"A Security-Enhanced Ultra-Lightweight and Anonymous User Authentication Protocol for Telehealthcare Information Systems","authors":"Dake Zeng;Akhtar Badshah;Shanshan Tu;Muhammad Waqas;Zhu Han","doi":"10.1109/TMC.2025.3526519","DOIUrl":"https://doi.org/10.1109/TMC.2025.3526519","url":null,"abstract":"The surge in smartphone and wearable device usage has propelled the advancement of Internet of Things (IoT) applications. Among these, e-healthcare stands out as a fundamental service, enabling the remote access and storage of patient-related data on a centralized medical server (MS) and facilitating connections between authorized individuals such as doctors, patients, and nurses over the public Internet. However, the inherent vulnerability of the public Internet to diverse security threats underscores the critical need for a robust and secure user authentication protocol to safeguard these essential services. This research presents a novel, resource-efficient user authentication protocol specifically designed for healthcare systems. The proposed protocol leverages the lightweight authenticated encryption with associated data (AEAD) primitive Ascon, combined with hash functions and XOR operations, and is tailored for encrypted communication among resource-constrained IoT devices. Additionally, the protocol establishes secure session keys between users and the MS, facilitating future encrypted communications and preventing unauthorized attackers from illegally obtaining users’ private data. Furthermore, comprehensive security validation, including informal security analyses, demonstrates the protocol's resilience against a spectrum of security threats. 
Extensive analysis reveals that our proposed protocol significantly reduces computational and communication resource requirements during the authentication phase in comparison to similar authentication protocols, underscoring its efficiency and suitability for deployment in healthcare systems.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 5","pages":"4529-4542"},"PeriodicalIF":7.7,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143786384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}