Pub Date: 2024-11-08 | DOI: 10.1109/TMC.2024.3494757
Juyeop Kim;Soomin Kwon;Jiyoon Han;Taegyeom Lee;Ohyun Jo
Machine Learning (ML) has recently been regarded as a key technology for bringing outstanding performance to wireless communications. Prior research has highlighted the potential of Support Vector Machines (SVMs), which train their models based on optimization theory, to enhance the performance of wireless communications. However, practical issues make SVMs difficult to apply in a wireless communication system. SVM training generally entails high computational complexity, so the model requires a significant amount of time to train. Moreover, the model must be trained on the entire dataset at once, requiring a substantial amount of memory for data storage. To enable SVMs in wireless communications, we propose the Real-Time Channel Vector Classifier (RTCVC), which employs a light-weight SVM model capable of training on and processing incoming data in real time. A novel input data pre-processing technique reduces the computational overhead associated with calculating non-linear functions. A rearranged formulation of the original problem also allows each SVM sub-model to be trained distributively over time based on incremental parameters. For performance evaluation, we implement the RTCVC inter-operating with 5G beam index detection, whose detection probability has been theoretically proven to be significantly enhanced by SVMs. The software modules of the RTCVC are based on LibSVM, a well-known open-source library for implementing SVM sub-models. The experimental results confirm that RTCVC significantly reduces training time while maintaining suitable performance for 5G beam index detection.
{"title":"Design and Implementation of a Light-Weight Channel Vector Classifier Based on Support Vector Machine for Real-Time 5G Beam Index Detection","authors":"Juyeop Kim;Soomin Kwon;Jiyoon Han;Taegyeom Lee;Ohyun Jo","doi":"10.1109/TMC.2024.3494757","journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2660-2672"},"PeriodicalIF":7.7,"publicationDate":"2024-11-08","publicationTypes":"Journal Article"}
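The incremental, memory-light training that RTCVC targets — as opposed to batch SVM training over the whole dataset at once — can be illustrated with a toy streaming linear SVM trained by stochastic hinge-loss updates. This is a minimal sketch, not the paper's algorithm: the data, function names, and constants are hypothetical stand-ins.

```python
import random

def hinge_sgd_step(w, b, x, y, lr=0.1, lam=0.01):
    """One SGD step on the soft-margin (hinge-loss) SVM objective
    for a single sample x with label y in {-1, +1}."""
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    if margin < 1:
        # Hinge gradient active: move toward correct classification.
        w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
        b = b + lr * y
    else:
        # Only the regularization gradient applies.
        w = [wi - lr * lam * wi for wi in w]
    return w, b

def train_streaming(stream, dim, epochs=20):
    """Train sample-by-sample as data arrives, never holding gradients
    for the full dataset in memory -- the property RTCVC targets."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in stream:
            w, b = hinge_sgd_step(w, b, x, y)
    return w, b

random.seed(0)
# Two linearly separable "channel vector" clusters (toy data).
data = [([1.0 + random.random(), 1.0], +1) for _ in range(20)] + \
       [([-1.0 - random.random(), -1.0], -1) for _ in range(20)]
w, b = train_streaming(data, dim=2)
acc = sum(1 for x, y in data
          if (1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1) == y) / len(data)
print(acc)
```

On this separable toy set the streaming model classifies every sample correctly after a few epochs, while each update touches only one sample at a time.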
Pub Date: 2024-11-08 | DOI: 10.1109/TMC.2024.3495015
Yuzhe Chen;Yanjun Li;Chung Shue Chen;Kaikai Chi
Symbiotic radio (SR), combining the advantages of cognitive radio and ambient backscatter communication (AmBC), stands as a promising solution for spectrum- and energy-efficient wireless communications. In an SR network, backscatter devices (BDs) share spectrum resources with the primary transmitter (PT) by utilizing the incident radio frequency (RF) signal from the PT for uplink non-orthogonal multiple access (NOMA) transmission. The primary receiver (PR) decodes the signals of the PT and BDs via successive interference cancellation (SIC). Our goal is to establish a long-term commensalistic relationship between the PT and BDs. We address the problem of maximizing the long-term average sum rate of the BDs while ensuring a minimum average rate for the PT by optimizing the power reflection coefficients of the BDs. We explicitly consider practical constraints such as the power difference required among signals for SIC decoding and the unknown future channel state information (CSI). We prove the NP-hardness of the offline version of the problem and then employ the Lyapunov optimization technique to convert the original problem into a series of per-slot sub-problems that can be solved online without relying on future CSI. We then utilize the successive convex optimization (SCO) technique to solve the non-convex sub-problems. Extensive simulations validate that the proposed Lyapunov-SCO algorithm achieves a superior average sum rate for the BDs while ensuring the PT's required average rate. In addition, we discuss extending the proposed solution to SR networks with multiple PT-PR pairs and high-mobility BDs, and to enhancing fairness among BDs.
{"title":"Exploring Long-Term Commensalism: Throughput Maximization for Symbiotic Radio Networks","authors":"Yuzhe Chen;Yanjun Li;Chung Shue Chen;Kaikai Chi","doi":"10.1109/TMC.2024.3495015","journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2376-2393"},"PeriodicalIF":7.7,"publicationDate":"2024-11-08","publicationTypes":"Journal Article"}
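The drift-plus-penalty pattern behind Lyapunov optimization — keep a virtual queue for the long-term constraint (here, the PT's minimum average rate) and greedily maximize a weighted objective each slot — can be sketched in a few lines. Everything below (the candidate rate pairs, V, R_MIN) is a toy stand-in, not the paper's actual model.

```python
V = 10.0          # penalty weight: larger V favors the BDs' sum rate
R_MIN = 1.0       # PT's required long-term average rate

def per_slot_decision(Q, candidates):
    """candidates: achievable (bd_sum_rate, pt_rate) pairs this slot.
    Drift-plus-penalty: maximize V * bd_sum_rate + Q * pt_rate, so a
    large deficit queue Q steers the choice toward protecting the PT."""
    return max(candidates, key=lambda c: V * c[0] + Q * c[1])

Q = 0.0           # virtual queue: accumulated deficit of the PT's rate
avg_pt = 0.0
T = 1000
for t in range(T):
    # Toy channel with two operating points: BD-friendly vs PT-friendly.
    candidates = [(2.0, 0.6), (0.5, 1.8)]
    bd_rate, pt_rate = per_slot_decision(Q, candidates)
    Q = max(Q + R_MIN - pt_rate, 0.0)   # virtual queue update
    avg_pt += pt_rate / T
print(round(avg_pt, 2), round(Q, 1))
```

The controller alternates between the two operating points exactly often enough that the time-average PT rate settles near R_MIN while the virtual queue stays bounded — the mechanism that makes the per-slot sub-problems enforce the long-term constraint without future CSI.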
Pub Date: 2024-11-08 | DOI: 10.1109/TMC.2024.3494713
Hui Qian;Hongmei Chai;Ammar Hawbani;Yuanguo Bi;Na Lin;Liang Zhao
Vehicle-to-Everything (V2X) technology plays a pivotal role in enabling real-time traffic coordination, safety warning, and decision support. Within V2X, the Basic Safety Message (BSM) serves as the core message for transmitting critical vehicle status, location, and intention information, providing a foundation for reliable traffic safety and coordination mechanisms. Data accuracy is key to the effectiveness and reliability of a V2X system, as the transmission of erroneous data can result in severe traffic accidents. During vehicular operation, sensors may generate erroneous data owing to loose mountings or external conditions, yet immediate sensor replacement is often impractical or infeasible. Therefore, this paper introduces a collaborative scheme involving vehicles, Road Side Units (RSUs), and a Data Center (DC) to jointly enhance the accuracy of vehicle-transmitted BSMs. Our scheme analyzes statistical features of vehicle driving information to detect erroneous BSMs. These detected errors are then corrected by leveraging historical data from the vehicle and its relative relationship with surrounding vehicles. In addition, we propose a time optimization method to reduce the average per-message processing time at RSUs. Extensive experimental results demonstrate that the proposed scheme can accurately detect and effectively correct erroneous BSMs, and the entire scheme meets the requisite computational latency requirements.
{"title":"A Collaborative Error Detection and Correction Scheme for Safety Message in V2X","authors":"Hui Qian;Hongmei Chai;Ammar Hawbani;Yuanguo Bi;Na Lin;Liang Zhao","doi":"10.1109/TMC.2024.3494713","journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2594-2611"},"PeriodicalIF":7.7,"publicationDate":"2024-11-08","publicationTypes":"Journal Article"}
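The detect-then-correct idea — flag a BSM field that deviates statistically from recent history, then repair it from that history — can be sketched as below. The window size, the 3-sigma threshold, and the speed trace are illustrative choices, not the paper's actual statistical features.

```python
from statistics import mean, stdev

def detect_and_correct(speeds, window=10, k=3.0):
    """Flag a speed reading as erroneous when it deviates more than
    k standard deviations from the recent window, then replace it with
    the window mean (a stand-in for the paper's history-based
    correction). Returns (corrected series, indices flagged)."""
    corrected, flagged = [], []
    for i, s in enumerate(speeds):
        hist = corrected[-window:]
        if len(hist) >= 3:                      # need some history first
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and abs(s - mu) > k * sigma:
                flagged.append(i)
                s = mu                          # correct from history
        corrected.append(s)
    return corrected, flagged

# A smooth speed profile (m/s) with one faulty-sensor spike at index 6.
speeds = [20.0, 20.5, 21.0, 21.2, 21.4, 21.6, 80.0, 21.8, 22.0, 22.1]
fixed, bad = detect_and_correct(speeds)
print(bad)
```

The spike is the only reading flagged, and it is replaced by a value consistent with the vehicle's recent trajectory; the real scheme additionally cross-checks against surrounding vehicles.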
Pub Date: 2024-11-07 | DOI: 10.1109/TMC.2024.3493597
Jiangjin Yin;Xin Xie;Hangyu Mao;Song Guo
Radio frequency identification (RFID) systems have been extensively employed to track missing items by affixing them with RFID tags. Many practical applications require efficiently identifying missing events for a specific subset of system tags (called key tags) owing to their elevated importance. Existing methods primarily aim to identify all tags, which makes it challenging to specifically identify key tags because of interference from the other, non-key tags (called ordinary tags). In light of this, several key tag identification methods follow a two-step scheme that first filters out ordinary tags and then identifies key tags. Nevertheless, this wastes too much time on tag filtering, resulting in low time efficiency. This paper presents a novel missing key tag identification protocol with two creative designs to achieve high efficiency. First, we develop a verification technique that can rapidly determine the presence or absence of key tags in scenarios containing both key tags and ordinary ones. By combining ON-OFF Keying modulation, we can verify multiple key tags in a single slot, thereby reducing the total number of slots required. Second, we design a selection technique that efficiently selects unverified key tags for further verification while filtering out verified key tags and irrelevant ordinary tags to avoid redundant data transmission. Additionally, we present an enhanced protocol that leverages a preselection technique to avoid collecting useless tag responses, further boosting efficiency. We carry out rigorous theoretical analysis to optimize the performance of the proposed protocols. Both simulations and practical experiments demonstrate that our method is markedly superior to state-of-the-art solutions.
{"title":"Efficient Missing Key Tag Identification in Large-Scale RFID Systems: An Iterative Verification and Selection Method","authors":"Jiangjin Yin;Xin Xie;Hangyu Mao;Song Guo","doi":"10.1109/TMC.2024.3493597","journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2253-2269"},"PeriodicalIF":7.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article"}
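The slot-bitmap verification idea — hash each tag to a time slot, then compare the reply pattern expected from the key tags against what the reader actually hears — can be sketched as below. The hash choice, frame size, and tag IDs are invented for illustration. Note that a single frame only proves absence for slots no present tag occupies, which is one reason the actual protocol verifies iteratively over multiple rounds.

```python
import hashlib

def slot_of(tag_id, frame_size, seed):
    """Deterministic tag-to-slot mapping shared by reader and tags."""
    h = hashlib.sha256(f"{seed}:{tag_id}".encode()).hexdigest()
    return int(h, 16) % frame_size

def verify_frame(key_tags, present_tags, frame_size, seed):
    """Compare the expected bitmap (from known key-tag IDs) against the
    observed bitmap (from tags actually replying). A slot expected busy
    but observed silent proves every key tag hashed there is missing."""
    expected = [0] * frame_size
    observed = [0] * frame_size
    for t in key_tags:
        expected[slot_of(t, frame_size, seed)] = 1
    for t in present_tags:
        observed[slot_of(t, frame_size, seed)] = 1
    missing = set()
    for t in key_tags:
        s = slot_of(t, frame_size, seed)
        if expected[s] and not observed[s]:
            missing.add(t)
    return missing

key_tags = [f"KEY{i}" for i in range(8)]
# KEY3 is missing; 20 ordinary tags also share the channel.
present = set(key_tags) - {"KEY3"} | {f"ORD{i}" for i in range(20)}
miss = verify_frame(key_tags, present, frame_size=64, seed=7)
print(miss)
```

The check is sound by construction — any tag it reports missing truly is — but a collision on KEY3's slot can defer its detection to a later frame with a fresh seed.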
Pub Date: 2024-11-07 | DOI: 10.1109/TMC.2024.3493974
Zhidan Liu;Guofeng Ouyang;Bolin Zhang;Bo Du;Chao Chen;Kaishun Wu
Dynamic ridesharing has gained significant attention in recent years. However, existing ridesharing studies often optimize order dispatching and vehicle repositioning separately, leading to short-sighted decisions and underutilization of the ridesharing potential. In this paper, we propose a novel joint optimization framework called $\mathtt{JODR}$. By coordinating order dispatching and vehicle repositioning, $\mathtt{JODR}$ enhances ridesharing efficiency while ensuring high-quality service. The core idea of $\mathtt{JODR}$ is to dispatch ride orders with high demand in specific mobility directions to vehicles with sufficient available capacity, effectively balancing future supply and demand in those directions. To achieve this, we introduce a novel mobility value function that predicts the long-term mobility value of matching an order with its travel direction. By considering orders' directional mobility values, service quality assessments, and available vehicle capacities, $\mathtt{JODR}$ formulates order dispatching as a minimum-cost maximum-flow problem to derive the optimal order-vehicle assignments. Furthermore, the value function guides the intelligent repositioning of idle vehicles. Extensive experiments conducted on a large real-world dataset demonstrate the superiority of $\mathtt{JODR}$ over state-of-the-art methods across various performance metrics. These experimental results validate the effectiveness of $\mathtt{JODR}$ in improving ridesharing efficiency and experience.
{"title":"Joint Order Dispatching and Vehicle Repositioning for Dynamic Ridesharing","authors":"Zhidan Liu;Guofeng Ouyang;Bolin Zhang;Bo Du;Chao Chen;Kaishun Wu","doi":"10.1109/TMC.2024.3493974","journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 4","pages":"2628-2643"},"PeriodicalIF":7.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article"}
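On a tiny instance, the cost-minimizing order-vehicle assignment that a minimum-cost maximum-flow formulation yields can be reproduced by exhaustive search over perfect matchings, which makes the objective easy to see. The cost matrix below is hypothetical — in the paper, such costs would blend pickup distance, service quality, and the directional mobility value.

```python
from itertools import permutations

def dispatch(cost):
    """Return the assignment (order i -> vehicle best[i]) minimizing
    total cost, by brute force over all perfect matchings. This is a
    stand-in for a min-cost max-flow solver, which computes the same
    optimum at scale; cost must be a square matrix."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# cost[i][j]: toy cost of dispatching order i to vehicle j.
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
assignment, total = dispatch(cost)
print(assignment, total)
```

Brute force is factorial-time and only viable for demonstration; the flow formulation gives the same matching in polynomial time, which is what makes it practical for city-scale dispatching.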
Pub Date: 2024-11-07 | DOI: 10.1109/TMC.2024.3493592
Yifan Gu;Zhi Quan
Real-time status updates play an important role in low-latency cyber-physical systems, in which the true network traffic statistics (i.e., transmission delay and/or error rate) are often unknown and non-stationary. In such cases, short-time age-of-information (ST-AoI) is more crucial than long-term average AoI, because instantaneously high ST-AoI can lead to system failures even when the long-term average AoI is low. In this paper, we propose an adaptive sampling control (ASC) scheme to ensure a low ST-AoI outage probability, defined as the probability that the average AoI in each control cycle (i.e., over a limited number of packets) exceeds a given threshold. The ASC scheme does not rely on an explicit statistical model of the non-stationary traffic behavior. Instead, it establishes a dynamic linearization data model with a pseudo-partial derivative (PPD) parameter to capture the unknown, non-stationary traffic statistics. By estimating the PPD parameter in each control cycle, ASC determines sampling rates that ensure an extremely low ST-AoI outage probability. Both numerical simulations and real-world experiments show that the proposed ASC scheme significantly outperforms existing methods, reducing the ST-AoI outage probability almost by half.
{"title":"Adaptive Sampling for Age of Information in Non-Stationary Network Traffic","authors":"Yifan Gu;Zhi Quan","doi":"10.1109/TMC.2024.3493592","journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2110-2123"},"PeriodicalIF":7.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article"}
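The dynamic-linearization idea behind ASC — treat the unknown mapping from control input to observed AoI as locally linear, with a pseudo-partial derivative (PPD) re-estimated every control cycle — follows the standard model-free adaptive control pattern sketched below. The plant, step sizes, and target are toy values, not the paper's system.

```python
def estimate_ppd(phi, du, dy, eta=0.5, mu=1.0):
    """Projection-style PPD update: nudge the slope estimate phi so
    that phi * du better explains the observed output change dy."""
    if abs(du) < 1e-9:
        return phi
    return phi + eta * du * (dy - phi * du) / (mu + du * du)

def control_step(u, phi, y, y_target, rho=0.4, lam=1.0):
    """Drive y toward y_target using only the current PPD estimate,
    with no explicit model of the plant."""
    return u + rho * phi * (y_target - y) / (lam + phi * phi)

# Toy plant: average AoI grows linearly with the sampling interval u
# (true slope 2.0, unknown to the controller).
def plant(u):
    return 2.0 * u + 0.5

u, phi = 1.0, 1.0          # initial input and PPD guess
y = plant(u)
y_target = 3.0             # AoI level to hold
for _ in range(50):
    u_new = control_step(u, phi, y, y_target)
    y_new = plant(u_new)
    phi = estimate_ppd(phi, u_new - u, y_new - y)
    u, y = u_new, y_new
print(round(y, 3))
```

Even starting from a wrong slope estimate, the loop converges to the target output, illustrating how per-cycle PPD estimation substitutes for an explicit statistical traffic model.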
Pub Date: 2024-11-07 | DOI: 10.1109/TMC.2024.3494612
Fatemeh Najafi;Masoud Kaveh;Mohammad Reza Mosavi;Alessandro Brighente;Mauro Conti
Physical Unclonable Functions (PUFs) are hardware-based mechanisms that exploit inherent manufacturing variations to generate unique identifiers for devices. Dynamic Random Access Memory (DRAM) has emerged as a promising medium for implementing PUFs, providing a cost-effective solution without the need for additional circuitry. This makes DRAM PUFs ideal for resource-constrained environments such as Internet of Things (IoT) networks. However, current DRAM PUF implementations often either disrupt host system functions or produce unreliable responses due to environmental sensitivity. In this paper, we present EPUF, a novel approach to extracting random and unique features from DRAM cells to generate reliable PUF responses. We leverage bitmap images of binary DRAM values and their entropy features to enhance the robustness of our PUF. Through extensive real-world experiments, we demonstrate that EPUF is approximately 1.7 times faster than existing solutions, achieves 100% reliability, produces features with 47.79% uniqueness, and supports a substantial set of Challenge-Response Pairs (CRPs). These capabilities make EPUF a powerful tool for DRAM PUF-based authentication. Based on EPUF, we then propose a lightweight authentication protocol that not only offers superior security features but also surpasses state-of-the-art authentication schemes in terms of communication overhead and computational efficiency.
{"title":"EPUF: An Entropy-Derived Latency-Based DRAM Physical Unclonable Function for Lightweight Authentication in Internet of Things","authors":"Fatemeh Najafi;Masoud Kaveh;Mohammad Reza Mosavi;Alessandro Brighente;Mauro Conti","doi":"10.1109/TMC.2024.3494612","journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2422-2436"},"PeriodicalIF":7.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article"}
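The entropy features named in EPUF's title can be illustrated by computing per-block Shannon entropy over a binary DRAM readout. The block size and the readout below are invented for illustration; the paper's actual feature pipeline operates on bitmap images of DRAM values.

```python
from math import log2

def block_entropy(bits, block=8):
    """Shannon entropy (bits per cell) of each fixed-size block of a
    binary DRAM readout -- the kind of per-region feature an entropy-
    based fingerprint can be built from."""
    feats = []
    for i in range(0, len(bits) - block + 1, block):
        chunk = bits[i:i + block]
        p1 = sum(chunk) / block          # fraction of cells reading 1
        if p1 in (0.0, 1.0):
            feats.append(0.0)            # fully deterministic block
        else:
            feats.append(-p1 * log2(p1) - (1 - p1) * log2(1 - p1))
    return feats

# A stable all-zero region carries no entropy; a region whose cells
# flip half the time carries maximal entropy -- the latter cells are
# the device-unique, hard-to-clone ones.
readout = [0] * 8 + [1, 0, 1, 0, 1, 0, 1, 0]
print(block_entropy(readout))
```

High-entropy regions are where manufacturing variation shows through, so selecting them is what lets a response be both unique across devices and reproducible on one device.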
Pub Date: 2024-11-07 | DOI: 10.1109/TMC.2024.3493375
Runze Cheng;Yao Sun;Dusit Niyato;Lan Zhang;Lei Zhang;Muhammad Ali Imran
With significant advances in AI-generated content (AIGC) and the proliferation of mobile devices, providing high-quality AIGC services via wireless networks is becoming an important future direction. However, the primary challenges of AIGC service provisioning in wireless networks lie in unstable channels, limited bandwidth resources, and unevenly distributed computational resources. To this end, this paper proposes a semantic communication (SemCom)-empowered AIGC (SemAIGC) generation and transmission framework, in which only the semantic information of the content, rather than all of its binary bits, is generated and transmitted using SemCom. Specifically, SemAIGC integrates diffusion models within the semantic encoder and decoder to design a workload-adjustable transceiver, allowing computational resource utilization to be shifted between the edge and the local device. In addition, a resource-aware workload trade-off (ROOT) scheme is devised to intelligently make workload adaptation decisions for the transceiver, thus efficiently generating, transmitting, and fine-tuning content according to dynamic wireless channel conditions and service requirements. Simulations verify the superiority of the proposed SemAIGC framework in terms of latency and content quality compared to conventional approaches.
{"title":"A Wireless AI-Generated Content (AIGC) Provisioning Framework Empowered by Semantic Communication","authors":"Runze Cheng;Yao Sun;Dusit Niyato;Lan Zhang;Lei Zhang;Muhammad Ali Imran","doi":"10.1109/TMC.2024.3493375","journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2137-2150"},"PeriodicalIF":7.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article"}
Generally, a Reinforcement Learning (RL) agent updates its policy by repeatedly interacting with the environment, contingent on the rewards received for observed states and actions taken. However, environmental disturbances, which commonly lead to noisy observations (e.g., rewards and states), can significantly degrade agent performance. Furthermore, the learning performance of Multi-Agent Reinforcement Learning (MARL) is even more susceptible to noise due to interference among intelligent agents. It therefore becomes imperative to redesign MARL so as to mitigate the impact of noisy rewards. In this paper, we propose a novel decomposition-based multi-agent distributional RL method that approximates the globally shared noisy reward with a Gaussian Mixture Model (GMM) and decomposes it into a combination of individual distributional local rewards, with which each agent can be updated locally through distributional RL. Moreover, a Diffusion Model (DM) is leveraged for reward generation in order to mitigate the costly interaction expenditure of learning distributions. Furthermore, the monotonicity of the reward distribution decomposition is theoretically validated under nonnegative weights and an increasing distortion risk function, and the design of the loss function is carefully calibrated to avoid decomposition ambiguity. We also verify the effectiveness of the proposed method through extensive simulation experiments with noisy rewards. Besides, different risk-sensitive policies are evaluated to demonstrate the superiority of distributional RL in different MARL tasks.
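The two core steps of the abstract above — fitting a GMM to noisy observations of the shared reward, then decomposing the estimate into per-agent local rewards under nonnegative weights — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the simulated noise modes, the choice of the dominant component as the denoised estimate, and the fixed weight vector are all assumptions.

```python
# Minimal sketch (not the paper's method): fit a Gaussian Mixture Model to
# noisy samples of a globally shared reward, take the dominant component's
# mean as the denoised global reward, and decompose it into per-agent local
# rewards with nonnegative weights summing to one (the nonnegativity the
# abstract's monotonicity result requires).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated noisy global rewards: true reward 1.0 plus a disturbed mode.
samples = np.concatenate([
    rng.normal(1.0, 0.1, 500),   # nominal observations
    rng.normal(2.0, 0.3, 100),   # disturbance-corrupted observations
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
dominant = int(np.argmax(gmm.weights_))          # most probable mode
global_reward = float(gmm.means_[dominant, 0])   # denoised estimate, ~1.0

# Hypothetical nonnegative decomposition weights for three agents.
w = np.array([0.5, 0.3, 0.2])
local_rewards = w * global_reward                # sums back to global_reward
```

In the paper the decomposition is over full reward distributions rather than point estimates, and each agent then runs distributional RL on its local reward; the sketch only shows the mixture-fit-and-split skeleton.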
{"title":"Noise Distribution Decomposition Based Multi-Agent Distributional Reinforcement Learning","authors":"Wei Geng;Baidi Xiao;Rongpeng Li;Ning Wei;Dong Wang;Zhifeng Zhao","doi":"10.1109/TMC.2024.3492272","DOIUrl":"https://doi.org/10.1109/TMC.2024.3492272","url":null,"abstract":"Generally, Reinforcement Learning (RL) agent updates its policy by repetitively interacting with the environment, contingent on the received rewards to observed states and undertaken actions. However, the environmental disturbance, commonly leading to noisy observations (e.g., rewards and states), could significantly shape the performance of agent. Furthermore, the learning performance of Multi-Agent Reinforcement Learning (MARL) is more susceptible to noise due to the interference among intelligent agents. Therefore, it becomes imperative to revolutionize the design of MARL, so as to capably ameliorate the annoying impact of noisy rewards. In this paper, we propose a novel decomposition-based multi-agent distributional RL method by approximating the globally shared noisy reward by a Gaussian Mixture Model (GMM) and decomposing it into the combination of individual distributional local rewards, with which each agent can be updated locally through distributional RL. Moreover, a Diffusion Model (DM) is leveraged for reward generation in order to mitigate the issue of costly interaction expenditure for learning distributions. Furthermore, the monotonicity of the reward distribution decomposition is theoretically validated under nonnegative weights and increasing distortion risk function, while the design of the loss function is carefully calibrated to avoid decomposition ambiguity. We also verify the effectiveness of the proposed method through extensive simulation experiments with noisy rewards. 
Besides, different risk-sensitive policies are evaluated in order to demonstrate the superiority of distributional RL in different MARL tasks.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2301-2314"},"PeriodicalIF":7.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-07DOI: 10.1109/TMC.2024.3493032
Jin Meng;Qinglin Zhao;Weimin Wu;Minghao Jin;Penghui Song;Yingzhuang Liu
This study explores the performance optimization of uplink orthogonal frequency division multiple access (OFDMA)-based random access (UORA) in IEEE 802.11ax networks. UORA supports multi-user transmission via two methods, where users transmit either fixed-size or variable-size aggregated MAC protocol data units. However, three critical issues arise. (1) Existing studies focus only on the fixed-size method, which has low practicality, and overlook the impact of traffic load, leading to inaccurate evaluation of network performance. (2) The variable-size method has never been studied due to a complex scenario in which user frames append padding bits to fulfill the transmission opportunity constraint. (3) In realistic networks, the variable-size method sacrifices throughput to achieve high practicality and low latency. To address the first two issues, we propose two novel models based on queueing theory that accurately capture the impact of these transmission methods and various parameters (e.g., the traffic load and padding bits) on throughput, packet loss rate, and latency. To address Issue 3, we design a Dynamic Selection Algorithm of Transmission Methods (DSATM), which dynamically switches between the two transmission methods to enhance practicality, maximize throughput, and minimize latency. Finally, we conduct extensive simulations to verify the accuracy of our models and DSATM.
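The DSATM idea above — dynamically switching between the fixed-size and variable-size methods to balance throughput against latency — can be illustrated with a minimal decision rule. Everything here is hypothetical: the function, the latency-budget criterion, and the numeric estimates are placeholders, whereas in the paper the per-method performance figures come from the proposed queueing-theoretic models.

```python
# Hypothetical sketch of a DSATM-style decision: per scheduling round, pick
# the transmission method with the highest estimated throughput among those
# meeting a latency budget; if none qualifies, fall back to the
# lowest-latency method. The estimates are placeholder inputs.

def select_method(est, latency_budget_ms):
    """est: dict mapping method name -> (throughput_mbps, latency_ms)."""
    feasible = {m: v for m, v in est.items() if v[1] <= latency_budget_ms}
    if feasible:
        return max(feasible, key=lambda m: feasible[m][0])
    return min(est, key=lambda m: est[m][1])

estimates = {
    "fixed_size":    (120.0, 18.0),  # higher throughput, higher latency
    "variable_size": (95.0, 9.0),    # lower latency, padded frames
}
choice = select_method(estimates, latency_budget_ms=10.0)  # -> "variable_size"
```

Relaxing the budget (e.g., to 20 ms) would flip the choice to the fixed-size method, mirroring the throughput-versus-latency trade-off the abstract describes.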
{"title":"Enhancing IEEE 802.11ax Network Performance: An Investigation and Modeling Into Multi-User Transmission","authors":"Jin Meng;Qinglin Zhao;Weimin Wu;Minghao Jin;Penghui Song;Yingzhuang Liu","doi":"10.1109/TMC.2024.3493032","DOIUrl":"https://doi.org/10.1109/TMC.2024.3493032","url":null,"abstract":"This study explores the performance optimization of uplink orthogonal frequency division multiple access (OFDMA)-based random access (UORA) in IEEE 802.11ax networks. UORA supports multi-user transmission via two methods, where users transmit either fixed-size or variable-size aggregated MAC protocol data units. However, three critical issues arise. 1 Existing studies only focus on the fixed-size method with low practicality, and overlook the impact of traffic load which leads to inaccurate evaluation of the network performance. 2 The variable-size method has never been studied due to a complex scenario, where user frames append padding bits to fulfill the transmission opportunity constraint. 3 In realistic networks, the variable-size method sacrifices throughput to achieve high practicality and low latency. To address the first two issues, we proposed two novel models based on queueing theory that accurately capture the impact of these transmission methods and various parameters (e.g., the traffic load and padding bits) on throughput, packet loss rate, and latency. To address Issue 3, we design a <b><u>D</u></b>ynamic <b><u>S</u></b>election <b><u>A</u></b>lgorithm of <b><u>T</u></b>ransmission <b><u>M</u></b>ethods (DSATM), which dynamically switches between the two transmission methods to enhance practicality, maximize throughput, and minimize latency. 
Finally, we conducted extensive simulations to verify the accuracy of our models and DSATM.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 3","pages":"2151-2165"},"PeriodicalIF":7.7,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143360986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}