Pub Date: 2023-12-01. DOI: 10.23919/JCC.ea.2021-0446.202302
Weidong Zhou, Shengwei Lei, Chunhe Xia, Tianbo Wang
Network intrusion poses a severe threat to the Internet. However, existing intrusion detection models cannot effectively distinguish intrusions whose features overlap heavily. In addition, efficient real-time detection remains an open problem. To address these two problems, we propose a Latent Dirichlet Allocation topic-model-based framework for real-time network Intrusion Detection (LDA-ID), consisting of static and online LDA-ID. The feature-overlap problem is transformed into topic-number optimization and topic selection in static LDA-ID, so that detection is based on the latent topic features. To achieve efficient real-time detection, we design an online computing mode for static LDA-ID, in which a momentum-based parameter iteration method balances the contributions of prior knowledge and new information. Furthermore, we design two matching mechanisms to accommodate static and online LDA-ID, respectively. Experimental results on the public NSL-KDD and UNSW-NB15 datasets show that our framework achieves higher accuracy than existing methods.
Title: LDA-ID: An LDA-based framework for real-time network intrusion detection. China Communications, pp. 166-181.
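The momentum-based parameter iteration described above can be sketched as a convex blend of prior and new-batch parameter estimates. This is a minimal illustration; the function name, the `mu` value, and the toy topic-word distributions are assumptions, not details taken from the paper.

```python
import numpy as np

def momentum_update(theta_prior, theta_batch, mu=0.9):
    """Blend prior topic-word parameters with a new mini-batch estimate.

    mu weights prior knowledge, (1 - mu) weights new information; the
    result is renormalized so each row stays a probability distribution.
    """
    theta_prior = np.asarray(theta_prior, dtype=float)
    theta_batch = np.asarray(theta_batch, dtype=float)
    theta = mu * theta_prior + (1.0 - mu) * theta_batch
    return theta / theta.sum(axis=-1, keepdims=True)

# Prior topic-word distribution vs. one estimated from newly arrived traffic
prior = np.array([[0.7, 0.2, 0.1]])
batch = np.array([[0.1, 0.6, 0.3]])
print(momentum_update(prior, batch, mu=0.9))  # → [[0.64 0.24 0.12]]
```

A large `mu` makes the online model conservative (dominated by prior knowledge), while a small `mu` lets it adapt quickly to new traffic patterns.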
Radio frequency fingerprinting (RFF) is a lightweight authentication scheme that supports rapid and scalable identification in Internet of Things (IoT) systems. Deep learning (DL) is a critical enabler of RFF identification because it leverages hardware-level features. However, traditional supervised learning methods require huge numbers of labeled training samples, so establishing a high-performance supervised model with few labels under practical conditions remains challenging. To address this issue, we propose a novel RFF semi-supervised learning (RFFSSL) model that achieves better performance with few meta labels. Specifically, the RFFSSL model is built on a teacher-student network, in which the student learns from pseudo labels predicted by the teacher, and the student's output is in turn exploited to improve the teacher's performance on the labeled data. Furthermore, we conduct a comprehensive accuracy evaluation on about 50 GB of raw signal data collected from real long-term evolution (LTE) mobile phones. Experimental results demonstrate that the proposed RFFSSL scheme achieves up to 97% testing accuracy in a noisy environment with only 10% labeled samples when the number of training samples is 2,700.
Title: Radio frequency fingerprinting identification using semi-supervised learning with meta labels. Authors: Tiantian Zhang, Pinyi Ren, Dongyang Xu, Zhanyi Ren. China Communications, pp. 78-95. Pub Date: 2023-12-01. DOI: 10.23919/JCC.fa.2022-0609.202312.
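The teacher-student pseudo-labeling step above can be sketched as a confidence filter: only samples the teacher labels with high confidence are passed to the student. The function name, threshold, and toy probabilities are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def select_pseudo_labels(teacher_probs, threshold=0.95):
    """Keep only samples the teacher is confident about.

    Returns a boolean mask of retained samples and the argmax pseudo
    label for every sample; the student trains on the retained subset.
    """
    teacher_probs = np.asarray(teacher_probs)
    confidence = teacher_probs.max(axis=1)
    keep = confidence >= threshold
    pseudo = teacher_probs.argmax(axis=1)
    return keep, pseudo

probs = np.array([[0.98, 0.02],   # confident -> kept, label 0
                  [0.60, 0.40],   # uncertain -> dropped
                  [0.05, 0.95]])  # confident -> kept, label 1
keep, pseudo = select_pseudo_labels(probs)
print(pseudo[keep])  # → [0 1]
```

Filtering by confidence is the standard way to keep label noise from the teacher out of the student's training set; the paper's additional feedback loop (student output improving the teacher) is omitted here.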
Pub Date: 2023-12-01. DOI: 10.23919/JCC.ea.2022-0336.202302
Songjiao Bi, Langtao Hu, Quan Liu, Jianlan Wu, Rui Yang, L. Wu
Covert communications hide the existence of a transmission from the transmitter to the receiver. This paper considers an intelligent reflecting surface (IRS)-assisted unmanned aerial vehicle (UAV) covert communication system. Inspired by the high-dimensional data processing and decision-making capabilities of deep reinforcement learning (DRL), we propose a UAV 3D trajectory and IRS phase optimization algorithm based on a double deep Q network (TAP-DDQN) to improve covert communication performance. Simulations show that TAP-DDQN significantly improves the covert performance of the IRS-assisted UAV covert communication system compared with benchmark solutions.
Title: Deep reinforcement learning for IRS-assisted UAV covert communications. China Communications, pp. 131-141.
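The double deep Q network at the core of TAP-DDQN differs from vanilla DQN in how it forms the bootstrap target: the online network selects the next action and the target network evaluates it, which curbs Q-value overestimation. The sketch below shows only that target computation with invented Q-values; the paper's state (UAV position, IRS phases) and reward are omitted.

```python
import numpy as np

def double_dqn_target(q_online_next, q_target_next, reward, gamma=0.99, done=False):
    """Double-DQN bootstrap target for one transition.

    The online network picks the greedy action; the target network
    supplies its value estimate for that action.
    """
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))          # action selection (online net)
    return reward + gamma * q_target_next[a_star]   # action evaluation (target net)

q_online = np.array([1.0, 3.0, 2.0])  # online net prefers action 1
q_target = np.array([1.2, 0.5, 2.5])  # target net's estimates of the same actions
print(double_dqn_target(q_online, q_target, reward=1.0, gamma=0.9))  # → 1.45
```

A plain DQN would instead use `max(q_target_next)` (here 2.5), illustrating the overestimation that the double-network decoupling avoids.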
Pub Date: 2023-12-01. DOI: 10.23919/JCC.fa.2022-0726.202312
Qiang Wang, Shaoyi Xu, Rongtao Xu, Dongji Li
In this article, an efficient federated learning (FL) framework for the Internet of Vehicles (IoV) is studied. In the considered model, vehicle users implement an FL algorithm by training local FL models and sending them to a base station (BS), which generates a global FL model through model aggregation. Since each user owns data samples of diverse sizes and quality, the BS must select the proper participating users to acquire a better global model, and because the wireless bandwidth is limited, only a suitable subset of users can participate. Considering the high computational overhead of existing gradient-based selection methods, a lightweight user selection scheme based on loss decay is proposed. Moreover, the computing resources that vehicle users can devote to FL training are usually limited in the IoV when multiple other tasks must be executed, and local model training and model parameter transmission significantly affect FL latency. To address this issue, a joint communication and computing optimization problem is formulated whose objective is to minimize the FL delay in the resource-constrained system. To solve this complex nonconvex problem, an algorithm based on the concave-convex procedure (CCCP) is proposed, which achieves superior performance in small-scale, delay-insensitive FL systems. Because the convergence of CCCP is too slow in large-scale FL systems, it is unsuitable for delay-sensitive applications; therefore, a block coordinate descent algorithm based on a one-step projected gradient method is proposed to reduce the solution complexity at the cost of slight performance degradation. Simulations are conducted, and numerical results show the good performance of the proposed methods.
Title: An efficient federated learning framework deployed in resource-constrained IoV: User selection and learning time optimization schemes. China Communications, pp. 111-130.
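The loss-decay selection idea above can be sketched as ranking users by how much their local loss dropped over a round and keeping the top k, which avoids touching per-user gradients. The function name, scoring rule, and toy losses are assumptions for illustration; the paper's exact criterion may differ.

```python
def select_users_by_loss_decay(prev_loss, curr_loss, k):
    """Rank users by local-loss decrease and keep the top k.

    prev_loss / curr_loss map user id -> local training loss before and
    after the current round; a larger decrease suggests a more useful
    participant, at negligible computational cost.
    """
    decay = {u: prev_loss[u] - curr_loss[u] for u in prev_loss}
    return sorted(decay, key=decay.get, reverse=True)[:k]

prev = {"v1": 1.20, "v2": 0.90, "v3": 1.50}
curr = {"v1": 1.10, "v2": 0.40, "v3": 1.45}
print(select_users_by_loss_decay(prev, curr, k=2))  # → ['v2', 'v1']
```

Compared with gradient-norm selection, this needs only two scalars per user, which is the "lightweight" property the abstract emphasizes.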
Pub Date: 2023-12-01. DOI: 10.23919/JCC.ea.2021-0252.202302
Hong Qin, Haitao Du, Huahua Wang, Li Su, Yunfeng Peng
Mobile Edge Computing (MEC) is a technology for fifth-generation (5G) wireless communications that enables User Equipment (UE) to offload tasks to servers deployed at the network edge. However, jointly accounting for delay and energy consumption in a 5G MEC system is complex, as the two objectives usually conflict. Non-orthogonal multiple access (NOMA) enables more UEs to offload their computing tasks to MEC servers over the same spectrum resources, enhancing spectrum efficiency for 5G but making the problem even more complex in the NOMA-MEC system. In this work, a system utility maximization model is presented for the NOMA-MEC system, and two optimization algorithms, based on Newton's method and a greedy algorithm respectively, are proposed to jointly optimize the computing resource allocation, SIC order, and transmission time slot allocation, achieving a better trade-off between delay and energy consumption. Simulation results prove that the proposed method is effective for NOMA-MEC systems.
Title: Multi-objective optimization for NOMA-based mobile edge computing offloading by maximizing system utility. China Communications, pp. 156-165.
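One of the two optimizers named above is Newton's method; its scalar form is a few lines. The sketch below maximizes a stand-in concave utility, not the paper's NOMA-MEC system utility, and all names are illustrative.

```python
def newton_1d(grad, hess, x0, tol=1e-8, max_iter=50):
    """Plain 1-D Newton iteration: x <- x - U'(x)/U''(x).

    For a concave U this climbs to the maximizer; grad and hess are the
    first and second derivatives of the utility.
    """
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Stand-in concave utility U(t) = log(t) - 0.5 t, maximized at t = 2
g = lambda t: 1.0 / t - 0.5   # U'(t)
h = lambda t: -1.0 / t**2     # U''(t)
print(round(newton_1d(g, h, x0=1.0), 6))  # → 2.0
```

In the paper's joint problem this scalar step would be applied per variable (e.g., a time-slot length) inside an outer loop over the discrete SIC-order choices.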
Pub Date: 2023-12-01. DOI: 10.23919/jcc.ea.2020-0174.202302
Yuchuan Fu, Changle Li, T. Luan, Yao Zhang
Diversified traffic participants and complex traffic environments (e.g., roadblocks or road damage) challenge the decision-making accuracy of a single connected and autonomous vehicle (CAV) because of its limited sensing and computing capabilities. Using the Internet of Vehicles (IoV) to share driving rules among CAVs can overcome the limitations of a single CAV, but may also raise privacy and safety issues. To tackle this problem, this paper combines IoV and blockchain technologies to form an efficient and accurate autonomous guidance strategy. Specifically, we first use reinforcement learning for driving decision learning and give a corresponding driving rule extraction method. Then, an architecture combining IoV and blockchain is designed to ensure secure driving rule sharing. Finally, the shared rules form an effective autonomous driving guidance strategy through driving rule selection and action selection. Extensive simulations prove that the proposed strategy performs well in complex traffic environments, mainly in terms of accuracy, safety, and robustness.
Title: IoV and blockchain-enabled driving guidance strategy in complex traffic environment. China Communications, pp. 230-243.
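The "driving rule extraction" step above, turning a learned value function into shareable rules, can be sketched as reading the greedy action out of a Q-table. The states, actions, and Q-values below are invented for illustration; the paper's extraction method is more elaborate.

```python
def extract_rules(q_table):
    """Turn a learned Q-table into per-state driving rules.

    Each rule maps a traffic situation to the action with the highest
    learned Q-value; these compact rules are what would be shared over
    the IoV/blockchain layer instead of raw experience.
    """
    return {state: max(actions, key=actions.get)
            for state, actions in q_table.items()}

q = {
    "roadblock_ahead": {"keep_lane": 0.1, "change_lane": 0.8},
    "clear_road":      {"keep_lane": 0.9, "change_lane": 0.2},
}
print(extract_rules(q))
```

Sharing rules rather than trajectories is what makes the privacy argument plausible: the blockchain layer only ever sees state-action mappings.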
Due to the complex and changeable underwater environment, the performance of traditional model-based DOA estimation algorithms such as MUSIC and ESPRIT degrades greatly, and they can even produce erroneous estimates, because of the mismatch between the algorithm model and the actual environment. A neural network, in contrast, has generalization and mapping abilities and can account for noise, transmission-channel inconsistency, and other factors of the real environment. This paper therefore adopts a Back Propagation (BP) neural network as the basic framework for underwater DOA estimation and proposes three improvements. (1) To address the slow convergence of the weights and thresholds of a traditional BP network and their tendency to fall into local optima during iteration, a PSO-BP-NN based on the particle swarm optimization (PSO) algorithm is proposed. (2) Higher-order cumulants of the received signal are used to build the training model. (3) A BP network training method for an arbitrary number of sources is proposed. Finally, the effectiveness of the proposed algorithm is demonstrated by comparison with state-of-the-art algorithms and the MUSIC algorithm.
Title: Multi-source underwater DOA estimation using PSO-BP neural network based on high-order cumulant optimization. Authors: Haihua Chen, Jingyao Zhang, Binbin Jiang, Xuerong Cui, Rongrong Zhou, Yucheng Zhang. China Communications, pp. 212-229. Pub Date: 2023-12-01. DOI: 10.23919/jcc.ea.2021-0031.202302.
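The higher-order cumulant feature mentioned in improvement (2) can be illustrated with the simplest case, the fourth-order auto-cumulant c4 = E[x^4] - 3 E[x^2]^2 of a zero-mean real signal: it vanishes for Gaussian noise but not for most modulated signals, which is why cumulant features are robust training inputs. This is only the scalar case; the paper's feature construction (cumulant matrices across array sensors) is more involved.

```python
import numpy as np

def fourth_order_cumulant(x):
    """c4 = E[x^4] - 3 E[x^2]^2 for a (re-centered) real signal.

    Zero for Gaussian data, -2 for a unit-power BPSK sequence, so it
    separates signal structure from Gaussian background noise.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.mean(x**4) - 3 * np.mean(x**2) ** 2

rng = np.random.default_rng(0)
gauss = rng.normal(size=200_000)                # c4 ≈ 0
bpsk = rng.choice([-1.0, 1.0], size=200_000)    # c4 ≈ -2
print(fourth_order_cumulant(gauss), fourth_order_cumulant(bpsk))
```

Feeding such noise-suppressing statistics to the BP network, instead of raw covariances, is what makes the training model less sensitive to the underwater noise floor.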
Pub Date: 2023-12-01. DOI: 10.23919/JCC.fa.2021-0742.202312
Qiuna Niu, Wei Shi, Yongdao Xu, Weijun Wen
The 60 GHz millimeter wave (mmWave) system provides extremely high time resolution and multipath component (MPC) separation and has great potential for high-precision indoor positioning. However, ranging data are often contaminated by non-line-of-sight (NLOS) transmission. First, six features of the 60 GHz mmWave signal under LOS and NLOS conditions are evaluated. Next, a classifier built with the random forest (RF) algorithm is used to identify line-of-sight (LOS) or NLOS channels; the identification mechanism generalizes well, with classification accuracy over 97%. Finally, based on the identification results, a residual weighted least squares positioning method is proposed. All ranging information, including that from NLOS channels, is fully utilized, so positioning failures caused by insufficient LOS links are avoided. Compared with the conventional least squares approach, the positioning error of the proposed algorithm is reduced by 49%.
Title: High-accuracy NLOS identification based on random forest and high-precision positioning on 60 GHz millimeter wave. China Communications, pp. 96-110.
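The weighted least-squares positioning step above can be sketched with the standard linearization of range equations (subtracting the reference anchor's equation) plus a per-link weight matrix. This is a simplified illustration: here the weights are supplied directly, whereas in the paper they come from the NLOS-identification step and the ranging residuals.

```python
import numpy as np

def weighted_ls_position(anchors, ranges, weights):
    """2-D weighted least-squares position fix from range measurements.

    Linearizes d_i^2 = (x-x_i)^2 + (y-y_i)^2 against the first anchor,
    then solves (A^T W A) p = A^T W b. Down-weighting suspected NLOS
    links keeps their information without letting them dominate.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    (x1, y1), d1 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], ranges[1:]):
        rows.append([2 * (xi - x1), 2 * (yi - y1)])
        rhs.append(d1**2 - di**2 + (xi**2 + yi**2) - (x1**2 + y1**2))
    A, b = np.array(rows), np.array(rhs)
    W = np.diag(np.asarray(weights, dtype=float)[1:])  # one weight per non-reference link
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos = np.array([3.0, 4.0])
ranges = [np.hypot(*(true_pos - a)) for a in anchors]
print(weighted_ls_position(anchors, ranges, weights=[1, 1, 1, 0.1]))  # → [3. 4.]
```

With noise-free ranges any positive weights recover the true position; with NLOS-biased ranges, shrinking the corresponding weight is what reduces the error relative to plain least squares.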
Pub Date: 2023-12-01. DOI: 10.23919/JCC.fa.2021-0347.202312
Guangliang Pan, Wei Wang, Minglei Li
In this paper, we propose a novel deep learning (DL)-based receiver design for orthogonal frequency division multiplexing (OFDM) systems. The entire process of channel estimation, equalization, and signal detection is replaced by a neural network (NN); hence, the detector is called an NN detector (N2D). First, an OFDM signal model is established, and both the temporal and spectral characteristics of OFDM signals, which motivate the use of DL, are analyzed. Then, data generated from simulated channel statistics are used for offline training of a bi-directional long short-term memory (Bi-LSTM) NN. In particular, a discriminator (F) is added at the input of the Bi-LSTM NN to find subcarrier transmission data with optimal channel gain (OCG), which greatly improves the detector's performance. Finally, the trained N2D is used for online recovery of OFDM symbols. The bit error rate (BER) of the proposed N2D is analyzed by Monte Carlo simulation under different parameter scenarios. The simulation results demonstrate that the BER of the N2D is clearly lower than that of other algorithms, especially at high signal-to-noise ratios (SNRs), and that the N2D is robust to fluctuations in parameter values.
Title: Deep learning based signal detector for OFDM systems. China Communications, pp. 66-77.
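The OFDM signal model the paper starts from (and whose receiver side the N2D replaces) is an IFFT plus cyclic prefix at the transmitter and the inverse at the receiver. The sketch below shows that model over an ideal channel with illustrative sizes; the paper's detector replaces the demodulation/equalization chain with the Bi-LSTM network.

```python
import numpy as np

def ofdm_modulate(symbols, cp_len):
    """IFFT the subcarrier symbols and prepend a cyclic prefix."""
    time_sig = np.fft.ifft(symbols)
    return np.concatenate([time_sig[-cp_len:], time_sig])

def ofdm_demodulate(rx, cp_len, n_sc):
    """Drop the cyclic prefix and FFT back to subcarrier symbols."""
    return np.fft.fft(rx[cp_len:cp_len + n_sc])

n_sc, cp = 8, 2  # illustrative subcarrier count and CP length
rng = np.random.default_rng(1)
qpsk = (1 - 2 * rng.integers(0, 2, n_sc)) + 1j * (1 - 2 * rng.integers(0, 2, n_sc))
tx = ofdm_modulate(qpsk, cp)
rx_syms = ofdm_demodulate(tx, cp, n_sc)
print(np.allclose(rx_syms, qpsk))  # → True (ideal channel: perfect recovery)
```

Over a real multipath channel the recovered symbols are rotated and scaled per subcarrier, which is exactly the distortion the learned detector must undo.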
Pub Date: 2023-12-01. DOI: 10.23919/jcc.ea.2021-0067.202302
Tianyue Yu, Xiaoli Sun, Yueming Cai, Z. Zhu
Ultra-reliable and low-latency communication (URLLC) is still at an early stage of research due to its two strict and conflicting requirements, ultra-low latency and ultra-high reliability, and its impact on security performance remains unclear. Specifically, short-packet communication is expected to meet the delay requirement of URLLC, but the resulting degradation in reliability makes traditional physical-layer security metrics inapplicable. In this paper, we investigate secure short-packet transmission in an uplink massive multiuser multiple-input multiple-output (MU-MIMO) system under imperfect channel state information (CSI). We propose an artificial noise scheme to improve the security performance of the system and adopt the system average secrecy throughput (AST) as the analysis metric. We derive an approximate closed-form expression for the system AST and further analyze the asymptotic performance in two regimes. Furthermore, a one-dimensional search is used to maximize the system AST for a given pilot length. Numerical results verify the theoretical analysis and show which parameters affect the trade-off between security and latency.
Moreover, appropriately increasing the number of antennas at the base station (BS) and the transmission power at the user devices (UDs) can raise the system AST to the required threshold.

Title: Secure short-packet transmission in uplink massive MU-MIMO assisted URLLC under imperfect CSI. China Communications, pp. 196-211.