{"title":"信道训练样本不匹配情况下神经网络接收机的性能研究","authors":"Pedro H. C. de Souza, L. Mendes, R. Souza","doi":"10.1109/FNWF55208.2022.00099","DOIUrl":null,"url":null,"abstract":"Data-driven frameworks for wireless communications systems are currently attracting a lot of attention from researchers and practitioners alike. These frameworks based on machine learning (ML) algorithms and neural networks (NN s) architectures, are capable of solving a broad variety of tasks in the wireless communications domain as, for exam-ple, signal detection, channel estimation, channel coding and modulation classification. Moreover, these tasks are solved at a reduced computational cost in comparison to classic model-driven frameworks such as the maximum likelihood for signal detection, for instance. However, data-driven frameworks depend heavily on the dataset available, so that ML algorithms and NNs could be able to actually learn from data and optimize their parameters to solve such tasks at hand. This contrasts to the model-driven frameworks that inherently impart specialized domain knowledge and thus do not require to learn from data. Therefore, a mismatch between the dataset used for training and the actual data may severely degrade the performance of ML algorithms and NN s, especially in practical scenarios where the data statistics and distribution are unknown. In this work we analyze a recently proposed NN for detecting compressed signals, under practical scenarios of dataset samples mismatch, where channel delay profile and statistics mismatches are considered. Numerical results generated by computer simulations show that the NN is robust to statistics mismatches, whereas a significant degradation in performance is observed for channel delay profile mismatches.","PeriodicalId":300165,"journal":{"name":"2022 IEEE Future Networks World Forum (FNWF)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Performance of a Neural Network Receiver under Mismatch of Channel Training Samples\",\"authors\":\"Pedro H. C. de Souza, L. Mendes, R. Souza\",\"doi\":\"10.1109/FNWF55208.2022.00099\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Data-driven frameworks for wireless communications systems are currently attracting a lot of attention from researchers and practitioners alike. These frameworks based on machine learning (ML) algorithms and neural networks (NN s) architectures, are capable of solving a broad variety of tasks in the wireless communications domain as, for exam-ple, signal detection, channel estimation, channel coding and modulation classification. Moreover, these tasks are solved at a reduced computational cost in comparison to classic model-driven frameworks such as the maximum likelihood for signal detection, for instance. However, data-driven frameworks depend heavily on the dataset available, so that ML algorithms and NNs could be able to actually learn from data and optimize their parameters to solve such tasks at hand. This contrasts to the model-driven frameworks that inherently impart specialized domain knowledge and thus do not require to learn from data. Therefore, a mismatch between the dataset used for training and the actual data may severely degrade the performance of ML algorithms and NN s, especially in practical scenarios where the data statistics and distribution are unknown. 
In this work we analyze a recently proposed NN for detecting compressed signals, under practical scenarios of dataset samples mismatch, where channel delay profile and statistics mismatches are considered. Numerical results generated by computer simulations show that the NN is robust to statistics mismatches, whereas a significant degradation in performance is observed for channel delay profile mismatches.\",\"PeriodicalId\":300165,\"journal\":{\"name\":\"2022 IEEE Future Networks World Forum (FNWF)\",\"volume\":\"54 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Future Networks World Forum (FNWF)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FNWF55208.2022.00099\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Future Networks World Forum (FNWF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FNWF55208.2022.00099","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Performance of a Neural Network Receiver under Mismatch of Channel Training Samples
Abstract: Data-driven frameworks for wireless communications systems are currently attracting considerable attention from researchers and practitioners alike. These frameworks, based on machine learning (ML) algorithms and neural network (NN) architectures, are capable of solving a broad variety of tasks in the wireless communications domain, such as signal detection, channel estimation, channel coding, and modulation classification. Moreover, they solve these tasks at a reduced computational cost compared to classic model-driven frameworks, such as maximum likelihood signal detection. However, data-driven frameworks depend heavily on the available dataset, since ML algorithms and NNs must actually learn from data and optimize their parameters to solve the task at hand. This contrasts with model-driven frameworks, which inherently embed specialized domain knowledge and thus do not need to learn from data. Therefore, a mismatch between the dataset used for training and the actual data may severely degrade the performance of ML algorithms and NNs, especially in practical scenarios where the data statistics and distribution are unknown. In this work, we analyze a recently proposed NN for detecting compressed signals under practical scenarios of dataset sample mismatch, where channel delay profile and statistics mismatches are considered. Numerical results generated by computer simulations show that the NN is robust to statistics mismatches, whereas a significant performance degradation is observed for channel delay profile mismatches.
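To make the kind of train/test discrepancy studied here concrete, the following is a minimal NumPy sketch (not the paper's code; the function name, tap count, and decay parameters are illustrative assumptions) that draws Rayleigh-fading channel taps from two different exponential power-delay profiles, mimicking a training set whose delay profile does not match the channel seen at deployment.

```python
# Minimal sketch of a channel delay-profile mismatch between training
# and test datasets. All names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_channel(num_samples, num_taps, decay):
    """Rayleigh-fading taps with an exponential power-delay profile."""
    pdp = np.exp(-decay * np.arange(num_taps))  # power-delay profile
    pdp /= pdp.sum()                            # normalize total power to 1
    taps = (rng.standard_normal((num_samples, num_taps))
            + 1j * rng.standard_normal((num_samples, num_taps))) / np.sqrt(2)
    return taps * np.sqrt(pdp)                  # shape the taps by the PDP

# Training samples drawn from one delay profile ...
h_train = rayleigh_channel(10_000, num_taps=8, decay=0.5)
# ... while the deployment channel follows a different one (the mismatch).
h_test = rayleigh_channel(10_000, num_taps=8, decay=2.0)

# The per-tap average powers no longer agree; this is the train/test
# discrepancy that can degrade a receiver learned from the training set.
print(np.mean(np.abs(h_train) ** 2, axis=0))
print(np.mean(np.abs(h_test) ** 2, axis=0))
```

A statistics mismatch, by contrast, would keep the delay profile fixed and change the fading distribution of the taps (e.g., Rician instead of Rayleigh), which the reported simulations found the NN to tolerate far better.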