Joint SNR and Rician K-Factor Estimation Using Multimodal Network Over Mobile Fading Channels

Kosuke Tamura; Shun Kojima; Phuc V. Trinh; Shinya Sugiura; Chang-Jun Ahn

IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 766-779, published 2024-06-11.
DOI: 10.1109/TMLCN.2024.3412054
URL: https://ieeexplore.ieee.org/document/10552814/
Citations: 0
Abstract
This paper proposes a novel joint signal-to-noise ratio (SNR) and Rician K-factor estimation scheme based on supervised multimodal learning. When machine learning is used to estimate the communication environment, achieving high accuracy requires a sufficient amount of training data. To address this problem, we introduce a multimodal convolutional neural network (CNN) structure that uses different waveform formats. The proposed scheme obtains "feature diversity" by increasing the number of modalities derived from the same received signal, such as sequence data and spectrogram images. Especially with a limited dataset, training convergence is accelerated since different features can be extracted from each modality. Simulations demonstrate that the presented scheme achieves superior performance compared to conventional estimation methods.
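To make the multimodal idea concrete, the following is a minimal sketch, not the authors' actual architecture: a two-branch CNN in PyTorch with one 1-D branch for the received-signal sequence and one 2-D branch for its spectrogram image, whose features are concatenated and regressed to the pair (SNR, K-factor). The layer sizes, the 2-channel I/Q sequence representation, and the 64x64 spectrogram size are illustrative assumptions.

```python
# Illustrative two-branch multimodal CNN sketch (assumed shapes and layer sizes,
# not the paper's exact model).
import torch
import torch.nn as nn

class MultimodalEstimator(nn.Module):
    def __init__(self, seq_channels: int = 2, img_channels: int = 1):
        super().__init__()
        # Sequence branch: I/Q samples treated as a 2-channel 1-D signal.
        self.seq_branch = nn.Sequential(
            nn.Conv1d(seq_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # -> (batch, 32, 1)
            nn.Flatten(),              # -> (batch, 32)
        )
        # Image branch: spectrogram computed from the same received signal.
        self.img_branch = nn.Sequential(
            nn.Conv2d(img_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 32, 1, 1)
            nn.Flatten(),              # -> (batch, 32)
        )
        # Fusion head: concatenated features -> joint [SNR, K-factor] regression.
        self.head = nn.Sequential(
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, seq: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.seq_branch(seq), self.img_branch(img)], dim=1)
        return self.head(feats)

# Example usage with dummy inputs: a batch of 8 frames of 1024 I/Q samples
# and their 64x64 spectrogram images.
model = MultimodalEstimator()
snr_k = model(torch.randn(8, 2, 1024), torch.randn(8, 1, 64, 64))
print(snr_k.shape)  # torch.Size([8, 2])
```

The design point this sketch illustrates is that each modality gets its own feature extractor suited to its format (1-D convolutions for the sequence, 2-D convolutions for the image), and only the fused feature vector is shared, which is how the "feature diversity" described in the abstract is exploited.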