HPNet: A Compressed Neural Network for Robust Hybrid Precoding in Multi-User Massive MIMO Systems

Mingyang Chai, Suhua Tang, Ming Zhao, Wuyang Zhou
{"title":"一种用于多用户大规模MIMO系统鲁棒混合预编码的压缩神经网络","authors":"Mingyang Chai, Suhua Tang, Ming Zhao, Wuyang Zhou","doi":"10.1109/GLOBECOM42002.2020.9322109","DOIUrl":null,"url":null,"abstract":"In multi-user millimeter wave (mmWave) communications, massive multiple-input multiple-output (MIMO) systems can achieve high gain and spectral efficiency significantly. To reduce the hardware complexity and energy consumption of massive MIMO systems, hybrid precoding as a crucial technique has attracted extensive attention. Most previous works for hybrid precoding developed algorithms based on optimization or exhaustive search approaches that either lead to sub-optimal performance or have high computational complexity. Motivated by the thought of cross-fertilization between Data-driven and Model-driven approaches, we consider applying deep learning approach and introduce the Hybrid Precoding Network(HPNet), which is a compressed deep neural network exploiting the feature extracting (thanks to convolutional kernels) and generalization ability of neural networks and the natural sparsity of mmWave channels. The HPNet takes imperfect channel state information (CSI) as the input and predicts the analog precoder and baseband precoder for multi-user massive MIMO systems. Moreover, in order to make the approach more practical in real scenarios, we further introduce a model compression algorithm, using network pruning, to greatly reduce the computational complexity and memory usage of the neural network while almost retaining the model performance and then assess the influence of pruned parameters in the network. Numerical experiments demonstrate that HPNet outperforms state-of-the-art hybrid precoding schemes with higher performance and stronger robustness. Finally, we analyze and compare the computational complexity of different schemes.","PeriodicalId":12759,"journal":{"name":"GLOBECOM 2020 - 2020 IEEE Global Communications Conference","volume":"66 1","pages":"1-7"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"HPNet: A Compressed Neural Network for Robust Hybrid Precoding in Multi-User Massive MIMO Systems\",\"authors\":\"Mingyang Chai, Suhua Tang, Ming Zhao, Wuyang Zhou\",\"doi\":\"10.1109/GLOBECOM42002.2020.9322109\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In multi-user millimeter wave (mmWave) communications, massive multiple-input multiple-output (MIMO) systems can achieve high gain and spectral efficiency significantly. To reduce the hardware complexity and energy consumption of massive MIMO systems, hybrid precoding as a crucial technique has attracted extensive attention. Most previous works for hybrid precoding developed algorithms based on optimization or exhaustive search approaches that either lead to sub-optimal performance or have high computational complexity. Motivated by the thought of cross-fertilization between Data-driven and Model-driven approaches, we consider applying deep learning approach and introduce the Hybrid Precoding Network(HPNet), which is a compressed deep neural network exploiting the feature extracting (thanks to convolutional kernels) and generalization ability of neural networks and the natural sparsity of mmWave channels. The HPNet takes imperfect channel state information (CSI) as the input and predicts the analog precoder and baseband precoder for multi-user massive MIMO systems. 
Moreover, in order to make the approach more practical in real scenarios, we further introduce a model compression algorithm, using network pruning, to greatly reduce the computational complexity and memory usage of the neural network while almost retaining the model performance and then assess the influence of pruned parameters in the network. Numerical experiments demonstrate that HPNet outperforms state-of-the-art hybrid precoding schemes with higher performance and stronger robustness. Finally, we analyze and compare the computational complexity of different schemes.\",\"PeriodicalId\":12759,\"journal\":{\"name\":\"GLOBECOM 2020 - 2020 IEEE Global Communications Conference\",\"volume\":\"66 1\",\"pages\":\"1-7\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"GLOBECOM 2020 - 2020 IEEE Global Communications Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GLOBECOM42002.2020.9322109\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"GLOBECOM 2020 - 2020 IEEE Global Communications Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM42002.2020.9322109","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

In multi-user millimeter wave (mmWave) communications, massive multiple-input multiple-output (MIMO) systems can achieve significant gain and spectral efficiency. To reduce the hardware complexity and energy consumption of massive MIMO systems, hybrid precoding has attracted extensive attention as a crucial technique. Most previous works on hybrid precoding developed algorithms based on optimization or exhaustive-search approaches, which either lead to sub-optimal performance or have high computational complexity. Motivated by the idea of cross-fertilization between data-driven and model-driven approaches, we apply deep learning and introduce the Hybrid Precoding Network (HPNet), a compressed deep neural network that exploits the feature-extraction ability of convolutional kernels, the generalization ability of neural networks, and the natural sparsity of mmWave channels. HPNet takes imperfect channel state information (CSI) as input and predicts the analog and baseband precoders for multi-user massive MIMO systems. Moreover, to make the approach more practical in real scenarios, we further introduce a model compression algorithm based on network pruning, which greatly reduces the computational complexity and memory usage of the neural network while almost retaining model performance, and we then assess the influence of the pruned parameters on the network. Numerical experiments demonstrate that HPNet outperforms state-of-the-art hybrid precoding schemes in both performance and robustness. Finally, we analyze and compare the computational complexity of the different schemes.
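The abstract describes a convolutional network that maps imperfect CSI to a phase-shifter-based analog precoder and a complex baseband precoder. Below is a minimal, hypothetical sketch of such a network in PyTorch; the layer sizes, antenna/user/RF-chain counts, and the output parameterization are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of an HPNet-style hybrid precoding network (PyTorch).
# All dimensions and the architecture below are assumptions for illustration;
# the paper's exact design may differ.
import torch
import torch.nn as nn

class HPNetSketch(nn.Module):
    def __init__(self, n_tx=64, n_users=4, n_rf=4):
        super().__init__()
        self.n_tx, self.n_users, self.n_rf = n_tx, n_users, n_rf
        # Convolutional feature extractor over the (real, imag) CSI "image".
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * n_users * n_tx
        # Analog precoder uses phase shifters, so predict one phase per entry.
        self.analog_head = nn.Linear(feat_dim, n_tx * n_rf)
        # Baseband precoder: complex (n_rf x n_users) matrix, real and imag parts.
        self.baseband_head = nn.Linear(feat_dim, 2 * n_rf * n_users)

    def forward(self, csi):
        # csi: (batch, 2, n_users, n_tx) -- real/imag parts of the estimated channel.
        z = self.features(csi)
        phases = self.analog_head(z)
        # Constant-modulus analog precoder built from the predicted phases.
        f_rf = torch.polar(torch.ones_like(phases), phases)
        f_rf = f_rf.reshape(-1, self.n_tx, self.n_rf) / self.n_tx ** 0.5
        bb = self.baseband_head(z).reshape(-1, 2, self.n_rf, self.n_users)
        f_bb = torch.complex(bb[:, 0], bb[:, 1])
        return f_rf, f_bb

# Usage: a batch of 8 imperfect CSI estimates.
h_est = torch.randn(8, 2, 4, 64)
f_rf, f_bb = HPNetSketch()(h_est)
print(f_rf.shape, f_bb.shape)  # torch.Size([8, 64, 4]) torch.Size([8, 4, 4])
```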
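The abstract also mentions compressing the model with network pruning. The following sketch applies generic magnitude-based (L1) global pruning, via torch.nn.utils.prune, to the hypothetical HPNetSketch above; the paper's actual pruning criterion, schedule, and sparsity level are not stated in the abstract, so the 80% target here is an assumption.

```python
# Hypothetical sketch of magnitude-based pruning for the model sketched above.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = HPNetSketch()

# Prune the smallest-magnitude weights across all conv and linear layers jointly.
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv2d, nn.Linear))]
prune.global_unstructured(
    to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.8,  # assumed global sparsity target; not specified in the abstract
)

# Report the resulting sparsity, then make the pruning permanent
# (folds the mask into the weights and removes the reparameterization).
total = zeros = 0
for module, name in to_prune:
    w = getattr(module, name)
    total += w.numel()
    zeros += int((w == 0).sum())
    prune.remove(module, name)
print(f"global sparsity: {zeros / total:.1%}")
```

In practice such pruning is typically followed by fine-tuning so the compressed network recovers most of the original performance, which is consistent with the abstract's claim of large complexity savings with almost no performance loss.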