Communication-and-Energy Efficient Over-the-Air Federated Learning
Yipeng Liang, Qimei Chen, Guangxu Zhu, Hao Jiang, Yonina C. Eldar, Shuguang Cui
IEEE Transactions on Wireless Communications, vol. 24, no. 1, pp. 767-782. Published 2024-11-25. DOI: 10.1109/TWC.2024.3501297 (https://ieeexplore.ieee.org/document/10767214/)
Citations: 0
Abstract
Communication and energy efficiencies are two crucial objectives in the pursuit of edge intelligence in 6G networks, and they become increasingly important given the prevalence of large model training. Existing designs typically focus on either communication efficiency or energy efficiency, because improving one objective generally comes at the expense of the other. Over-the-air federated learning (OTA-FL) has recently emerged as a promising approach to enhance both efficiencies through an integrated communication and computation design. Nevertheless, most previous studies on OTA-FL only consider scenarios where the dataset for the entire FL procedure is collected and available prior to training. In real-world applications, devices continuously collect new data in an online manner. This underscores the significance of sample collection through sensing in a practical FL pipeline. We propose to integrate sensing with communication and computation into a joint design to further boost the communication-and-energy efficiencies of OTA-FL. Specifically, we consider a training latency and energy consumption minimization problem with performance guarantees. To this end, we first derive an average training error (ATE) metric to quantify convergence performance. Then, a joint sensing, communication and computation resource allocation strategy is developed based on a deep reinforcement learning (DRL) algorithm that nests convex optimization within a deep Q-network. Extensive experiments are conducted to validate our theoretical analysis, and demonstrate the effectiveness of the proposed design for communication-and-energy efficient FL.
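The paper's joint sensing-communication-computation design is not reproduced here, but the OTA-FL aggregation step it builds on can be illustrated with a minimal NumPy sketch: each device pre-equalizes its known channel gain so that the multiple-access channel itself computes the sum of model updates in a single channel use, and the server recovers a noisy average. All parameter values (number of devices, channel model, noise level) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 10, 5            # number of devices, model dimension (toy sizes)
sigma_n = 0.05          # receiver noise standard deviation (assumed)
b = 1.0                 # common receive-side scaling factor (assumed)

# Toy local model updates, one row per device.
local_updates = rng.normal(size=(K, d))

# Channel-inversion power control: each device pre-scales by 1/h_k so
# all contributions arrive with equal amplitude (idealized, no truncation).
h = rng.uniform(0.5, 1.5, size=K)          # flat-fading channel gains (assumed)
tx_signals = (b / h)[:, None] * local_updates

# The multiple-access channel superposes the simultaneous transmissions
# and adds noise; the server receives the sum "over the air" at once.
rx = (h[:, None] * tx_signals).sum(axis=0) + rng.normal(0.0, sigma_n, size=d)

# One-shot estimate of the average update across all K devices.
ota_average = rx / (b * K)
true_average = local_updates.mean(axis=0)

print(np.max(np.abs(ota_average - true_average)))  # small aggregation error
```

The point of the sketch is that aggregation cost does not scale with the number of devices: K transmissions occupy one channel use instead of K, which is the source of the communication-efficiency gain the abstract refers to; the residual noise term is what a convergence metric such as the ATE must account for.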
Journal description:
The IEEE Transactions on Wireless Communications is a prestigious publication that showcases cutting-edge advancements in wireless communications. It welcomes both theoretical and practical contributions in various areas. The scope of the Transactions encompasses a wide range of topics, including modulation and coding, detection and estimation, propagation and channel characterization, and diversity techniques. The journal also emphasizes the physical and link layer communication aspects of network architectures and protocols.
The journal also welcomes papers on non-traditional topics tied to specific application areas, including simulation tools and methodologies, orthogonal frequency division multiplexing, MIMO systems, and wireless-over-optical technologies.
Overall, the IEEE Transactions on Wireless Communications serves as a platform for high-quality manuscripts that push the boundaries of wireless communications and contribute to advancements in the field.