FedOrbit: Energy Efficient Federated Learning for Orbital Edge Computing Using Block Minifloat Arithmetic
Mohammad Reza Jabbarpour; Bahman Javadi; Philip H.W. Leong; Rodrigo N. Calheiros; David Boland
IEEE Transactions on Services Computing, vol. 17, no. 6, pp. 3657-3671 (published online 2024-10-11)
DOI: 10.1109/TSC.2024.3478768
URL: https://ieeexplore.ieee.org/document/10714035/
Citations: 0
Abstract
Low Earth Orbit (LEO) satellite constellations have diverse applications, including Earth observation, communication services, navigation, and positioning. These constellations have evolved into a valuable data source; however, analysing their data at a ground station (GS) with machine learning algorithms is challenging due to constraints on power consumption, communication bandwidth, and onboard computing capabilities. While the combination of Federated Learning (FL) and Orbital Edge Computing has been employed to address these challenges, its heavy reliance on the GS for model aggregation and the resource limitations of edge devices remain research challenges. This article presents FedOrbit, a novel energy-efficient and decentralised FL method that optimises communication with the GS and reduces power consumption. FedOrbit utilises reinforcement learning for cluster formation, satellite visiting patterns for master satellite selection, and block minifloat arithmetic for power reduction. Extensive performance evaluation under Walker Delta-based LEO constellation configurations and different datasets reveals that FedOrbit maintains high accuracy while significantly reducing communication demand, power consumption, and training time in comparison to state-of-the-art FL approaches. The proposed technique also reduces training time by 5× compared with centralised FL approaches. In addition, the use of block minifloat representation as low-precision arithmetic reduces energy consumption by 3.5× compared with the single-precision (FP32) format.
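The abstract's energy claim rests on block minifloat arithmetic, a low-precision number format in which a block of values shares a single exponent bias and each element keeps only a few exponent and mantissa bits. The sketch below simulates that rounding behaviour in NumPy to make the idea concrete; it is an illustration only, not the authors' implementation, and the function name, default block size, and bit widths (exp_bits, man_bits) are assumptions chosen for clarity.

```python
# Minimal simulation of block minifloat rounding (illustrative, not FedOrbit's code):
# each block shares one exponent bias; each element keeps exp_bits/man_bits of precision.
import numpy as np

def quantise_block_minifloat(x, block_size=64, exp_bits=4, man_bits=3):
    """Round an FP32 array to a simulated block minifloat format (returns FP32)."""
    flat = x.astype(np.float32).ravel()
    out = np.empty_like(flat)
    max_exp = 2 ** (exp_bits - 1) - 1  # per-element exponent range after the shared bias
    tiny = np.float32(1e-38)           # guard against log2(0)
    for start in range(0, flat.size, block_size):
        block = flat[start:start + block_size]
        # Shared exponent bias: exponent of the largest magnitude in the block.
        bias = np.floor(np.log2(np.max(np.abs(block)) + tiny))
        scaled = block / 2.0 ** bias
        # Per-element exponent, clipped to the minifloat's dynamic range.
        exp = np.clip(np.floor(np.log2(np.abs(scaled) + tiny)), -max_exp, max_exp)
        step = 2.0 ** (exp - man_bits)  # quantisation step given man_bits of mantissa
        out[start:start + block_size] = np.round(scaled / step) * step * 2.0 ** bias
    return out.reshape(x.shape)

# Example: round a random FP32 tensor to block minifloat precision and measure the error.
w = np.random.randn(4, 256).astype(np.float32)
w_q = quantise_block_minifloat(w)
print("max abs rounding error:", np.max(np.abs(w - w_q)))
```

On real orbital hardware the format would be applied by dedicated low-precision arithmetic units rather than simulated in FP32; the simulation only shows how a shared exponent bias preserves dynamic range while shrinking per-element precision.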
About the Journal:
IEEE Transactions on Services Computing encompasses the computing and software aspects of the science and technology of services innovation research and development. It places emphasis on algorithmic, mathematical, statistical, and computational methods central to services computing. Topics covered include Service Oriented Architecture, Web Services, Business Process Integration, Solution Performance Management, and Services Operations and Management. The transactions address mathematical foundations, security, privacy, agreement, contract, discovery, negotiation, collaboration, and quality of service for web services. It also covers composite web service creation, business and scientific applications, standards, utility models, business process modeling, integration, and collaboration within the realm of Services Computing.