Lightweight U-Net based on depthwise separable convolution for cloud detection onboard nanosatellite

Imane Khalil, Mohammed Alae Chanoui, Zine El Abidine Alaoui Ismaili, Zouhair Guennoun, Adnane Addaim, Mohammed Sbihi

The Journal of Supercomputing, published 2024-08-23. DOI: 10.1007/s11227-024-06452-8
Abstract
The typical procedure for Earth observation nanosatellites involves the sequential steps of image capture, onboard storage, and subsequent transmission to the ground station. This approach places significant demands on onboard resources and encounters bandwidth limitations; moreover, the captured images may be obstructed by cloud cover. Many current deep-learning methods achieve reasonable accuracy in cloud detection. However, the constraints posed by nanosatellites, specifically in terms of memory and energy, present challenges for effective onboard artificial intelligence implementation. Hence, we propose an optimized tiny machine learning model based on the U-Net architecture, implemented on an STM32H7 microcontroller for real-time cloud coverage prediction. The optimized U-Net architecture on the embedded device introduces depthwise separable convolution for efficient feature extraction, reducing computational complexity. By combining this method with encoder and decoder blocks, the model optimizes cloud detection for nanosatellites, showcasing a significant advancement in resource-efficient onboard processing. This approach aims to enhance the university nanosatellite mission, equipped with an RGB Gecko imager camera. The model is trained on Sentinel-2 satellite images, due to the unavailability of a large dataset for the payload imager, and is subsequently evaluated on Gecko images, demonstrating the generalizability of our approach. Our optimization approach reduces the number of network parameters by 21% compared to the original configuration while maintaining an accuracy of 89%. This reduction allows the model to occupy only 61.89 KB of flash memory, improving memory usage and computational efficiency.
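The parameter savings come from replacing each standard convolution (roughly k²·C_in·C_out weights for a k×k kernel) with a depthwise pass (k²·C_in) followed by a 1×1 pointwise pass (C_in·C_out). The sketch below is a minimal Keras illustration of such a block in a lightweight U-Net encoder stage; the kernel size, channel counts, input resolution, and function names are assumptions for illustration, not the authors' exact configuration.

```python
# Illustrative sketch of a depthwise separable convolution block, as might be
# used in a lightweight U-Net encoder. Shapes and channel counts are assumed.
import tensorflow as tf
from tensorflow.keras import layers

def ds_conv_block(x, filters):
    """Depthwise separable convolution: a per-channel 3x3 depthwise pass
    followed by a 1x1 pointwise convolution that mixes channels."""
    x = layers.DepthwiseConv2D(kernel_size=3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, kernel_size=1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

# Example encoder stage: two DS-conv blocks, a skip connection for the decoder,
# then 2x downsampling. The 256x256 RGB patch size is an assumption.
inputs = tf.keras.Input(shape=(256, 256, 3))
x = ds_conv_block(inputs, 16)
x = ds_conv_block(x, 16)
skip = x                                  # passed to the matching decoder stage
x = layers.MaxPooling2D(pool_size=2)(x)
```

A block like this can then be stacked into the usual U-Net encoder/decoder pattern and exported (for example via TensorFlow Lite conversion) for microcontroller deployment; the exact toolchain used by the authors is not specified in the abstract.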