Pub Date: 2024-11-27 | DOI: 10.1109/LGRS.2024.3508462
Camilla Caricchio;Luis Felipe Mendonça;Carlos A. D. Lentini;André T. C. Lima;David O. Silva;Pedro H. Meirelles e Góes
Noncollaborative vessels are often involved in illegal activities, and actively monitoring these vessels is one of the most challenging tasks in maritime surveillance. This study introduces a methodology that combines automatic identification system (AIS) data and SAR images in a YOLOv8 + slicing-aided hyper inference (SAHI)-based approach, as a decision-aid tool for noncooperative vessel detection, to improve maritime domain awareness. A total of 1958 augmented images were used to custom-train the YOLOv8 neural network. For the case study, 16 Sentinel-1 high-resolution ground range detected (GRDH) interferometric wide (IW) SAR images were used. During training, the custom model achieved excellent performance with satisfactory statistical results (mAP@0.5: 94.3%, precision: 92.5%, and recall: 91.9%), especially when compared to similar previous studies. The model correctly distinguished between vessels and nonvessel features such as islands, rivers, and coastlines. In the case study, the false negative (FN) detection rate was 95.4%, similar to the mAP@0.5 results found in the training and validation step, and the recall was 95.6%, both considered excellent results. The recall improvement in the case study shows that the model's performance in real-world scenarios is better than initially expected for application in noncollaborative vessel detection systems. The model showed very promising results for the operational detection of dark ships using SAR images and AIS data simultaneously.
Title: YOLOv8 Neural Network Application for Noncollaborative Vessel Detection Using Sentinel-1 SAR Data: A Case Study
Published in IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5.
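The SAHI step named in the abstract tiles each large SAR scene into overlapping patches so that small vessels stay detectable at full resolution, runs the detector per patch, and maps detections back to scene coordinates. The slice-grid computation at its core can be sketched as follows (window size and overlap ratio are illustrative assumptions, not the paper's settings):

```python
def slice_windows(img_w, img_h, slice_w=640, slice_h=640, overlap=0.2):
    """Compute (x0, y0, x1, y1) slice boxes that tile an image with
    overlapping windows, in the spirit of slicing-aided hyper inference.
    Edge slices are shifted inward so every box has the full slice size."""
    step_x = max(1, int(slice_w * (1 - overlap)))
    step_y = max(1, int(slice_h * (1 - overlap)))
    boxes = []
    for y0 in range(0, img_h, step_y):
        for x0 in range(0, img_w, step_x):
            # Clamp to the image, then shift the box back to full slice size.
            x1, y1 = min(x0 + slice_w, img_w), min(y0 + slice_h, img_h)
            boxes.append((max(0, x1 - slice_w), max(0, y1 - slice_h), x1, y1))
            if x1 == img_w:
                break
        if y1 == img_h:
            break
    return boxes
```

In a full pipeline, each slice would be passed through the trained detector and the resulting boxes offset by the slice origin, with non-maximum suppression merging duplicates in the overlap regions.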
Pub Date: 2024-11-27 | DOI: 10.1109/LGRS.2024.3506566
Jiang Qin;Bin Zou;Haolin Li;Lamei Zhang
The undesirable distortions of synthetic aperture radar (SAR) images pose a challenge to intuitive SAR interpretation. SAR-to-optical (S2O) image translation provides a feasible solution for easier interpretation of SAR and supports multisensor analysis. Currently, diffusion-based S2O models are emerging and have achieved remarkable performance in terms of perceptual metrics and fidelity. However, the numerous iterative sampling steps and slow inference speed of these diffusion models (DMs) limit their potential for practical applications. In this letter, an efficient end-to-end diffusion model (E3Diff) is developed for real-time one-step S2O translation. E3Diff not only samples as fast as generative adversarial network (GAN) models but also retains the powerful image synthesis performance of DMs, achieving high-quality S2O translation in an end-to-end manner. Specifically, SAR spatial priors are first incorporated to provide enriched conditional clues and achieve more precise control at the feature level when synthesizing optical images. Then, E3Diff is accelerated by a hybrid refinement loss, which effectively integrates the advantages of both GAN and diffusion components to achieve efficient one-step sampling. Experiments show that E3Diff achieves real-time inference speed (0.17 s per image on an A6000 GPU) and demonstrates significant image-quality improvements (35% and 27% improvement in Fréchet inception distance (FID) on the UNICORN and SEN12 datasets, respectively) compared to existing state-of-the-art (SOTA) diffusion S2O methods. This advancement highlights E3Diff's potential to enhance SAR interpretation and cross-modal applications. The code is available at https://github.com/DeepSARRS/E
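One-step sampling, as pursued by E3Diff, replaces the iterative reverse diffusion process with a direct estimate of the clean sample. The standard identity behind such an estimate, x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, can be inverted in one step; the sketch below is a generic pure-Python illustration of that identity, not the paper's implementation:

```python
import math

def one_step_x0(x_t, eps_pred, alpha_bar_t):
    """Single-step estimate of the clean sample x0 from a noisy sample x_t
    and the network's noise prediction eps_pred, by inverting the forward
    diffusion identity x_t = sqrt(abar_t)*x0 + sqrt(1 - abar_t)*eps."""
    return (x_t - math.sqrt(1.0 - alpha_bar_t) * eps_pred) / math.sqrt(alpha_bar_t)
```

Iterative samplers apply a refinement of this estimate over many timesteps; one-step models train the network so that a single application already yields a usable output, which is where the GAN-style adversarial term in a hybrid loss helps sharpen the result.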