{"title":"Vision-enhanced GNSS-based environmental context detection for autonomous vehicle navigation","authors":"Florent Feriol, Yoko Watanabe, Damien Vivet","doi":"10.1109/MFI55806.2022.9913867","DOIUrl":null,"url":null,"abstract":"Context-adaptive navigation is currently considered as one of the potential solutions to achieve a more precise and robust positioning. The goal would be to adapt the sensor parameters and the navigation filter structure so that it takes into account the context-dependant sensor performance, notably GNSS signal degradations. For that, a reliable context detection is essential. This paper proposes a GNSS-based environmental context detector which classifies the environment surrounding a vehicle into four classes: canyon, open-sky, trees and urban. A support-vector machine classifier is trained on our database collected around Toulouse. We first show the classification results of a model based on GNSS data only, revealing its limitation to distinguish trees and urban contexts. For addressing this issue, this paper proposes the vision-enhanced model by adding satellite visibility information from sky segmentation on fisheye camera images. Compared to the GNSS-only model, the proposed vision-enhanced model significantly improved the classification performance and raised an average F1-score from 78% to 86%.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"217 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MFI55806.2022.9913867","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Context-adaptive navigation is currently considered one of the potential solutions for achieving more precise and robust positioning. The goal is to adapt the sensor parameters and the navigation filter structure so that they take into account context-dependent sensor performance, notably GNSS signal degradation. For that, reliable context detection is essential. This paper proposes a GNSS-based environmental context detector that classifies the environment surrounding a vehicle into four classes: canyon, open-sky, trees and urban. A support-vector machine classifier is trained on our database collected around Toulouse. We first show the classification results of a model based on GNSS data only, revealing its limited ability to distinguish the trees and urban contexts. To address this issue, this paper proposes a vision-enhanced model that adds satellite visibility information obtained from sky segmentation on fisheye camera images. Compared to the GNSS-only model, the proposed vision-enhanced model significantly improves classification performance, raising the average F1-score from 78% to 86%.
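As a rough illustration of the kind of pipeline the abstract describes (not the authors' implementation), the sketch below trains a scikit-learn SVM on a hypothetical per-epoch feature vector combining GNSS-derived quantities with a sky-visibility ratio that would come from fisheye-image sky segmentation. The feature names (num_visible_sats, mean_cn0_dbhz, cn0_std_dbhz, sky_visibility_ratio), the synthetic data, and the labels are assumptions for illustration only; the paper's actual features, database, and hyperparameters may differ.

```python
# Minimal sketch of a four-class environmental context classifier in the spirit
# of the abstract: an SVM over GNSS features, optionally augmented with a
# sky-visibility feature from fisheye sky segmentation. All data here is
# synthetic and the feature set is an assumption, not the paper's definition.

import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

CLASSES = ["canyon", "open-sky", "trees", "urban"]

rng = np.random.default_rng(0)

# Hypothetical feature vector per GNSS epoch:
#   [num_visible_sats, mean_cn0_dbhz, cn0_std_dbhz, sky_visibility_ratio]
# Dropping the last column would emulate the GNSS-only baseline.
X = rng.normal(loc=[[12.0, 42.0, 4.0, 0.8]],
               scale=[[3.0, 5.0, 2.0, 0.15]],
               size=(400, 4))
y = rng.integers(0, len(CLASSES), size=400)  # placeholder context labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize features, then fit an RBF-kernel SVM (a common default choice).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("macro F1:", f1_score(y_test, pred, average="macro"))
```

With real labeled epochs, comparing this model trained with and without the sky_visibility_ratio column would mirror the GNSS-only versus vision-enhanced comparison reported in the abstract.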