Forward Looking Sonar Scene Matching Using Deep Learning

P. Ribeiro, M. Santos, Paulo L. J. Drews-Jr, S. Botelho

2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), December 2017, pp. 574-579
DOI: 10.1109/ICMLA.2017.00-99
Optical images suffer drastically reduced visibility under turbid underwater conditions. Sonar imaging offers an alternative form of environment perception for underwater vehicle navigation, mapping, and localization. In this work we present a novel method for acoustic scene matching. To this end, we developed and trained a new deep learning architecture, named the Sonar Matching Network (SMNet), designed to compare two acoustic images and decide whether they correspond to the same underwater scene. The acoustic images used in this paper were acquired by a forward looking sonar during a Remotely Operated Vehicle (ROV) mission. A geographic positioning system provided the ROV position, from which the ground truth score used to train the network was computed. The proposed method was validated on 36,000 samples of real data. From a binary classification perspective, our method achieved 98% accuracy when two given scenes overlap by more than ten percent.
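The abstract does not describe SMNet's layers, but the task it solves, scoring whether two images show the same scene, is commonly framed as a two-branch ("Siamese") network with shared weights. The sketch below illustrates only that general idea; the layer sizes, the random weights, and the absolute-difference fusion are illustrative assumptions, not the actual SMNet architecture.

```python
# Minimal sketch of a two-branch matcher for pairs of sonar patches,
# in the spirit of (but NOT reproducing) SMNet. All weights here are
# random and untrained; shapes and the |e1 - e2| fusion are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Shared-weight embedding branch: one random linear layer + ReLU.
W = rng.standard_normal((64, 32 * 32))   # assumed 32x32 input patches
b = rng.standard_normal(64)

def embed(img):
    """Map a flattened sonar patch to a 64-d feature vector."""
    return np.maximum(0.0, W @ img.ravel() + b)

def match_score(img_a, img_b, w=None, bias=0.0):
    """Probability-like score that two patches show the same scene.

    Both patches pass through the SAME branch (shared weights); the
    element-wise |difference| of embeddings feeds a logistic unit.
    """
    if w is None:
        w = -np.ones(64) / 64.0          # larger distance -> lower score
    d = np.abs(embed(img_a) - embed(img_b))
    return 1.0 / (1.0 + np.exp(-(w @ d + bias)))

x = rng.random((32, 32))
y = rng.random((32, 32))
# Identical patches yield zero embedding distance, so their score is the
# logistic baseline; dissimilar patches are pushed toward 0 by the
# negative weights.
print(match_score(x, x), match_score(x, y))
```

In a trained version, the shared branch and the logistic weights would be learned from labeled pairs, which is where the GPS-derived ground truth score mentioned in the abstract comes in.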