Robust View Based Navigation through View Classification
Amany Azevedo Amin, Efstathios Kagioulis, Norbert Domcsek, P. Graham, T. Nowotny, A. Philippides
UKRAS22 Conference "Robotics for Unconstrained Environments" Proceedings, 11 November 2022. DOI: 10.31256/xq3eo4f
{"title":"通过视图分类实现基于视图的鲁棒导航","authors":"Amany Azevedo Amin, Efstathios Kagioulis, Norbert Domcsek, P. Graham, T. Nowotny, A. Philippides","doi":"10.31256/xq3eo4f","DOIUrl":null,"url":null,"abstract":"—Current implementations of view-based navigation on robots have shown success, but are limited to routes of < 10m [1] [2]. This is in part because current strategies do not take into account whether a view has been correctly recognised, moving in the most familiar direction given by the rotational familiarity function (RFF) regardless of prediction confidence. We demonstrate that it is possible to use the shape of the RFF to classify if the current view is from a known position, and thus likely to provide valid navigational information, or from a position which is unknown , aliased or occluded and therefore likely to result in erroneous movement. Our model could classify these four view types with accuracies of 1.00, 0.91, 0.97 and 0.87 respectively. We hope to use these results to extend online view-based navigation and prevent robot loss in complex environments.","PeriodicalId":144066,"journal":{"name":"UKRAS22 Conference \"Robotics for Unconstrained Environments\" Proceedings","volume":"372 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robust View Based Navigation through View Classification\",\"authors\":\"Amany Azevedo Amin, Efstathios Kagioulis, Norbert Domcsek, P. Graham, T. Nowotny, A. Philippides\",\"doi\":\"10.31256/xq3eo4f\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"—Current implementations of view-based navigation on robots have shown success, but are limited to routes of < 10m [1] [2]. This is in part because current strategies do not take into account whether a view has been correctly recognised, moving in the most familiar direction given by the rotational familiarity function (RFF) regardless of prediction confidence. We demonstrate that it is possible to use the shape of the RFF to classify if the current view is from a known position, and thus likely to provide valid navigational information, or from a position which is unknown , aliased or occluded and therefore likely to result in erroneous movement. Our model could classify these four view types with accuracies of 1.00, 0.91, 0.97 and 0.87 respectively. 
We hope to use these results to extend online view-based navigation and prevent robot loss in complex environments.\",\"PeriodicalId\":144066,\"journal\":{\"name\":\"UKRAS22 Conference \\\"Robotics for Unconstrained Environments\\\" Proceedings\",\"volume\":\"372 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"UKRAS22 Conference \\\"Robotics for Unconstrained Environments\\\" Proceedings\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.31256/xq3eo4f\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"UKRAS22 Conference \"Robotics for Unconstrained Environments\" Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.31256/xq3eo4f","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robust View Based Navigation through View Classification
Abstract: Current implementations of view-based navigation on robots have shown success, but are limited to routes of less than 10 m [1], [2]. This is in part because current strategies do not take into account whether a view has been correctly recognised: the robot moves in the most familiar direction given by the rotational familiarity function (RFF) regardless of prediction confidence. We demonstrate that the shape of the RFF can be used to classify whether the current view is from a known position, and thus likely to provide valid navigational information, or from a position that is unknown, aliased, or occluded and therefore likely to result in erroneous movement. Our model classified these four view types (known, unknown, aliased, occluded) with accuracies of 1.00, 0.91, 0.97, and 0.87 respectively. We hope to use these results to extend online view-based navigation and prevent robot loss in complex environments.
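The abstract does not give implementation details, so the following is a minimal Python sketch of the general idea rather than the authors' method: score in-place rotations of the current panoramic view with a learned familiarity function to obtain the RFF, summarise the RFF's shape with a few hand-picked features, and feed those features to an off-the-shelf classifier. The feature choices, thresholds, and the `familiarity_fn` interface are all illustrative assumptions, not details from the paper.

```python
# Sketch (not the authors' implementation): classifying a view from the
# shape of its rotational familiarity function (RFF). Feature choices and
# thresholds below are assumptions for illustration only.
import numpy as np

def rotational_familiarity(view, familiarity_fn, n_rotations=90):
    """Evaluate familiarity of `view` at evenly spaced in-place rotations.

    `view` is a panoramic image (H x W array), so rotating the agent on the
    spot corresponds to rolling the image columns. `familiarity_fn` maps an
    image to a scalar familiarity score (higher = more familiar); it stands
    in for whatever trained network or image-matching score is available.
    """
    w = view.shape[1]
    shifts = np.linspace(0, w, n_rotations, endpoint=False).astype(int)
    return np.array([familiarity_fn(np.roll(view, s, axis=1)) for s in shifts])

def rff_shape_features(rff):
    """Summarise the RFF's shape with a few assumed descriptive features."""
    peak = rff.max()
    depth = peak - rff.mean()      # how much the best heading stands out
    spread = rff.std()             # overall variability of the curve
    # Fraction of headings within 90% of the peak: a narrow high region
    # suggests a confident, well-localised heading estimate.
    width = np.mean(rff >= 0.9 * peak)
    # Count of local maxima near the peak height: several comparable maxima
    # could indicate an aliased view (similar scenes in multiple directions).
    interior = rff[1:-1]
    n_peaks = int(np.sum((interior > rff[:-2]) &
                         (interior > rff[2:]) &
                         (interior >= 0.8 * peak)))
    return np.array([peak, depth, spread, width, n_peaks])

# A standard classifier over these features could then separate the four
# view types (known / unknown / aliased / occluded), e.g. with scikit-learn:
#   from sklearn.ensemble import RandomForestClassifier
#   clf = RandomForestClassifier().fit(features_train, labels_train)
#   view_type = clf.predict([rff_shape_features(rff)])
```

The design intuition is that a known view should yield an RFF with a single sharp, deep peak, whereas unknown, aliased, or occluded views should produce flat, multi-peaked, or distorted curves; any concrete feature set and classifier would need to be validated against labelled views as in the paper.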