DIRD is an illumination robust descriptor
Henning Lategahn, Johannes Beck, C. Stiller
2014 IEEE Intelligent Vehicles Symposium Proceedings, published 2014-06-08
DOI: 10.1109/IVS.2014.6856421
Many robotics applications nowadays use cameras for various tasks such as place recognition, localization, and mapping. These methods depend heavily on image descriptors. A plethora of descriptors have recently been introduced, but hardly any address the problem of illumination robustness. Herein we introduce an illumination robust image descriptor, which we dub DIRD (Dird is an Illumination Robust Descriptor). First, a set of Haar features is computed and each pixel's response vector is normalized to unit L2 length. Thereafter, features are pooled over a predefined neighborhood region. The concatenation of several such features forms the basic DIRD vector. These features are then quantized to maximize entropy, allowing, among other variants, a binary version of DIRD consisting only of ones and zeros for very fast matching. We evaluate DIRD on three test sets and compare its performance with (extended) USURF, BRIEF, and a baseline gray-level descriptor. All proposed DIRD variants substantially outperform these methods, at times more than doubling the performance of USURF and BRIEF.
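The pipeline the abstract describes can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the two-filter Haar bank, the 4×4 pooling cells, and the median-split binarization are assumptions chosen to mirror the abstract's steps (a median split makes each bit 1 half the time, which maximizes its entropy). The key illumination-robustness step is the per-pixel L2 normalization, which cancels any affine change of image brightness.

```python
import numpy as np

def dird_sketch(image, cell=4):
    """Illustrative DIRD-like descriptor (a sketch, not the paper's code).

    Steps mirror the abstract: per-pixel Haar-like responses, unit-L2
    normalization, pooling over neighborhood cells, concatenation, and
    median-threshold binarization for fast Hamming matching.
    """
    img = image.astype(np.float64)

    # Two simple Haar-like responses per pixel: horizontal and vertical
    # differences (a stand-in for the paper's actual filter bank).
    dx = np.zeros_like(img); dx[:, 1:] = img[:, 1:] - img[:, :-1]
    dy = np.zeros_like(img); dy[1:, :] = img[1:, :] - img[:-1, :]
    feats = np.stack([dx, dy], axis=-1)                      # (H, W, 2)

    # Normalize each pixel's response vector to unit L2 length; this
    # removes multiplicative illumination changes (additive offsets are
    # already cancelled by the difference filters).
    norms = np.linalg.norm(feats, axis=-1, keepdims=True)
    feats = feats / np.maximum(norms, 1e-12)

    # Pool responses over non-overlapping cell x cell neighborhoods.
    H, W, C = feats.shape
    Hc, Wc = H // cell, W // cell
    pooled = (feats[:Hc * cell, :Wc * cell]
              .reshape(Hc, cell, Wc, cell, C)
              .sum(axis=(1, 3)))

    # Concatenation of the pooled features forms the real-valued vector.
    vec = pooled.reshape(-1)

    # Binarize at the median: each bit fires half the time, maximizing
    # its entropy and enabling very fast binary (Hamming) matching.
    bits = (vec > np.median(vec)).astype(np.uint8)
    return vec, bits
```

With these choices, scaling the image by a gain and adding an offset leaves the descriptor unchanged, which is the invariance the abstract claims for DIRD.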