{"title":"On splitting dataset: Boosting Locally Adaptive Regression Kernels for car localization","authors":"Sheng Wang, Qiang Wu, Xiangjian He, Min Xu","doi":"10.1109/ICARCV.2012.6485320","DOIUrl":null,"url":null,"abstract":"In this paper, we study the impact of learning an Adaboost classifier with small sample set (i.e., with fewer training examples). In particular, we make use of car localization as an underlying application, because car localization can be widely used to various real world applications. In order to evaluate the performance of Adaboost learning with a few examples, we simply apply Adaboost learning to a recently proposed feature descriptor - Locally Adaptive Regression Kernel (LARK). As a type of state-of-the-art feature descriptor, LARK is robust against illumination changes and noises. More importantly, we use LARK because its spatial property is also favorable for our purpose (i.e., each patch in the LARK descriptor corresponds to one unique pixel in the original image). In addition to learning a detector from the entire training dataset, we also split the original training dataset into several sub-groups and then we train one detector for each sub-group. We compare those features associated using the detector of each sub-group with that of the detector learnt with the entire training dataset and propose improvements based on the comparison results. Our experimental results indicate that the Adaboost learning is only successful on a small dataset when those learnt features simultaneously satisfy two conditions that: 1. features are learnt from the Region of Interest (ROI), and 2. features are sufficiently far away from each other.","PeriodicalId":441236,"journal":{"name":"2012 12th International Conference on Control Automation Robotics & Vision (ICARCV)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 12th International Conference on Control Automation Robotics & Vision (ICARCV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICARCV.2012.6485320","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In this paper, we study the impact of learning an AdaBoost classifier from a small sample set (i.e., with few training examples). In particular, we use car localization as the underlying application, because car localization is relevant to a wide range of real-world applications. To evaluate the performance of AdaBoost learning with few examples, we apply AdaBoost to a recently proposed feature descriptor, the Locally Adaptive Regression Kernel (LARK). As a state-of-the-art feature descriptor, LARK is robust against illumination changes and noise. More importantly, we use LARK because its spatial property also suits our purpose: each patch in the LARK descriptor corresponds to one unique pixel in the original image. In addition to learning a detector from the entire training dataset, we split the original training dataset into several sub-groups and train one detector for each sub-group. We compare the features selected by each sub-group's detector with those selected by the detector learnt from the entire training dataset, and propose improvements based on the comparison results. Our experimental results indicate that AdaBoost learning succeeds on a small dataset only when the learnt features simultaneously satisfy two conditions: (1) the features are learnt from the Region of Interest (ROI), and (2) the features are sufficiently far from each other.
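The split-and-train procedure described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes feature vectors have already been extracted (for example, flattened LARK descriptors) and uses scikit-learn's AdaBoostClassifier with its default decision-stump weak learners as a stand-in for the paper's AdaBoost learner; the function name train_subgroup_detectors and the choice of 50 boosting rounds are hypothetical.

```python
# Minimal sketch (not the authors' code) of the split-dataset experiment:
# train one AdaBoost detector per sub-group and one on the full training set.
# Assumes X holds pre-extracted feature vectors (e.g., flattened LARK
# descriptors) and y holds binary car / non-car labels.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier


def train_subgroup_detectors(X, y, n_subgroups=4, seed=0):
    """Split the training set into sub-groups and fit one detector per group."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))                # shuffle sample indices
    splits = np.array_split(order, n_subgroups)    # roughly equal sub-groups

    def fit(idx):
        # The default weak learner is a depth-1 decision tree (a stump),
        # so each boosting round effectively selects one feature dimension.
        clf = AdaBoostClassifier(n_estimators=50)
        clf.fit(X[idx], y[idx])
        return clf

    subgroup_detectors = [fit(idx) for idx in splits]
    full_detector = fit(order)                     # detector on the entire set
    return subgroup_detectors, full_detector
```

With stump weak learners, the feature dimensions each detector relies on can be read from clf.feature_importances_ and compared across the sub-group detectors and the full-dataset detector, which is roughly the kind of comparison the paper performs before checking the ROI and feature-spacing conditions.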