Unconstrained Ear Recognition Using Deep Scattering Wavelet Network
Parmeshwar Birajadar, Meet Haria, S. G. Sangodkar, V. Gadre
2019 IEEE Bombay Section Signature Conference (IBSSC), July 2019. DOI: 10.1109/IBSSC47189.2019.8973055
Citations: 4
Abstract
There has been significant progress in the field of automatic ear recognition, wherein ear images are captured in a constrained environment. However, unconstrained ear recognition has received less attention due to the unavailability of databases with variations in illumination, pose, size, resolution and occlusion. It is a challenging pattern recognition problem due to large intra-class variability. In this paper, we propose a novel local descriptor for unconstrained ear recognition based on the scattering wavelet network (ScatNet), which extracts local features invariant to translation and small deformations. Experiments conducted on recently released unconstrained ear benchmark databases, namely the Annotated Web Ears (AWE) and USTB-Helloear databases, as well as on our newly created IIT-Bombay smartphone-captured ear database, show the effectiveness and robustness of the proposed local feature descriptor in terms of Equal Error Rate (EER) and Rank-1 (R1) accuracy.
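The descriptor rests on a 2-D scattering wavelet transform, which cascades wavelet modulus operators and low-pass averaging to produce coefficients that are translation-invariant and stable to small deformations. The sketch below illustrates this kind of feature extraction using the open-source kymatio library; it is not the authors' implementation, and the image size, scattering parameters (J, L) and pooling step are illustrative assumptions only.

```python
# Minimal sketch of ScatNet-style local feature extraction (not the paper's code).
# Assumes a grayscale ear image resized to 128x128; J and L are hypothetical choices.
import numpy as np
from kymatio.numpy import Scattering2D

# Placeholder for a real ear image, normalized to [0, 1].
image = np.random.rand(128, 128).astype(np.float32)

# Second-order scattering transform: J=3 dyadic scales, L=8 orientations.
scattering = Scattering2D(J=3, shape=(128, 128), L=8)

# Output has shape (n_coefficients, 128 / 2**J, 128 / 2**J) = (n_coefficients, 16, 16).
coeffs = scattering(image)

# Pool each coefficient map spatially to obtain a fixed-length descriptor;
# such descriptors can then be compared (e.g., with cosine distance) for recognition.
descriptor = coeffs.reshape(coeffs.shape[0], -1).mean(axis=1)
print(descriptor.shape)
```

Because the wavelet modulus discards phase and the final averaging is over a local window, descriptors of this form change little under small translations or elastic deformations of the ear, which is what makes them attractive for unconstrained imagery.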