Robust contrast-invariant eigen detection

C. Chennubhotla, A. Jepson, J. Midgley
{"title":"鲁棒对比不变特征检测","authors":"C. Chennubhotla, A. Jepson, J. Midgley","doi":"10.1109/ICPR.2002.1048410","DOIUrl":null,"url":null,"abstract":"We achieve two goals in this paper: (1) to build a novel appearance-based object representation that takes into account variations in contrast often found in training images; (2) to develop a robust appearance-based detection scheme that can handle outliers such as occlusion and structured noise. To build the representation, we decompose the input ensemble into two subspaces: a principal subspace (within-subspace) and its orthogonal complement (out-of-subspace). Before computing the principal subspace, we remove any dependency on contrast that the training set might exhibit. To account for pixel outliers in test images, we model the residual signal in the out-of-subspace by a probabilistic mixture model of an inlier distribution and a uniform outlier distribution. The mixture model, in turn, facilitates the robust estimation of the within-subspace coefficients. We show our methodology leads to an effective classifier for separating images of eyes from non-eyes extracted from the FERET dataset.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Robust contrast-invariant eigen detection\",\"authors\":\"C. Chennubhotla, A. Jepson, J. Midgley\",\"doi\":\"10.1109/ICPR.2002.1048410\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We achieve two goals in this paper: (1) to build a novel appearance-based object representation that takes into account variations in contrast often found in training images; (2) to develop a robust appearance-based detection scheme that can handle outliers such as occlusion and structured noise. To build the representation, we decompose the input ensemble into two subspaces: a principal subspace (within-subspace) and its orthogonal complement (out-of-subspace). Before computing the principal subspace, we remove any dependency on contrast that the training set might exhibit. To account for pixel outliers in test images, we model the residual signal in the out-of-subspace by a probabilistic mixture model of an inlier distribution and a uniform outlier distribution. The mixture model, in turn, facilitates the robust estimation of the within-subspace coefficients. 
We show our methodology leads to an effective classifier for separating images of eyes from non-eyes extracted from the FERET dataset.\",\"PeriodicalId\":159502,\"journal\":{\"name\":\"Object recognition supported by user interaction for service robots\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2002-12-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Object recognition supported by user interaction for service robots\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPR.2002.1048410\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Object recognition supported by user interaction for service robots","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPR.2002.1048410","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

We achieve two goals in this paper: (1) to build a novel appearance-based object representation that takes into account variations in contrast often found in training images; (2) to develop a robust appearance-based detection scheme that can handle outliers such as occlusion and structured noise. To build the representation, we decompose the input ensemble into two subspaces: a principal subspace (within-subspace) and its orthogonal complement (out-of-subspace). Before computing the principal subspace, we remove any dependency on contrast that the training set might exhibit. To account for pixel outliers in test images, we model the residual signal in the out-of-subspace by a probabilistic mixture model of an inlier distribution and a uniform outlier distribution. The mixture model, in turn, facilitates the robust estimation of the within-subspace coefficients. We show our methodology leads to an effective classifier for separating images of eyes from non-eyes extracted from the FERET dataset.
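The representation stage lends itself to a short illustration. Below is a minimal NumPy sketch, not the authors' implementation: the helper names (contrast_normalize, build_subspaces) and the zero-mean, unit-norm normalization are assumptions about how the contrast dependency might be removed before computing the principal subspace.

```python
import numpy as np

def contrast_normalize(X, eps=1e-8):
    """Remove per-image offset and gain: subtract each image's mean
    and scale to unit norm, so the ensemble carries no contrast
    information. (Assumed normalization, for illustration only.)"""
    X = X - X.mean(axis=1, keepdims=True)            # remove offset
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, eps)                # remove gain

def build_subspaces(X, k):
    """PCA on the contrast-normalized ensemble (rows of X are images).
    Returns the mean, a (d, k) within-subspace basis U, and the
    out-of-subspace projector P_out = I - U U^T."""
    Xn = contrast_normalize(X)
    mean = Xn.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xn - mean, full_matrices=False)
    U = Vt[:k].T                                     # principal subspace basis
    P_out = np.eye(U.shape[0]) - U @ U.T             # orthogonal complement
    return mean, U, P_out
```

The projector P_out carries the residual signal that the mixture model of the next step is defined over.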
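The robust estimation step can be sketched as an EM-style loop, continuing from the code above (reusing contrast_normalize, mean, and U). This too is an illustrative reconstruction rather than the paper's exact algorithm: a Gaussian is assumed for the inlier distribution, and the parameter values (sigma, pi_in, p_outlier) are placeholders.

```python
def robust_coefficients(y, mean, U, sigma=0.05, pi_in=0.9,
                        p_outlier=1.0 / 256, n_iter=10):
    """Robustly project a test image y onto the within-subspace.
    Per-pixel residuals are modeled as a mixture of a Gaussian inlier
    term and a uniform outlier term; the inlier posteriors re-weight
    the least-squares fit of the subspace coefficients."""
    y = contrast_normalize(y[None, :])[0] - mean
    w = np.ones_like(y)                    # start with all pixels trusted
    for _ in range(n_iter):
        # M-step: weighted least squares, solving (U^T W U) c = U^T W y
        Uw = U * w[:, None]
        c = np.linalg.lstsq(Uw.T @ U, Uw.T @ y, rcond=None)[0]
        # E-step: posterior probability that each pixel is an inlier,
        # given the current reconstruction residual
        r = y - U @ c
        lik_in = pi_in * np.exp(-0.5 * (r / sigma) ** 2) \
                 / (sigma * np.sqrt(2.0 * np.pi))
        w = lik_in / (lik_in + (1.0 - pi_in) * p_outlier)
    return c, w
```

A detector can then threshold the inlier-weighted residual energy or, as in the paper's eye/non-eye experiment on FERET, feed such statistics to a classifier; occluded pixels receive low weights w and so no longer corrupt the coefficient estimate.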