Confidence Based updation of Motion Conspicuity in Dynamic Scenes

V. Singh, Subhransu Maji, A. Mukerjee
{"title":"Confidence Based updation of Motion Conspicuity in Dynamic Scenes","authors":"V. Singh, Subhransu Maji, A. Mukerjee","doi":"10.1109/CRV.2006.24","DOIUrl":null,"url":null,"abstract":"Computational models of visual attention result in considerable data compression by eliminating processing on regions likely to be devoid of meaningful content. While saliency maps in static images is indexed on image region (pixels), psychovisual data indicates that in dynamic scenes human attention is object driven and localized motion is a significant determiner of object conspicuity. We have introduced a confidence map, which indicates the uncertainty in the position of the moving objects incorporating the exponential loss of information as we move away from the fovea. We improve the model further using a computational model of visual attention based on perceptual grouping of objects with motion and computation of a motion saliency map based on localized motion conspicuity of the objects. Behaviors exhibited in the system include attentive focus on moving wholes, shifting focus in multiple object motion, focus on objects moving contrary to the majority motion. We also present experimental data contrasting the model with human gaze tracking in a simple visual task.","PeriodicalId":369170,"journal":{"name":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CRV.2006.24","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 22

Abstract

Computational models of visual attention achieve considerable data compression by eliminating processing on regions likely to be devoid of meaningful content. While saliency maps in static images are indexed on image regions (pixels), psychovisual data indicate that in dynamic scenes human attention is object driven and that localized motion is a significant determiner of object conspicuity. We introduce a confidence map, which captures the uncertainty in the positions of moving objects, incorporating the exponential loss of information with distance from the fovea. We further improve the model using a computational model of visual attention based on perceptual grouping of objects with motion, and on computation of a motion saliency map based on the localized motion conspicuity of the objects. Behaviors exhibited by the system include attentive focus on moving wholes, shifting focus among multiple moving objects, and focus on objects moving contrary to the majority motion. We also present experimental data contrasting the model with human gaze tracking in a simple visual task.
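The abstract's central construct, a confidence map whose values decay exponentially with distance from the fovea, can be sketched as follows. This is an illustrative reconstruction only, not the paper's implementation; the decay rate `lam` and the function names are hypothetical assumptions.

```python
import numpy as np

def confidence_map(shape, fovea, lam=0.05):
    """Illustrative foveal confidence map.

    Confidence in an object's estimated position falls off
    exponentially with Euclidean distance from the fovea,
    mirroring the exponential loss of information described
    in the abstract. `lam` is an assumed decay parameter.
    """
    ys, xs = np.indices(shape)
    dist = np.hypot(ys - fovea[0], xs - fovea[1])
    return np.exp(-lam * dist)

# Confidence is 1.0 at the fovea and decays outward.
cmap = confidence_map((100, 100), fovea=(50, 50))
```

In a model like the one described, such a map could weight motion-saliency evidence: objects far from the current fixation carry more positional uncertainty, so their conspicuity estimates would be discounted accordingly.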