Computing 3D saliency from a 2D image

Sudarshan Ramenahalli, E. Niebur
{"title":"Computing 3D saliency from a 2D image","authors":"Sudarshan Ramenahalli, E. Niebur","doi":"10.1109/CISS.2013.6552297","DOIUrl":null,"url":null,"abstract":"A saliency map is a model of visual selective attention using purely bottom-up features of an image like color, intensity and orientation. Another bottom-up feature of visual input is depth, the distance between eye (or sensor) and objects in the visual field. In this report we study the effect of depth on saliency. Different from previous work, we do not use measured depth (disparity) information but, instead, compute a 3D depth map from the 2D image using a depth learning algorithm. This computed depth is then added as an additional feature channel to the 2D saliency map, and all feature channels are linearly combined with equal weights to obtain a 3-dimensional saliency map. We compare the efficacy of saliency maps (2D and 3D) in predicting human eye fixations using three different performance measures. The 3D saliency map outperforms its 2D counterpart in predicting human eye fixations on all measures. Perhaps surprisingly, our 3D saliency map computed from a 2D image performs better than an existing 3D saliency model that uses explicit depth information.","PeriodicalId":268095,"journal":{"name":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 47th Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS.2013.6552297","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

A saliency map is a model of visual selective attention using purely bottom-up features of an image like color, intensity and orientation. Another bottom-up feature of visual input is depth, the distance between eye (or sensor) and objects in the visual field. In this report we study the effect of depth on saliency. Different from previous work, we do not use measured depth (disparity) information but, instead, compute a 3D depth map from the 2D image using a depth learning algorithm. This computed depth is then added as an additional feature channel to the 2D saliency map, and all feature channels are linearly combined with equal weights to obtain a 3-dimensional saliency map. We compare the efficacy of saliency maps (2D and 3D) in predicting human eye fixations using three different performance measures. The 3D saliency map outperforms its 2D counterpart in predicting human eye fixations on all measures. Perhaps surprisingly, our 3D saliency map computed from a 2D image performs better than an existing 3D saliency model that uses explicit depth information.
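The abstract describes adding the inferred depth map as a fourth feature channel and combining all channels linearly with equal weights. The sketch below illustrates only that combination step; the channel names, the min-max normalization, the placeholder random maps, and the map_norm/combine_saliency helpers are illustrative assumptions, not the authors' implementation (which would feed in color, intensity, and orientation conspicuity maps plus a depth map learned from the 2D image).

```python
# Minimal sketch of the equal-weight channel combination described in the
# abstract. Channel names, the normalization choice, and the random
# placeholder maps are assumptions for illustration only.
import numpy as np


def map_norm(channel: np.ndarray) -> np.ndarray:
    """Scale a feature map to [0, 1] (one simple normalization choice)."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo) if hi > lo else np.zeros_like(channel)


def combine_saliency(channels: dict[str, np.ndarray]) -> np.ndarray:
    """Linearly combine normalized feature channels with equal weights."""
    maps = [map_norm(c) for c in channels.values()]
    return np.mean(maps, axis=0)


h, w = 240, 320
rng = np.random.default_rng(0)

# 2D saliency: color, intensity, and orientation channels only.
channels_2d = {name: rng.random((h, w))
               for name in ("color", "intensity", "orientation")}
saliency_2d = combine_saliency(channels_2d)

# 3D saliency: add the depth map computed from the 2D image as a fourth channel.
channels_3d = dict(channels_2d, depth=rng.random((h, w)))
saliency_3d = combine_saliency(channels_3d)
```

Equal weights are the simplest fusion rule, so any gain of the 3D map over the 2D map in fixation prediction can be attributed to the added depth channel rather than to tuned channel weights.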