Detecting Protuberant Saliency from a Depth Image

Yuseok Ban
{"title":"Detecting Protuberant Saliency from a Depth Image","authors":"Yuseok Ban","doi":"10.1145/3387168.3387171","DOIUrl":null,"url":null,"abstract":"The visual attention of a human enables quick perception of noticeable regions in an image. The research on the models of visual attention has been actively studied for decades in the computer vision areas. For example, detecting visual saliency in a scene allows to estimate which details humans find interesting in advance to understand the scene. This also forms the important basis of a variety of latter tasks related to visual detection and tracking. By virtue of increasing diffusion of low-cost 3D sensors, many studies have been proposed to examine how to incorporate 3D information into visual attention models. Despite many advantages of depth data, relatively few studies on the visual attention of a depth image have delved into how to fully exploit the structural information of depth perception based on depth data itself. In this paper, Protuberant saliency is proposed to effectively detect the saliency in a depth image. The proposed approach explores the inherent protuberance information encoded in a depth structure. The fixation of a human eye in a depth scene is directly estimated by Protuberant saliency. It is robust to the isometric deformation and varying orientation of a depth region. The experimental results show that the rotation invariant and flexible architecture of Protuberant saliency produces the effectiveness against those challenging conditions.","PeriodicalId":346739,"journal":{"name":"Proceedings of the 3rd International Conference on Vision, Image and Signal Processing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd International Conference on Vision, Image and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3387168.3387171","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Human visual attention enables quick perception of the noticeable regions in an image. Models of visual attention have been actively studied in computer vision for decades. For example, detecting visual saliency in a scene makes it possible to estimate in advance which details humans find interesting, and thus to understand the scene. This also forms an important basis for a variety of subsequent tasks related to visual detection and tracking. With the increasing diffusion of low-cost 3D sensors, many studies have examined how to incorporate 3D information into visual attention models. Despite the many advantages of depth data, relatively few studies on visual attention in depth images have delved into how to fully exploit the structural information of depth perception based on the depth data itself. In this paper, Protuberant saliency is proposed to effectively detect saliency in a depth image. The proposed approach exploits the inherent protuberance information encoded in the depth structure. The fixation of a human eye in a depth scene is estimated directly by Protuberant saliency, which is robust to the isometric deformation and varying orientation of a depth region. The experimental results show that the rotation-invariant and flexible architecture of Protuberant saliency remains effective under these challenging conditions.
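The abstract does not describe the method's internals, but the underlying intuition, that regions protruding from their local depth surroundings tend to draw fixation, can be illustrated with a simple center-surround cue. The Python sketch below is an assumption-laden stand-in, not the paper's Protuberant saliency: the function `protuberance_saliency`, its `radius` parameter, and the smaller-depth-means-closer convention are hypothetical choices made only for illustration.

```python
"""Minimal center-surround protuberance cue on a depth map.

NOTE: illustrative sketch only, not the paper's Protuberant saliency
algorithm. It demonstrates the general idea in the abstract: depth
regions that stick out from their local surroundings attract attention.
The function name, the neighborhood radius, and the "closer to the
camera = smaller depth" convention are assumptions for this example.
"""
import numpy as np
from scipy.ndimage import uniform_filter


def protuberance_saliency(depth: np.ndarray, radius: int = 15) -> np.ndarray:
    """Return a saliency map in [0, 1] that is high where the surface
    protrudes toward the camera relative to its local neighborhood."""
    depth = depth.astype(np.float64)
    # Local mean depth over a (2*radius+1)^2 window. A symmetric window
    # keeps the cue largely insensitive to in-plane rotation.
    local_mean = uniform_filter(depth, size=2 * radius + 1, mode="nearest")
    # Protrusion: how much closer (smaller in depth) a pixel is than its
    # surroundings; recessed regions (negative values) are clipped to 0.
    protrusion = np.clip(local_mean - depth, 0.0, None)
    # Normalize to [0, 1] so the map can be thresholded or visualized.
    max_val = protrusion.max()
    return protrusion / max_val if max_val > 0 else protrusion


if __name__ == "__main__":
    # Synthetic scene: a flat wall at 2.0 m with a disc-shaped object
    # (radius 10 px) protruding 0.5 m toward the camera.
    depth = np.full((200, 200), 2.0)
    yy, xx = np.mgrid[:200, :200]
    bump = (yy - 100) ** 2 + (xx - 100) ** 2 < 10 ** 2
    depth[bump] = 1.5
    sal = protuberance_saliency(depth, radius=50)
    print("saliency on the protruding object:", sal[100, 100])  # 1.0
    print("saliency on the flat background:", sal[20, 20])      # 0.0
```

Using a symmetric neighborhood is what keeps such a cue tolerant of in-plane rotation, which is in the spirit of the rotation-invariant design the abstract claims for Protuberant saliency; the paper's actual mechanism for handling isometric deformation is not reproduced here.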