Detecting Protuberant Saliency from a Depth Image
Yuseok Ban
DOI: 10.1145/3387168.3387171
Proceedings of the 3rd International Conference on Vision, Image and Signal Processing, 2019-08-26
Citations: 0
Abstract
Human visual attention enables the quick perception of noticeable regions in an image. Models of visual attention have been actively studied in computer vision for decades. For example, detecting visual saliency in a scene allows one to estimate in advance which details humans find interesting, which helps in understanding the scene. Saliency detection also forms an important basis for a variety of subsequent tasks related to visual detection and tracking. Owing to the increasing diffusion of low-cost 3D sensors, many studies have examined how to incorporate 3D information into visual attention models. Despite the many advantages of depth data, relatively few studies on visual attention in depth images have delved into how to fully exploit the structural information of depth perception based on the depth data itself. In this paper, Protuberant saliency is proposed to effectively detect saliency in a depth image. The proposed approach exploits the inherent protuberance information encoded in the depth structure. The fixation of the human eye in a depth scene is directly estimated by Protuberant saliency, which is robust to isometric deformation and varying orientation of a depth region. The experimental results show that the rotation-invariant and flexible architecture of Protuberant saliency remains effective under these challenging conditions.
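The protuberance idea described in the abstract can be illustrated with a toy sketch: score each pixel by how much closer to the camera (i.e., smaller in depth) it is than an isotropic ring of neighbors around it. Because the ring is circular, the score does not change under in-plane rotation of the depth region. This is a hypothetical illustration of the general concept only, not the paper's actual Protuberant saliency formulation; the function name `protuberance_map`, the ring radius, and the sampling density are all assumptions.

```python
import numpy as np

def protuberance_map(depth, radius=6, n_samples=16):
    """Toy protuberance score for a depth image.

    For each pixel, compare its depth to the mean depth of an
    isotropic ring of neighbors at the given radius. A pixel that
    is closer to the camera (smaller depth) than its surrounding
    ring gets a positive score. The circular ring makes the score
    invariant to in-plane rotation.

    NOTE: illustrative sketch only, not the paper's method.
    """
    # Integer offsets on a circle of the given radius.
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    dys = np.round(radius * np.sin(angles)).astype(int)
    dxs = np.round(radius * np.cos(angles)).astype(int)

    # Accumulate the ring mean via shifted copies (wrap-around at borders).
    ring_sum = np.zeros_like(depth, dtype=float)
    for dy, dx in zip(dys, dxs):
        ring_sum += np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
    ring_mean = ring_sum / len(dys)

    # Positive where the center protrudes toward the camera.
    return np.clip(ring_mean - depth, 0.0, None)

# Example: a small "bump" closer to the camera than a flat background.
depth = np.full((32, 32), 100.0)
depth[12:20, 12:20] = 80.0  # protruding region (smaller depth = closer)
saliency = protuberance_map(depth, radius=6)
```

In this toy example the score peaks on the bump and is zero on the flat background; rotating the bump in the image plane would leave the ring statistics, and hence the score, unchanged.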