Salient Object Ranking: Saliency model on relativity learning and evaluation metric on triple accuracy

Impact Factor 3.7 · JCR Region 2 (Engineering & Technology) · Q1 Computer Science, Hardware & Architecture · Displays · Pub Date: 2024-10-10 · DOI: 10.1016/j.displa.2024.102855
Citations: 0

Abstract

Salient object ranking (SOR) aims to evaluate the saliency level of each object in an image, which is crucial for the advancement of downstream tasks. The human visual system distinguishes the saliency levels of different targets in a scene by comprehensively utilizing multiple saliency cues. To mimic this comprehensive evaluation behavior, the SOR task needs to consider both the objects' intrinsic information and their relative information within the entire image. However, existing methods still struggle to obtain relative information effectively, tending to focus too much on specific objects while ignoring their relativity. To address these issues, this paper proposes a Salient Object Ranking method based on Relativity Learning (RLSOR), which integrates multiple saliency cues to learn the relative information among objects. RLSOR consists of three main modules: the Top-down Guided Salience Regulation module (TGSR), the Global–Local Cooperative Perception module (GLCP), and the Semantic-guided Edge Enhancement module (SEE). In addition, this paper proposes a Triple-Accuracy Evaluation (TAE) metric for the SOR task, which evaluates segmentation accuracy, relative ranking accuracy, and absolute ranking accuracy in a single metric. Experimental results show that RLSOR significantly enhances SOR performance, and the proposed SOR evaluation metric better matches human subjective perception.
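The abstract names the three components of the TAE metric but not their exact formulation. As an illustrative sketch only, the following combines the three accuracies in the most direct way one might: a threshold on per-object mask IoU for segmentation, pairwise order agreement for relative ranking, and exact rank matches for absolute ranking, averaged into one score. The IoU threshold and the simple averaging are assumptions, not the paper's definition.

```python
from itertools import combinations

def triple_accuracy(pred_ranks, gt_ranks, ious, iou_thresh=0.5):
    """Illustrative triple-accuracy-style SOR score (not the paper's exact TAE).

    pred_ranks / gt_ranks: predicted and ground-truth saliency ranks
    per object (1 = most salient). ious: mask IoU per object.
    """
    n = len(gt_ranks)
    # 1) Segmentation accuracy: fraction of objects whose mask IoU
    #    clears the threshold (threshold value is an assumption).
    seg_acc = sum(iou >= iou_thresh for iou in ious) / n
    # 2) Relative ranking accuracy: fraction of object pairs whose
    #    predicted order agrees with the ground-truth order.
    pairs = list(combinations(range(n), 2))
    rel_acc = (
        sum(
            (pred_ranks[i] < pred_ranks[j]) == (gt_ranks[i] < gt_ranks[j])
            for i, j in pairs
        ) / len(pairs)
        if pairs
        else 1.0
    )
    # 3) Absolute ranking accuracy: fraction of objects assigned
    #    exactly their ground-truth rank.
    abs_acc = sum(p == g for p, g in zip(pred_ranks, gt_ranks)) / n
    # Combine the three into a single score (simple average, assumed).
    return (seg_acc + rel_acc + abs_acc) / 3.0
```

For a perfect prediction (`pred_ranks == gt_ranks`, all IoUs above threshold) the score is 1.0; swapping the two most salient objects lowers both the relative and absolute terms while leaving segmentation accuracy untouched, which is the kind of decomposition the abstract describes.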
Source journal

Displays (Engineering & Technology – Electrical & Electronic Engineering)
CiteScore: 4.60
Self-citation rate: 25.60%
Articles per year: 138
Review time: 92 days
Journal description: Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display–human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the displays community. Original research papers solving ergonomics issues at the display–human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human-factors engineers new to the field, will also occasionally be featured.