Yingchun Guo, Shu Chen, Gang Yan, Shi Di, Xueqi Lv
{"title":"突出对象排名:关于相对性学习的显著性模型和关于三重准确性的评估指标","authors":"Yingchun Guo, Shu Chen, Gang Yan, Shi Di, Xueqi Lv","doi":"10.1016/j.displa.2024.102855","DOIUrl":null,"url":null,"abstract":"<div><div>Salient object ranking (SOR) aims to evaluate the saliency level of each object in an image, which is crucial for the advancement of downstream tasks. The human visual system distinguishes the saliency levels of different targets in a scene by comprehensively utilizing multiple saliency cues. To mimic this comprehensive evaluation behavior, the SOR task needs to consider both the objects’ intrinsic information and their relative information within the entire image. However, existing methods still struggle to obtain relative information effectively, which tend to focus too much on specific objects while ignoring their relativity. To address these issues, this paper proposes a Salient Object Ranking method based on Relativity Learning (RLSOR), which integrates multiple saliency cues to learn the relative information among objects. RLSOR consists of three main modules: the Top-down Guided Salience Regulation module (TGSR), the Global–Local Cooperative Perception module (GLCP), and the Semantic-guided Edge Enhancement module (SEE). At the same time, this paper proposes a Triple-Accuracy Evaluation (TAE) metric for the SOR task, which can evaluate the segmentation accuracy, relative ranking accuracy, and absolute ranking accuracy in one metric. Experimental results show that RLSOR significantly enhances SOR performance, and the proposed SOR evaluation metric can better meets human subjective perceptions.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102855"},"PeriodicalIF":3.7000,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Salient Object Ranking: Saliency model on relativity learning and evaluation metric on triple accuracy\",\"authors\":\"Yingchun Guo, Shu Chen, Gang Yan, Shi Di, Xueqi Lv\",\"doi\":\"10.1016/j.displa.2024.102855\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Salient object ranking (SOR) aims to evaluate the saliency level of each object in an image, which is crucial for the advancement of downstream tasks. The human visual system distinguishes the saliency levels of different targets in a scene by comprehensively utilizing multiple saliency cues. To mimic this comprehensive evaluation behavior, the SOR task needs to consider both the objects’ intrinsic information and their relative information within the entire image. However, existing methods still struggle to obtain relative information effectively, which tend to focus too much on specific objects while ignoring their relativity. To address these issues, this paper proposes a Salient Object Ranking method based on Relativity Learning (RLSOR), which integrates multiple saliency cues to learn the relative information among objects. RLSOR consists of three main modules: the Top-down Guided Salience Regulation module (TGSR), the Global–Local Cooperative Perception module (GLCP), and the Semantic-guided Edge Enhancement module (SEE). At the same time, this paper proposes a Triple-Accuracy Evaluation (TAE) metric for the SOR task, which can evaluate the segmentation accuracy, relative ranking accuracy, and absolute ranking accuracy in one metric. 
Experimental results show that RLSOR significantly enhances SOR performance, and the proposed SOR evaluation metric can better meets human subjective perceptions.</div></div>\",\"PeriodicalId\":50570,\"journal\":{\"name\":\"Displays\",\"volume\":\"85 \",\"pages\":\"Article 102855\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Displays\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0141938224002191\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938224002191","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Salient Object Ranking: Saliency model on relativity learning and evaluation metric on triple accuracy
Salient object ranking (SOR) aims to evaluate the saliency level of each object in an image, which is crucial for advancing downstream tasks. The human visual system distinguishes the saliency levels of different targets in a scene by comprehensively utilizing multiple saliency cues. To mimic this comprehensive evaluation behavior, the SOR task needs to consider both the objects’ intrinsic information and their relative information within the entire image. However, existing methods still struggle to capture relative information effectively, as they tend to focus too heavily on individual objects while ignoring their relativity. To address these issues, this paper proposes a Salient Object Ranking method based on Relativity Learning (RLSOR), which integrates multiple saliency cues to learn the relative information among objects. RLSOR consists of three main modules: the Top-down Guided Salience Regulation module (TGSR), the Global–Local Cooperative Perception module (GLCP), and the Semantic-guided Edge Enhancement module (SEE). In addition, this paper proposes a Triple-Accuracy Evaluation (TAE) metric for the SOR task, which evaluates segmentation accuracy, relative ranking accuracy, and absolute ranking accuracy in a single metric. Experimental results show that RLSOR significantly enhances SOR performance, and the proposed evaluation metric better matches human subjective perception.
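The abstract states only that TAE folds segmentation accuracy, relative ranking accuracy, and absolute ranking accuracy into a single score, without giving its formulation. Below is a minimal, hypothetical Python sketch of how such a three-way combination could look; the IoU threshold, the use of Spearman correlation for relative order, exact-match scoring for absolute ranks, and the equal weighting are all illustrative assumptions, not the paper's actual TAE definition.

```python
# Illustrative sketch only: one possible way to combine segmentation,
# relative-ranking, and absolute-ranking accuracy into a single score.
# The paper's actual TAE metric is not specified in this abstract.
import numpy as np
from scipy.stats import spearmanr

def triple_accuracy(pred_masks, gt_masks, pred_ranks, gt_ranks, iou_thresh=0.5):
    """Combine three accuracies for a set of matched objects.

    pred_masks, gt_masks : lists of boolean arrays, one per matched object
    pred_ranks, gt_ranks : saliency ranks (integers) for the same objects
    """
    # Segmentation accuracy: fraction of objects whose mask IoU clears a
    # threshold (threshold value is an assumption).
    ious = []
    for p, g in zip(pred_masks, gt_masks):
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union if union > 0 else 0.0)
    seg_acc = float(np.mean([iou >= iou_thresh for iou in ious]))

    # Relative ranking accuracy: agreement in saliency ordering, here taken
    # as Spearman correlation rescaled from [-1, 1] to [0, 1] (assumption).
    rel_corr, _ = spearmanr(pred_ranks, gt_ranks)
    rel_acc = (rel_corr + 1.0) / 2.0

    # Absolute ranking accuracy: fraction of objects assigned exactly the
    # ground-truth rank position (assumption).
    abs_acc = float(np.mean(np.array(pred_ranks) == np.array(gt_ranks)))

    # Equal weighting of the three terms is an illustrative choice.
    return (seg_acc + rel_acc + abs_acc) / 3.0
```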
Journal overview:
Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers who are new to the field, will also occasionally be featured.