Co-salient object detection with consensus mining and consistency cross-layer interactive decoding

Image and Vision Computing · IF 4.2 · JCR Q2 (Computer Science, Artificial Intelligence) · CAS Zone 3 (Computer Science) · Volume 154, Article 105414
Pub Date: 2025-02-01 · Epub Date: 2025-01-10 · DOI: 10.1016/j.imavis.2025.105414
Yanliang Ge, Jinghuai Pan, Junchao Ren, Min He, Hongbo Bi, Qiao Zhang
{"title":"Co-salient object detection with consensus mining and consistency cross-layer interactive decoding","authors":"Yanliang Ge ,&nbsp;Jinghuai Pan ,&nbsp;Junchao Ren ,&nbsp;Min He ,&nbsp;Hongbo Bi ,&nbsp;Qiao Zhang","doi":"10.1016/j.imavis.2025.105414","DOIUrl":null,"url":null,"abstract":"<div><div>The main goal of co-salient object detection (CoSOD) is to extract a group of notable objects that appear together in the image. The existing methods face two major challenges: the first is that in some complex scenes or in the case of interference by other salient objects, the mining of consensus cues for co-salient objects is inadequate; the second is that other methods input consensus cues from top to bottom into the decoder, which ignores the compactness of the consensus and lacks cross-layer interaction. To solve the above problems, we propose a consensus mining and consistency cross-layer interactive decoding network, called CCNet, which consists of two key components, namely, a consensus cue mining module (CCM) and a consistency cross-layer interactive decoder (CCID). Specifically, the purpose of CCM is to fully mine the cross-consensus clues among the co-salient objects in the image group, so as to achieve the group consistency modeling of the group of images. Furthermore, CCID accepts features of different levels as input and receives semantic information of group consensus from CCM, which is used to guide features of other levels to learn higher-level feature representations and cross-layer interaction of group semantic consensus clues, thereby maintaining the consistency of group consensus cues and enabling accurate co-saliency map prediction. We evaluated the proposed CCNet using four widely accepted metrics across three challenging CoSOD datasets and the experimental results demonstrate that our proposed approach outperforms other existing state-of-the-art CoSOD methods, particularly on the CoSal2015 and CoSOD3k datasets. The results of our method are available at <span><span>https://github.com/jinghuaipan/CCNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105414"},"PeriodicalIF":4.2000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625000022","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/10 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Co-salient object detection (CoSOD) aims to detect the salient objects that appear in common across a group of related images. Existing methods face two major challenges. First, in complex scenes or under interference from other salient objects, they mine the consensus cues of the co-salient objects inadequately. Second, many methods feed consensus cues into the decoder in a purely top-down fashion, which ignores the compactness of the consensus and lacks cross-layer interaction. To address these problems, we propose a consensus mining and consistency cross-layer interactive decoding network, called CCNet, which consists of two key components: a consensus cue mining module (CCM) and a consistency cross-layer interactive decoder (CCID). Specifically, the CCM fully mines the cross-consensus cues among the co-salient objects in the image group, thereby modeling the group-wise consistency of the images. The CCID takes features of different levels as input and receives the group-consensus semantics from the CCM; these semantics guide the features at the other levels to learn higher-level representations and to exchange group semantic consensus cues across layers, thereby preserving the consistency of the group consensus cues and enabling accurate co-saliency map prediction. We evaluated CCNet with four widely used metrics on three challenging CoSOD datasets, and the experimental results demonstrate that it outperforms existing state-of-the-art CoSOD methods, particularly on the CoSal2015 and CoSOD3k datasets. The results of our method are available at https://github.com/jinghuaipan/CCNet.
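The abstract describes the two components only at a high level; the authors' actual implementation is in the repository linked above. As a rough illustration of how a CCM-plus-CCID pipeline could be wired together, the PyTorch sketch below is a hypothetical reading of the design: the mean-pooled group consensus, the channel attention, and the bilinear top-down fusion are all illustrative assumptions, not the published architecture.

    # Hypothetical sketch of a CCNet-style pipeline; module internals are
    # illustrative assumptions, not the authors' published architecture.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConsensusCueMining(nn.Module):
        # CCM (sketch): pool features across the image group into one consensus
        # descriptor, then redistribute it to every image via channel attention.
        def __init__(self, channels: int):
            super().__init__()
            self.attn = nn.Sequential(
                nn.Linear(channels, channels // 4),
                nn.ReLU(inplace=True),
                nn.Linear(channels // 4, channels),
                nn.Sigmoid(),
            )

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            # feats: (N, C, H, W), deepest backbone features for N grouped images.
            pooled = feats.mean(dim=(2, 3))               # per-image descriptor (N, C)
            consensus = pooled.mean(dim=0, keepdim=True)  # group consensus (1, C)
            weights = self.attn(consensus)                # channel attention (1, C)
            return feats * weights[..., None, None]       # consensus-weighted features

    class CrossLayerInteractiveDecoder(nn.Module):
        # CCID (sketch): propagate the consensus-guided deep features top-down,
        # fusing each shallower level with the upsampled deeper one.
        def __init__(self, channels: int, num_levels: int):
            super().__init__()
            self.fuse = nn.ModuleList(
                nn.Conv2d(2 * channels, channels, 3, padding=1)
                for _ in range(num_levels - 1)
            )
            self.head = nn.Conv2d(channels, 1, 1)

        def forward(self, levels: list) -> torch.Tensor:
            # levels: feature maps ordered deepest (consensus-guided) to shallowest.
            x = levels[0]
            for fuse, skip in zip(self.fuse, levels[1:]):
                x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear",
                                  align_corners=False)
                x = fuse(torch.cat([x, skip], dim=1))     # cross-layer interaction
            return torch.sigmoid(self.head(x))            # co-saliency maps

    if __name__ == "__main__":
        n, c = 5, 64                                      # a group of 5 images
        deep = torch.randn(n, c, 8, 8)
        mid = torch.randn(n, c, 16, 16)
        shallow = torch.randn(n, c, 32, 32)
        ccm = ConsensusCueMining(c)
        ccid = CrossLayerInteractiveDecoder(c, num_levels=3)
        maps = ccid([ccm(deep), mid, shallow])
        print(maps.shape)                                 # torch.Size([5, 1, 32, 32])

Running the sketch prints torch.Size([5, 1, 32, 32]): one co-saliency map per image in the group, at the resolution of the shallowest feature level.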
Source journal
Image and Vision Computing (Engineering Technology / Engineering: Electronic & Electrical)
CiteScore: 8.50
Self-citation rate: 8.50%
Annual publications: 143
Average review time: 7.8 months
Journal description: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.
Latest articles in this journal
TABNet: A Triplet Augmentation Self-recovery framework with Boundary-aware Pseudo-labels for scribble-based medical image segmentation
HBMF-YOLO: Target detection in harsh environments based on a hybrid backbone network and multi-feature fusion
Enhancing biometric transparency through skeletal feature learning in chest X-rays: A triplet network approach with Explainable AI
All you need for object detection: From pixels, points, and prompts to Next-Gen fusion and multimodal LLMs/VLMs in autonomous vehicles
Bidirectional causal learning for visual question answering