Seonghoon Yu, Ilchae Jung, Byeongju Han, Taeoh Kim, Yunho Kim, Dongyoon Wee, Jeany Son
{"title":"用于参考图像分割的单编码器简单基线","authors":"Seonghoon Yu, Ilchae Jung, Byeongju Han, Taeoh Kim, Yunho Kim, Dongyoon Wee, Jeany Son","doi":"arxiv-2408.15521","DOIUrl":null,"url":null,"abstract":"Referring image segmentation (RIS) requires dense vision-language\ninteractions between visual pixels and textual words to segment objects based\non a given description. However, commonly adapted dual-encoders in RIS, e.g.,\nSwin transformer and BERT (uni-modal encoders) or CLIP (a multi-modal\ndual-encoder), lack dense multi-modal interactions during pre-training, leading\nto a gap with a pixel-level RIS task. To bridge this gap, existing RIS methods\noften rely on multi-modal fusion modules that interact two encoders, but this\napproach leads to high computational costs. In this paper, we present a novel\nRIS method with a single-encoder, i.e., BEiT-3, maximizing the potential of\nshared self-attention across all framework components. This enables seamless\ninteractions of two modalities from input to final prediction, producing\ngranularly aligned multi-modal features. Furthermore, we propose lightweight\nyet effective decoder modules, a Shared FPN and a Shared Mask Decoder, which\ncontribute to the high efficiency of our model. Our simple baseline with a\nsingle encoder achieves outstanding performances on the RIS benchmark datasets\nwhile maintaining computational efficiency, compared to the most recent SoTA\nmethods based on dual-encoders.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"6 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Simple Baseline with Single-encoder for Referring Image Segmentation\",\"authors\":\"Seonghoon Yu, Ilchae Jung, Byeongju Han, Taeoh Kim, Yunho Kim, Dongyoon Wee, Jeany Son\",\"doi\":\"arxiv-2408.15521\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Referring image segmentation (RIS) requires dense vision-language\\ninteractions between visual pixels and textual words to segment objects based\\non a given description. However, commonly adapted dual-encoders in RIS, e.g.,\\nSwin transformer and BERT (uni-modal encoders) or CLIP (a multi-modal\\ndual-encoder), lack dense multi-modal interactions during pre-training, leading\\nto a gap with a pixel-level RIS task. To bridge this gap, existing RIS methods\\noften rely on multi-modal fusion modules that interact two encoders, but this\\napproach leads to high computational costs. In this paper, we present a novel\\nRIS method with a single-encoder, i.e., BEiT-3, maximizing the potential of\\nshared self-attention across all framework components. This enables seamless\\ninteractions of two modalities from input to final prediction, producing\\ngranularly aligned multi-modal features. Furthermore, we propose lightweight\\nyet effective decoder modules, a Shared FPN and a Shared Mask Decoder, which\\ncontribute to the high efficiency of our model. 
Our simple baseline with a\\nsingle encoder achieves outstanding performances on the RIS benchmark datasets\\nwhile maintaining computational efficiency, compared to the most recent SoTA\\nmethods based on dual-encoders.\",\"PeriodicalId\":501480,\"journal\":{\"name\":\"arXiv - CS - Multimedia\",\"volume\":\"6 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.15521\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.15521","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Simple Baseline with Single-encoder for Referring Image Segmentation
Referring image segmentation (RIS) requires dense vision-language interactions between visual pixels and textual words to segment objects based on a given description. However, the dual-encoders commonly adopted in RIS, e.g., Swin Transformer and BERT (uni-modal encoders) or CLIP (a multi-modal dual-encoder), lack dense multi-modal interactions during pre-training, leaving a gap with the pixel-level RIS task. To bridge this gap, existing RIS methods often rely on multi-modal fusion modules that let the two encoders interact, but this approach incurs high computational costs. In this paper, we present a novel RIS method with a single encoder, i.e., BEiT-3, maximizing the potential of shared self-attention across all framework components. This enables seamless interaction of the two modalities from input to final prediction, producing granularly aligned multi-modal features. Furthermore, we propose lightweight yet effective decoder modules, a Shared FPN and a Shared Mask Decoder, which contribute to the high efficiency of our model. Our simple baseline with a single encoder achieves outstanding performance on RIS benchmark datasets while maintaining computational efficiency, compared to the most recent state-of-the-art methods based on dual-encoders.
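The following is a minimal conceptual sketch (PyTorch) of the single-encoder idea the abstract describes: image patch tokens and text tokens are concatenated into one sequence and processed by a single encoder with shared self-attention, so vision-language interaction happens at every layer rather than in a separate fusion module. All module names, dimensions, and the toy mask head below are illustrative assumptions, not the paper's actual BEiT-3, Shared FPN, or Shared Mask Decoder implementation.

```python
# Conceptual sketch only: one shared encoder over a joint image+text token sequence.
# Positional embeddings and other details are omitted for brevity.
import torch
import torch.nn as nn


class SingleEncoderRISSketch(nn.Module):
    def __init__(self, dim=256, num_layers=4, num_heads=8, vocab_size=30522,
                 patch_size=16, image_size=224):
        super().__init__()
        # Image patches and text tokens are embedded into the same feature space.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.text_embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        # One shared encoder: self-attention mixes image and text tokens jointly
        # at every layer (standing in for the BEiT-3 backbone in the paper).
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.grid = image_size // patch_size
        # Toy mask head standing in for the lightweight decoder modules.
        self.mask_head = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(), nn.Conv2d(dim, 1, 1)
        )

    def forward(self, image, text_ids):
        b = image.size(0)
        img_tok = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, N, C)
        txt_tok = self.text_embed(text_ids)                           # (B, T, C)
        tokens = torch.cat([img_tok, txt_tok], dim=1)                 # joint sequence
        fused = self.encoder(tokens)                                  # shared self-attention
        img_feat = fused[:, : img_tok.size(1)]                        # recover image tokens
        feat_map = img_feat.transpose(1, 2).reshape(b, -1, self.grid, self.grid)
        return self.mask_head(feat_map)                               # coarse mask logits


if __name__ == "__main__":
    model = SingleEncoderRISSketch()
    mask_logits = model(torch.randn(1, 3, 224, 224), torch.randint(0, 30522, (1, 12)))
    print(mask_logits.shape)  # torch.Size([1, 1, 14, 14])
```

The design point this sketch tries to capture is that no cross-modal fusion module is needed: because both modalities share one token sequence and one self-attention stack, the image features handed to the mask head are already conditioned on the referring expression.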