Edge guided and Fourier attention-based Dual Interaction Network for scene text erasing

IF 4.2 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Image and Vision Computing · Pub Date: 2025-02-01 (Epub 2025-01-01) · DOI: 10.1016/j.imavis.2024.105406
Ran Gong, Anna Zhu, Kun Liu
Journal: Image and Vision Computing, Volume 154, Article 105406.
Full text: https://www.sciencedirect.com/science/article/pii/S0262885624005110
Citations: 0

Abstract

Scene text erasing (STE) aims to remove the text regions in an image and inpaint those regions with plausible content. It involves a closely related task, scene text segmentation, either implicitly or explicitly. Most previous methods used cascaded or parallel pipelines that segment text in one branch and erase text in another. However, they have not fully exploited the information shared between the two subtasks, i.e., used an interactive mechanism so that each enhances the other. In this paper, we introduce a novel one-stage STE model called Dual Interaction Network (DINet), which encourages interaction between scene text segmentation and scene text erasing in an end-to-end manner. DINet adopts a shared encoder and two parallel decoders for text segmentation and erasing respectively. Specifically, the two decoders interact via an Interaction Enhancement Module (IEM) in each layer, aggregating the residual information from each other. To facilitate effective and efficient mutual enhancement between the dual tasks, we propose a novel Fourier Transform-based Attention Module (FTAM). In addition, we incorporate an Edge-Guided Module (EGM) into the text segmentation branch to better erase the text boundary regions and generate natural-looking images. Extensive experiments demonstrate that DINet achieves state-of-the-art performance on several benchmarks. Furthermore, the ablation studies indicate the effectiveness and efficiency of our proposed modules in DINet.
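The abstract says the two decoders exchange residual information at every layer through the IEM, but gives no implementation details. The following is a minimal numpy sketch of one plausible form, in which each branch is refined by a 1x1-convolution-style projection of the partner branch's features; the weight parameterisation and the residual-exchange form are assumptions for illustration, not the authors' actual design:

```python
import numpy as np

def iem(f_seg, f_era, w_seg, w_era):
    """Hypothetical Interaction Enhancement Module step.

    f_seg, f_era: (C, H, W) features from the segmentation and
    erasing decoders at the same layer.
    w_seg, w_era: (C, C) 1x1-conv-style weights projecting the
    partner branch's features into residuals (assumed form).
    """
    # each branch aggregates residual information from the other
    res_for_seg = np.einsum('oc,chw->ohw', w_seg, f_era)
    res_for_era = np.einsum('oc,chw->ohw', w_era, f_seg)
    return f_seg + res_for_seg, f_era + res_for_era

# toy usage: with identity weights, each branch simply adds the other
C, H, W = 4, 8, 8
f_seg = np.ones((C, H, W))
f_era = np.zeros((C, H, W))
s, e = iem(f_seg, f_era, np.eye(C), np.eye(C))
```

In a real network the two returned features would feed the next decoder layer of each branch, so segmentation cues sharpen erasure and vice versa.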

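FTAM is described only as a Fourier-transform-based attention module. As an illustration of the general technique (not the paper's design), the sketch below re-weights a feature map's frequency spectrum and turns the filtered response into a sigmoid gate; the spectral-weight parameterisation and the gating form are assumptions:

```python
import numpy as np

def fourier_attention(x, spec_w):
    """Hypothetical Fourier-attention sketch.

    x: (C, H, W) feature map; spec_w: (H, W) spectral weights
    (assumed learnable in a real model). Filtering in the
    frequency domain gives every spatial position a global
    receptive field at FFT cost.
    """
    spec = np.fft.fft2(x, axes=(-2, -1))           # per-channel 2D FFT
    spec = spec * spec_w                            # re-weight frequencies
    resp = np.real(np.fft.ifft2(spec, axes=(-2, -1)))
    gate = 1.0 / (1.0 + np.exp(-resp))              # sigmoid attention map
    return x * gate

# toy usage: all-ones spectral weights make the filter an identity,
# so the output reduces to x * sigmoid(x)
C, H, W = 2, 8, 8
x = np.random.randn(C, H, W)
y = fourier_attention(x, np.ones((H, W)))
```

The appeal of such a design is efficiency: a frequency-domain reweighting mixes information globally in O(HW log HW), versus the quadratic cost of spatial self-attention, which matches the abstract's claim of "effective and efficient" mutual enhancement.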

Source journal

Image and Vision Computing (Engineering Technology – Engineering: Electrical & Electronic)
CiteScore: 8.50
Self-citation rate: 8.50%
Articles per year: 143
Review time: 7.8 months
Journal description: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.
Latest articles in this journal

- TABNet: A Triplet Augmentation Self-recovery framework with Boundary-aware Pseudo-labels for scribble-based medical image segmentation
- HBMF-YOLO: Target detection in harsh environments based on a hybrid backbone network and multi-feature fusion
- Enhancing biometric transparency through skeletal feature learning in chest X-rays: A triplet network approach with Explainable AI
- All you need for object detection: From pixels, points, and prompts to Next-Gen fusion and multimodal LLMs/VLMs in autonomous vehicles
- Bidirectional causal learning for visual question answering