Title: Edge guided and Fourier attention-based Dual Interaction Network for scene text erasing
Authors: Ran Gong, Anna Zhu, Kun Liu
DOI: 10.1016/j.imavis.2024.105406
Journal: Image and Vision Computing, Volume 154, Article 105406 (JCR Q2, Computer Science, Artificial Intelligence; impact factor 4.2)
Publication date: 2025-02-01 (Epub 2025-01-01)
URL: https://www.sciencedirect.com/science/article/pii/S0262885624005110
Citations: 0
Abstract
Scene text erasing (STE) aims to remove the text regions in an image and inpaint those regions with plausible content. It involves a latent subtask, scene text segmentation, in either implicit or explicit ways. Most previous methods used cascaded or parallel pipelines that segment text in one branch and erase text in another. However, they have not fully exploited the complementary information between the two subtasks, i.e., letting them enhance each other through interaction. In this paper, we introduce a novel one-stage STE model called Dual Interaction Network (DINet), which encourages interaction between scene text segmentation and scene text erasing in an end-to-end manner. DINet adopts a shared encoder and two parallel decoders for text segmentation and erasing, respectively. Specifically, the two decoders interact via an Interaction Enhancement Module (IEM) in each layer, aggregating the residual information from each other. To facilitate effective and efficient mutual enhancement between the dual tasks, we propose a novel Fourier Transform-based Attention Module (FTAM). In addition, we incorporate an Edge-Guided Module (EGM) into the text segmentation branch to better erase the text boundary regions and generate natural-looking images. Extensive experiments demonstrate that DINet achieves state-of-the-art performance on several benchmarks. Furthermore, ablation studies confirm the effectiveness and efficiency of our proposed modules in DINet.
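The abstract does not spell out FTAM's internals, but Fourier-based attention modules generally replace learned query/key mixing with a transform over the feature sequence, so that every output position depends on all input positions at low cost. The following is a minimal, hypothetical sketch of that idea (in the spirit of FNet-style Fourier token mixing), not the paper's actual module; the function names and the naive DFT are illustrative assumptions.

```python
# Hypothetical sketch of Fourier-domain feature mixing. The paper's FTAM is not
# specified in the abstract; this only illustrates the general principle of
# using a Fourier transform as a parameter-free global mixing operator.
import cmath

def dft(seq):
    """Naive discrete Fourier transform of a list of real values, O(n^2)."""
    n = len(seq)
    return [sum(seq[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def fourier_mix(tokens):
    """Mix a 1-D feature sequence by keeping the real part of its DFT.

    Each output position becomes a global combination of all input positions,
    with no learned attention weights; a real FFT would bring this to
    O(n log n), which is the usual efficiency argument for Fourier attention.
    """
    return [c.real for c in dft(tokens)]

feats = [1.0, 0.0, -1.0, 0.0]
mixed = fourier_mix(feats)  # every entry now aggregates the whole sequence
```

In a full module one would typically interleave such a mixing step with per-position transforms and a residual connection, which matches the abstract's emphasis on aggregating residual information between the two decoders cheaply.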
Journal overview:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to deepen understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, and image databases.