VQ-STE: Scene text erasing with mask refinement and vector-quantized texture dictionary

Zhengmi Tang, Tomo Miyazaki, Zhijie Wang, Yongsong Huang, Jonathan Pradana Mailoa, Shinichiro Omachi

Knowledge-Based Systems, Volume 315, Article 113306. DOI: 10.1016/j.knosys.2025.113306
Published online 2025-03-12 (Epub); issue date 2025-04-22. Impact Factor 7.6, JCR Q1 (Computer Science, Artificial Intelligence).
URL: https://www.sciencedirect.com/science/article/pii/S0950705125003533
Citations: 0
Abstract
Scene text erasing (STE), which aims to remove text from natural images and restore a plausible background, has been extensively researched in recent years. Most existing STE methods employ segment-then-erase pipelines, either explicitly or implicitly. However, these methods still face challenges, such as inaccurate text segmentation, difficulty in removing large text, and a shortage of training data. To address the first two issues, we present a scene-text-erasing network, VQ-STE, and to mitigate the third, we introduce a high-quality synthetic dataset, MixSyn. VQ-STE comprises a lightweight text Mask Refinement Network (MRN) and a Texture Dictionary-based Inpainting Network (TDIN). The MRN refines the bounding-box-level text region mask into a high-recall stroke-level text mask by incorporating data augmentation and multiple loss functions. The TDIN erases large text regions by replacing distorted features with entries from a pre-learned Texture Dictionary. Moreover, our generated MixSyn dataset offers greater diversity in background, text appearance, layout, and annotation than existing synthetic datasets. VQ-STE performs effectively in one- or two-step settings, i.e., with or without additional text bounding-box information. Experimental results demonstrate that VQ-STE outperforms existing one-step and two-step methods in both quantitative and qualitative evaluations on the SCUT-EnsText dataset.
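The core idea behind the vector-quantized Texture Dictionary (replacing distorted features with their nearest pre-learned codebook entries) can be illustrated with a minimal nearest-neighbor lookup. This is a generic sketch of vector quantization, not the authors' implementation; the codebook size, feature dimension, and function name are illustrative assumptions.

```python
import numpy as np

def quantize_features(features, codebook):
    """Replace each feature vector with its nearest codebook entry (L2 distance).

    features: (N, D) array of (possibly distorted) feature vectors.
    codebook: (K, D) array of pre-learned texture-dictionary entries.
    Returns the quantized features and the chosen codebook indices.
    """
    # Pairwise squared L2 distances, shape (N, K), via broadcasting.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)          # nearest dictionary entry per feature
    return codebook[idx], idx

# Toy example: features that are slightly perturbed copies of entries 2 and 5
# snap back to those clean entries after quantization.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))                      # hypothetical dictionary
feats = codebook[[2, 5]] + 0.01 * rng.normal(size=(2, 4))
quantized, idx = quantize_features(feats, codebook)
```

In a VQ-based inpainting network, a lookup of this kind would let corrupted text-region features be swapped for clean background-texture prototypes before decoding.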
Journal overview:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial-intelligence techniques. The journal aims to support human prediction and decision-making through data science and computation techniques, provide balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.