TextSafety: Visual Text Vanishing via Hierarchical Context-Aware Interaction Reconstruction

Pengwen Dai; Jingyu Li; Dayan Wu; Peijia Zheng; Xiaochun Cao

IEEE Transactions on Information Forensics and Security, vol. 20, pp. 1421-1433. DOI: 10.1109/TIFS.2025.3528249. Published 2025-01-10.
https://ieeexplore.ieee.org/document/10836718/
Citations: 0
Abstract
Privacy information contained in scene text can be leaked as images spread through cyberspace. Removing the scene text from an image is a simple yet effective way to prevent privacy disclosure to both machines and humans. Previous visual text vanishing methods have achieved promising results, but their performance still falls short for complicated-shape scene texts at various scales. In this paper, we propose a novel hierarchical context-aware interaction reconstruction method that makes visual text vanish from natural scene images. To avoid interference from non-text regions, we narrow the reconstruction regions under the guidance of hierarchically refined text region masks, which provide accurate position information. Meanwhile, we propose to learn long-range context-aware interactions in a lightweight way, which smooths out the artifacts that convolutional layers easily produce. More specifically, we first simultaneously generate a coarse text region mask and an initial text-vanished scene image. Then, we obtain more accurate refined masks, which better capture the locations of complicated-shape texts, via a hierarchical mask generation network. Next, based on the refined masks, we exploit a channel-wise context-aware interaction mechanism to model the long-range relationships between the reconstruction regions and the background, better removing the artifacts. Finally, we fuse the reconstructed text regions with the non-masked regions to obtain the final protected image. Experiments on two widely used benchmarks, SCUT-EnsText and SCUT-Syn, demonstrate that our proposed method outperforms previous related methods by a large margin.
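The two final stages described above, channel-wise context-aware reweighting followed by mask-guided fusion of the reconstructed regions with the untouched background, can be sketched as below. This is a minimal illustrative sketch in NumPy, not the paper's actual network: the function names (`channelwise_interaction`, `fuse_with_mask`) are hypothetical, and the channel reweighting here is a simple SE-style global-descriptor softmax standing in for the paper's learned interaction mechanism.

```python
import numpy as np

def channelwise_interaction(feat, eps=1e-6):
    """Hypothetical sketch: reweight each channel of feat (C, H, W)
    by its affinity with a global channel descriptor, a lightweight
    stand-in for the paper's channel-wise context-aware interaction."""
    c = feat.shape[0]
    desc = feat.reshape(c, -1).mean(axis=1)            # per-channel global descriptor
    attn = np.exp(desc) / (np.exp(desc).sum() + eps)   # softmax over channels
    return feat * (1.0 + attn[:, None, None])          # residual channel reweighting

def fuse_with_mask(reconstructed, original, mask):
    """Fuse reconstructed text regions with the non-masked background.
    mask: (H, W) in [0, 1], where 1 marks refined text regions."""
    m = mask[None, :, :]                               # broadcast over channels
    return m * reconstructed + (1.0 - m) * original
```

Where the mask is 0 the original pixels pass through untouched, so only the refined text regions are replaced by reconstructed content; a soft (fractional) mask blends the two at region boundaries.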
Journal overview:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.