{"title":"Robust Generative Steganography Based on Image Mapping","authors":"Qinghua Zhang;Fangjun Huang","doi":"10.1109/TCSVT.2024.3451620","DOIUrl":null,"url":null,"abstract":"Coverless steganography requires no modification of the cover image and can effectively resist steganalysis, which has received widespread attention from researchers in recent years. However, existing coverless image steganographic methods are achieved by constructing a mapping between the secret information and images in a known dataset. This image dataset needs to be sent to the receiver, which consumes substantial resources and poses a risk of information leakage. In addition, existing methods cannot achieve high-accuracy extraction when facing various attacks. To address the aforementioned issues, we propose a robust generative steganography based on image mapping (GSIM). This method establishes prompts based on the topic and quantity requirements first and then generate the candidate image database according to the prompts, which can be independently generated by both the sender and receiver without the need for transmission. In order to improve the robustness of the algorithm, our proposed GSIM utilizes prompts and fractional-order Chebyshev-Fourier moments (FrCHFMs) to construct the mapping between the generated images and the predefined binary sequences, as well as uses speeded-up robust features (SURFs) as auxiliary features in the information extraction phase. The experimental results show that GSIM is superior to existing coverless image steganographic methods in terms of capacity, security, and robustness.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"34 12","pages":"13543-13555"},"PeriodicalIF":8.3000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10659913/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Coverless steganography, which requires no modification of the cover image and can effectively resist steganalysis, has received widespread attention from researchers in recent years. However, existing coverless image steganographic methods work by constructing a mapping between the secret information and images in a known dataset. This image dataset must be sent to the receiver, which consumes substantial resources and poses a risk of information leakage. In addition, existing methods cannot achieve high-accuracy extraction under various attacks. To address these issues, we propose a robust generative steganography method based on image mapping (GSIM). This method first constructs prompts according to the topic and quantity requirements and then generates the candidate image database from those prompts; because the database can be generated independently by both the sender and the receiver, it never needs to be transmitted. To improve the robustness of the algorithm, GSIM uses the prompts and fractional-order Chebyshev-Fourier moments (FrCHFMs) to construct the mapping between the generated images and predefined binary sequences, and employs speeded-up robust features (SURF) as auxiliary features in the information-extraction phase. Experimental results show that GSIM outperforms existing coverless image steganographic methods in terms of capacity, security, and robustness.
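To make the mapping idea concrete, here is a minimal, hypothetical Python sketch of the codebook step. It is not the paper's implementation: the FrCHFM computation is replaced by a simplified rotation-invariant proxy feature (DFT magnitudes of the image's angular-averaged radial profile), and all names (proxy_moments, features_to_bits, build_codebook) are illustrative assumptions. The point is only the structure the abstract describes: both parties generate the same candidate images from shared prompts, binarize a robust feature of each image, and thereby agree on an image-to-bit-string mapping without transmitting any dataset.

```python
import numpy as np

def proxy_moments(img: np.ndarray, n_moments: int = 8) -> np.ndarray:
    """Rotation-invariant proxy feature: average the image over angles to
    get a radial profile, then take magnitudes of its low-order DFT
    coefficients (a stand-in for the paper's FrCHFMs)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2).astype(int).ravel()
    counts = np.maximum(np.bincount(r), 1)  # avoid divide-by-zero at empty radii
    profile = np.bincount(r, weights=img.ravel()) / counts
    return np.abs(np.fft.rfft(profile))[:n_moments]

def features_to_bits(feat: np.ndarray) -> str:
    """Binarize the feature vector against its median: one bit per moment."""
    med = np.median(feat)
    return ''.join('1' if f > med else '0' for f in feat)

def build_codebook(images) -> dict:
    """Assign each distinct bit-string to the first image that produces it.
    Sender and receiver run this on the same prompt-generated images, so
    they derive identical codebooks without exchanging any dataset."""
    codebook = {}
    for idx, img in enumerate(images):
        bits = features_to_bits(proxy_moments(img))
        codebook.setdefault(bits, idx)
    return codebook

if __name__ == "__main__":
    # Stand-in for prompt-generated images: random grayscale arrays.
    rng = np.random.default_rng(0)
    images = [rng.random((64, 64)) for _ in range(32)]
    codebook = build_codebook(images)
    print(f"{len(codebook)} distinct bit-strings mapped to images")
```

In the actual method, the robustness of FrCHFMs under geometric and signal-processing attacks is what lets the receiver recover the same bit-strings from a distorted stego image, with SURF keypoints serving as auxiliary evidence during extraction; the proxy feature above is only a structural placeholder.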
Journal Introduction
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.