{"title":"Few-shot Text Steganalysis Based on Attentional Meta-learner","authors":"Juan Wen, Ziwei Zhang, Y. Yang, Yiming Xue","doi":"10.1145/3531536.3532949","DOIUrl":null,"url":null,"abstract":"Text steganalysis is a technique to distinguish between steganographic text and normal text via statistical features. Current state-of-the-art text steganalysis models have two limitations. First, they need sufficient amounts of labeled data for training. Second, they lack the generalization ability on different detection tasks. In this paper, we propose a meta-learning framework for text steganalysis in the few-shot scenario to ensure model fast-adaptation between tasks. A general feature extractor based on BERT is applied to extract universal features among tasks, and a meta-learner based on attentional Bi-LSTM is employed to learn task-specific representations. A classifier trained on the support set calculates the prediction loss on the query set with a few samples to update the meta-learner. Extensive experiments show that our model can adapt fast among different steganalysis tasks through extremely few-shot samples, significantly improving detection performance compared with the state-of-the-art steganalysis models and other meta-learning methods.","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3531536.3532949","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Text steganalysis is a technique for distinguishing steganographic text from normal text via statistical features. Current state-of-the-art text steganalysis models have two limitations. First, they require large amounts of labeled data for training. Second, they lack generalization ability across different detection tasks. In this paper, we propose a meta-learning framework for few-shot text steganalysis that enables fast adaptation between tasks. A general feature extractor based on BERT extracts universal features shared among tasks, and a meta-learner based on an attentional Bi-LSTM learns task-specific representations. A classifier trained on the support set computes the prediction loss on a small query set, which is used to update the meta-learner. Extensive experiments show that our model adapts quickly to different steganalysis tasks from extremely few samples, significantly improving detection performance compared with state-of-the-art steganalysis models and other meta-learning methods.
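The episode-based procedure described in the abstract (a shared BERT feature extractor, an attentional Bi-LSTM meta-learner, and a classifier fitted on the support set whose query-set loss drives the meta-learner update) can be sketched roughly as below. This is a minimal illustration under assumed module names and hyperparameters (the `AttentionalBiLSTM` class, hidden size 256, five inner steps, `bert-base-uncased`), not the authors' implementation.

```python
# Minimal sketch (assumed details, not the paper's released code) of one few-shot
# episode: frozen BERT features -> attentional Bi-LSTM meta-learner -> linear
# classifier fitted on the support set; the classifier's loss on the query set
# updates the meta-learner.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer


class AttentionalBiLSTM(nn.Module):
    """Bi-LSTM over BERT token features with additive attention pooling (assumed design)."""

    def __init__(self, in_dim=768, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out_dim = 2 * hidden

    def forward(self, token_feats):                      # (B, T, in_dim)
        h, _ = self.lstm(token_feats)                    # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), 1)   # (B, T) attention weights
        return (w.unsqueeze(-1) * h).sum(dim=1)          # (B, 2*hidden) pooled representation


tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()
meta = AttentionalBiLSTM()
meta_opt = torch.optim.Adam(meta.parameters(), lr=1e-4)


def bert_features(texts):
    """Universal features from the shared extractor; kept frozen in this sketch."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return bert(**batch).last_hidden_state


def run_episode(texts_s, y_s, texts_q, y_q, n_way=2, inner_steps=5):
    feats_s, feats_q = bert_features(texts_s), bert_features(texts_q)

    # Inner step: fit a lightweight classifier on the support set only.
    clf = nn.Linear(meta.out_dim, n_way)
    clf_opt = torch.optim.SGD(clf.parameters(), lr=0.1)
    for _ in range(inner_steps):
        clf_opt.zero_grad()
        F.cross_entropy(clf(meta(feats_s).detach()), y_s).backward()
        clf_opt.step()

    # Outer step: the query-set prediction loss updates the meta-learner.
    meta_opt.zero_grad()
    q_loss = F.cross_entropy(clf(meta(feats_q)), y_q)
    q_loss.backward()
    meta_opt.step()
    return q_loss.item()
```

A 2-way episode would then be run as `run_episode(support_texts, support_labels, query_texts, query_labels)` with `torch.long` label tensors; freezing BERT and using a fixed number of inner steps are simplifications made here for illustration only.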