Miaomiao Li, Jie Yu, Shasha Li, Jun Ma, Huijun Liu
{"title":"硬标签黑盒环境下命名实体识别的文本对抗性攻击","authors":"Miaomiao Li, Jie Yu, Shasha Li, Jun Ma, Huijun Liu","doi":"10.1109/ICACTE55855.2022.9943674","DOIUrl":null,"url":null,"abstract":"Named entity recognition is a key task in the field of natural language processing, which plays a key role in many downstream tasks. Adversarial examples attack based on hard label black box is to generate adversarial examples which make the model classification wrong under the condition that only the decision results of the model are obtained. However, at present, there is little research on adversarial examples attack in hard-label black box setting for named entity recognition task. Influenced by adversarial examples attacks in hard-label black box settings in text classification task, we apply genetic algorithm to adversarial examples attacks in named entity recognition task. In this paper, we first randomly generate the initial adversarial examples, and shorten the search space to a certain extent, and then use genetic algorithm to continuously optimize the examples, and finally generate high quality adversarial examples. 
Experiments and analysis show that the adversarial examples generated in the hard label black box setting can effectively reduce the accuracy of the model.","PeriodicalId":165068,"journal":{"name":"2022 15th International Conference on Advanced Computer Theory and Engineering (ICACTE)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Textual Adversarial Attacks on Named Entity Recognition in a Hard Label Black Box Setting\",\"authors\":\"Miaomiao Li, Jie Yu, Shasha Li, Jun Ma, Huijun Liu\",\"doi\":\"10.1109/ICACTE55855.2022.9943674\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Named entity recognition is a key task in the field of natural language processing, which plays a key role in many downstream tasks. Adversarial examples attack based on hard label black box is to generate adversarial examples which make the model classification wrong under the condition that only the decision results of the model are obtained. However, at present, there is little research on adversarial examples attack in hard-label black box setting for named entity recognition task. Influenced by adversarial examples attacks in hard-label black box settings in text classification task, we apply genetic algorithm to adversarial examples attacks in named entity recognition task. In this paper, we first randomly generate the initial adversarial examples, and shorten the search space to a certain extent, and then use genetic algorithm to continuously optimize the examples, and finally generate high quality adversarial examples. 
Experiments and analysis show that the adversarial examples generated in the hard label black box setting can effectively reduce the accuracy of the model.\",\"PeriodicalId\":165068,\"journal\":{\"name\":\"2022 15th International Conference on Advanced Computer Theory and Engineering (ICACTE)\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 15th International Conference on Advanced Computer Theory and Engineering (ICACTE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICACTE55855.2022.9943674\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 15th International Conference on Advanced Computer Theory and Engineering (ICACTE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICACTE55855.2022.9943674","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Textual Adversarial Attacks on Named Entity Recognition in a Hard Label Black Box Setting
Named entity recognition (NER) is a core task in natural language processing and plays a key role in many downstream applications. A hard-label black-box adversarial attack generates adversarial examples that cause a model to misclassify while the attacker observes only the model's final decisions, with no access to scores or gradients. However, there has so far been little research on hard-label black-box adversarial attacks against NER. Inspired by hard-label black-box attacks on text classification, we apply a genetic algorithm to adversarial attacks on NER. We first randomly generate initial adversarial examples while narrowing the search space, then use a genetic algorithm to iteratively optimize them, and finally obtain high-quality adversarial examples. Experiments and analysis show that the adversarial examples generated in this hard-label black-box setting effectively reduce the model's accuracy.
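The pipeline the abstract describes (random initial perturbations, then genetic-algorithm optimization against a model that exposes only its predicted labels) can be sketched roughly as follows. This is a minimal illustrative toy, not the authors' implementation: the victim model, the synonym table, and the label-flip fitness heuristic are all assumptions introduced for the example.

```python
import random

# Hypothetical synonym table defining the substitution search space.
SYNONYMS = {
    "paris": ["city", "place"],
    "visited": ["toured", "saw"],
}

def victim_predict(tokens):
    # Toy stand-in for a hard-label NER model: it tags "paris" as a
    # location and everything else as O. Only these final labels are
    # observable to the attacker (hard-label black-box setting).
    return ["B-LOC" if t == "paris" else "O" for t in tokens]

def perturb(tokens):
    # Randomly substitute one word that has synonyms.
    candidates = [i for i, t in enumerate(tokens) if t in SYNONYMS]
    if not candidates:
        return list(tokens)
    i = random.choice(candidates)
    mutated = list(tokens)
    mutated[i] = random.choice(SYNONYMS[tokens[i]])
    return mutated

def fitness(orig_labels, tokens):
    # Fitness = number of label flips versus the original prediction.
    # This uses only the model's decisions, never its scores.
    return sum(a != b for a, b in zip(orig_labels, victim_predict(tokens)))

def crossover(a, b):
    # Uniform crossover: take each token from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

def attack(tokens, pop_size=20, generations=30):
    orig_labels = victim_predict(tokens)
    # Random initial adversarial examples restricted to the synonym space.
    population = [perturb(tokens) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda c: fitness(orig_labels, c),
                        reverse=True)
        best = ranked[0]
        if fitness(orig_labels, best) > 0:
            return best  # at least one entity label was flipped
        # Breed the next generation from the fitter half, with mutation.
        parents = ranked[: pop_size // 2]
        population = [perturb(crossover(random.choice(parents),
                                        random.choice(parents)))
                      for _ in range(pop_size)]
    return None  # no successful adversarial example found

random.seed(0)
adv = attack(["we", "visited", "paris"])
```

In this toy run the attack succeeds once any candidate replaces "paris" with a synonym, flipping its B-LOC label to O. A real attack would replace `victim_predict` with queries to the deployed NER model and use an embedding- or thesaurus-based candidate set in place of the hand-written `SYNONYMS` table.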