{"title":"基于外部实体信息的讽刺检测","authors":"Xu Xufei, Shimada Kazutaka","doi":"10.29007/zbzq","DOIUrl":null,"url":null,"abstract":"Sarcasm is generally characterized as ironic or satirical that is intended to blame, mock, or amuse in an implied way. Recently, pre-trained language models, such as BERT, have achieved remarkable success in sarcasm detection. However, there are many problems that cannot be solved by using such state-of-the-art models. One problem is attribute infor- mation of entities in sentences. This work investigates the potential of external knowledge about entities in knowledge bases to improve BERT for sarcasm detection. We apply em- bedded knowledge graph from Wikipedia to the task. We generate vector representations from entities of knowledge graph. Then we incorporate them with BERT by a mechanism based on self-attention. Experimental results indicate that our approach improves the accuracy as compared with the BERT model without external knowledge.","PeriodicalId":93549,"journal":{"name":"EPiC series in computing","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Sarcasm Detection with External Entity Information\",\"authors\":\"Xu Xufei, Shimada Kazutaka\",\"doi\":\"10.29007/zbzq\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sarcasm is generally characterized as ironic or satirical that is intended to blame, mock, or amuse in an implied way. Recently, pre-trained language models, such as BERT, have achieved remarkable success in sarcasm detection. However, there are many problems that cannot be solved by using such state-of-the-art models. One problem is attribute infor- mation of entities in sentences. This work investigates the potential of external knowledge about entities in knowledge bases to improve BERT for sarcasm detection. We apply em- bedded knowledge graph from Wikipedia to the task. We generate vector representations from entities of knowledge graph. Then we incorporate them with BERT by a mechanism based on self-attention. Experimental results indicate that our approach improves the accuracy as compared with the BERT model without external knowledge.\",\"PeriodicalId\":93549,\"journal\":{\"name\":\"EPiC series in computing\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"EPiC series in computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.29007/zbzq\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"EPiC series in computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.29007/zbzq","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Sarcasm Detection with External Entity Information
Sarcasm is generally characterized as ironic or satirical language intended to blame, mock, or amuse in an implicit way. Recently, pre-trained language models such as BERT have achieved remarkable success in sarcasm detection. However, many problems remain that such state-of-the-art models cannot solve on their own; one of them is capturing attribute information about the entities mentioned in a sentence. This work investigates the potential of external knowledge about entities in knowledge bases to improve BERT for sarcasm detection. We apply knowledge graph embeddings derived from Wikipedia to the task: we generate vector representations for the entities in the knowledge graph, then incorporate them into BERT through a fusion mechanism based on self-attention. Experimental results indicate that our approach improves accuracy compared with a BERT model without external knowledge.
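The abstract does not spell out the fusion architecture, but a minimal sketch of the described pipeline (entity vectors from a Wikipedia-derived knowledge graph, attended to by BERT token representations) might look like the following. This is an illustrative reconstruction, not the authors' code: the `EntityAwareBert` class, the 100-dimensional Wikipedia2Vec-style entity vectors, and the residual [CLS] pooling are all assumptions made for the example.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class EntityAwareBert(nn.Module):
    """Hypothetical sketch: fuse pre-computed knowledge-graph entity
    embeddings with BERT token representations via attention, then
    classify the [CLS] position as sarcastic / non-sarcastic."""

    def __init__(self, entity_dim=100, hidden=768, num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Project KG entity vectors (e.g., Wikipedia2Vec, assumed
        # 100-dim here) into BERT's hidden space.
        self.entity_proj = nn.Linear(entity_dim, hidden)
        # Self-attention-style fusion: each token attends to the
        # entities linked to the sentence.
        self.fusion = nn.MultiheadAttention(hidden, num_heads=8,
                                            batch_first=True)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask, entity_vecs):
        # entity_vecs: (batch, num_entities, entity_dim), looked up
        # beforehand for the entities detected in each sentence.
        tokens = self.bert(input_ids,
                           attention_mask=attention_mask).last_hidden_state
        entities = self.entity_proj(entity_vecs)
        fused, _ = self.fusion(query=tokens, key=entities, value=entities)
        # Residual connection keeps the original BERT signal, so the
        # model degrades gracefully when entity information is weak.
        pooled = (tokens + fused)[:, 0]  # [CLS] position
        return self.classifier(pooled)
```

Under these assumptions, the entity vectors act as an external memory that the sentence can query: a token like "Titanic" can attend to the entity's attributes (a film, a shipwreck) that BERT's input text alone does not make explicit, which is the kind of attribute information the abstract identifies as missing.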