Revisiting Contextual Toxicity Detection in Conversations

Julia Ive, Atijit Anuchitanukul, Lucia Specia

ACM Journal of Data and Information Quality, pp. 1–22. DOI: 10.1145/3561390. Published 2021-11-24.
Understanding toxicity in user conversations is undoubtedly an important problem. Addressing "covert" or implicit cases of toxicity is particularly hard and requires context. Very few previous studies have analysed the influence of conversational context on human perception or on automated detection models. We dive deeper into both of these directions. We start by analysing existing contextual datasets and find that toxicity labelling by humans is generally influenced by the conversational structure, polarity, and topic of the context. We then propose to bring these findings into computational detection models by introducing and evaluating (a) neural architectures for contextual toxicity detection that are aware of the conversational structure, and (b) data augmentation strategies that can help in modelling contextual toxicity. Our results show the encouraging potential of neural architectures that are aware of the conversation structure. We also demonstrate that such models can benefit from synthetic data, especially in the social media domain.
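To give a concrete sense of what "aware of the conversational structure" can mean in practice, the minimal sketch below shows one common way to feed context to a sequence classifier: concatenating prior turns with the target comment using a separator token. The function name, separator string, and example thread are illustrative assumptions, not the paper's exact architecture or preprocessing.

```python
# Hypothetical sketch: preparing a conversational thread for a
# context-aware toxicity classifier. The "[SEP]" separator mimics the
# segment markers used by transformer-style sequence classifiers;
# the exact setup in the paper may differ.

def build_contextual_input(context_turns, target_turn, sep=" [SEP] "):
    """Join prior turns (oldest first) with the target comment so a
    single-sequence classifier can condition on the preceding context."""
    return sep.join(list(context_turns) + [target_turn])

# A toy two-turn thread followed by the comment to be classified.
thread = ["Nice weather today.", "Sure, if you like rain."]
target = "You would say that."
print(build_contextual_input(thread, target))
```

A classifier reading this joined sequence can, in principle, pick up cues such as the polarity or topic of earlier turns, which the abstract identifies as factors that influence human toxicity judgements.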