Subversive Toxicity Detection using Sentiment Information

Éloi Brassard-Gourdeau, R. Khoury
{"title":"Subversive Toxicity Detection using Sentiment Information","authors":"Éloi Brassard-Gourdeau, R. Khoury","doi":"10.18653/v1/W19-3501","DOIUrl":null,"url":null,"abstract":"The presence of toxic content has become a major problem for many online communities. Moderators try to limit this problem by implementing more and more refined comment filters, but toxic users are constantly finding new ways to circumvent them. Our hypothesis is that while modifying toxic content and keywords to fool filters can be easy, hiding sentiment is harder. In this paper, we explore various aspects of sentiment detection and their correlation to toxicity, and use our results to implement a toxicity detection tool. We then test how adding the sentiment information helps detect toxicity in three different real-world datasets, and incorporate subversion to these datasets to simulate a user trying to circumvent the system. Our results show sentiment information has a positive impact on toxicity detection.","PeriodicalId":230845,"journal":{"name":"Proceedings of the Third Workshop on Abusive Language Online","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third Workshop on Abusive Language Online","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/W19-3501","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 22

Abstract

The presence of toxic content has become a major problem for many online communities. Moderators try to limit this problem by implementing more and more refined comment filters, but toxic users are constantly finding new ways to circumvent them. Our hypothesis is that while modifying toxic content and keywords to fool filters can be easy, hiding sentiment is harder. In this paper, we explore various aspects of sentiment detection and their correlation to toxicity, and use our results to implement a toxicity detection tool. We then test how adding sentiment information helps detect toxicity in three different real-world datasets, and incorporate subversion into these datasets to simulate a user trying to circumvent the system. Our results show sentiment information has a positive impact on toxicity detection.
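The abstract does not specify the classifier or the sentiment detector used, so the following is only a minimal sketch of the general idea it describes: concatenating sentiment scores with keyword-based text features so that a toxicity classifier still sees strongly negative sentiment even when toxic keywords are obfuscated. The TF-IDF features, VADER sentiment analyzer, logistic regression model, toy comments, and the `subvert` helper are all illustrative assumptions, not the authors' implementation.

```python
# Sketch: toxicity detection with sentiment information as extra features.
# Assumes scikit-learn and NLTK (run nltk.download('vader_lexicon') once).
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from nltk.sentiment.vader import SentimentIntensityAnalyzer

def sentiment_features(comments, analyzer):
    """One row of VADER scores (neg, neu, pos, compound) per comment."""
    scores = (analyzer.polarity_scores(c) for c in comments)
    return np.array([[s["neg"], s["neu"], s["pos"], s["compound"]] for s in scores])

def subvert(text):
    """Crude keyword obfuscation of the kind a subversive user might apply."""
    return text.replace("idiot", "id1ot")

# Hypothetical toy data standing in for a labelled toxicity dataset.
train_texts = ["you are wonderful, thanks for the help", "shut up, you are an idiot"]
train_labels = [0, 1]  # 0 = non-toxic, 1 = toxic

vectorizer = TfidfVectorizer()
analyzer = SentimentIntensityAnalyzer()

# Keyword (TF-IDF) features concatenated with sentiment features.
X_text = vectorizer.fit_transform(train_texts)
X_sent = csr_matrix(sentiment_features(train_texts, analyzer))
clf = LogisticRegression().fit(hstack([X_text, X_sent]), train_labels)

# A subverted comment loses its toxic keyword for the TF-IDF features,
# but its sentiment scores remain strongly negative.
test = [subvert("shut up, you are an idiot")]
X_new = hstack([vectorizer.transform(test), csr_matrix(sentiment_features(test, analyzer))])
print(clf.predict(X_new))
```

In this setup the obfuscated keyword falls out of the TF-IDF vocabulary, so the sentiment columns are what keep the subverted comment detectable, which is the intuition behind the paper's hypothesis.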