Not Judging a User by Their Cover: Understanding Harm in Multi-Modal Processing within Social Media Research

Jiachen Jiang, Soroush Vosoughi
DOI: 10.1145/3422841.3423534
Published in: Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia, October 12, 2020
Citations: 6

Abstract

Social media has shaken the foundations of our society, unlikely as it may seem. Many of the popular tools used to moderate harmful digital content, however, have received widespread criticism from both the academic community and the public sphere for middling performance and lack of accountability. Though social media research is thought to center primarily on natural language processing, we demonstrate the need for the community to understand multimedia processing and its unique ethical considerations. Specifically, we identify statistical differences in the performance of Amazon Mechanical Turk (MTurk) annotators when different modalities of information are provided and discuss the patterns of harm that arise from crowd-sourced human demographic prediction. Finally, we discuss the consequences of those biases by auditing the performance of a toxicity detector, Perspective API, on the language of Twitter users across a variety of demographic categories.
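The audit the abstract describes compares a toxicity detector's scores across demographic groups. A minimal sketch of one way such a comparison could be run is below: a permutation test on the difference in mean toxicity score between two groups. The function name, the group labels, and all score values are hypothetical illustrations, not the paper's actual method or data; a real audit would feed Perspective API outputs for each user's tweets into `scores_a` and `scores_b`.

```python
import random
from statistics import mean

def toxicity_gap(scores_a, scores_b, n_perm=10_000, seed=0):
    """Permutation test for the difference in mean toxicity score
    between two demographic groups.

    Returns (observed_gap, p_value), where p_value estimates how often
    a gap at least as large arises if group labels are random.
    """
    observed = mean(scores_a) - mean(scores_b)
    pooled = list(scores_a) + list(scores_b)
    n_a = len(scores_a)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign group labels
        gap = mean(pooled[:n_a]) - mean(pooled[n_a:])
        if abs(gap) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

# Hypothetical per-user toxicity scores (0..1), NOT data from the paper.
group_a = [0.82, 0.75, 0.91, 0.68, 0.88, 0.79]
group_b = [0.31, 0.45, 0.28, 0.52, 0.39, 0.44]
gap, p = toxicity_gap(group_a, group_b)
```

A small p-value here would indicate the detector scores one group's language as systematically more toxic than the other's, which is the kind of demographic disparity the paper's audit is designed to surface.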