Who Gets Flagged? An Experiment on Censorship and Bias in Social Media Reporting

IF 2.2 | CAS Region 3 (Sociology) | Q2 (Political Science) | PS: Political Science & Politics | Pub Date: 2023-01-05 | DOI: 10.1017/S1049096522001238
Jessica T. Feezell, Meredith Conroy, Barbara Gomez-Aguinaga, John K. Wagner
{"title":"谁会被标记?社交媒体报道中的审查和偏见实验","authors":"Jessica T. Feezell, Meredith Conroy, Barbara Gomez-Aguinaga, John K. Wagner","doi":"10.1017/S1049096522001238","DOIUrl":null,"url":null,"abstract":"With a large majority of Americans using social media platforms to consume and disseminate information on a regular basis, social media serve as today’s town square in many ways (Pew Research Center 2021). However, unlike public spaces where the free expression of citizens is afforded First Amendment protections, social media platforms are privately owned, and users are subject to the platform’s terms of service and community standards (Congressional Research Service 2021). Although platform rules vary about what is allowable content, most are in agreement that certain forms of content (e.g., credible threats of violence and hate speech) are not, and they strive to identify and remove such posts. Both Twitter and Facebook prohibit credible threats of violence (e.g., “I will...” or “I plan to...”) and hate speech directed at protected classes (e.g., race, gender, and religion). To identify objectionable content, social media platforms rely in part on users to report offensive posts, which the platform then decides to leave up or take down (Crawford and Gillespie 2016). Users play a critical role in determining which content is flagged for review; however, little is known about user reporting behavior. In general, social media platforms use two techniques to identify objectionable content: (1) algorithms (or “classifiers”) that are trained to flag posts that contain certain language; and (2) other users who report posts that they believe violate the community standards (Crawford and Gillespie 2016). Posts that are identified as possibly containing objectionable content then are reviewed by a group of human moderators to determine whether the post in fact violates the terms of service and therefore should be removed or labeled. Adjudicating what is and is not objectionable content is difficult and subject to personal biases; even professional moderators admit to making mistakes (Gadde and Derella 2020; Varner et al. 2017). However, classifiers also are subject to racial bias. For instance, several classifiers were more likely to flag social media posts written in “Black English” as abusive than posts written in standard English (Davidson, Bhattacharya, and Weber 2019; Sap et al. 2019). Automated toxic-language identification tools generally are unable to consider social and cultural context and therefore risk reporting posts that are not actually in violation. Thus, the assumption that automated techniques are a way to remove bias is incorrect andmay invite systemic bias. In our study, we tested for bias in the second pathway to online content removal: that is, through social media users. Specifically, we were interested in whether the demographics of the poster influence a willingness to report content as violating the community standards; this makes certain demographics more likely to have their posts reviewed and possibly removed. We focused on race, gender, and the intersection of these traits because gendered and racial stereotypes—as well as shared traits betweenmessengers and receivers—can influence people’s attitudes and evaluations of content (Karpowitz, Mendelberg, and Shaker 2012; Mastro 2017). 
Although some scholars argue that computer-mediated communication has reduced the public’s ability to identify the background of messengers, other studies have shown that personal characteristics of the public continue to influence assessments of messages in online environments (Metzger and Flanagin 2013; Settle 2018).","PeriodicalId":48096,"journal":{"name":"Ps-Political Science & Politics","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Who Gets Flagged? An Experiment on Censorship and Bias in Social Media Reporting\",\"authors\":\"Jessica T. Feezell, Meredith Conroy, Barbara Gomez-Aguinaga, John K. Wagner\",\"doi\":\"10.1017/S1049096522001238\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With a large majority of Americans using social media platforms to consume and disseminate information on a regular basis, social media serve as today’s town square in many ways (Pew Research Center 2021). However, unlike public spaces where the free expression of citizens is afforded First Amendment protections, social media platforms are privately owned, and users are subject to the platform’s terms of service and community standards (Congressional Research Service 2021). Although platform rules vary about what is allowable content, most are in agreement that certain forms of content (e.g., credible threats of violence and hate speech) are not, and they strive to identify and remove such posts. Both Twitter and Facebook prohibit credible threats of violence (e.g., “I will...” or “I plan to...”) and hate speech directed at protected classes (e.g., race, gender, and religion). To identify objectionable content, social media platforms rely in part on users to report offensive posts, which the platform then decides to leave up or take down (Crawford and Gillespie 2016). Users play a critical role in determining which content is flagged for review; however, little is known about user reporting behavior. In general, social media platforms use two techniques to identify objectionable content: (1) algorithms (or “classifiers”) that are trained to flag posts that contain certain language; and (2) other users who report posts that they believe violate the community standards (Crawford and Gillespie 2016). Posts that are identified as possibly containing objectionable content then are reviewed by a group of human moderators to determine whether the post in fact violates the terms of service and therefore should be removed or labeled. Adjudicating what is and is not objectionable content is difficult and subject to personal biases; even professional moderators admit to making mistakes (Gadde and Derella 2020; Varner et al. 2017). However, classifiers also are subject to racial bias. For instance, several classifiers were more likely to flag social media posts written in “Black English” as abusive than posts written in standard English (Davidson, Bhattacharya, and Weber 2019; Sap et al. 2019). Automated toxic-language identification tools generally are unable to consider social and cultural context and therefore risk reporting posts that are not actually in violation. Thus, the assumption that automated techniques are a way to remove bias is incorrect andmay invite systemic bias. In our study, we tested for bias in the second pathway to online content removal: that is, through social media users. 
Specifically, we were interested in whether the demographics of the poster influence a willingness to report content as violating the community standards; this makes certain demographics more likely to have their posts reviewed and possibly removed. We focused on race, gender, and the intersection of these traits because gendered and racial stereotypes—as well as shared traits betweenmessengers and receivers—can influence people’s attitudes and evaluations of content (Karpowitz, Mendelberg, and Shaker 2012; Mastro 2017). Although some scholars argue that computer-mediated communication has reduced the public’s ability to identify the background of messengers, other studies have shown that personal characteristics of the public continue to influence assessments of messages in online environments (Metzger and Flanagin 2013; Settle 2018).\",\"PeriodicalId\":48096,\"journal\":{\"name\":\"Ps-Political Science & Politics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2023-01-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ps-Political Science & Politics\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1017/S1049096522001238\",\"RegionNum\":3,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"POLITICAL SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ps-Political Science & Politics","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1017/S1049096522001238","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"POLITICAL SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

With a large majority of Americans using social media platforms to consume and disseminate information on a regular basis, social media serve as today’s town square in many ways (Pew Research Center 2021). However, unlike public spaces where the free expression of citizens is afforded First Amendment protections, social media platforms are privately owned, and users are subject to the platform’s terms of service and community standards (Congressional Research Service 2021). Although platform rules vary about what is allowable content, most are in agreement that certain forms of content (e.g., credible threats of violence and hate speech) are not, and they strive to identify and remove such posts. Both Twitter and Facebook prohibit credible threats of violence (e.g., “I will...” or “I plan to...”) and hate speech directed at protected classes (e.g., race, gender, and religion). To identify objectionable content, social media platforms rely in part on users to report offensive posts, which the platform then decides to leave up or take down (Crawford and Gillespie 2016). Users play a critical role in determining which content is flagged for review; however, little is known about user reporting behavior.

In general, social media platforms use two techniques to identify objectionable content: (1) algorithms (or “classifiers”) that are trained to flag posts that contain certain language; and (2) other users who report posts that they believe violate the community standards (Crawford and Gillespie 2016). Posts that are identified as possibly containing objectionable content then are reviewed by a group of human moderators to determine whether the post in fact violates the terms of service and therefore should be removed or labeled. Adjudicating what is and is not objectionable content is difficult and subject to personal biases; even professional moderators admit to making mistakes (Gadde and Derella 2020; Varner et al. 2017). However, classifiers also are subject to racial bias. For instance, several classifiers were more likely to flag social media posts written in “Black English” as abusive than posts written in standard English (Davidson, Bhattacharya, and Weber 2019; Sap et al. 2019). Automated toxic-language identification tools generally are unable to consider social and cultural context and therefore risk reporting posts that are not actually in violation. Thus, the assumption that automated techniques are a way to remove bias is incorrect and may invite systemic bias.

In our study, we tested for bias in the second pathway to online content removal: that is, through social media users. Specifically, we were interested in whether the demographics of the poster influence a willingness to report content as violating the community standards; this makes certain demographics more likely to have their posts reviewed and possibly removed. We focused on race, gender, and the intersection of these traits because gendered and racial stereotypes—as well as shared traits between messengers and receivers—can influence people’s attitudes and evaluations of content (Karpowitz, Mendelberg, and Shaker 2012; Mastro 2017). Although some scholars argue that computer-mediated communication has reduced the public’s ability to identify the background of messengers, other studies have shown that personal characteristics of the public continue to influence assessments of messages in online environments (Metzger and Flanagin 2013; Settle 2018).
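The two flagging pathways described in the abstract (automated classifiers and user reports, both feeding a human review queue) can be summarized in a short sketch. This is a minimal, hypothetical illustration of that architecture only; the toy "classifier," thresholds, and function names are assumptions for exposition, not the study's materials or any platform's actual implementation.

```python
# Minimal sketch of the two-pathway moderation flow described in the abstract.
# All names, thresholds, and the toy "classifier" are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: int
    text: str
    flags: list = field(default_factory=list)  # reasons the post entered review


# Pathway 1: an automated classifier scores the text and flags high scores.
TOXIC_TERMS = {"i will hurt", "i plan to attack"}  # stand-in for a trained model


def classifier_flag(post: Post, threshold: float = 0.5) -> None:
    score = 1.0 if any(term in post.text.lower() for term in TOXIC_TERMS) else 0.0
    if score >= threshold:
        post.flags.append("classifier")


# Pathway 2: another user reports the post as violating community standards.
def user_report(post: Post, reporter_id: int) -> None:
    post.flags.append(f"user_report:{reporter_id}")


# Both pathways converge on a human review queue, where moderators decide
# whether a flagged post actually violates the terms of service.
def review_queue(posts: list) -> list:
    return [p for p in posts if p.flags]


if __name__ == "__main__":
    posts = [Post(1, "I plan to attack the event tomorrow."),
             Post(2, "Great game last night!")]
    for p in posts:
        classifier_flag(p)
    user_report(posts[1], reporter_id=42)  # a report on a benign post still queues it
    for p in review_queue(posts):
        print(p.post_id, p.flags)
```

The article's point is that both entry points into this review queue can carry bias: classifiers may over-flag posts written in certain dialects, and user reports may vary with the perceived race and gender of the poster.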
Source journal: PS: Political Science & Politics (Political Science)
CiteScore: 3.40
Self-citation rate: 27.30%
Articles published: 166
About the journal: PS: Political Science & Politics provides critical analyses of contemporary political phenomena and is the journal of record for the discipline of political science, reporting on research, teaching, and professional development. PS, begun in 1968, is the only quarterly professional news and commentary journal in the field and is the prime source of information on political scientists' achievements and professional concerns. PS: Political Science & Politics is sold ONLY as part of a joint subscription with American Political Science Review and Perspectives on Politics.
Latest articles from this journal:
The Invincible Gender Gap in Political Ambition
Logging in to Learn: The Effects of Online Civic Education Pedagogy on a Latinx and AAPI Civic Engagement Youth Conference
A Case for Description
COVID-19 Direct Relief Payments and Political and Economic Attitudes among Tertiary Students: A Quasi-Experimental Study – CORRIGENDUM
Escalating Political Violence and the Intersectional Impacts on Latinas in National Politics