Nonsuicidal Self-Injury and Content Moderation on TikTok

Valerie Vera
{"title":"TikTok上的非自杀式自残和内容审核","authors":"Valerie Vera","doi":"10.1002/pra2.979","DOIUrl":null,"url":null,"abstract":"ABSTRACT Online nonsuicidal self‐injury communities commonly create and share information on harm reduction strategies and exchange social support on social media platforms, including the short‐form video sharing platform TikTok. While TikTok's Community Guidelines permit users to share personal experiences with mental health topics, TikTok explicitly bans content depicting, promoting, normalizing, or glorifying activities that could lead to self‐harm. As such, TikTok may moderate user‐generated content, leading to exclusion and marginalization in this digital space. Through semi‐structured interviews with eight TikTok users with a history of nonsuicidal self‐injury, this pilot study explores how users experience TikTok's algorithm to create and engage with content on nonsuicidal self‐injury. Findings demonstrate that users understand how to circumnavigate TikTok's algorithm through algospeak (i.e., codewords or turns of phrases) and signaling to maintain visibility on the platform. Further, findings emphasize that users actively engage in self‐surveillance and self‐censorship to create a safe online community. In turn, content moderation can ultimately hinder progress toward the destigmatization of nonsuicidal self‐injury and restrict social support exchanged within online nonsuicidal self‐injury communities.","PeriodicalId":37833,"journal":{"name":"Proceedings of the Association for Information Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Nonsuicidal <scp>Self‐Injury</scp> and Content Moderation on <scp>TikTok</scp>\",\"authors\":\"Valerie Vera\",\"doi\":\"10.1002/pra2.979\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Online nonsuicidal self‐injury communities commonly create and share information on harm reduction strategies and exchange social support on social media platforms, including the short‐form video sharing platform TikTok. While TikTok's Community Guidelines permit users to share personal experiences with mental health topics, TikTok explicitly bans content depicting, promoting, normalizing, or glorifying activities that could lead to self‐harm. As such, TikTok may moderate user‐generated content, leading to exclusion and marginalization in this digital space. Through semi‐structured interviews with eight TikTok users with a history of nonsuicidal self‐injury, this pilot study explores how users experience TikTok's algorithm to create and engage with content on nonsuicidal self‐injury. Findings demonstrate that users understand how to circumnavigate TikTok's algorithm through algospeak (i.e., codewords or turns of phrases) and signaling to maintain visibility on the platform. Further, findings emphasize that users actively engage in self‐surveillance and self‐censorship to create a safe online community. 
In turn, content moderation can ultimately hinder progress toward the destigmatization of nonsuicidal self‐injury and restrict social support exchanged within online nonsuicidal self‐injury communities.\",\"PeriodicalId\":37833,\"journal\":{\"name\":\"Proceedings of the Association for Information Science and Technology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Association for Information Science and Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/pra2.979\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Association for Information Science and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/pra2.979","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 0

Abstract

Online nonsuicidal self-injury communities commonly create and share information on harm reduction strategies and exchange social support on social media platforms, including the short-form video sharing platform TikTok. While TikTok's Community Guidelines permit users to share personal experiences with mental health topics, TikTok explicitly bans content depicting, promoting, normalizing, or glorifying activities that could lead to self-harm. As such, TikTok may moderate user-generated content, leading to exclusion and marginalization in this digital space. Through semi-structured interviews with eight TikTok users with a history of nonsuicidal self-injury, this pilot study explores how users experience TikTok's algorithm as they create and engage with content on nonsuicidal self-injury. Findings demonstrate that users understand how to circumvent TikTok's algorithm through algospeak (i.e., codewords or turns of phrase) and signaling to maintain visibility on the platform. Further, findings emphasize that users actively engage in self-surveillance and self-censorship to create a safe online community. In turn, content moderation can ultimately hinder progress toward the destigmatization of nonsuicidal self-injury and restrict the social support exchanged within online nonsuicidal self-injury communities.
Source journal

Proceedings of the Association for Information Science and Technology
Field: Social Sciences - Library and Information Sciences
CiteScore: 1.30
Self-citation rate: 0.00%
Articles published: 164