{"title":"Nonsuicidal <scp>Self‐Injury</scp> and Content Moderation on <scp>TikTok</scp>","authors":"Valerie Vera","doi":"10.1002/pra2.979","DOIUrl":null,"url":null,"abstract":"ABSTRACT Online nonsuicidal self‐injury communities commonly create and share information on harm reduction strategies and exchange social support on social media platforms, including the short‐form video sharing platform TikTok. While TikTok's Community Guidelines permit users to share personal experiences with mental health topics, TikTok explicitly bans content depicting, promoting, normalizing, or glorifying activities that could lead to self‐harm. As such, TikTok may moderate user‐generated content, leading to exclusion and marginalization in this digital space. Through semi‐structured interviews with eight TikTok users with a history of nonsuicidal self‐injury, this pilot study explores how users experience TikTok's algorithm to create and engage with content on nonsuicidal self‐injury. Findings demonstrate that users understand how to circumnavigate TikTok's algorithm through algospeak (i.e., codewords or turns of phrases) and signaling to maintain visibility on the platform. Further, findings emphasize that users actively engage in self‐surveillance and self‐censorship to create a safe online community. In turn, content moderation can ultimately hinder progress toward the destigmatization of nonsuicidal self‐injury and restrict social support exchanged within online nonsuicidal self‐injury communities.","PeriodicalId":37833,"journal":{"name":"Proceedings of the Association for Information Science and Technology","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Association for Information Science and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/pra2.979","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Social Sciences","Score":null,"Total":0}
Abstract
Online nonsuicidal self‐injury communities commonly create and share information on harm reduction strategies and exchange social support on social media platforms, including the short‐form video sharing platform TikTok. While TikTok's Community Guidelines permit users to share personal experiences with mental health topics, TikTok explicitly bans content depicting, promoting, normalizing, or glorifying activities that could lead to self‐harm. As such, TikTok may moderate user‐generated content, leading to exclusion and marginalization in this digital space. Through semi‐structured interviews with eight TikTok users with a history of nonsuicidal self‐injury, this pilot study explores how users experience TikTok's algorithmic content moderation when creating and engaging with content on nonsuicidal self‐injury. Findings demonstrate that users understand how to circumvent TikTok's algorithm through algospeak (i.e., codewords or turns of phrase) and signaling to maintain visibility on the platform. Further, findings emphasize that users actively engage in self‐surveillance and self‐censorship to create a safe online community. In turn, content moderation can ultimately hinder progress toward the destigmatization of nonsuicidal self‐injury and restrict the social support exchanged within online nonsuicidal self‐injury communities.