A Failure to Regulate? The Demands and Dilemmas of Tackling Illegal Content and Behaviour on Social Media

M. Yar
{"title":"监管不力?处理社交媒体上的非法内容和行为的需求和困境","authors":"M. Yar","doi":"10.52306/01010318rvze9940","DOIUrl":null,"url":null,"abstract":"The proliferation and user uptake of social media applications has brought in its wake a growing problem of illegal and harmful interactions and content online. Recent controversy has arisen around issues ranging from the alleged online manipulation of the 2016 US presidential election by Russian hackers and ‘trolls’, to the misuse of users’ Facebook data by the political consulting firm Cambridge Analytica (Hall 2018; Swaine & Bennetts 2018). These recent issues notwithstanding, in the UK context, ongoing concern has focused in particular upon (a) sexually-oriented and abusive content about or directed at children, and (b) content that is racially or religiously hateful, incites violence and promotes or celebrates terrorist violence. Legal innovation has sought to make specific provision for such online offences, and offenders have been subject to prosecution in some widely-publicised cases. Nevertheless, as a whole, the business of regulating (identifying, blocking, removing and reporting) offending content has been left largely to social media providers themselves. This has been sustained by concerns both practical (the amount of public resource that would be required to police social media) and political (concerns about excessive state surveillance and curtailment of free speech in liberal democracies). However, growing evidence about providers’ unwillingness and/or inability to effectively stem the flow of illegal and harmful content has created a crisis for the existing self-regulatory model. Consequently, we now see a range of proposals that would take a much more coercive and punitive stance toward media platforms, so as to compel them into taking more concerted action. Taking the UK as a primary focus, these proposals are considered and assessed, with a view to charting possible future configurations for tackling illegal social media content.","PeriodicalId":314035,"journal":{"name":"The International Journal of Cybersecurity Intelligence and Cybercrime","volume":"144 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":"{\"title\":\"A Failure to Regulate? The Demands and Dilemmas of Tackling Illegal Content and Behaviour on Social Media\",\"authors\":\"M. Yar\",\"doi\":\"10.52306/01010318rvze9940\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The proliferation and user uptake of social media applications has brought in its wake a growing problem of illegal and harmful interactions and content online. Recent controversy has arisen around issues ranging from the alleged online manipulation of the 2016 US presidential election by Russian hackers and ‘trolls’, to the misuse of users’ Facebook data by the political consulting firm Cambridge Analytica (Hall 2018; Swaine & Bennetts 2018). These recent issues notwithstanding, in the UK context, ongoing concern has focused in particular upon (a) sexually-oriented and abusive content about or directed at children, and (b) content that is racially or religiously hateful, incites violence and promotes or celebrates terrorist violence. Legal innovation has sought to make specific provision for such online offences, and offenders have been subject to prosecution in some widely-publicised cases. 
Nevertheless, as a whole, the business of regulating (identifying, blocking, removing and reporting) offending content has been left largely to social media providers themselves. This has been sustained by concerns both practical (the amount of public resource that would be required to police social media) and political (concerns about excessive state surveillance and curtailment of free speech in liberal democracies). However, growing evidence about providers’ unwillingness and/or inability to effectively stem the flow of illegal and harmful content has created a crisis for the existing self-regulatory model. Consequently, we now see a range of proposals that would take a much more coercive and punitive stance toward media platforms, so as to compel them into taking more concerted action. Taking the UK as a primary focus, these proposals are considered and assessed, with a view to charting possible future configurations for tackling illegal social media content.\",\"PeriodicalId\":314035,\"journal\":{\"name\":\"The International Journal of Cybersecurity Intelligence and Cybercrime\",\"volume\":\"144 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"27\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The International Journal of Cybersecurity Intelligence and Cybercrime\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.52306/01010318rvze9940\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Journal of Cybersecurity Intelligence and Cybercrime","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52306/01010318rvze9940","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 27

Abstract

The proliferation and user uptake of social media applications has brought in its wake a growing problem of illegal and harmful interactions and content online. Recent controversy has arisen around issues ranging from the alleged online manipulation of the 2016 US presidential election by Russian hackers and ‘trolls’, to the misuse of users’ Facebook data by the political consulting firm Cambridge Analytica (Hall 2018; Swaine & Bennetts 2018). These recent issues notwithstanding, in the UK context, ongoing concern has focused in particular upon (a) sexually-oriented and abusive content about or directed at children, and (b) content that is racially or religiously hateful, incites violence and promotes or celebrates terrorist violence. Legal innovation has sought to make specific provision for such online offences, and offenders have been subject to prosecution in some widely-publicised cases. Nevertheless, as a whole, the business of regulating (identifying, blocking, removing and reporting) offending content has been left largely to social media providers themselves. This has been sustained by concerns both practical (the amount of public resource that would be required to police social media) and political (concerns about excessive state surveillance and curtailment of free speech in liberal democracies). However, growing evidence about providers’ unwillingness and/or inability to effectively stem the flow of illegal and harmful content has created a crisis for the existing self-regulatory model. Consequently, we now see a range of proposals that would take a much more coercive and punitive stance toward media platforms, so as to compel them into taking more concerted action. Taking the UK as a primary focus, these proposals are considered and assessed, with a view to charting possible future configurations for tackling illegal social media content.