{"title":"A Failure to Regulate? The Demands and Dilemmas of Tackling Illegal Content and Behaviour on Social Media","authors":"M. Yar","doi":"10.52306/01010318rvze9940","DOIUrl":null,"url":null,"abstract":"The proliferation and user uptake of social media applications has brought in its wake a growing problem of illegal and harmful interactions and content online. Recent controversy has arisen around issues ranging from the alleged online manipulation of the 2016 US presidential election by Russian hackers and ‘trolls’, to the misuse of users’ Facebook data by the political consulting firm Cambridge Analytica (Hall 2018; Swaine & Bennetts 2018). These recent issues notwithstanding, in the UK context, ongoing concern has focused in particular upon (a) sexually-oriented and abusive content about or directed at children, and (b) content that is racially or religiously hateful, incites violence and promotes or celebrates terrorist violence. Legal innovation has sought to make specific provision for such online offences, and offenders have been subject to prosecution in some widely-publicised cases. Nevertheless, as a whole, the business of regulating (identifying, blocking, removing and reporting) offending content has been left largely to social media providers themselves. This has been sustained by concerns both practical (the amount of public resource that would be required to police social media) and political (concerns about excessive state surveillance and curtailment of free speech in liberal democracies). However, growing evidence about providers’ unwillingness and/or inability to effectively stem the flow of illegal and harmful content has created a crisis for the existing self-regulatory model. Consequently, we now see a range of proposals that would take a much more coercive and punitive stance toward media platforms, so as to compel them into taking more concerted action. Taking the UK as a primary focus, these proposals are considered and assessed, with a view to charting possible future configurations for tackling illegal social media content.","PeriodicalId":314035,"journal":{"name":"The International Journal of Cybersecurity Intelligence and Cybercrime","volume":"144 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Journal of Cybersecurity Intelligence and Cybercrime","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52306/01010318rvze9940","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 27
Abstract
The proliferation and user uptake of social media applications have brought in their wake a growing problem of illegal and harmful interactions and content online. Recent controversy has arisen around issues ranging from the alleged online manipulation of the 2016 US presidential election by Russian hackers and ‘trolls’, to the misuse of users’ Facebook data by the political consulting firm Cambridge Analytica (Hall 2018; Swaine & Bennetts 2018). These recent issues notwithstanding, in the UK context, ongoing concern has focused in particular upon (a) sexually-oriented and abusive content about or directed at children, and (b) content that is racially or religiously hateful, incites violence, or promotes or celebrates terrorist violence. Legal innovation has sought to make specific provision for such online offences, and offenders have been prosecuted in some widely-publicised cases. Nevertheless, as a whole, the business of regulating (identifying, blocking, removing and reporting) offending content has been left largely to social media providers themselves. This arrangement has been sustained by concerns both practical (the amount of public resource that would be required to police social media) and political (fears of excessive state surveillance and the curtailment of free speech in liberal democracies). However, growing evidence of providers’ unwillingness and/or inability to effectively stem the flow of illegal and harmful content has created a crisis for the existing self-regulatory model. Consequently, we now see a range of proposals that would take a much more coercive and punitive stance toward media platforms, so as to compel them to take more concerted action. Taking the UK as its primary focus, this article considers and assesses these proposals, with a view to charting possible future configurations for tackling illegal social media content.