{"title":"内容审核作为语言政策","authors":"Mandy Lau","doi":"10.25071/2564-2855.11","DOIUrl":null,"url":null,"abstract":"\n\n\nCommercial content moderation removes harassment, abuse, hate, or any material deemed harmful or offensive from user-generated content platforms. A platform’s content policy and related government regulations are forms of explicit language policy. This kind of policy dictates the classifications of harmful language and aims to change users’ language practices by force. However, the de facto language policy is the actual practice of language moderation by algorithms and humans. Algorithms and human moderators enforce which words (and thereby, content) can be shared, revealing the normative values of hateful, offensive, or free speech and shaping how users adapt and create new language practices. This paper will introduce the process and challenges of commercial content moderation, as well as Canada’s proposed Bill C-36 with its complementary regulatory framework, and briefly discuss the implications for language practices.\n\n\n","PeriodicalId":153997,"journal":{"name":"Working papers in Applied Linguistics and Linguistics at York","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Content moderation as language policy\",\"authors\":\"Mandy Lau\",\"doi\":\"10.25071/2564-2855.11\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n\\n\\nCommercial content moderation removes harassment, abuse, hate, or any material deemed harmful or offensive from user-generated content platforms. A platform’s content policy and related government regulations are forms of explicit language policy. This kind of policy dictates the classifications of harmful language and aims to change users’ language practices by force. However, the de facto language policy is the actual practice of language moderation by algorithms and humans. Algorithms and human moderators enforce which words (and thereby, content) can be shared, revealing the normative values of hateful, offensive, or free speech and shaping how users adapt and create new language practices. 
This paper will introduce the process and challenges of commercial content moderation, as well as Canada’s proposed Bill C-36 with its complementary regulatory framework, and briefly discuss the implications for language practices.\\n\\n\\n\",\"PeriodicalId\":153997,\"journal\":{\"name\":\"Working papers in Applied Linguistics and Linguistics at York\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Working papers in Applied Linguistics and Linguistics at York\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.25071/2564-2855.11\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Working papers in Applied Linguistics and Linguistics at York","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25071/2564-2855.11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Commercial content moderation removes harassment, abuse, hate speech, and other material deemed harmful or offensive from user-generated content platforms. A platform’s content policy and the government regulations that apply to it are forms of explicit language policy: they dictate how harmful language is classified and aim to change users’ language practices through enforcement. The de facto language policy, however, lies in the actual practice of moderation by algorithms and human moderators. By determining which words (and thereby which content) can be shared, they reveal normative judgments about what counts as hateful, offensive, or free speech and shape how users adapt and create new language practices. This paper introduces the process and challenges of commercial content moderation, outlines Canada’s proposed Bill C-36 and its complementary regulatory framework, and briefly discusses the implications for language practices.
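To make the idea of word-level algorithmic enforcement concrete, the following is a minimal illustrative sketch, not drawn from the paper or from any actual platform: a simple blocklist filter of the kind often used as a first automated pass in moderation pipelines. The word list and the example posts are hypothetical placeholders.

```python
import re

# Hypothetical blocklist standing in for a platform's classification of "harmful" words.
BLOCKLIST = {"slurword", "threatword"}

def moderate(post: str) -> str:
    """Return 'removed' if the post contains a blocklisted word, otherwise 'allowed'."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return "removed" if any(tok in BLOCKLIST for tok in tokens) else "allowed"

# A literal match is caught, but a respelled variant passes: one way users adapt
# their language practices around the de facto policy enforced by the filter.
print(moderate("this post contains slurword"))   # removed
print(moderate("this post contains s1urword"))   # allowed (filter evasion)
```

The second example hints at the dynamic the abstract describes: when moderation operates on surface word forms, users respond with respellings and new coinages, so the policy itself shapes emerging language practices.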