Sophia Chen
Nature Computational Science, vol. 4, no. 9, pp. 629–632 (24 September 2024)
DOI: 10.1038/s43588-024-00695-4
PDF: https://www.nature.com/articles/s43588-024-00695-4.pdf
The lost data: how AI systems censor LGBTQ+ content in the name of safety
Many AI companies implement safety systems to protect users from offensive or inaccurate content. Though well intentioned, these filters can exacerbate existing inequalities, and data shows that they have disproportionately removed LGBTQ+ content.