What’s Wrong with Automated Influence
Claire Benn, Seth Lazar
Canadian Journal of Philosophy 52(1): 125–148. Published 2021-09-24. DOI: 10.1017/can.2021.23

Abstract: Automated Influence is the use of Artificial Intelligence (AI) to collect, integrate, and analyse people’s data in order to deliver targeted interventions that shape their behaviour. We consider three central objections against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case how a structural version of that objection has more purchase than its interactional counterpart. By rejecting the interactional focus of “AI Ethics” in favour of a more structural, political philosophy of AI, we show that the real problem with Automated Influence is the crisis of legitimacy that it precipitates.