Value alignment, human enhancement, and moral revolutions

Ariela Tubert, Justin Tiehen
{"title":"价值观一致,人性提升,道德革命","authors":"Ariela Tubert, Justin Tiehen","doi":"10.1080/0020174x.2023.2261506","DOIUrl":null,"url":null,"abstract":"ABSTRACTHuman beings are internally inconsistent in various ways. One way to develop this thought involves using the language of value alignment: the values we hold are not always aligned with our behavior and are not always aligned with each other. Because of this self-misalignment, there is room for potential projects of human enhancement that involve achieving a greater degree of value alignment than we presently have. Relatedly, discussions of AI ethics sometimes focus on what is known as the value alignment problem, the challenge of how to build AI that acts in accordance with our human values. We argue that there is an especially close connection between solving the value alignment problem in AI ethics and using AI to pursue certain forms of human enhancement. But in addition, we also argue that there are important limits to what kinds of human enhancement can be pursued in this way, because some forms of human enhancement—namely moral revolutions—involve a kind of value misalignment rather than alignment.KEYWORDS: Artificial intelligencehuman enhancementmoral revolutions AcknowledgementsBoth authors would like to thank the National Endowment for the Humanities for support for their work, the University of Puget Sound and the John Lantz Senior Fellowship for Research or Advanced Study, and the participants at the Philosophy, AI, and Society Workshop at Stanford University. Ariela Tubert would like to thank the audience at the Ethics and Broader Implications of Technology Conference at the University of Nebraska at Lincoln.Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1 See for instance Russell (Citation2019), Christian (Citation2020), Gabriel (Citation2020), Wallach and Vallor (Citation2020).2 Appiah (Citation2010). See also Baker (Citation2019).3 Russell and Norvig (Citation2010).4 Gershman (Citation2021, 156) makes this point while arguing that the ‘folklore’ about how machine learning has its origins in neuroscience overstates the level of influence neuroscience has actually had.5 See for instance Kahneman, Slovic, and Tversky (Citation1982), Kahneman and Tversky (Citation2000), Kahneman (Citation2011).6 Lieder et al. (Citation2019).7 Lieder et al. (Citation2019, 1096).8 Lieder and Griffiths (Citation2019). The notion of ‘rational analysis’ is drawn from Anderson (Citation1990).9 This is a point of focus in Griffiths (Citation2020).10 Lieder et al. (Citation2019, 1096).11 Lieder et al. (Citation2019, 1096). On gamification and AI more generally, see Deterding et al. (Citation2011).12 Chasse (Citation2021).13 Lieder et al. (Citation2019).14 Sinnott-Armstrong (Citation2008).15 Tversky and Kahneman (Citation1981).16 As Kühberger (Citation2017, 79) notes, the effect is robust and has been replicated across hundreds of papers.17 Kahneman and Tversky (Citation1979).18 Sometimes this point is used as part of an argument that we should be skeptical of moral facts at all, but this move requires a further inference. For influential discussions of some of the issues involved, see Street (Citation2006), Joyce (Citation2007).19 Singer (Citation2005), Greene (Citation2007).20 See for instance Haidt (Citation2012).21 Kass (Citation1997).22 Nussbaum (Citation2004). 
Kelly (Citation2011) offers an extended discussion of the moral significance of disgust.23 The notion of an expanding circle of moral concern comes from Singer (Citation2011).24 On Tay, see Victor (Citation2016). On the Turkish translation case, see Olson (Citation2018).25 On search engines, see Noble (Citation2018). On facial recognition systems, Buolamwini and Gebru (Citation2018). On hiring decisions, Dastin (Citation2018). On loan and credit card applications, Angwin et al. (Citation2016). On predictive policing, O’Neil (Citation2016). On sentencing and parole decisions, Eubanks (Citation2018).26 See for instance Kleinberg et al. (Citation2018), Kleinberg et al. (Citation2020).27 See for example Dovidio and Gaertner (Citation2000), Amodio and Devine (Citation2006), Gendler (Citation2011), Levy (Citation2017). For a critical assessment of work on implicit bias, though, see Machery (Citation2022).28 Wallach and Allen (Citation2009). We note though that they frame their discussion in terms of building moral machines rather than in terms of value alignment. For Wallach’s thoughts about value alignment, see Wallach and Vallor (Citation2020).29 Mill (Citation1861/1998). Discussions of a utilitarian-oriented AI include Gips (Citation1994), Grau (Citation2011), and Russell (Citation2019).30 Kant (Citation1785/2012). Thomas Powers’ (Citation2006) ‘Prospects for a Kantian Machine’ connects the view to AI.31 Asimov (Citation1950).32 Each of these examples is mentioned by Wallach and Allen (Citation2009, 79).33 Shortliffe and Buchanan (Citation1975).34 Savulescu and Maslen (Citation2015), Giubilini and Savulescu (Citation2018). For critical discussion of the proposal that is still sympathetic to the idea of pursuing AI-based human moral enhancement, see Lara and Decker (Citation2020).35 Deterding (Citation2014) discusses moral gamification, defending a ‘eudaimonic design’ approach.36 Millar (Citation2015) and Contissa, Lagioia, and Sartor (Citation2017) argue in favor of user control over the ethical settings on autonomous cars, while Lin (Citation2014) and Gogoll and Müller (Citation2017) argue against the idea.37 Santurkar et al. (Citation2023). See also Rozado (Citation2023).38 Thompson, Hsu, and Myers (Citation2023).39 See Narayanan and Kapoor (Citation2023) for a critical discussion of Santurkar et al. (Citation2023).40 OpenAI (Citation2023).41 Steinberg (Citation2023).42 Walker (Citation2023).43 Marcus (Citation2023).44 Appiah (Citation2010). Klenk et al. (Citation2022) provides a survey of recent work on moral revolutions.45 Appiah (Citation2010:, 8), Kuhn (Citation1962). Klenk et al. (Citation2022) emphasize how this connection to Kuhn is common also in other authors discussing moral revolutions.46 Wallach and Allen (Citation2009, 79).47 LeCun, Bengio, and Hinton (Citation2015), Bengio, LeCun, and Hinton (Citation2021).48 Ensmenger (Citation2012).49 Holodny (Citation2017).50 Metz (Citation2016).51 Knight (Citation2017).52 Strogatz (Citation2018).53 Rini (Citation2017) also uses AlphaGo’s Move 37 as an analogy for a radically new AI moral view.54 Appiah (Citation2010:, 66), Klenk et al. 
(Citation2022).55 See discussions of what is needed for significant society-wide moral progress: Moody-Adams (Citation2017), Rorty (Citation2006), Nussbaum (Citation2007).56 Appiah (Citation2010).57 On AI and the risk of value lock-in, see for instance Ord (Citation2020: Chapter 5), MacAskill (Citation2022: Chapter 4).58 Kenward and Sinclair (Citation2021).","PeriodicalId":47504,"journal":{"name":"Inquiry-An Interdisciplinary Journal of Philosophy","volume":null,"pages":null},"PeriodicalIF":1.0000,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Value alignment, human enhancement, and moral revolutions\",\"authors\":\"Ariela Tubert, Justin Tiehen\",\"doi\":\"10.1080/0020174x.2023.2261506\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACTHuman beings are internally inconsistent in various ways. One way to develop this thought involves using the language of value alignment: the values we hold are not always aligned with our behavior and are not always aligned with each other. Because of this self-misalignment, there is room for potential projects of human enhancement that involve achieving a greater degree of value alignment than we presently have. Relatedly, discussions of AI ethics sometimes focus on what is known as the value alignment problem, the challenge of how to build AI that acts in accordance with our human values. We argue that there is an especially close connection between solving the value alignment problem in AI ethics and using AI to pursue certain forms of human enhancement. But in addition, we also argue that there are important limits to what kinds of human enhancement can be pursued in this way, because some forms of human enhancement—namely moral revolutions—involve a kind of value misalignment rather than alignment.KEYWORDS: Artificial intelligencehuman enhancementmoral revolutions AcknowledgementsBoth authors would like to thank the National Endowment for the Humanities for support for their work, the University of Puget Sound and the John Lantz Senior Fellowship for Research or Advanced Study, and the participants at the Philosophy, AI, and Society Workshop at Stanford University. Ariela Tubert would like to thank the audience at the Ethics and Broader Implications of Technology Conference at the University of Nebraska at Lincoln.Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1 See for instance Russell (Citation2019), Christian (Citation2020), Gabriel (Citation2020), Wallach and Vallor (Citation2020).2 Appiah (Citation2010). See also Baker (Citation2019).3 Russell and Norvig (Citation2010).4 Gershman (Citation2021, 156) makes this point while arguing that the ‘folklore’ about how machine learning has its origins in neuroscience overstates the level of influence neuroscience has actually had.5 See for instance Kahneman, Slovic, and Tversky (Citation1982), Kahneman and Tversky (Citation2000), Kahneman (Citation2011).6 Lieder et al. (Citation2019).7 Lieder et al. (Citation2019, 1096).8 Lieder and Griffiths (Citation2019). The notion of ‘rational analysis’ is drawn from Anderson (Citation1990).9 This is a point of focus in Griffiths (Citation2020).10 Lieder et al. (Citation2019, 1096).11 Lieder et al. (Citation2019, 1096). On gamification and AI more generally, see Deterding et al. (Citation2011).12 Chasse (Citation2021).13 Lieder et al. 
(Citation2019).14 Sinnott-Armstrong (Citation2008).15 Tversky and Kahneman (Citation1981).16 As Kühberger (Citation2017, 79) notes, the effect is robust and has been replicated across hundreds of papers.17 Kahneman and Tversky (Citation1979).18 Sometimes this point is used as part of an argument that we should be skeptical of moral facts at all, but this move requires a further inference. For influential discussions of some of the issues involved, see Street (Citation2006), Joyce (Citation2007).19 Singer (Citation2005), Greene (Citation2007).20 See for instance Haidt (Citation2012).21 Kass (Citation1997).22 Nussbaum (Citation2004). Kelly (Citation2011) offers an extended discussion of the moral significance of disgust.23 The notion of an expanding circle of moral concern comes from Singer (Citation2011).24 On Tay, see Victor (Citation2016). On the Turkish translation case, see Olson (Citation2018).25 On search engines, see Noble (Citation2018). On facial recognition systems, Buolamwini and Gebru (Citation2018). On hiring decisions, Dastin (Citation2018). On loan and credit card applications, Angwin et al. (Citation2016). On predictive policing, O’Neil (Citation2016). On sentencing and parole decisions, Eubanks (Citation2018).26 See for instance Kleinberg et al. (Citation2018), Kleinberg et al. (Citation2020).27 See for example Dovidio and Gaertner (Citation2000), Amodio and Devine (Citation2006), Gendler (Citation2011), Levy (Citation2017). For a critical assessment of work on implicit bias, though, see Machery (Citation2022).28 Wallach and Allen (Citation2009). We note though that they frame their discussion in terms of building moral machines rather than in terms of value alignment. For Wallach’s thoughts about value alignment, see Wallach and Vallor (Citation2020).29 Mill (Citation1861/1998). Discussions of a utilitarian-oriented AI include Gips (Citation1994), Grau (Citation2011), and Russell (Citation2019).30 Kant (Citation1785/2012). Thomas Powers’ (Citation2006) ‘Prospects for a Kantian Machine’ connects the view to AI.31 Asimov (Citation1950).32 Each of these examples is mentioned by Wallach and Allen (Citation2009, 79).33 Shortliffe and Buchanan (Citation1975).34 Savulescu and Maslen (Citation2015), Giubilini and Savulescu (Citation2018). For critical discussion of the proposal that is still sympathetic to the idea of pursuing AI-based human moral enhancement, see Lara and Decker (Citation2020).35 Deterding (Citation2014) discusses moral gamification, defending a ‘eudaimonic design’ approach.36 Millar (Citation2015) and Contissa, Lagioia, and Sartor (Citation2017) argue in favor of user control over the ethical settings on autonomous cars, while Lin (Citation2014) and Gogoll and Müller (Citation2017) argue against the idea.37 Santurkar et al. (Citation2023). See also Rozado (Citation2023).38 Thompson, Hsu, and Myers (Citation2023).39 See Narayanan and Kapoor (Citation2023) for a critical discussion of Santurkar et al. (Citation2023).40 OpenAI (Citation2023).41 Steinberg (Citation2023).42 Walker (Citation2023).43 Marcus (Citation2023).44 Appiah (Citation2010). Klenk et al. (Citation2022) provides a survey of recent work on moral revolutions.45 Appiah (Citation2010:, 8), Kuhn (Citation1962). Klenk et al. 
(Citation2022) emphasize how this connection to Kuhn is common also in other authors discussing moral revolutions.46 Wallach and Allen (Citation2009, 79).47 LeCun, Bengio, and Hinton (Citation2015), Bengio, LeCun, and Hinton (Citation2021).48 Ensmenger (Citation2012).49 Holodny (Citation2017).50 Metz (Citation2016).51 Knight (Citation2017).52 Strogatz (Citation2018).53 Rini (Citation2017) also uses AlphaGo’s Move 37 as an analogy for a radically new AI moral view.54 Appiah (Citation2010:, 66), Klenk et al. (Citation2022).55 See discussions of what is needed for significant society-wide moral progress: Moody-Adams (Citation2017), Rorty (Citation2006), Nussbaum (Citation2007).56 Appiah (Citation2010).57 On AI and the risk of value lock-in, see for instance Ord (Citation2020: Chapter 5), MacAskill (Citation2022: Chapter 4).58 Kenward and Sinclair (Citation2021).\",\"PeriodicalId\":47504,\"journal\":{\"name\":\"Inquiry-An Interdisciplinary Journal of Philosophy\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2023-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Inquiry-An Interdisciplinary Journal of Philosophy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/0020174x.2023.2261506\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inquiry-An Interdisciplinary Journal of Philosophy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/0020174x.2023.2261506","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0

Abstract

Human beings are internally inconsistent in various ways. One way to develop this thought involves using the language of value alignment: the values we hold are not always aligned with our behavior and are not always aligned with each other. Because of this self-misalignment, there is room for potential projects of human enhancement that involve achieving a greater degree of value alignment than we presently have. Relatedly, discussions of AI ethics sometimes focus on what is known as the value alignment problem, the challenge of how to build AI that acts in accordance with our human values. We argue that there is an especially close connection between solving the value alignment problem in AI ethics and using AI to pursue certain forms of human enhancement. But in addition, we also argue that there are important limits to what kinds of human enhancement can be pursued in this way, because some forms of human enhancement—namely moral revolutions—involve a kind of value misalignment rather than alignment.

Keywords: artificial intelligence; human enhancement; moral revolutions

Acknowledgements

Both authors would like to thank the National Endowment for the Humanities for support for their work, the University of Puget Sound and the John Lantz Senior Fellowship for Research or Advanced Study, and the participants at the Philosophy, AI, and Society Workshop at Stanford University. Ariela Tubert would like to thank the audience at the Ethics and Broader Implications of Technology Conference at the University of Nebraska at Lincoln.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. See for instance Russell (2019), Christian (2020), Gabriel (2020), Wallach and Vallor (2020).
2. Appiah (2010). See also Baker (2019).
3. Russell and Norvig (2010).
4. Gershman (2021, 156) makes this point while arguing that the ‘folklore’ about how machine learning has its origins in neuroscience overstates the level of influence neuroscience has actually had.
5. See for instance Kahneman, Slovic, and Tversky (1982), Kahneman and Tversky (2000), Kahneman (2011).
6. Lieder et al. (2019).
7. Lieder et al. (2019, 1096).
8. Lieder and Griffiths (2019). The notion of ‘rational analysis’ is drawn from Anderson (1990).
9. This is a point of focus in Griffiths (2020).
10. Lieder et al. (2019, 1096).
11. Lieder et al. (2019, 1096). On gamification and AI more generally, see Deterding et al. (2011).
12. Chasse (2021).
13. Lieder et al. (2019).
14. Sinnott-Armstrong (2008).
15. Tversky and Kahneman (1981).
16. As Kühberger (2017, 79) notes, the effect is robust and has been replicated across hundreds of papers.
17. Kahneman and Tversky (1979).
18. Sometimes this point is used as part of an argument that we should be skeptical of moral facts at all, but this move requires a further inference. For influential discussions of some of the issues involved, see Street (2006), Joyce (2007).
19. Singer (2005), Greene (2007).
20. See for instance Haidt (2012).
21. Kass (1997).
22. Nussbaum (2004). Kelly (2011) offers an extended discussion of the moral significance of disgust.
23. The notion of an expanding circle of moral concern comes from Singer (2011).
24. On Tay, see Victor (2016). On the Turkish translation case, see Olson (2018).
25. On search engines, see Noble (2018). On facial recognition systems, Buolamwini and Gebru (2018). On hiring decisions, Dastin (2018). On loan and credit card applications, Angwin et al. (2016). On predictive policing, O’Neil (2016). On sentencing and parole decisions, Eubanks (2018).
26. See for instance Kleinberg et al. (2018), Kleinberg et al. (2020).
27. See for example Dovidio and Gaertner (2000), Amodio and Devine (2006), Gendler (2011), Levy (2017). For a critical assessment of work on implicit bias, though, see Machery (2022).
28. Wallach and Allen (2009). We note though that they frame their discussion in terms of building moral machines rather than in terms of value alignment. For Wallach’s thoughts about value alignment, see Wallach and Vallor (2020).
29. Mill (1861/1998). Discussions of a utilitarian-oriented AI include Gips (1994), Grau (2011), and Russell (2019).
30. Kant (1785/2012). Thomas Powers’ (2006) ‘Prospects for a Kantian Machine’ connects the view to AI.
31. Asimov (1950).
32. Each of these examples is mentioned by Wallach and Allen (2009, 79).
33. Shortliffe and Buchanan (1975).
34. Savulescu and Maslen (2015), Giubilini and Savulescu (2018). For critical discussion of the proposal that is still sympathetic to the idea of pursuing AI-based human moral enhancement, see Lara and Decker (2020).
35. Deterding (2014) discusses moral gamification, defending a ‘eudaimonic design’ approach.
36. Millar (2015) and Contissa, Lagioia, and Sartor (2017) argue in favor of user control over the ethical settings on autonomous cars, while Lin (2014) and Gogoll and Müller (2017) argue against the idea.
37. Santurkar et al. (2023). See also Rozado (2023).
38. Thompson, Hsu, and Myers (2023).
39. See Narayanan and Kapoor (2023) for a critical discussion of Santurkar et al. (2023).
40. OpenAI (2023).
41. Steinberg (2023).
42. Walker (2023).
43. Marcus (2023).
44. Appiah (2010). Klenk et al. (2022) provides a survey of recent work on moral revolutions.
45. Appiah (2010, 8), Kuhn (1962). Klenk et al. (2022) emphasize how this connection to Kuhn is common also in other authors discussing moral revolutions.
46. Wallach and Allen (2009, 79).
47. LeCun, Bengio, and Hinton (2015), Bengio, LeCun, and Hinton (2021).
48. Ensmenger (2012).
49. Holodny (2017).
50. Metz (2016).
51. Knight (2017).
52. Strogatz (2018).
53. Rini (2017) also uses AlphaGo’s Move 37 as an analogy for a radically new AI moral view.
54. Appiah (2010, 66), Klenk et al. (2022).
55. See discussions of what is needed for significant society-wide moral progress: Moody-Adams (2017), Rorty (2006), Nussbaum (2007).
56. Appiah (2010).
57. On AI and the risk of value lock-in, see for instance Ord (2020: Chapter 5), MacAskill (2022: Chapter 4).
58. Kenward and Sinclair (2021).