
Latest Publications in Science and Engineering Ethics

Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-11-21 | DOI: 10.1007/s11948-024-00522-z
Andrea Ferrario

We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to appropriately relying on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthiness of the AI and the other is the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that credences, namely, beliefs we hold with a degree of confidence, are the appropriate attitude for capturing the facets of the trustworthiness of an AI over time. Then, we introduce a reliabilistic account providing justification for the credences in the trustworthiness of AI, which we derive from Tang's probabilistic theory of justified credence. Our account stipulates that a credence in the trustworthiness of an AI system is justified if and only if it is caused by an assessment process that tends to result in a high proportion of credences for which the actual and perceived trustworthiness of the AI are calibrated. This approach informs research on the ethics of AI and human-AI interactions by providing actionable recommendations on how to measure the reliability of the process through which users perceive the trustworthiness of the system, investigating its calibration to the actual levels of trustworthiness of the AI as well as users' appropriate reliance on the system.
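A minimal sketch of the calibration idea in this account, assuming a toy noise model: the assessment process is reliable, in the paper's sense, when it tends to yield a high proportion of credences that lie close to the AI's actual trustworthiness. The function names, the noise model, and the tolerance below are illustrative, not from the paper.

```python
import random

def assessment_process(actual: float, noise: float) -> float:
    """Hypothetical assessment: the user's credence (perceived
    trustworthiness) is the actual trustworthiness plus bounded noise."""
    perceived = actual + random.uniform(-noise, noise)
    return min(max(perceived, 0.0), 1.0)

def calibrated(actual: float, credence: float, tol: float = 0.1) -> bool:
    """Illustrative calibration test: the credence counts as calibrated
    when it lies within tol of the actual trustworthiness."""
    return abs(actual - credence) <= tol

def reliability(n: int = 10_000, noise: float = 0.15) -> float:
    """Proportion of calibrated credences the process tends to produce;
    a high proportion is what justifies the credences on this account."""
    hits = 0
    for _ in range(n):
        actual = random.random()  # true trustworthiness of the AI system
        hits += calibrated(actual, assessment_process(actual, noise))
    return hits / n

print(f"calibration rate: {reliability():.2%}")
```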

Citations: 0
Know Thyself, Improve Thyself: Personalized LLMs for Self-Knowledge and Moral Enhancement.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-11-21 | DOI: 10.1007/s11948-024-00518-9
Alberto Giubilini, Sebastian Porsdam Mann, Cristina Voinea, Brian Earp, Julian Savulescu

In this paper, we suggest that personalized LLMs trained on information written by or otherwise pertaining to an individual could serve as artificial moral advisors (AMAs) that account for the dynamic nature of personal morality. These LLM-based AMAs would harness users' past and present data to infer and make explicit their sometimes-shifting values and preferences, thereby fostering self-knowledge. Further, these systems may also assist in processes of self-creation, by helping users reflect on the kind of person they want to be and the actions and goals necessary for so becoming. The feasibility of LLMs providing such personalized moral insights remains uncertain pending further technical development. Nevertheless, we argue that this approach addresses limitations in existing AMA proposals reliant on either predetermined values or introspective self-knowledge.
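As a toy illustration of "making shifting values explicit," the sketch below counts value-laden terms in a user's dated writings. The lexicon and journal entries are invented, and a real AMA of the kind proposed would rely on an LLM rather than keyword matching; this only shows the shape of inferring shifting value emphasis from past and present data.

```python
from collections import Counter

# Hypothetical lexicon of value-laden terms; not from the paper.
VALUE_LEXICON = {"honesty", "fairness", "loyalty", "privacy"}

# Invented user writings, keyed by year, standing in for "past and present data".
journal = {
    "2022": "I value loyalty to my team, loyalty above all, and privacy.",
    "2024": "Lately fairness matters most to me, fairness and honesty.",
}

def value_profile(text: str) -> Counter:
    """Count occurrences of value-laden terms in one text."""
    words = [w.strip(".,").lower() for w in text.split()]
    return Counter(w for w in words if w in VALUE_LEXICON)

# Surfacing the shift: loyalty-dominant in 2022, fairness-dominant in 2024.
for year, text in journal.items():
    print(year, value_profile(text).most_common())
```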

Citations: 0
Authorship and Citizen Science: Seven Heuristic Rules.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-29 | DOI: 10.1007/s11948-024-00516-x
Per Sandin, Patrik Baard, William Bülow, Gert Helgesson

Citizen science (CS) is an umbrella term for research with a significant number of contributions from volunteers. Those volunteers can occupy a hybrid role, being both 'researcher' and 'subject' at the same time. This has repercussions for questions about responsibility and credit, e.g. pertaining to the issue of authorship. In this paper, we first review some existing guidelines for authorship and their applicability to CS. Second, we assess the claim that the guidelines from the International Committee of Medical Journal Editors (ICMJE), known as 'the Vancouver guidelines', may lead to exclusion of deserving citizen scientists as authors. We maintain that the idea of including citizen scientists as authors is supported by at least two arguments: transparency and fairness. Third, we argue that it might be plausible to include groups as authors in CS. Fourth and finally, we offer a heuristic list of seven recommendations to be considered when deciding whom to include as an author of a CS publication.

Citations: 0
A Confucian Algorithm for Autonomous Vehicles.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-21 | DOI: 10.1007/s11948-024-00514-z
Tingting Sui, Sebastian Sunday Grève

Any moral algorithm for autonomous vehicles must provide a practical solution to moral problems of the trolley type, in which all possible courses of action will result in damage, injury, or death. This article discusses a hitherto neglected variety of this type of problem, based on a recent psychological study whose results are reported here. It argues that the most adequate solution to this problem will be achieved by a moral algorithm that is based on Confucian ethics. In addition to this philosophical and psychological discussion, the article outlines the mathematics, engineering, and legal implementation of a possible Confucian algorithm. The proposed Confucian algorithm is based on the idea of making it possible to set an autonomous vehicle to allow an increased level of protection for selected people. It is shown that the proposed algorithm can be implemented alongside other moral algorithms, using either the framework of personal ethics settings or that of mandatory ethics settings.
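One way to picture the "increased level of protection for selected people" is as a weight on expected harm during trajectory selection. The sketch below is a guess at that structure under stated assumptions, not the paper's actual mathematics; the protection multiplier, harm probabilities, and candidate trajectories are all invented.

```python
from dataclasses import dataclass

@dataclass
class Person:
    harm_probability: float  # estimated probability this trajectory harms them
    protected: bool          # user-selected person under increased protection

# Hypothetical protection multiplier; the paper's mathematics may differ.
PROTECTION_WEIGHT = 2.0

def trajectory_cost(people: list[Person]) -> float:
    """Weighted expected harm of one candidate trajectory: harm to
    selected people counts more, mirroring the 'increased protection' idea."""
    return sum(
        p.harm_probability * (PROTECTION_WEIGHT if p.protected else 1.0)
        for p in people
    )

def choose_trajectory(candidates: dict[str, list[Person]]) -> str:
    """Pick the candidate trajectory with the lowest weighted expected harm."""
    return min(candidates, key=lambda name: trajectory_cost(candidates[name]))

candidates = {
    "brake":  [Person(0.30, True), Person(0.10, False)],   # cost 0.70
    "swerve": [Person(0.05, True), Person(0.40, False)],   # cost 0.50
}
print(choose_trajectory(candidates))  # -> "swerve" under these numbers
```

Setting PROTECTION_WEIGHT to 1.0 recovers an unweighted harm-minimising rule, which is how such a scheme could sit alongside other moral algorithms under personal or mandatory ethics settings.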

Citations: 0
A Rubik's Cube-Inspired Pedagogical Tool for Teaching and Learning Engineering Ethics.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00506-z
Yuqi Peng

To facilitate engineering students' understanding of engineering ethics and support instructors in developing course content, this study introduces an innovative educational tool drawing inspiration from the Rubik's Cube metaphor. This Engineering Ethics Knowledge Rubik's Cube (EEKRC) integrates six key aspects (ethical theories, codes of ethics, ethical issues, engineering disciplines, stakeholders, and life cycle) identified through an analysis of engineering ethics textbooks and courses across the United States, Singapore, and China. This analysis underpins the selection of the six aspects, reflecting the shared and unique elements of engineering ethics education in these regions. In an engineering ethics course, the EEKRC serves multiple functions: it provides visual support for grasping engineering ethics concepts, acts as a pedagogical guide for both experienced and inexperienced educators in course design, offers a complementary assessment method for evaluating students' learning outcomes, and assists as a reference for students engaging in ethical analysis.
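The cube metaphor suggests a simple six-face lookup structure. In the sketch below the six aspect names come from the abstract, while the entries on each face and the prompt generator are illustrative additions, not the paper's content.

```python
# Minimal sketch of the EEKRC's six aspects as a lookup structure.
EEKRC = {
    "ethical theories": ["utilitarianism", "deontology", "virtue ethics"],
    "codes of ethics": ["NSPE code", "IEEE code"],
    "ethical issues": ["safety", "privacy", "conflicts of interest"],
    "engineering disciplines": ["civil", "mechanical", "software"],
    "stakeholders": ["public", "client", "employer", "environment"],
    "life cycle": ["design", "construction", "operation", "disposal"],
}

def case_prompts(aspects: dict[str, list[str]]) -> list[str]:
    """Generate simple discussion prompts by pairing each aspect with its
    entries, the way an instructor might 'turn' the cube in class."""
    return [f"Consider {item} under the aspect '{aspect}'."
            for aspect, items in aspects.items()
            for item in items]

for prompt in case_prompts(EEKRC)[:3]:
    print(prompt)
```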

Citations: 0
Patient Preferences Concerning Humanoid Features in Healthcare Robots.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00508-x
Dane Leigh Gogoshin

In this paper, I argue that patient preferences concerning human physical attributes associated with race, culture, and gender should be excluded from public healthcare robot design. On one hand, healthcare should be oriented toward (objective, universal) needs. On the other hand, patient well-being (the aim of healthcare) is, in concrete ways, tied to preferences, as is patient satisfaction (a core WHO value). The shift toward patient-centered healthcare places patient preferences into the spotlight. Accordingly, the design of healthcare technology cannot simply disregard patient preferences, even those which are potentially morally problematic. A method for handling these at the design level is thus imperative. By way of uncontroversial starting points, I argue that the priority of the public healthcare system is the fulfillment of patients' therapeutic needs, among which certain potentially morally problematic preferences may be counted. There are further ethical considerations, however, which, taken together, suggest that the potential benefits of upholding these preferences are outweighed by the potential harms.

Citations: 0
Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00509-w
Markus Kneer, Markus Christen

Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime, conducted with participants from the US, Japan and Germany. We find (1) that people manifest a considerable willingness to hold autonomous systems morally responsible, (2) that they partially exculpate human agents when interacting with such systems, and (3) that, more generally, the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss what these results mean for the retribution gap and for other positions in the responsibility gap literature.

Citations: 0
Hidden: A Baker's Dozen Ways in Which Research Reporting is Less Transparent than it Could be and Suggestions for Implementing Einstein's Dictum.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-16 | DOI: 10.1007/s11948-024-00517-w
Abu Bakkar Siddique, Brian Shaw, Johanna Dwyer, David A Fields, Kevin Fontaine, David Hand, Randy Schekman, Jeffrey Alberts, Julie Locher, David B Allison

The tutelage of our mentors as scientists included the analogy that writing a good scientific paper was an exercise in storytelling that omitted unessential details which did not move the story forward or which detracted from the overall message. However, the advice not to get lost in the details had an important flaw. In science, it is the many details of the data themselves, and of the methods used to generate and analyze them, that give conclusions their probative meaning. Facts may sometimes slow or distract from the clarity, tidiness, intrigue, or flow of the narrative, but they are nevertheless important for assessing what was done, the trustworthiness of the science, and the meaning of the findings. Yet many critical elements and facts about research studies may be omitted from the narrative and become hidden from scholarly scrutiny. We describe a "baker's dozen" of shortfalls through which elements pertinent to evaluating the validity of scientific studies are sometimes hidden in reports of the work. Such shortfalls may be intentional or unintentional, or lie somewhere in between. Additionally, shortfalls may occur at the level of the individual, the institution, or the entire system itself. We conclude by proposing countermeasures to these shortfalls.

Citations: 0
Ethical Decision-Making for Self-Driving Vehicles: A Proposed Model & List of Value-Laden Terms that Warrant (Technical) Specification.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-10 | DOI: 10.1007/s11948-024-00513-0
Franziska Poszler, Maximilian Geisslinger, Christoph Lütge

Self-driving vehicles (SDVs) will need to make decisions that carry ethical dimensions and are of normative significance. For example, by choosing a specific trajectory, they determine how risks are distributed among traffic participants. Accordingly, policymakers, standardization organizations and scholars have conceptualized what (shall) constitute(s) ethical decision-making for SDVs. Eventually, these conceptualizations must be converted into specific system requirements to ensure proper technical implementation. Therefore, this article aims to translate critical requirements recently formulated in scholarly work, existing standards, regulatory drafts and guidelines into an explicit five-step ethical decision model for SDVs during hazardous situations. This model states a precise sequence of steps, indicates the guiding ethical principles that inform each step and points out a list of terms that demand further investigation and technical specification. By integrating ethical, legal and engineering considerations, we aim to contribute to the scholarly debate on computational ethics (particularly in autonomous driving) while offering practitioners in the automotive sector a decision-making process for SDVs that is technically viable, legally permissible, ethically grounded and adaptable to societal values. In the future, assessing the actual impact, effectiveness and admissibility of implementing the here sketched theories, terms and the overall decision process requires an empirical evaluation and testing of the overall decision-making model.
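The abstract announces a precise five-step sequence without naming the steps, so the staged pipeline below is a placeholder reconstruction, not the authors' model; the stage names, thresholds, and risk numbers are all invented. It only shows how "a precise sequence of steps" guiding risk distribution among traffic participants could be rendered as system requirements.

```python
from typing import Callable

Stage = Callable[[dict], dict]

def detect_hazard(state: dict) -> dict:
    state["hazard"] = state["min_gap_m"] < 5.0  # illustrative threshold
    return state

def enumerate_trajectories(state: dict) -> dict:
    state["options"] = ["brake", "swerve_left", "hold_lane"]
    return state

def estimate_risk(state: dict) -> dict:
    # Placeholder risk to traffic participants, per option.
    state["risk"] = {"brake": 0.3, "swerve_left": 0.2, "hold_lane": 0.6}
    return state

def apply_ethical_constraints(state: dict) -> dict:
    # E.g., a rule forbidding options above a maximum acceptable risk.
    state["admissible"] = [o for o in state["options"] if state["risk"][o] <= 0.5]
    return state

def select_and_log(state: dict) -> dict:
    state["choice"] = min(state["admissible"], key=state["risk"].get)
    return state

# A five-stage sequence standing in for the paper's five-step model.
PIPELINE: list[Stage] = [detect_hazard, enumerate_trajectories,
                         estimate_risk, apply_ethical_constraints, select_and_log]

state = {"min_gap_m": 3.2}
for stage in PIPELINE:
    state = stage(state)
print(state["choice"])  # -> "swerve_left" under these placeholder numbers
```

Value-laden terms such as "maximum acceptable risk" are exactly the kind the paper flags as warranting technical specification before such a pipeline could be implemented.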

Citations: 0
Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.
IF 2.7 | CAS Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-09 | DOI: 10.1007/s11948-024-00507-y
Salla Westerstrand

The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for the future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justification. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development towards a more ethical direction. The goal is to contribute to the broadening of the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective of societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to ongoing developments in AI ethics and offers a proposition of what principles that provide a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.
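Rawls's difference principle is often operationalised as maximin: prefer the option under which the worst-off group fares best. The sketch below applies that standard rule to invented AI policy options; it illustrates the principle itself, not the paper's proposed guidelines, and the policy names and welfare numbers are hypothetical.

```python
# Maximin comparison of hypothetical AI deployment policies: each list is
# the welfare of social groups under that policy (invented data).
policies = {
    "status_quo":        [5, 6, 9],
    "broad_automation":  [3, 8, 12],
    "regulated_rollout": [6, 7, 8],
}

def maximin(options: dict[str, list[float]]) -> str:
    """Return the option maximising the minimum group welfare,
    one common reading of Rawls's difference principle."""
    return max(options, key=lambda name: min(options[name]))

print(maximin(policies))  # -> "regulated_rollout" (worst-off group at 6)
```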

Citations: 0