
Science and Engineering Ethics: Latest Publications

Authorship and Citizen Science: Seven Heuristic Rules.
IF 2.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-29 | DOI: 10.1007/s11948-024-00516-x
Per Sandin, Patrik Baard, William Bülow, Gert Helgesson

Citizen science (CS) is an umbrella term for research that relies on significant contributions from volunteers. These volunteers can occupy a hybrid role, being both 'researcher' and 'subject' at the same time. This has repercussions for questions about responsibility and credit, e.g. those pertaining to authorship. In this paper, we first review some existing guidelines for authorship and their applicability to CS. Second, we assess the claim that the guidelines from the International Committee of Medical Journal Editors (ICMJE), known as 'the Vancouver guidelines', may lead to the exclusion of deserving citizen scientists as authors. We maintain that the idea of including citizen scientists as authors is supported by at least two arguments: transparency and fairness. Third, we argue that it might be plausible to include groups as authors in CS. Fourth and finally, we offer a heuristic list of seven recommendations to be considered when deciding whom to include as an author of a CS publication.

Citations: 0
A Confucian Algorithm for Autonomous Vehicles.
IF 2.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-21 | DOI: 10.1007/s11948-024-00514-z
Tingting Sui, Sebastian Sunday Grève

Any moral algorithm for autonomous vehicles must provide a practical solution to moral problems of the trolley type, in which all possible courses of action will result in damage, injury, or death. This article discusses a hitherto neglected variety of this type of problem, based on a recent psychological study whose results are reported here. It argues that the most adequate solution to this problem will be achieved by a moral algorithm that is based on Confucian ethics. In addition to this philosophical and psychological discussion, the article outlines the mathematics, engineering, and legal implementation of a possible Confucian algorithm. The proposed Confucian algorithm is based on the idea of making it possible to set an autonomous vehicle to allow an increased level of protection for selected people. It is shown that the proposed algorithm can be implemented alongside other moral algorithms, using either the framework of personal ethics settings or that of mandatory ethics settings.
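
The abstract leaves the algorithm's internals to the paper itself, so the following is a minimal, hypothetical sketch of the one mechanism it does describe: an adjustable protection level for selected people. Candidate trajectories are scored by expected harm, with the risks of specially protected persons weighted more heavily; all names, numbers, and the weighting scheme are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class Trajectory:
    """A candidate path with an estimated probability of serious harm per person."""
    name: str
    harm_risk: dict[str, float]  # person id -> probability of serious harm


def weighted_risk(traj: Trajectory, protection: dict[str, float]) -> float:
    """Sum the harm risks, scaling up those of specially protected persons.

    `protection` maps a person id to a weight >= 1.0; a larger weight makes
    harming that person costlier, mirroring the idea of an adjustable
    protection level for selected people.
    """
    return sum(p * protection.get(person, 1.0)
               for person, p in traj.harm_risk.items())


def choose_trajectory(options: list[Trajectory],
                      protection: dict[str, float]) -> Trajectory:
    """Pick the option with the lowest protection-weighted total risk."""
    return min(options, key=lambda t: weighted_risk(t, protection))


# Hypothetical hazard: both maneuvers carry some risk; the owner has assigned
# the passenger ("child") an elevated protection weight.
options = [
    Trajectory("swerve_left", {"child": 0.10, "pedestrian": 0.40}),
    Trajectory("brake_straight", {"child": 0.30, "pedestrian": 0.20}),
]
print(choose_trajectory(options, protection={"child": 2.0}).name)  # swerve_left
```

In this toy setting, raising a person's protection weight shifts the choice toward trajectories that spare that person, which is the kind of user-adjustable setting the abstract describes; such a rule could in principle sit alongside either a personal or a mandatory ethics-settings framework.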

Citations: 0
A Rubik's Cube-Inspired Pedagogical Tool for Teaching and Learning Engineering Ethics.
IF 2.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00506-z
Yuqi Peng

To facilitate engineering students' understanding of engineering ethics and support instructors in developing course content, this study introduces an innovative educational tool drawing inspiration from the Rubik's Cube metaphor. This Engineering Ethics Knowledge Rubik's Cube (EEKRC) integrates six key aspects (ethical theories, codes of ethics, ethical issues, engineering disciplines, stakeholders, and life cycle) identified through an analysis of engineering ethics textbooks and courses across the United States, Singapore, and China. This analysis underpins the selection of the six aspects, reflecting the shared and unique elements of engineering ethics education in these regions. In an engineering ethics course, the EEKRC serves multiple functions: it provides visual support for grasping engineering ethics concepts, acts as a pedagogical guide for both experienced and inexperienced educators in course design, offers a complementary method for assessing students' learning outcomes, and serves as a reference for students engaging in ethical analysis.
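
Because the EEKRC is presented as a fixed set of six aspects along which any teaching case can be located, one compact way to picture it is as a six-dimensional checklist. The sketch below is a hypothetical illustration only; the example entries under each dimension are assumptions and are not taken from the paper.

```python
# Hypothetical encoding of the six EEKRC dimensions; the example entries are
# illustrative and not drawn from the paper.
EEKRC_DIMENSIONS = {
    "ethical theories": ["utilitarianism", "deontology", "virtue ethics"],
    "codes of ethics": ["NSPE Code of Ethics", "IEEE Code of Ethics"],
    "ethical issues": ["safety", "privacy", "conflict of interest"],
    "engineering disciplines": ["civil", "mechanical", "software"],
    "stakeholders": ["public", "clients", "employers", "environment"],
    "life cycle": ["design", "construction", "operation", "disposal"],
}


def describe_case(selection: dict[str, str]) -> str:
    """Format one 'slice' of the cube: a teaching case located along all six aspects."""
    missing = set(EEKRC_DIMENSIONS) - set(selection)
    if missing:
        raise ValueError(f"case must address every dimension; missing: {missing}")
    return "; ".join(f"{dim}: {selection[dim]}" for dim in EEKRC_DIMENSIONS)


print(describe_case({
    "ethical theories": "utilitarianism",
    "codes of ethics": "NSPE Code of Ethics",
    "ethical issues": "safety",
    "engineering disciplines": "civil",
    "stakeholders": "public",
    "life cycle": "design",
}))
```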

Citations: 0
Patient Preferences Concerning Humanoid Features in Healthcare Robots.
IF 2.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00508-x
Dane Leigh Gogoshin

In this paper, I argue that patient preferences concerning human physical attributes associated with race, culture, and gender should be excluded from public healthcare robot design. On one hand, healthcare should be (objective, universal) needs oriented. On the other hand, patient well-being (the aim of healthcare) is, in concrete ways, tied to preferences, as is patient satisfaction (a core WHO value). The shift toward patient-centered healthcare places patient preferences into the spotlight. Accordingly, the design of healthcare technology cannot simply disregard patient preferences, even those which are potentially morally problematic. A method for handling these at the design level is thus imperative. By way of uncontroversial starting points, I argue that the priority of the public healthcare system is the fulfillment of patients' therapeutic needs, among which certain potentially morally problematic preferences may be counted. There are further ethical considerations, however, which, taken together, suggest that the potential benefits of upholding these preferences are outweighed by the potential harms.

Citations: 0
Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.
IF 2.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00509-w
Markus Kneer, Markus Christen

Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study, conducted with participants from the US, Japan and Germany, based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime. We find that (1) people manifest a considerable willingness to hold autonomous systems morally responsible, (2) they partially exculpate human agents when interacting with such systems, and (3) more generally, the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss the implications of these results for the retribution gap and for other positions in the responsibility gap literature.

Citations: 0
Hidden: A Baker's Dozen Ways in Which Research Reporting is Less Transparent than it Could be and Suggestions for Implementing Einstein's Dictum.
IF 2.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-16 | DOI: 10.1007/s11948-024-00517-w
Abu Bakkar Siddique, Brian Shaw, Johanna Dwyer, David A Fields, Kevin Fontaine, David Hand, Randy Schekman, Jeffrey Alberts, Julie Locher, David B Allison

The tutelage of our mentors as scientists included the analogy that writing a good scientific paper was an exercise in storytelling that omitted unessential details which did not move the story forward or which detracted from the overall message. However, the advice not to get lost in the details had an important flaw. In science, it is the many details of the data themselves, and of the methods used to generate and analyze them, that give conclusions their probative meaning. Facts may sometimes slow or distract from the clarity, tidiness, intrigue, or flow of the narrative, but they are nevertheless important for assessing what was done, the trustworthiness of the science, and the meaning of the findings. Even so, many critical elements and facts about research studies may be omitted from the narrative and become hidden from scholarly scrutiny. We describe a "baker's dozen" of shortfalls in which elements pertinent to evaluating the validity of scientific studies are sometimes hidden in reports of the work. Such shortfalls may be intentional or unintentional, or lie somewhere in between, and they may occur at the level of the individual, the institution, or the entire system itself. We conclude by proposing countermeasures to these shortfalls.

Citations: 0
Ethical Decision-Making for Self-Driving Vehicles: A Proposed Model & List of Value-Laden Terms that Warrant (Technical) Specification.
IF 2.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-10 | DOI: 10.1007/s11948-024-00513-0
Franziska Poszler, Maximilian Geisslinger, Christoph Lütge

Self-driving vehicles (SDVs) will need to make decisions that carry ethical dimensions and are of normative significance. For example, by choosing a specific trajectory, they determine how risks are distributed among traffic participants. Accordingly, policymakers, standardization organizations and scholars have conceptualized what (shall) constitute(s) ethical decision-making for SDVs. Eventually, these conceptualizations must be converted into specific system requirements to ensure proper technical implementation. Therefore, this article aims to translate critical requirements recently formulated in scholarly work, existing standards, regulatory drafts and guidelines into an explicit five-step ethical decision model for SDVs during hazardous situations. This model states a precise sequence of steps, indicates the guiding ethical principles that inform each step and points out a list of terms that demand further investigation and technical specification. By integrating ethical, legal and engineering considerations, we aim to contribute to the scholarly debate on computational ethics (particularly in autonomous driving) while offering practitioners in the automotive sector a decision-making process for SDVs that is technically viable, legally permissible, ethically grounded and adaptable to societal values. In the future, assessing the actual impact, effectiveness and admissibility of the theories, terms and overall decision process sketched here will require an empirical evaluation and testing of the overall decision-making model.
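
The abstract announces a five-step model but does not reproduce the steps themselves, so the sketch below only illustrates the general shape of a staged filter-and-rank pipeline for hazardous situations. The step names, data types, threshold, and risk measure are all assumptions made for illustration and are not the authors' model.

```python
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    lawful: bool  # e.g., stays within traffic rules where that is still possible
    risk: dict[str, float]  # traffic participant -> estimated probability of harm


def decide(candidates: list[Maneuver], max_individual_risk: float) -> Maneuver:
    """A hypothetical staged pipeline, loosely in the spirit of a stepwise model.

    Assumed steps: (1) enumerate feasible maneuvers, (2) prefer lawful options
    if any exist, (3) drop options exposing any single participant to more
    than `max_individual_risk`, (4) rank the remainder by total expected harm,
    (5) return the best-ranked option.
    """
    pool = list(candidates)                                              # step 1
    lawful = [m for m in pool if m.lawful] or pool                       # step 2
    capped = [m for m in lawful
              if max(m.risk.values()) <= max_individual_risk] or lawful  # step 3
    ranked = sorted(capped, key=lambda m: sum(m.risk.values()))          # step 4
    return ranked[0]                                                     # step 5


candidates = [
    Maneuver("emergency_brake", True, {"passenger": 0.20, "cyclist": 0.10}),
    Maneuver("swerve_onto_sidewalk", False, {"passenger": 0.05, "pedestrian": 0.50}),
]
print(decide(candidates, max_individual_risk=0.40).name)  # emergency_brake
```

Each stage of such a pipeline hinges on a value-laden term (what counts as lawful, what individual risk is acceptable, how total harm is aggregated), which is exactly the kind of term the article argues warrants further investigation and technical specification.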

Citations: 0
Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.
IF 2.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-09 | DOI: 10.1007/s11948-024-00507-y
Salla Westerstrand

The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for the future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justification. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development in a more ethical direction. The goal is to contribute to broadening the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective on societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to ongoing developments in AI ethics, and it proposes what principles that offer a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.

Citations: 0
The Rise of Tech Ethics: Approaches, Critique, and Future Pathways.
IF 2.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-10-09 | DOI: 10.1007/s11948-024-00510-3
Nina Frahm, Kasper Schiølin

In this editorial to the Topical Collection "Innovation under Fire: The Rise of Ethics in Tech", we provide an overview of the papers gathered in the collection, reflect on similarities and differences in their analytical angles and methodological approaches, and carve out some of the cross-cutting themes that emerge from research on the production of 'Tech Ethics'. We identify two recurring ways in which 'Tech Ethics' is studied and forms of critique towards it are developed, which we argue diverge primarily in their a priori commitments regarding what ethical tech is and how it should best be pursued. Beyond these differences, we observe how current research on 'Tech Ethics' evidences a close relationship between public controversies about technological innovation and the rise of ethics discourses and instruments for their settlement, producing legitimacy crises for 'Tech Ethics' in and of itself. 'Tech Ethics' is not only instrumental for governing technoscientific projects in the present but is equally instrumental for the construction of socio-technical imaginaries and the essentialization of technological futures. We suggest that efforts to reach beyond single case studies are needed and call for collective reflection on joint issues and challenges to advance the critical project of 'Tech Ethics'.

Citations: 0
Beyond Anthropocentrism: The Moral and Strategic Philosophy behind Russell and Burch’s 3Rs in Animal Experimentation
IF 3.7 | Tier 2 (Philosophy) | Q1 ENGINEERING, MULTIDISCIPLINARY | Pub Date: 2024-09-11 | DOI: 10.1007/s11948-024-00504-1
Nico Dario Müller

The 3Rs framework in animal experimentation (“replace, reduce, refine”) has been alleged to be expressive of anthropocentrism, the view that only humans are directly morally relevant. After all, the 3Rs safeguard animal welfare only as far as given human research objectives permit, effectively prioritizing human use interests over animal interests. This article acknowledges this prioritization, but argues that the characterization as anthropocentric is inaccurate. In fact, the 3Rs prioritize research purposes even more strongly than an ethical anthropocentrist would. Drawing on the writings of Universities Federation for Animal Welfare (UFAW) founder Charles W. Hume, who employed Russell and Burch, it is argued that the 3Rs originally arose from an animal-centered ethic that was, however, restricted by an organizational strategy aiming at the voluntary cooperation of animal researchers. Research purposes thus had to be accepted as given. While this explains why the 3Rs focus narrowly on humane method selection, rather than on encouraging animal-free question selection in the first place, it suggests that governments should (also) focus on the latter if they recognize animals as deserving protection for their own sake.

Citations: 0