Awareness of Jordanian Researchers About Predatory Journals: A Need for Training
Pub Date: 2024-11-28 | DOI: 10.1007/s11948-024-00519-8
Omar F Khabour, Karem H Alzoubi, Wesal M Aldarabseh
Open-access publishing is expected to become the dominant model in the future. However, alongside this model, predatory journals are increasingly appearing. In the current study, the awareness of researchers in Jordan about predatory journals and the strategies used to avoid them were investigated. The study included 558 researchers from Jordan. A total of 34.0% of the participants reported a high ability to identify predatory journals, while 27.0% reported a low ability. Most participants (64.0%) apply the "Think. Check. Submit." strategy to avoid predatory journals. However, 11.9% of the sample reported having been a victim of a predatory journal. Multinomial regression analysis showed that gender, number of publications, use of Beall's list of predatory journals, and application of the "Think. Check. Submit." strategy were predictors of a high ability to identify predatory journals. Participants reported using resources such as Scopus, Clarivate, DOAJ, and membership in publication ethics committees to validate a journal before publication. Finally, most participants (88.4%) agreed to attend a training module on how to identify predatory journals. In conclusion, Jordanian researchers use valid strategies to avoid predatory journals. Implementing a training module may enhance researchers' ability to identify predatory journals.
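For readers unfamiliar with the method, the kind of multinomial model reported here can be sketched as follows; the data frame, variable coding, and values below are hypothetical placeholders, not the study's dataset.

```python
# Hypothetical sketch of a multinomial regression of the kind reported above.
# Column names, coding, and data are illustrative placeholders only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Each row is one respondent; the outcome is self-rated ability to
# identify predatory journals (0 = low, 1 = moderate, 2 = high).
survey = pd.DataFrame({
    "ability":        [2, 0, 1, 2, 0, 1, 2, 1],
    "female":         [1, 0, 1, 0, 1, 1, 0, 0],
    "n_publications": [25, 3, 10, 40, 2, 8, 30, 12],
    "uses_bealls":    [1, 0, 0, 1, 0, 1, 1, 0],
    "uses_tcs":       [1, 0, 1, 1, 0, 0, 1, 1],  # "Think. Check. Submit."
})

X = survey[["female", "n_publications", "uses_bealls", "uses_tcs"]]
y = survey["ability"]

# With a three-level outcome and the default lbfgs solver, scikit-learn
# fits a multinomial logistic regression: one coefficient set per category.
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.classes_)  # outcome categories
print(model.coef_)     # predictor effects for each category
```

In practice a statistics package that reports odds ratios and p-values (e.g. statsmodels' MNLogit) would be used to decide which predictors are significant; the sketch only shows the shape of the analysis.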
{"title":"Awareness of Jordanian Researchers About Predatory Journals: A Need for Training.","authors":"Omar F Khabour, Karem H Alzoubi, Wesal M Aldarabseh","doi":"10.1007/s11948-024-00519-8","DOIUrl":"10.1007/s11948-024-00519-8","url":null,"abstract":"<p><p>The use of the open publishing is expected to be the dominant model in the future. However, along with the use of this model, predatory journals are increasingly appearing. In the current study, the awareness of researchers in Jordan about predatory journals and the strategies utilized to avoid them was investigated. The study included 558 researchers from Jordan. A total of 34.0% of the participants reported a high ability to identify predatory journals, while 27.0% reported a low ability to identify predatory journals. Most participants (64.0%) apply \"Think. Check. Submit.\" strategy to avoid predatory journals. However, 11.9% of the sample reported being a victim of a predatory journal. Multinomial regression analysis showed gender, number of publications, using Beall's list of predatory journals, and applying \"Think. Check. Submit.\" strategy were predictors of the high ability to identify predatory journals. Participants reported using databases such as Scopus, Clarivate, membership in the publishing ethics committee, and DOAJ to validate the journal before publication. Finally, most participants (88.4%) agreed to attend a training module on how to identify predatory journals. In conclusion, Jordanian researchers use valid strategies to avoid predatory journals. Implementing a training module may enhance researchers' ability to identify predatory journals.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"58"},"PeriodicalIF":2.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11604683/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142741196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empathy's Role in Engineering Ethics: Empathizing with One's Self to Others Across the Globe
Pub Date: 2024-11-25 | DOI: 10.1007/s11948-024-00512-1
Justin L Hess
Engineers make decisions with global impacts, and empathy can motivate ethical reasoning and behavior that is sensitive to the needs and perspectives of stakeholders across the globe. Microethics and macroethics offer two frames of reference for engineering ethics education, but different dimensions of empathy play distinct roles in micro- and macroethics. Microethics emphasizes individual responsibility and interpersonal relationships, whereas macroethics emphasizes societal obligations and impacts. While empathy can support ethical reasoning and behavior for each, in this paper I argue that affective empathy plays a primary (but not exclusive) role in microethics, whereas cognitive empathy plays a primary role in macroethics. Gilligan's and Kohlberg's theories of moral development are used to further depict how affective empathy aligns with care (depicted as an interpersonal phenomenon) and how cognitive empathy aligns with justice (depicted as a systems-focused phenomenon), thus positioning these ethical principles as playing primary (but again, not exclusive) roles in micro- and macroethics, respectively. Building on these ideas, this study generates a framework that describes and visualizes how empathy manifests across six frames of reference, each of which is increasingly macro-ethical in nature: self, team, operators, participants, bystanders, and future generations. The paper describes how proxy stakeholders can be identified, developed, and leveraged to empathize with stakeholder groups. Taken together, the manuscript seeks to clarify the role of empathy in engineering ethics and can enable engineering students to better empathize with the range of stakeholders impacted by engineering decisions, ranging from one's self to stakeholders across the globe. The intrapersonal understandings and motivations that students generate by empathizing across six frames of reference can facilitate ethical reasoning processes and behaviors that are more inclusive and comprehensive.
{"title":"Empathy's Role in Engineering Ethics: Empathizing with One's Self to Others Across the Globe.","authors":"Justin L Hess","doi":"10.1007/s11948-024-00512-1","DOIUrl":"10.1007/s11948-024-00512-1","url":null,"abstract":"<p><p>Engineers make decisions with global impacts and empathy can motivate ethical reasoning and behavior that is sensitive to the needs and perspectives of stakeholders across the globe. Microethics and macroethics offer two frames of reference for engineering ethics education, but different dimensions of empathy play distinct roles in micro- and macroethics. Microethics emphasizes individual responsibility and interpersonal relationships whereas macroethics emphasizes societal obligations and impacts. While empathy can support ethical reasoning and behavior for each, in this paper I argue that affective empathy plays a primary (but not exclusive) role in microethics whereas cognitive empathy plays a primary role in macroethics. Gilligan's and Kohlberg's theories of moral development are used to further depict how affective empathy aligns with care (depicted as an interpersonal phenomenon) and how cognitive empathy aligns with justice (depicted as a systems-focused phenomenon), thus positioning these ethical principles as playing primary (but again, not exclusive) roles in micro- and macro-ethics, respectively. Building on these ideas, this study generates a framework that describes and visualizes how empathy manifests across six frames of reference, each of which are increasingly macro-ethical in nature: self, team, operators, participants, bystanders, and future generations. The paper describes how proxy stakeholders can be identified, developed, and leveraged to empathize with stakeholder groups. Taken together, the manuscript seeks to clarify the role of empathy in engineering ethics and can enable engineering students to better empathize with the range of stakeholders impacted by engineering decisions, ranging from one's self to stakeholders across the globe. The intrapersonal understandings and motivations that students generate by empathizing across six frames of reference can facilitate ethical reasoning processes and behaviors that are more inclusive and comprehensive.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"57"},"PeriodicalIF":2.7,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11588796/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142711685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Business as usual"? Safe-by-Design Vis-à-Vis Proclaimed Safety Cultures in Technology Development for the Bioeconomy
Pub Date: 2024-11-21 | DOI: 10.1007/s11948-024-00520-1
Amalia Kallergi, Lotte Asveld
Safe-by-Design (SbD) is a new concept that urges the developers of novel technologies to integrate safety early on in their design process. An SbD approach could, in theory, support the development of safer products and assist a responsible transition to the bioeconomy, via the deployment of safer bio-based and biotechnological alternatives. Despite its prominence in policy discourse, SbD is yet to gain traction in research and innovation practice. In this paper, we examine a frequently stated objection to the initiative of SbD, namely the position that SbD is already common practice in research and industry. We draw upon observations from two case studies: one, a study on the applicability of SbD in the context of bio-based circular materials, and, two, a study on stakeholder perceptions of SbD in biotechnology. Interviewed practitioners in both case studies claim a strong safety culture in their respective fields and have difficulties differentiating an SbD approach from existing safety practices. Two variations of this argument are discussed: early attentiveness to safety as a strictly formalised practice and early attentiveness as implicit practice. We analyse these perceptions through the theoretical lens of safety culture and contrast them with the aims of SbD. Our analysis indicates that professional identity and professional pride may explain some of the resistance to the initiative of SbD. Nevertheless, SbD could still be advantageous by (a) emphasising multidisciplinary approaches to safety and (b) offering a (reflective) frame via which implicit attentiveness to safety becomes explicit.
{"title":"\"Business as usual\"? Safe-by-Design Vis-à-Vis Proclaimed Safety Cultures in Technology Development for the Bioeconomy.","authors":"Amalia Kallergi, Lotte Asveld","doi":"10.1007/s11948-024-00520-1","DOIUrl":"10.1007/s11948-024-00520-1","url":null,"abstract":"<p><p>Safe-by-Design (SbD) is a new concept that urges the developers of novel technologies to integrate safety early on in their design process. A SbD approach could-in theory-support the development of safer products and assist a responsible transition to the bioeconomy, via the deployment of safer bio-based and biotechnological alternatives. Despite its prominence in policy discourse, SbD is yet to gain traction in research and innovation practice. In this paper, we examine a frequently stated objection to the initiative of SbD, namely the position that SbD is already common practice in research and industry. We draw upon observations from two case studies: one, a study on the applicability of SbD in the context of bio-based circular materials and, two, a study on stakeholder perceptions of SbD in biotechnology. Interviewed practitioners in both case studies make claims to a strong safety culture in their respective fields and have difficulties differentiating a SbD approach from existing safety practices. Two variations of this argument are discussed: early attentiveness to safety as a strictly formalised practice and early attentiveness as implicit practice. We analyse these perceptions using the theoretical lens of safety culture and contrast them to the aims of SbD. Our analysis indicates that professional identity and professional pride may explain some of the resistance to the initiative of SbD. Nevertheless, SbD could still be advantageous by a) emphasising multidisciplinary approaches to safety and b) offering a (reflective) frame via which implicit attentiveness to safety becomes explicit.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"56"},"PeriodicalIF":2.7,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11582267/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142683233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach
Pub Date: 2024-11-21 | DOI: 10.1007/s11948-024-00522-z
Andrea Ferrario
We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to appropriately relying on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthiness of the AI; the other is the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that credences, namely beliefs we hold with a degree of confidence, are the appropriate attitude for capturing the facets of the trustworthiness of an AI over time. Then, we introduce a reliabilistic account providing justification for credences in the trustworthiness of AI, which we derive from Tang's probabilistic theory of justified credence. Our account stipulates that a credence in the trustworthiness of an AI system is justified if and only if it is caused by an assessment process that tends to result in a high proportion of credences for which the actual and perceived trustworthiness of the AI are calibrated. This approach informs research on the ethics of AI and human-AI interactions by providing actionable recommendations on how to measure the reliability of the process through which users perceive the trustworthiness of the system, investigating its calibration to the actual levels of trustworthiness of the AI as well as users' appropriate reliance on the system.
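As a rough illustration of the calibration criterion just stated, the following is a minimal sketch assuming a toy numerical notion of calibration (a tolerance on the gap between perceived and actual trustworthiness) and an arbitrary reliability threshold; neither value comes from the paper.

```python
# Illustrative sketch only: a toy operationalisation of the calibration
# criterion described above. Tolerance and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class Assessment:
    perceived: float  # user's credence in the system's trustworthiness, in [0, 1]
    actual: float     # measured trustworthiness of the system, in [0, 1]

def is_calibrated(a: Assessment, tolerance: float = 0.1) -> bool:
    """An assessment is calibrated if perceived and actual trustworthiness are close."""
    return abs(a.perceived - a.actual) <= tolerance

def process_is_reliable(history: list[Assessment],
                        tolerance: float = 0.1,
                        threshold: float = 0.8) -> bool:
    """On this toy reading, a credence-forming process counts as reliable if a
    high proportion of the credences it produces are calibrated."""
    if not history:
        return False
    calibrated = sum(is_calibrated(a, tolerance) for a in history)
    return calibrated / len(history) >= threshold

# Example: an assessment process whose outputs track actual trustworthiness well.
history = [Assessment(0.82, 0.85), Assessment(0.40, 0.35), Assessment(0.90, 0.70),
           Assessment(0.60, 0.62), Assessment(0.25, 0.30)]
print(process_is_reliable(history))  # True: 4 of 5 assessments are calibrated
```

A credence produced by such a process would, on the account sketched in the abstract, count as justified; a credence produced by a process that rarely yields calibrated assessments would not.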
{"title":"Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach.","authors":"Andrea Ferrario","doi":"10.1007/s11948-024-00522-z","DOIUrl":"10.1007/s11948-024-00522-z","url":null,"abstract":"<p><p>We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to appropriately rely on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthiness of the AI and the other is the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that credences, namely, beliefs we hold with a degree of confidence, are the appropriate attitude for capturing the facets of the trustworthiness of an AI over time. Then, we introduce a reliabilistic account providing justification to the credences in the trustworthiness of AI, which we derive from Tang's probabilistic theory of justified credence. Our account stipulates that a credence in the trustworthiness of an AI system is justified if and only if it is caused by an assessment process that tends to result in a high proportion of credences for which the actual and perceived trustworthiness of the AI are calibrated. This approach informs research on the ethics of AI and human-AI interactions by providing actionable recommendations on how to measure the reliability of the process through which users perceive the trustworthiness of the system, investigating its calibration to the actual levels of trustworthiness of the AI as well as users' appropriate reliance on the system.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"55"},"PeriodicalIF":2.7,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11582117/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142683237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Know Thyself, Improve Thyself: Personalized LLMs for Self-Knowledge and Moral Enhancement
Pub Date: 2024-11-21 | DOI: 10.1007/s11948-024-00518-9
Alberto Giubilini, Sebastian Porsdam Mann, Cristina Voinea, Brian Earp, Julian Savulescu
In this paper, we suggest that personalized LLMs trained on information written by or otherwise pertaining to an individual could serve as artificial moral advisors (AMAs) that account for the dynamic nature of personal morality. These LLM-based AMAs would harness users' past and present data to infer and make explicit their sometimes-shifting values and preferences, thereby fostering self-knowledge. Further, these systems may also assist in processes of self-creation, by helping users reflect on the kind of person they want to be and the actions and goals necessary for so becoming. The feasibility of LLMs providing such personalized moral insights remains uncertain pending further technical development. Nevertheless, we argue that this approach addresses limitations in existing AMA proposals reliant on either predetermined values or introspective self-knowledge.
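A minimal sketch of the general idea (not the authors' system) might look as follows; the `complete` callable stands in for whatever LLM backend is used, and the prompt wording is purely illustrative.

```python
# Hypothetical sketch of a personalised "artificial moral advisor" step:
# eliciting an explicit value profile from a user's own texts.
from typing import Callable

def build_value_profile(user_texts: list[str],
                        complete: Callable[[str], str]) -> str:
    """Ask a language model to make a user's (possibly shifting) values explicit
    from texts they have written, as a basis for personalised moral advice."""
    corpus = "\n---\n".join(user_texts)
    prompt = (
        "Below are texts written by one person over time.\n"
        "Summarise the values and preferences they express, noting any shifts,\n"
        "and list questions the person might reflect on about who they want to be.\n\n"
        f"{corpus}"
    )
    return complete(prompt)

# Usage with any backend exposing a text-completion callable (names are placeholders):
# profile = build_value_profile(load_user_documents(), complete=my_llm_backend)
```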
{"title":"Know Thyself, Improve Thyself: Personalized LLMs for Self-Knowledge and Moral Enhancement.","authors":"Alberto Giubilini, Sebastian Porsdam Mann, Cristina Voinea, Brian Earp, Julian Savulescu","doi":"10.1007/s11948-024-00518-9","DOIUrl":"10.1007/s11948-024-00518-9","url":null,"abstract":"<p><p>In this paper, we suggest that personalized LLMs trained on information written by or otherwise pertaining to an individual could serve as artificial moral advisors (AMAs) that account for the dynamic nature of personal morality. These LLM-based AMAs would harness users' past and present data to infer and make explicit their sometimes-shifting values and preferences, thereby fostering self-knowledge. Further, these systems may also assist in processes of self-creation, by helping users reflect on the kind of person they want to be and the actions and goals necessary for so becoming. The feasibility of LLMs providing such personalized moral insights remains uncertain pending further technical development. Nevertheless, we argue that this approach addresses limitations in existing AMA proposals reliant on either predetermined values or introspective self-knowledge.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"54"},"PeriodicalIF":2.7,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11582191/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142683257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authorship and Citizen Science: Seven Heuristic Rules
Pub Date: 2024-10-29 | DOI: 10.1007/s11948-024-00516-x
Per Sandin, Patrik Baard, William Bülow, Gert Helgesson
Citizen science (CS) is an umbrella term for research with significant contributions from volunteers. Those volunteers can occupy a hybrid role, being both 'researcher' and 'subject' at the same time. This has repercussions for questions about responsibility and credit, e.g. pertaining to the issue of authorship. In this paper, we first review some existing guidelines for authorship and their applicability to CS. Second, we assess the claim that the guidelines from the International Committee of Medical Journal Editors (ICMJE), known as 'the Vancouver guidelines', may lead to the exclusion of deserving citizen scientists as authors. We maintain that the idea of including citizen scientists as authors is supported by at least two arguments: transparency and fairness. Third, we argue that it might be plausible to include groups as authors in CS. Fourth and finally, we offer a heuristic list of seven recommendations to be considered when deciding whom to include as an author of a CS publication.
{"title":"Authorship and Citizen Science: Seven Heuristic Rules.","authors":"Per Sandin, Patrik Baard, William Bülow, Gert Helgesson","doi":"10.1007/s11948-024-00516-x","DOIUrl":"10.1007/s11948-024-00516-x","url":null,"abstract":"<p><p>Citizen science (CS) is an umbrella term for research with a significant amount of contributions from volunteers. Those volunteers can occupy a hybrid role, being both 'researcher' and 'subject' at the same time. This has repercussions for questions about responsibility and credit, e.g. pertaining to the issue of authorship. In this paper, we first review some existing guidelines for authorship and their applicability to CS. Second, we assess the claim that the guidelines from the International Committee of Medical Journal Editors (ICMJE), known as 'the Vancouver guidelines', may lead to exclusion of deserving citizen scientists as authors. We maintain that the idea of including citizen scientists as authors is supported by at least two arguments: transparency and fairness. Third, we argue that it might be plausible to include groups as authors in CS. Fourth and finally, we offer a heuristic list of seven recommendations to be considered when deciding about whom to include as an author of a CS publication.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"53"},"PeriodicalIF":2.7,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11522116/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Confucian Algorithm for Autonomous Vehicles
Pub Date: 2024-10-21 | DOI: 10.1007/s11948-024-00514-z
Tingting Sui, Sebastian Sunday Grève
Any moral algorithm for autonomous vehicles must provide a practical solution to moral problems of the trolley type, in which all possible courses of action will result in damage, injury, or death. This article discusses a hitherto neglected variety of this type of problem, based on a recent psychological study whose results are reported here. It argues that the most adequate solution to this problem will be achieved by a moral algorithm that is based on Confucian ethics. In addition to this philosophical and psychological discussion, the article outlines the mathematics, engineering, and legal implementation of a possible Confucian algorithm. The proposed Confucian algorithm is based on the idea of making it possible to set an autonomous vehicle to allow an increased level of protection for selected people. It is shown that the proposed algorithm can be implemented alongside other moral algorithms, using either the framework of personal ethics settings or that of mandatory ethics settings.
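As a rough illustration of the idea of configurable extra protection for selected people (not the algorithm proposed in the paper), one might sketch a weighted harm-minimisation rule like the following; the protection weight, harm scores, and trajectory names are invented for the example.

```python
# Toy illustration only: configurable extra protection for selected people,
# applied as a weight in a harm-minimising choice between trajectories.
from dataclasses import dataclass

@dataclass
class Harm:
    person_id: str
    expected_severity: float  # e.g. probability-weighted injury score

def trajectory_cost(harms: list[Harm],
                    protected: set[str],
                    protection_weight: float = 2.0) -> float:
    """Weighted expected harm of one candidate trajectory: harms to people on
    the protected list count `protection_weight` times as much."""
    return sum(h.expected_severity * (protection_weight if h.person_id in protected else 1.0)
               for h in harms)

def choose_trajectory(candidates: dict[str, list[Harm]],
                      protected: set[str]) -> str:
    """Pick the candidate trajectory with the lowest weighted expected harm."""
    return min(candidates, key=lambda name: trajectory_cost(candidates[name], protected))

# The protected set could be filled from a personal ethics setting (chosen by
# the owner) or a mandatory ethics setting (fixed by regulation).
candidates = {
    "brake_straight": [Harm("passenger_1", 0.4)],
    "swerve_left":    [Harm("pedestrian_1", 0.5)],
}
print(choose_trajectory(candidates, protected=set()))            # brake_straight
print(choose_trajectory(candidates, protected={"passenger_1"}))  # swerve_left
```

The contrast between the two calls shows how designating someone for increased protection can change the chosen course of action while leaving the rest of the decision procedure intact.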
{"title":"A Confucian Algorithm for Autonomous Vehicles.","authors":"Tingting Sui, Sebastian Sunday Grève","doi":"10.1007/s11948-024-00514-z","DOIUrl":"10.1007/s11948-024-00514-z","url":null,"abstract":"<p><p>Any moral algorithm for autonomous vehicles must provide a practical solution to moral problems of the trolley type, in which all possible courses of action will result in damage, injury, or death. This article discusses a hitherto neglected variety of this type of problem, based on a recent psychological study whose results are reported here. It argues that the most adequate solution to this problem will be achieved by a moral algorithm that is based on Confucian ethics. In addition to this philosophical and psychological discussion, the article outlines the mathematics, engineering, and legal implementation of a possible Confucian algorithm. The proposed Confucian algorithm is based on the idea of making it possible to set an autonomous vehicle to allow an increased level of protection for selected people. It is shown that the proposed algorithm can be implemented alongside other moral algorithms, using either the framework of personal ethics settings or that of mandatory ethics settings.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"52"},"PeriodicalIF":2.7,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11493828/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Rubik's Cube-Inspired Pedagogical Tool for Teaching and Learning Engineering Ethics
Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00506-z
Yuqi Peng
To facilitate engineering students' understanding of engineering ethics and support instructors in developing course content, this study introduces an innovative educational tool drawing inspiration from the Rubik's Cube metaphor. This Engineering Ethics Knowledge Rubik's Cube (EEKRC) integrates six key aspects (ethical theories, codes of ethics, ethical issues, engineering disciplines, stakeholders, and life cycle) identified through an analysis of engineering ethics textbooks and courses across the United States, Singapore, and China. This analysis underpins the selection of the six aspects, reflecting the shared and unique elements of engineering ethics education in these regions. In an engineering ethics course, the EEKRC serves multiple functions: it provides visual support for grasping engineering ethics concepts, acts as a pedagogical guide for both experienced and inexperienced educators in course design, offers a complementary assessment method for evaluating students' learning outcomes, and serves as a reference for students engaging in ethical analysis.
{"title":"A Rubik's Cube-Inspired Pedagogical Tool for Teaching and Learning Engineering Ethics.","authors":"Yuqi Peng","doi":"10.1007/s11948-024-00506-z","DOIUrl":"10.1007/s11948-024-00506-z","url":null,"abstract":"<p><p>To facilitate engineering students' understanding of engineering ethics and support instructors in developing course content, this study introduces an innovative educational tool drawing inspiration from the Rubik's Cube metaphor. This Engineering Ethics Knowledge Rubik's Cube (EEKRC) integrates six key aspects-ethical theories, codes of ethics, ethical issues, engineering disciplines, stakeholders, and life cycle-identified through an analysis of engineering ethics textbooks and courses across the United States, Singapore, and China. This analysis underpins the selection of the six aspects, reflecting the shared and unique elements of engineering ethics education in these regions. In an engineering ethics course, the EEKRC serves multiple functions: it provides visual support for grasping engineering ethics concepts, acts as a pedagogical guide for both experienced and inexperienced educators in course design, offers a complementary assessment method for evaluating students learning outcomes, and assists as a reference for students engaging in ethical analysis.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"50"},"PeriodicalIF":2.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486784/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Patient Preferences Concerning Humanoid Features in Healthcare Robots
Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00508-x
Dane Leigh Gogoshin
In this paper, I argue that patient preferences concerning human physical attributes associated with race, culture, and gender should be excluded from public healthcare robot design. On the one hand, healthcare should be oriented toward (objective, universal) needs. On the other hand, patient well-being (the aim of healthcare) is, in concrete ways, tied to preferences, as is patient satisfaction (a core WHO value). The shift toward patient-centered healthcare places patient preferences into the spotlight. Accordingly, the design of healthcare technology cannot simply disregard patient preferences, even those which are potentially morally problematic. A method for handling these at the design level is thus imperative. By way of uncontroversial starting points, I argue that the priority of the public healthcare system is the fulfillment of patients' therapeutic needs, among which certain potentially morally problematic preferences may be counted. There are further ethical considerations, however, which, taken together, suggest that the potential benefits of upholding these preferences are outweighed by the potential harms.
{"title":"Patient Preferences Concerning Humanoid Features in Healthcare Robots.","authors":"Dane Leigh Gogoshin","doi":"10.1007/s11948-024-00508-x","DOIUrl":"10.1007/s11948-024-00508-x","url":null,"abstract":"<p><p>In this paper, I argue that patient preferences concerning human physical attributes associated with race, culture, and gender should be excluded from public healthcare robot design. On one hand, healthcare should be (objective, universal) needs oriented. On the other hand, patient well-being (the aim of healthcare) is, in concrete ways, tied to preferences, as is patient satisfaction (a core WHO value). The shift toward patient-centered healthcare places patient preferences into the spotlight. Accordingly, the design of healthcare technology cannot simply disregard patient preferences, even those which are potentially morally problematic. A method for handling these at the design level is thus imperative. By way of uncontroversial starting points, I argue that the priority of the public healthcare system is the fulfillment of patients' therapeutic needs, among which certain potentially morally problematic preferences may be counted. There are further ethical considerations, however, which, taken together, suggest that the potential benefits of upholding these preferences are outweighed by the potential harms.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"49"},"PeriodicalIF":2.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486771/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany
Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00509-w
Markus Kneer, Markus Christen
Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness to hold autonomous systems morally responsible, (2) partially exculpate human agents when interacting with such systems, and that more generally (3) the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss what these results mean for potential implications of the retribution gap and other positions in the responsibility gap literature.
{"title":"Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.","authors":"Markus Kneer, Markus Christen","doi":"10.1007/s11948-024-00509-w","DOIUrl":"10.1007/s11948-024-00509-w","url":null,"abstract":"<p><p>Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness to hold autonomous systems morally responsible, (2) partially exculpate human agents when interacting with such systems, and that more generally (3) the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss what these results mean for potential implications of the retribution gap and other positions in the responsibility gap literature.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"51"},"PeriodicalIF":2.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486783/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}