Pub Date: 2024-10-09; DOI: 10.1007/s11948-024-00507-y
Salla Westerstrand
The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justification. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of Western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development in a more ethical direction. The goal is to contribute to broadening the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective on societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to the ongoing developments in AI ethics and proposes what principles that offer a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.
{"title":"Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.","authors":"Salla Westerstrand","doi":"10.1007/s11948-024-00507-y","DOIUrl":"10.1007/s11948-024-00507-y","url":null,"abstract":"<p><p>The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack in ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggest, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development towards a more ethical direction. The goal is to contribute to the broadening of the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective of societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to the ongoing developments in AI ethics and gives a proposition of how principles that offer a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"46"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464555/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-09; DOI: 10.1007/s11948-024-00510-3
Nina Frahm, Kasper Schiølin
In this editorial to the Topical Collection "Innovation under Fire: The Rise of Ethics in Tech", we provide an overview of the papers gathered in the collection, reflect on similarities and differences in their analytical angles and methodological approaches, and carve out some of the cross-cutting themes that emerge from research on the production of 'Tech Ethics'. We identify two recurring ways through which 'Tech Ethics' are studied and forms of critique towards them developed, which we argue diverge primarily in their a priori commitments towards what ethical tech is and how it should best be pursued. Beyond these differences, we observe how current research on 'Tech Ethics' evidences a close relationship between public controversies about technological innovation and the rise of ethics discourses and instruments for their settlement, producing legitimacy crises for 'Tech Ethics' in and of itself. 'Tech Ethics' is not only instrumental for governing technoscientific projects in the present but is equally instrumental for the construction of socio-technical imaginaries and the essentialization of technological futures. We suggest that efforts to reach beyond single case-studies are needed and call for collective reflection on joint issues and challenges to advance the critical project of 'Tech Ethics'.
{"title":"The Rise of Tech Ethics: Approaches, Critique, and Future Pathways.","authors":"Nina Frahm, Kasper Schiølin","doi":"10.1007/s11948-024-00510-3","DOIUrl":"10.1007/s11948-024-00510-3","url":null,"abstract":"<p><p>In this editorial to the Topical Collection \"Innovation under Fire: The Rise of Ethics in Tech\", we provide an overview of the papers gathered in the collection, reflect on similarities and differences in their analytical angles and methodological approaches, and carve out some of the cross-cutting themes that emerge from research on the production of 'Tech Ethics'. We identify two recurring ways through which 'Tech Ethics' are studied and forms of critique towards them developed, which we argue diverge primarily in their a priori commitments towards what ethical tech is and how it should best be pursued. Beyond these differences, we observe how current research on 'Tech Ethics' evidences a close relationship between public controversies about technological innovation and the rise of ethics discourses and instruments for their settlement, producing legitimacy crises for 'Tech Ethics' in and of itself. 'Tech Ethics' is not only instrumental for governing technoscientific projects in the present but is equally instrumental for the construction of socio-technical imaginaries and the essentialization of technological futures. We suggest that efforts to reach beyond single case-studies are needed and call for collective reflection on joint issues and challenges to advance the critical project of 'Tech Ethics'.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"45"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464588/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-11; DOI: 10.1007/s11948-024-00504-1
Nico Dario Müller
The 3Rs framework in animal experimentation – “replace, reduce, refine” – has been alleged to be expressive of anthropocentrism, the view that only humans are directly morally relevant. After all, the 3Rs safeguard animal welfare only as far as given human research objectives permit, effectively prioritizing human use interests over animal interests. This article acknowledges this prioritization, but argues that the characterization as anthropocentric is inaccurate. In fact, the 3Rs prioritize research purposes even more strongly than an ethical anthropocentrist would. Drawing on the writings of Universities Federation for Animal Welfare (UFAW) founder Charles W. Hume, who employed Russell and Burch, it is argued that the 3Rs originally arose from an animal-centered ethic that was, however, restricted by an organizational strategy aiming at the voluntary cooperation of animal researchers. Research purposes thus had to be accepted as given. While this explains why the 3Rs focus narrowly on humane method selection, rather than on encouraging animal-free question selection in the first place, it suggests that governments should (also) focus on the latter if they recognize animals as deserving protection for their own sake.
{"title":"Beyond Anthropocentrism: The Moral and Strategic Philosophy behind Russell and Burch’s 3Rs in Animal Experimentation","authors":"Nico Dario Müller","doi":"10.1007/s11948-024-00504-1","DOIUrl":"https://doi.org/10.1007/s11948-024-00504-1","url":null,"abstract":"<p>The 3Rs framework in animal experimentation– “replace, reduce, refine” – has been alleged to be expressive of anthropocentrism, the view that only humans are directly morally relevant. After all, the 3Rs safeguard animal welfare only as far as given human research objectives permit, effectively prioritizing human use interests over animal interests. This article acknowledges this prioritization, but argues that the characterization as anthropocentric is inaccurate. In fact, the 3Rs prioritize research purposes even more strongly than an ethical anthropocentrist would. Drawing on the writings of Universities Federation for Animal Welfare (UFAW) founder Charles W. Hume, who employed Russell and Burch, it is argued that the 3Rs originally arose from an animal-centered ethic which was however restricted by an organizational strategy aiming at the voluntary cooperation of animal researchers. Research purposes thus had to be accepted as given. While this explains why the 3Rs focus narrowly on humane method selection, not on encouraging animal-free question selection in the first place, it suggests that governments should (also) focus on the latter if they recognize animals as deserving protection for their own sake.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"389 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-11; DOI: 10.1007/s11948-024-00511-2
Belén Liedo, Janna Van Grunsven, Lavinia Marin
Care ethics has been advanced as a suitable framework for evaluating the ethical significance of assistive robotics. One of the most prominent care ethical contributions to the ethical assessment of assistive robots comes through the work of Aimee Van Wynsberghe, who has developed the Care-Centred Value-Sensitive Design framework (CCVSD) in order to incorporate care values into the design of assistive robots. Building upon the care ethics work of Joan Tronto, CCVSD has been able to highlight a number of ways in which care practices can undergo significant ethical transformations upon the introduction of assistive robots. In this paper, we too build upon the work of Tronto in an effort to enrich the CCVSD framework. Combining insights from Tronto’s work with the sociological concept of emotional labor, we argue that CCVSD remains underdeveloped with respect to the impact robots may have on the emotional labor required by paid care workers. Emotional labor consists of the managing of emotions and of emotional bonding, both of which signify a demanding yet potentially fulfilling dimension of paid care work. Because of the conditions in which care labor is performed nowadays, emotional labor is also susceptible to exploitation. While CCVSD can acknowledge some manifestations of unrecognized emotional labor in care delivery, it remains limited in capturing the structural conditions that fuel this vulnerability to exploitation. We propose that the idea of privileged irresponsibility, coined by Tronto, helps us understand how the exploitation of emotional labor can come about in roboticized care practices.
{"title":"Emotional Labor and the Problem of Exploitation in Roboticized Care Practices: Enriching the Framework of Care Centred Value Sensitive Design","authors":"Belén Liedo, Janna Van Grunsven, Lavinia Marin","doi":"10.1007/s11948-024-00511-2","DOIUrl":"https://doi.org/10.1007/s11948-024-00511-2","url":null,"abstract":"<p>Care ethics has been advanced as a suitable framework for evaluating the ethical significance of assistive robotics. One of the most prominent care ethical contributions to the ethical assessment of assistive robots comes through the work of Aimee Van Wynsberghe, who has developed the Care-Centred Value-Sensitive Design framework (CCVSD) in order to incorporate care values into the design of assistive robots. Building upon the care ethics work of Joan Tronto, CCVSD has been able to highlight a number of ways in which care practices can undergo significant ethical transformations upon the introduction of assistive robots. In this paper, we too build upon the work of Tronto in an effort to enrich the CCVSD framework. Combining insights from Tronto’s work with the sociological concept of <i>emotional labor</i>, we argue that CCVSD remains underdeveloped with respect to the impact robots may have on the emotional labor required by paid care workers. Emotional labor consists of the managing of emotions and of emotional bonding, both of which signify a demanding yet potentially fulfilling dimension of paid care work. Because of the conditions in which care labor is performed nowadays, emotional labor is also susceptible to exploitation. While CCVSD can acknowledge some manifestations of unrecognized emotional labor in care delivery, it remains limited in capturing the structural conditions that fuel this vulnerability to exploitation. We propose that the idea of <i>privileged irresponsibility,</i> coined by Tronto, helps to understand how the exploitation of emotional labor can be prone to happen in roboticized care practices.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"56 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-11; DOI: 10.1007/s11948-024-00505-0
Di Di, Elaine Howard Ecklund
This research explores the perspectives of academic physicists from three national contexts concerning their roles and responsibilities within the realm of science. Using a dataset comprised of 211 interviews with scientists working in China, the United States, and the United Kingdom, the study seeks to explain whether and in what manner physicists conceptualize scientific ethics within a global or national framework. The empirical findings bring to light disparities across nations in the physicists' perceptions of what constitutes responsible mentorship and engagement in public service. These cross-national variations underscore the moral agency of physicists as they navigate the ethical standards embraced by the global scientific community vis-à-vis those that are specific to their respective national contexts. The study's empirical insights may carry significant implications for both policymakers and ethicists, underscoring the imperative of soliciting and acknowledging the perspectives of academic scientists working and living in disparate national contexts when formulating comprehensive science ethics frameworks. Such inclusive and context-aware approaches to shaping ethics in science can contribute to the cultivation of a more robust and universally relevant ethical foundation for the scientific community.
{"title":"Cross-National Variations in Scientific Ethics: Exploring Ethical Perspectives Among Scientists in China, the US, and the UK.","authors":"Di Di, Elaine Howard Ecklund","doi":"10.1007/s11948-024-00505-0","DOIUrl":"10.1007/s11948-024-00505-0","url":null,"abstract":"<p><p>This research explores the perspectives of academic physicists from three national contexts concerning their roles and responsibilities within the realm of science. Using a dataset comprised of 211 interviews with scientists working in China, the United States, and the United Kingdom, the study seeks to explain whether and in what manner physicists conceptualize scientific ethics within a global or national framework. The empirical findings bring to light disparities across nations in the physicists' perceptions of what constitutes responsible mentorship and engagement in public service. These cross-national variations underscore the moral agency of physicists as they navigate the ethical standards embraced by the global scientific community vis-à-vis those that are specific to their respective national contexts. The study's empirical insights may carry significant implications for both policymakers and ethicists, underscoring the imperative of soliciting and acknowledging the perspectives of academic scientists working and living in disparate national contexts when formulating comprehensive science ethics frameworks. Such inclusive and context-aware approaches to shaping ethics in science can contribute to the cultivation of a more robust and universally relevant ethical foundation for the scientific community.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"41"},"PeriodicalIF":2.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11390852/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142299555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-11; DOI: 10.1007/s11948-024-00500-5
Emmie Hine, Claudio Novelli, Mariarosaria Taddeo, Luciano Floridi
Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.
“Supporting Trustworthy AI Through Machine Unlearning.” Science and Engineering Ethics.
Pub Date: 2024-09-09; DOI: 10.1007/s11948-024-00503-2
Justin L. Hess, Elizabeth Sanders, Grant A. Fore, Martin Coleman, Mary Price, Sammy Nyarko, Brandon Sorge
Ethics is central to scientific and engineering research and practice, but a key challenge for promoting students’ ethical formation involves enhancing faculty members’ ability and confidence in embedding positive ethical learning experiences into their curriculums. To this end, this paper explores changes in faculty members’ approaches to and perceptions of ethics education following their participation in a multi-year interdisciplinary faculty learning community (FLC). We conducted and thematically analyzed semi-structured interviews with 11 participants following the second year of the FLC. Qualitative themes suggested that, following two years of FLC participation, faculty members (1) were better able to articulate their conceptualizations of ethics; (2) became cognizant of how personal experiences, views, and beliefs informed how they introduced ethics into their curriculum; and (3) developed and lived instructional principles that guided their ethics teaching. Results thus suggested that faculty members benefitted from exploring, discussing, and teaching ethics, which (in turn) enabled them to see new opportunities and become confident in integrating ethics into their courses in meaningful ways that aligned with their scholarly identities. Taken together, these data suggest faculty became agents of change for designing, implementing, and refining ethics-related instructional efforts in STEM. This work can guide others interested in designing faculty learning communities to promote instructional skill development, faculty members’ awareness of their ethical values, and their ability and agency to design and integrate ethics learning activities alongside departmental peers in an intentional and continuous manner.
{"title":"Transforming Ethics Education Through a Faculty Learning Community: “I’m Coming Around to Seeing Ethics as Being Maybe as Important as Calculus”","authors":"Justin L. Hess, Elizabeth Sanders, Grant A. Fore, Martin Coleman, Mary Price, Sammy Nyarko, Brandon Sorge","doi":"10.1007/s11948-024-00503-2","DOIUrl":"https://doi.org/10.1007/s11948-024-00503-2","url":null,"abstract":"<p>Ethics is central to scientific and engineering research and practice, but a key challenge for promoting students’ ethical formation involves enhancing faculty members’ ability and confidence in embedding positive ethical learning experiences into their curriculums. To this end, this paper explores changes in faculty members’ approaches to and perceptions of ethics education following their participation in a multi-year interdisciplinary faculty learning community (FLC). We conducted and thematically analyzed semi-structured interviews with 11 participants following the second year of the FLC. Qualitative themes suggested that, following two years of FLC participation, faculty members (1) were better able to articulate their conceptualizations of ethics; (2) became cognizant of how personal experiences, views, and beliefs informed how they introduced ethics into their curriculum; and (3) developed and lived instructional principles that guided their ethics teaching. Results thus suggested that faculty members benefitted from exploring, discussing, and teaching ethics, which (in turn) enabled them to see new opportunities and become confident in integrating ethics into their courses in meaningful ways that aligned with their scholarly identities. Taken together, these data suggest faculty became agents of change for designing, implementing, and refining ethics-related instructional efforts in STEM. This work can guide others interested in designing faculty learning communities to promote instructional skill development, faculty members’ awareness of their ethical values, and their ability and agency to design and integrate ethics learning activities alongside departmental peers in an intentional and continuous manner.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"29 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-29; DOI: 10.1007/s11948-024-00499-9
Logan L Watts, Sampoorna Nandi, Michelle Martín-Raugh, Rylee M Linhardt
The ethical decision making of researchers has historically been studied from an individualistic perspective. However, researchers rarely work alone, and they typically experience ethical dilemmas in a team context. In this mixed-methods study, 67 scientists and engineers working at a public R1 (very high research activity) university in the United States responded to a survey that asked whether they had experienced or observed an ethical dilemma while working in a research team. Among these, 30 respondents agreed to be interviewed about their experiences using a think-aloud protocol. A total of 40 unique ethical incidents were collected across these interviews. Qualitative data from interview transcripts were then systematically content-analyzed by multiple independent judges to quantify the overall ethicality of team decisions as well as several team characteristics, decision processes, and situational factors. The results demonstrated that team formalistic orientation, ethical championing, and the use of ethical decision strategies were all positively related to the overall ethicality of team decisions. Additionally, the relationship between ethical championing and overall team decision ethicality was moderated by psychological safety and moral intensity. Implications for future research and practice are discussed.
{"title":"Team Factors in Ethical Decision Making: A Content Analysis of Interviews with Scientists and Engineers.","authors":"Logan L Watts, Sampoorna Nandi, Michelle Martín-Raugh, Rylee M Linhardt","doi":"10.1007/s11948-024-00499-9","DOIUrl":"10.1007/s11948-024-00499-9","url":null,"abstract":"<p><p>The ethical decision making of researchers has historically been studied from an individualistic perspective. However, researchers rarely work alone, and they typically experience ethical dilemmas in a team context. In this mixed-methods study, 67 scientists and engineers working at a public R1 (very high research activity) university in the United States responded to a survey that asked whether they had experienced or observed an ethical dilemma while working in a research team. Among these, 30 respondents agreed to be interviewed about their experiences using a think-aloud protocol. A total of 40 unique ethical incidents were collected across these interviews. Qualitative data from interview transcripts were then systematically content-analyzed by multiple independent judges to quantify the overall ethicality of team decisions as well as several team characteristics, decision processes, and situational factors. The results demonstrated that team formalistic orientation, ethical championing, and the use of ethical decision strategies were all positively related to the overall ethicality of team decisions. Additionally, the relationship between ethical championing and overall team decision ethicality was moderated by psychological safety and moral intensity. Implications for future research and practice are discussed.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"39"},"PeriodicalIF":2.7,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362223/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-09; DOI: 10.1007/s11948-024-00502-3
Giovanni Spitale, Federico Germani, Nikola Biller-Andorno
This paper investigates the ethical implications of applying open science (OS) practices to disruptive technologies, such as generative AIs. Disruptive technologies, characterized by their scalability and paradigm-shifting nature, have the potential to generate significant global impact, and carry a risk of dual use. The tension arises between the moral duty of OS to promote societal benefit by democratizing knowledge and the risks associated with open dissemination of disruptive technologies. Van Rensselaer Potter's 'third bioethics' serves as the founding horizon for an ethical framework to govern these tensions. Through theoretical analysis and concrete examples, this paper explores how OS can contribute to a better future or pose threats. Finally, we provide an ethical framework for the intersection between OS and disruptive technologies that tries to go beyond the simple 'as open as possible' tenet, considering openness as an instrumental value for the pursuit of other ethical values rather than as a principle with prima facie moral significance.
{"title":"Disruptive Technologies and Open Science: How Open Should Open Science Be? A 'Third Bioethics' Ethical Framework.","authors":"Giovanni Spitale, Federico Germani, Nikola Biller-Andorno","doi":"10.1007/s11948-024-00502-3","DOIUrl":"10.1007/s11948-024-00502-3","url":null,"abstract":"<p><p>This paper investigates the ethical implications of applying open science (OS) practices on disruptive technologies, such as generative AIs. Disruptive technologies, characterized by their scalability and paradigm-shifting nature, have the potential to generate significant global impact, and carry a risk of dual use. The tension arises between the moral duty of OS to promote societal benefit by democratizing knowledge and the risks associated with open dissemination of disruptive technologies. Van Rennselaer Potter's 'third bioethics' serves as the founding horizon for an ethical framework to govern these tensions. Through theoretical analysis and concrete examples, this paper explores how OS can contribute to a better future or pose threats. Finally, we provide an ethical framework for the intersection between OS and disruptive technologies that tries to go beyond the simple 'as open as possible' tenet, considering openness as an instrumental value for the pursuit of other ethical values rather than as a principle with prima facie moral significance.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"36"},"PeriodicalIF":2.7,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11315697/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141908116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-09; DOI: 10.1007/s11948-024-00497-x
Joost Alleblas, Anna Melnyk, Ibo van de Poel
This paper is the introduction to a topical collection on "Changing Values and Energy Systems" that consists of six contributions that examine instances of value change regarding the design, use and operation of energy systems. This introduction discusses the need to consider values in the energy transition. It examines conceptions of value and value change and how values can be addressed in the design of energy systems. Value change in the context of energy and energy systems is a topic that has recently gained traction. Current, and past, energy transitions often focus on a limited range of values, such as sustainability, while leaving other salient values, such as energy democracy, or energy justice, out of the picture. Furthermore, these values become entrenched in the design of these systems: it is hard for stakeholders to address new concerns and values in the use and operation of these systems, leading to further costly transitions and systems' overhaul. To remedy this issue, value change in the context of energy systems needs to be better understood. We also need to think about further requirements for the governance, institutional and engineering design of energy systems to accommodate future value change. Openness, transparency, adaptiveness, flexibility and modularity emerge as new requirements within the current energy transition that need further exploration and scrutiny.
{"title":"Introduction to Topical Collection: Changing Values and Energy Systems.","authors":"Joost Alleblas, Anna Melnyk, Ibo van de Poel","doi":"10.1007/s11948-024-00497-x","DOIUrl":"10.1007/s11948-024-00497-x","url":null,"abstract":"<p><p>This paper is the introduction to a topical collection on \"Changing Values and Energy Systems\" that consists of six contributions that examine instances of value change regarding the design, use and operation of energy systems. This introduction discusses the need to consider values in the energy transition. It examines conceptions of value and value change and how values can be addressed in the design of energy systems. Value change in the context of energy and energy systems is a topic that has recently gained traction. Current, and past, energy transitions often focus on a limited range of values, such as sustainability, while leaving other salient values, such as energy democracy, or energy justice, out of the picture. Furthermore, these values become entrenched in the design of these systems: it is hard for stakeholders to address new concerns and values in the use and operation of these systems, leading to further costly transitions and systems' overhaul. To remedy this issue, value change in the context of energy systems needs to be better understood. We also need to think about further requirements for the governance, institutional and engineering design of energy systems to accommodate future value change. Openness, transparency, adaptiveness, flexibility and modularity emerge as new requirements within the current energy transition that need further exploration and scrutiny.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"38"},"PeriodicalIF":2.7,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11315695/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141908117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}