Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany
Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00509-w | Science and Engineering Ethics 30(6): 51 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486783/pdf/
Markus Kneer, Markus Christen
Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can justly be held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study, based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime, conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness to hold autonomous systems morally responsible, (2) people partially exculpate human agents who interact with such systems, and, more generally, (3) the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss what these results mean for the retribution gap and for other positions in the responsibility gap literature.
{"title":"Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.","authors":"Markus Kneer, Markus Christen","doi":"10.1007/s11948-024-00509-w","DOIUrl":"10.1007/s11948-024-00509-w","url":null,"abstract":"<p><p>Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness to hold autonomous systems morally responsible, (2) partially exculpate human agents when interacting with such systems, and that more generally (3) the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss what these results mean for potential implications of the retribution gap and other positions in the responsibility gap literature.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"51"},"PeriodicalIF":2.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486783/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hidden: A Baker's Dozen Ways in Which Research Reporting is Less Transparent than it Could be and Suggestions for Implementing Einstein's Dictum
Pub Date: 2024-10-16 | DOI: 10.1007/s11948-024-00517-w | Science and Engineering Ethics 30(6): 48 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11485062/pdf/
Abu Bakkar Siddique, Brian Shaw, Johanna Dwyer, David A Fields, Kevin Fontaine, David Hand, Randy Schekman, Jeffrey Alberts, Julie Locher, David B Allison
The tutelage of our mentors as scientists included the analogy that writing a good scientific paper was an exercise in storytelling that omitted unessential details that did not move the story forward or that detracted from the overall message. However, the advice not to get lost in the details had an important flaw. In science, it is the many details of the data themselves, and of the methods used to generate and analyze them, that give conclusions their probative meaning. Facts may sometimes slow or distract from the clarity, tidiness, intrigue, or flow of the narrative, but they are nevertheless important for assessing what was done, the trustworthiness of the science, and the meaning of the findings. Yet many critical elements and facts about research studies may be omitted from the narrative and thereby hidden from scholarly scrutiny. We describe a "baker's dozen" of shortfalls through which elements pertinent to evaluating the validity of scientific studies are sometimes hidden in reports of the work. Such shortfalls may be intentional or unintentional, or lie somewhere in between, and they may occur at the level of the individual, the institution, or the entire system. We conclude by proposing countermeasures to these shortfalls.
{"title":"Hidden: A Baker's Dozen Ways in Which Research Reporting is Less Transparent than it Could be and Suggestions for Implementing Einstein's Dictum.","authors":"Abu Bakkar Siddique, Brian Shaw, Johanna Dwyer, David A Fields, Kevin Fontaine, David Hand, Randy Schekman, Jeffrey Alberts, Julie Locher, David B Allison","doi":"10.1007/s11948-024-00517-w","DOIUrl":"10.1007/s11948-024-00517-w","url":null,"abstract":"<p><p>The tutelage of our mentors as scientists included the analogy that writing a good scientific paper was an exercise in storytelling that omitted unessential details that did not move the story forward or that detracted from the overall message. However, the advice to not get lost in the details had an important flaw. In science, it is the many details of the data themselves and the methods used to generate and analyze them that give conclusions their probative meaning. Facts may sometimes slow or distract from the clarity, tidiness, intrigue, or flow of the narrative, but nevertheless they are important for the assessment of what was done, the trustworthiness of the science, and the meaning of the findings. Nevertheless, many critical elements and facts about research studies may be omitted from the narrative and become hidden from scholarly scrutiny. We describe a \"baker's dozen\" shortfalls in which such elements that are pertinent to evaluating the validity of scientific studies are sometimes hidden in reports of the work. Such shortfalls may be intentional or unintentional or lie somewhere in between. Additionally, shortfalls may occur at the level of the individual or an institution or of the entire system itself. We conclude by proposing countermeasures to these shortfalls.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"48"},"PeriodicalIF":2.7,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11485062/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ethical Decision-Making for Self-Driving Vehicles: A Proposed Model & List of Value-Laden Terms that Warrant (Technical) Specification
Pub Date: 2024-10-10 | DOI: 10.1007/s11948-024-00513-0 | Science and Engineering Ethics 30(5): 47 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466986/pdf/
Franziska Poszler, Maximilian Geisslinger, Christoph Lütge
Self-driving vehicles (SDVs) will need to make decisions that carry ethical dimensions and are of normative significance. For example, by choosing a specific trajectory, they determine how risks are distributed among traffic participants. Accordingly, policymakers, standardization organizations and scholars have conceptualized what (shall) constitute(s) ethical decision-making for SDVs. Ultimately, these conceptualizations must be converted into specific system requirements to ensure proper technical implementation. This article therefore aims to translate critical requirements recently formulated in scholarly work, existing standards, regulatory drafts and guidelines into an explicit five-step ethical decision model for SDVs in hazardous situations. The model specifies a precise sequence of steps, indicates the guiding ethical principles that inform each step, and identifies a list of terms that demand further investigation and technical specification. By integrating ethical, legal and engineering considerations, we aim to contribute to the scholarly debate on computational ethics (particularly in autonomous driving) while offering practitioners in the automotive sector a decision-making process for SDVs that is technically viable, legally permissible, ethically grounded and adaptable to societal values. Assessing the actual impact, effectiveness and admissibility of the theories, terms and overall decision process sketched here will require empirical evaluation and testing of the complete decision-making model.
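The abstract does not enumerate the five steps, but the core mechanism it names — trajectory choice as risk distribution among traffic participants — can be made concrete. The following is a minimal sketch under stated assumptions, not the authors' model: the candidate trajectories, per-participant risk estimates, hard cap on individual risk, and maximin fallback are all hypothetical illustrations of exactly the kind of "value-laden terms" the paper says warrant technical specification.

    # Minimal sketch (hypothetical names and numbers, not the paper's model):
    # each candidate trajectory imposes an estimated risk on every traffic
    # participant; selection filters by a hard per-person cap, then minimizes
    # aggregate risk, falling back to maximin if no candidate passes the cap.
    from dataclasses import dataclass

    @dataclass
    class Trajectory:
        name: str
        risk_per_participant: dict[str, float]  # estimated probability of harm

    MAX_INDIVIDUAL_RISK = 0.05  # hypothetical cap; itself a value-laden term

    def permissible(t: Trajectory) -> bool:
        # No single participant may bear more risk than the cap allows.
        return all(r <= MAX_INDIVIDUAL_RISK
                   for r in t.risk_per_participant.values())

    def select_trajectory(candidates: list[Trajectory]) -> Trajectory:
        feasible = [t for t in candidates if permissible(t)]
        if not feasible:
            # Fallback: minimize the worst-off participant's risk (maximin).
            return min(candidates,
                       key=lambda t: max(t.risk_per_participant.values()))
        # Otherwise minimize total expected harm across all participants.
        return min(feasible,
                   key=lambda t: sum(t.risk_per_participant.values()))

    options = [
        Trajectory("brake_in_lane", {"pedestrian": 0.02, "occupant": 0.04}),
        Trajectory("swerve_left", {"pedestrian": 0.00, "occupant": 0.09}),
    ]
    print(select_trajectory(options).name)  # -> brake_in_lane

Even in this toy form, the cap value and the choice between aggregate and maximin selection encode contested ethical commitments — precisely the specification gap the article addresses.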
{"title":"Ethical Decision-Making for Self-Driving Vehicles: A Proposed Model & List of Value-Laden Terms that Warrant (Technical) Specification.","authors":"Franziska Poszler, Maximilian Geisslinger, Christoph Lütge","doi":"10.1007/s11948-024-00513-0","DOIUrl":"10.1007/s11948-024-00513-0","url":null,"abstract":"<p><p>Self-driving vehicles (SDVs) will need to make decisions that carry ethical dimensions and are of normative significance. For example, by choosing a specific trajectory, they determine how risks are distributed among traffic participants. Accordingly, policymakers, standardization organizations and scholars have conceptualized what (shall) constitute(s) ethical decision-making for SDVs. Eventually, these conceptualizations must be converted into specific system requirements to ensure proper technical implementation. Therefore, this article aims to translate critical requirements recently formulated in scholarly work, existing standards, regulatory drafts and guidelines into an explicit five-step ethical decision model for SDVs during hazardous situations. This model states a precise sequence of steps, indicates the guiding ethical principles that inform each step and points out a list of terms that demand further investigation and technical specification. By integrating ethical, legal and engineering considerations, we aim to contribute to the scholarly debate on computational ethics (particularly in autonomous driving) while offering practitioners in the automotive sector a decision-making process for SDVs that is technically viable, legally permissible, ethically grounded and adaptable to societal values. In the future, assessing the actual impact, effectiveness and admissibility of implementing the here sketched theories, terms and the overall decision process requires an empirical evaluation and testing of the overall decision-making model.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"47"},"PeriodicalIF":2.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466986/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence
Pub Date: 2024-10-09 | DOI: 10.1007/s11948-024-00507-y | Science and Engineering Ethics 30(5): 46 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464555/pdf/
Salla Westerstrand
The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for the future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they often seem to lack ethical justification. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to steer AI development in a more ethical direction. The goal is to broaden the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective on societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to ongoing developments in AI ethics, and proposes what principles offering a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.
{"title":"Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.","authors":"Salla Westerstrand","doi":"10.1007/s11948-024-00507-y","DOIUrl":"10.1007/s11948-024-00507-y","url":null,"abstract":"<p><p>The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack in ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggest, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development towards a more ethical direction. The goal is to contribute to the broadening of the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective of societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to the ongoing developments in AI ethics and gives a proposition of how principles that offer a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"46"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464555/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Rise of Tech Ethics: Approaches, Critique, and Future Pathways
Pub Date: 2024-10-09 | DOI: 10.1007/s11948-024-00510-3 | Science and Engineering Ethics 30(5): 45 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464588/pdf/
Nina Frahm, Kasper Schiølin
In this editorial to the Topical Collection "Innovation under Fire: The Rise of Ethics in Tech", we provide an overview of the papers gathered in the collection, reflect on similarities and differences in their analytical angles and methodological approaches, and carve out some of the cross-cutting themes that emerge from research on the production of 'Tech Ethics'. We identify two recurring ways in which 'Tech Ethics' is studied and forms of critique of it are developed, which we argue diverge primarily in their a priori commitments about what ethical tech is and how it should best be pursued. Beyond these differences, we observe how current research on 'Tech Ethics' evidences a close relationship between public controversies about technological innovation and the rise of ethics discourses and instruments for their settlement, producing legitimacy crises for 'Tech Ethics' in and of itself. 'Tech Ethics' is not only instrumental for governing technoscientific projects in the present but equally instrumental for the construction of socio-technical imaginaries and the essentialization of technological futures. We suggest that efforts to reach beyond single case studies are needed and call for collective reflection on joint issues and challenges to advance the critical project of 'Tech Ethics'.
{"title":"The Rise of Tech Ethics: Approaches, Critique, and Future Pathways.","authors":"Nina Frahm, Kasper Schiølin","doi":"10.1007/s11948-024-00510-3","DOIUrl":"10.1007/s11948-024-00510-3","url":null,"abstract":"<p><p>In this editorial to the Topical Collection \"Innovation under Fire: The Rise of Ethics in Tech\", we provide an overview of the papers gathered in the collection, reflect on similarities and differences in their analytical angles and methodological approaches, and carve out some of the cross-cutting themes that emerge from research on the production of 'Tech Ethics'. We identify two recurring ways through which 'Tech Ethics' are studied and forms of critique towards them developed, which we argue diverge primarily in their a priori commitments towards what ethical tech is and how it should best be pursued. Beyond these differences, we observe how current research on 'Tech Ethics' evidences a close relationship between public controversies about technological innovation and the rise of ethics discourses and instruments for their settlement, producing legitimacy crises for 'Tech Ethics' in and of itself. 'Tech Ethics' is not only instrumental for governing technoscientific projects in the present but is equally instrumental for the construction of socio-technical imaginaries and the essentialization of technological futures. We suggest that efforts to reach beyond single case-studies are needed and call for collective reflection on joint issues and challenges to advance the critical project of 'Tech Ethics'.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"45"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464588/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond Anthropocentrism: The Moral and Strategic Philosophy behind Russell and Burch's 3Rs in Animal Experimentation
Pub Date: 2024-09-11 | DOI: 10.1007/s11948-024-00504-1
Nico Dario Müller
The 3Rs framework in animal experimentation – "replace, reduce, refine" – has been alleged to be expressive of anthropocentrism, the view that only humans are directly morally relevant. After all, the 3Rs safeguard animal welfare only as far as given human research objectives permit, effectively prioritizing human use interests over animal interests. This article acknowledges this prioritization but argues that the characterization as anthropocentric is inaccurate. In fact, the 3Rs prioritize research purposes even more strongly than an ethical anthropocentrist would. Drawing on the writings of Universities Federation for Animal Welfare (UFAW) founder Charles W. Hume, who employed Russell and Burch, it is argued that the 3Rs originally arose from an animal-centered ethic that was, however, restricted by an organizational strategy aiming at the voluntary cooperation of animal researchers. Research purposes thus had to be accepted as given. While this explains why the 3Rs focus narrowly on humane method selection, not on encouraging animal-free question selection in the first place, it suggests that governments should (also) focus on the latter if they recognize animals as deserving protection for their own sake.
{"title":"Beyond Anthropocentrism: The Moral and Strategic Philosophy behind Russell and Burch’s 3Rs in Animal Experimentation","authors":"Nico Dario Müller","doi":"10.1007/s11948-024-00504-1","DOIUrl":"https://doi.org/10.1007/s11948-024-00504-1","url":null,"abstract":"<p>The 3Rs framework in animal experimentation– “replace, reduce, refine” – has been alleged to be expressive of anthropocentrism, the view that only humans are directly morally relevant. After all, the 3Rs safeguard animal welfare only as far as given human research objectives permit, effectively prioritizing human use interests over animal interests. This article acknowledges this prioritization, but argues that the characterization as anthropocentric is inaccurate. In fact, the 3Rs prioritize research purposes even more strongly than an ethical anthropocentrist would. Drawing on the writings of Universities Federation for Animal Welfare (UFAW) founder Charles W. Hume, who employed Russell and Burch, it is argued that the 3Rs originally arose from an animal-centered ethic which was however restricted by an organizational strategy aiming at the voluntary cooperation of animal researchers. Research purposes thus had to be accepted as given. While this explains why the 3Rs focus narrowly on humane method selection, not on encouraging animal-free question selection in the first place, it suggests that governments should (also) focus on the latter if they recognize animals as deserving protection for their own sake.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"389 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotional Labor and the Problem of Exploitation in Roboticized Care Practices: Enriching the Framework of Care Centred Value Sensitive Design
Pub Date: 2024-09-11 | DOI: 10.1007/s11948-024-00511-2
Belén Liedo, Janna Van Grunsven, Lavinia Marin
Care ethics has been advanced as a suitable framework for evaluating the ethical significance of assistive robotics. One of the most prominent care-ethical contributions to the ethical assessment of assistive robots comes through the work of Aimee Van Wynsberghe, who developed the Care-Centred Value-Sensitive Design framework (CCVSD) in order to incorporate care values into the design of assistive robots. Building upon the care ethics work of Joan Tronto, CCVSD has been able to highlight a number of ways in which care practices can undergo significant ethical transformations upon the introduction of assistive robots. In this paper, we too build upon the work of Tronto in an effort to enrich the CCVSD framework. Combining insights from Tronto's work with the sociological concept of emotional labor, we argue that CCVSD remains underdeveloped with respect to the impact robots may have on the emotional labor required of paid care workers. Emotional labor consists of the managing of emotions and of emotional bonding, both of which signify a demanding yet potentially fulfilling dimension of paid care work. Because of the conditions under which care labor is performed nowadays, emotional labor is also susceptible to exploitation. While CCVSD can acknowledge some manifestations of unrecognized emotional labor in care delivery, it remains limited in capturing the structural conditions that fuel this vulnerability to exploitation. We propose that the idea of privileged irresponsibility, coined by Tronto, helps explain how exploitation of emotional labor is liable to occur in roboticized care practices.
{"title":"Emotional Labor and the Problem of Exploitation in Roboticized Care Practices: Enriching the Framework of Care Centred Value Sensitive Design","authors":"Belén Liedo, Janna Van Grunsven, Lavinia Marin","doi":"10.1007/s11948-024-00511-2","DOIUrl":"https://doi.org/10.1007/s11948-024-00511-2","url":null,"abstract":"<p>Care ethics has been advanced as a suitable framework for evaluating the ethical significance of assistive robotics. One of the most prominent care ethical contributions to the ethical assessment of assistive robots comes through the work of Aimee Van Wynsberghe, who has developed the Care-Centred Value-Sensitive Design framework (CCVSD) in order to incorporate care values into the design of assistive robots. Building upon the care ethics work of Joan Tronto, CCVSD has been able to highlight a number of ways in which care practices can undergo significant ethical transformations upon the introduction of assistive robots. In this paper, we too build upon the work of Tronto in an effort to enrich the CCVSD framework. Combining insights from Tronto’s work with the sociological concept of <i>emotional labor</i>, we argue that CCVSD remains underdeveloped with respect to the impact robots may have on the emotional labor required by paid care workers. Emotional labor consists of the managing of emotions and of emotional bonding, both of which signify a demanding yet potentially fulfilling dimension of paid care work. Because of the conditions in which care labor is performed nowadays, emotional labor is also susceptible to exploitation. While CCVSD can acknowledge some manifestations of unrecognized emotional labor in care delivery, it remains limited in capturing the structural conditions that fuel this vulnerability to exploitation. We propose that the idea of <i>privileged irresponsibility,</i> coined by Tronto, helps to understand how the exploitation of emotional labor can be prone to happen in roboticized care practices.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"56 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-National Variations in Scientific Ethics: Exploring Ethical Perspectives Among Scientists in China, the US, and the UK
Pub Date: 2024-09-11 | DOI: 10.1007/s11948-024-00505-0 | Science and Engineering Ethics 30(5): 41 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11390852/pdf/
Di Di, Elaine Howard Ecklund
This research explores the perspectives of academic physicists from three national contexts concerning their roles and responsibilities within the realm of science. Using a dataset comprising 211 interviews with scientists working in China, the United States, and the United Kingdom, the study seeks to explain whether and in what manner physicists conceptualize scientific ethics within a global or national framework. The empirical findings bring to light disparities across nations in the physicists' perceptions of what constitutes responsible mentorship and engagement in public service. These cross-national variations underscore the moral agency of physicists as they navigate the ethical standards embraced by the global scientific community vis-à-vis those that are specific to their respective national contexts. The study's empirical insights may carry significant implications for both policymakers and ethicists, underscoring the imperative of soliciting and acknowledging the perspectives of academic scientists working and living in disparate national contexts when formulating comprehensive science ethics frameworks. Such inclusive and context-aware approaches to shaping ethics in science can contribute to the cultivation of a more robust and universally relevant ethical foundation for the scientific community.
{"title":"Cross-National Variations in Scientific Ethics: Exploring Ethical Perspectives Among Scientists in China, the US, and the UK.","authors":"Di Di, Elaine Howard Ecklund","doi":"10.1007/s11948-024-00505-0","DOIUrl":"10.1007/s11948-024-00505-0","url":null,"abstract":"<p><p>This research explores the perspectives of academic physicists from three national contexts concerning their roles and responsibilities within the realm of science. Using a dataset comprised of 211 interviews with scientists working in China, the United States, and the United Kingdom, the study seeks to explain whether and in what manner physicists conceptualize scientific ethics within a global or national framework. The empirical findings bring to light disparities across nations in the physicists' perceptions of what constitutes responsible mentorship and engagement in public service. These cross-national variations underscore the moral agency of physicists as they navigate the ethical standards embraced by the global scientific community vis-à-vis those that are specific to their respective national contexts. The study's empirical insights may carry significant implications for both policymakers and ethicists, underscoring the imperative of soliciting and acknowledging the perspectives of academic scientists working and living in disparate national contexts when formulating comprehensive science ethics frameworks. Such inclusive and context-aware approaches to shaping ethics in science can contribute to the cultivation of a more robust and universally relevant ethical foundation for the scientific community.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"41"},"PeriodicalIF":2.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11390852/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142299555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting Trustworthy AI Through Machine Unlearning
Pub Date: 2024-09-11 | DOI: 10.1007/s11948-024-00500-5
Emmie Hine, Claudio Novelli, Mariarosaria Taddeo, Luciano Floridi
Machine unlearning (MU) is often analyzed in terms of how it can facilitate the "right to be forgotten." In this commentary, we show that MU can support the OECD's five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.
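For readers unfamiliar with the term: the simplest exact form of machine unlearning deletes the data subject's records and retrains the model; shard-based schemes such as SISA reduce that cost by retraining only the affected shard. Below is a minimal sketch of the naive exact variant — the data layout and function names are illustrative assumptions, not drawn from the commentary.

    # Naive exact unlearning: drop the subject's records, retrain from scratch.
    # The retrained model is exactly the model that would have existed had the
    # subject never contributed data. (All names here are illustrative only.)
    from sklearn.linear_model import LogisticRegression

    def train(records):
        X = [r["features"] for r in records]
        y = [r["label"] for r in records]
        return LogisticRegression().fit(X, y)

    def unlearn(records, subject_id):
        # Remove every record belonging to the subject, then retrain.
        remaining = [r for r in records if r["subject"] != subject_id]
        return remaining, train(remaining)

    records = [
        {"subject": "a", "features": [0.1, 1.0], "label": 0},
        {"subject": "b", "features": [0.2, 0.9], "label": 0},
        {"subject": "c", "features": [0.9, 0.2], "label": 1},
        {"subject": "d", "features": [0.8, 0.1], "label": 1},
    ]
    model = train(records)
    records, model = unlearn(records, "a")  # honoring an erasure request

Full retraining makes erasure exact but costly at scale; cheaper approximate methods trade away that guarantee, which is arguably one source of the implementation risks the commentary alludes to.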
{"title":"Supporting Trustworthy AI Through Machine Unlearning","authors":"Emmie Hine, Claudio Novelli, Mariarosaria Taddeo, Luciano Floridi","doi":"10.1007/s11948-024-00500-5","DOIUrl":"https://doi.org/10.1007/s11948-024-00500-5","url":null,"abstract":"<p>Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"17 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transforming Ethics Education Through a Faculty Learning Community: "I'm Coming Around to Seeing Ethics as Being Maybe as Important as Calculus"
Pub Date: 2024-09-09 | DOI: 10.1007/s11948-024-00503-2
Justin L. Hess, Elizabeth Sanders, Grant A. Fore, Martin Coleman, Mary Price, Sammy Nyarko, Brandon Sorge
Ethics is central to scientific and engineering research and practice, but a key challenge for promoting students’ ethical formation involves enhancing faculty members’ ability and confidence in embedding positive ethical learning experiences into their curriculums. To this end, this paper explores changes in faculty members’ approaches to and perceptions of ethics education following their participation in a multi-year interdisciplinary faculty learning community (FLC). We conducted and thematically analyzed semi-structured interviews with 11 participants following the second year of the FLC. Qualitative themes suggested that, following two years of FLC participation, faculty members (1) were better able to articulate their conceptualizations of ethics; (2) became cognizant of how personal experiences, views, and beliefs informed how they introduced ethics into their curriculum; and (3) developed and lived instructional principles that guided their ethics teaching. Results thus suggested that faculty members benefitted from exploring, discussing, and teaching ethics, which (in turn) enabled them to see new opportunities and become confident in integrating ethics into their courses in meaningful ways that aligned with their scholarly identities. Taken together, these data suggest faculty became agents of change for designing, implementing, and refining ethics-related instructional efforts in STEM. This work can guide others interested in designing faculty learning communities to promote instructional skill development, faculty members’ awareness of their ethical values, and their ability and agency to design and integrate ethics learning activities alongside departmental peers in an intentional and continuous manner.
{"title":"Transforming Ethics Education Through a Faculty Learning Community: “I’m Coming Around to Seeing Ethics as Being Maybe as Important as Calculus”","authors":"Justin L. Hess, Elizabeth Sanders, Grant A. Fore, Martin Coleman, Mary Price, Sammy Nyarko, Brandon Sorge","doi":"10.1007/s11948-024-00503-2","DOIUrl":"https://doi.org/10.1007/s11948-024-00503-2","url":null,"abstract":"<p>Ethics is central to scientific and engineering research and practice, but a key challenge for promoting students’ ethical formation involves enhancing faculty members’ ability and confidence in embedding positive ethical learning experiences into their curriculums. To this end, this paper explores changes in faculty members’ approaches to and perceptions of ethics education following their participation in a multi-year interdisciplinary faculty learning community (FLC). We conducted and thematically analyzed semi-structured interviews with 11 participants following the second year of the FLC. Qualitative themes suggested that, following two years of FLC participation, faculty members (1) were better able to articulate their conceptualizations of ethics; (2) became cognizant of how personal experiences, views, and beliefs informed how they introduced ethics into their curriculum; and (3) developed and lived instructional principles that guided their ethics teaching. Results thus suggested that faculty members benefitted from exploring, discussing, and teaching ethics, which (in turn) enabled them to see new opportunities and become confident in integrating ethics into their courses in meaningful ways that aligned with their scholarly identities. Taken together, these data suggest faculty became agents of change for designing, implementing, and refining ethics-related instructional efforts in STEM. This work can guide others interested in designing faculty learning communities to promote instructional skill development, faculty members’ awareness of their ethical values, and their ability and agency to design and integrate ethics learning activities alongside departmental peers in an intentional and continuous manner.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"29 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}