Authorship and Citizen Science: Seven Heuristic Rules
Per Sandin, Patrik Baard, William Bülow, Gert Helgesson
Pub Date: 2024-10-29 | DOI: 10.1007/s11948-024-00516-x
Science and Engineering Ethics 30(6): 53 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11522116/pdf/
Citizen science (CS) is an umbrella term for research with significant contributions from volunteers. Those volunteers can occupy a hybrid role, being both 'researcher' and 'subject' at the same time. This has repercussions for questions about responsibility and credit, e.g. pertaining to the issue of authorship. In this paper, we first review some existing guidelines for authorship and their applicability to CS. Second, we assess the claim that the guidelines from the International Committee of Medical Journal Editors (ICMJE), known as 'the Vancouver guidelines', may lead to the exclusion of deserving citizen scientists as authors. We maintain that the idea of including citizen scientists as authors is supported by at least two arguments: transparency and fairness. Third, we argue that it might be plausible to include groups as authors in CS. Fourth and finally, we offer a heuristic list of seven recommendations to be considered when deciding whom to include as an author of a CS publication.
{"title":"Authorship and Citizen Science: Seven Heuristic Rules.","authors":"Per Sandin, Patrik Baard, William Bülow, Gert Helgesson","doi":"10.1007/s11948-024-00516-x","DOIUrl":"10.1007/s11948-024-00516-x","url":null,"abstract":"<p><p>Citizen science (CS) is an umbrella term for research with a significant amount of contributions from volunteers. Those volunteers can occupy a hybrid role, being both 'researcher' and 'subject' at the same time. This has repercussions for questions about responsibility and credit, e.g. pertaining to the issue of authorship. In this paper, we first review some existing guidelines for authorship and their applicability to CS. Second, we assess the claim that the guidelines from the International Committee of Medical Journal Editors (ICMJE), known as 'the Vancouver guidelines', may lead to exclusion of deserving citizen scientists as authors. We maintain that the idea of including citizen scientists as authors is supported by at least two arguments: transparency and fairness. Third, we argue that it might be plausible to include groups as authors in CS. Fourth and finally, we offer a heuristic list of seven recommendations to be considered when deciding about whom to include as an author of a CS publication.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"53"},"PeriodicalIF":2.7,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11522116/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Confucian Algorithm for Autonomous Vehicles
Tingting Sui, Sebastian Sunday Grève
Pub Date: 2024-10-21 | DOI: 10.1007/s11948-024-00514-z
Science and Engineering Ethics 30(6): 52 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11493828/pdf/
Any moral algorithm for autonomous vehicles must provide a practical solution to moral problems of the trolley type, in which all possible courses of action will result in damage, injury, or death. This article discusses a hitherto neglected variety of this type of problem, based on a recent psychological study whose results are reported here. It argues that the most adequate solution to this problem will be achieved by a moral algorithm that is based on Confucian ethics. In addition to this philosophical and psychological discussion, the article outlines the mathematics, engineering, and legal implementation of a possible Confucian algorithm. The proposed Confucian algorithm is based on the idea of making it possible to set an autonomous vehicle to allow an increased level of protection for selected people. It is shown that the proposed algorithm can be implemented alongside other moral algorithms, using either the framework of personal ethics settings or that of mandatory ethics settings.
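The central mechanism, a setting that raises the protection level for selected people, can be gestured at in a few lines of code. The sketch below assumes a simple multiplicative weighted-harm objective; the names (Person, choose_trajectory, protection_weight) and the weighting scheme are illustrative assumptions, not the paper's actual mathematics or its treatment of personal versus mandatory ethics settings.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Person:
    person_id: str
    protected: bool  # flagged for increased protection, e.g. via an ethics setting

def expected_harm(trajectory: Dict, person: Person) -> float:
    """Placeholder risk model: expected harm to this person on this trajectory."""
    return trajectory["harm"].get(person.person_id, 0.0)

def choose_trajectory(trajectories: List[Dict], people: List[Person],
                      protection_weight: float = 2.0) -> Dict:
    """Pick the trajectory minimizing protection-weighted total expected harm.

    Protected people count protection_weight times in the objective -- one
    simple way to realize an increased level of protection for selected people.
    """
    def cost(traj: Dict) -> float:
        return sum(
            (protection_weight if p.protected else 1.0) * expected_harm(traj, p)
            for p in people
        )
    return min(trajectories, key=cost)

# Example: two candidate maneuvers, one person flagged for protection.
people = [Person("pedestrian", protected=True), Person("passenger", protected=False)]
trajectories = [
    {"id": "swerve", "harm": {"pedestrian": 0.1, "passenger": 0.4}},
    {"id": "brake", "harm": {"pedestrian": 0.3, "passenger": 0.1}},
]
print(choose_trajectory(trajectories, people)["id"])  # -> "swerve" (0.6 < 0.7)
```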
{"title":"A Confucian Algorithm for Autonomous Vehicles.","authors":"Tingting Sui, Sebastian Sunday Grève","doi":"10.1007/s11948-024-00514-z","DOIUrl":"10.1007/s11948-024-00514-z","url":null,"abstract":"<p><p>Any moral algorithm for autonomous vehicles must provide a practical solution to moral problems of the trolley type, in which all possible courses of action will result in damage, injury, or death. This article discusses a hitherto neglected variety of this type of problem, based on a recent psychological study whose results are reported here. It argues that the most adequate solution to this problem will be achieved by a moral algorithm that is based on Confucian ethics. In addition to this philosophical and psychological discussion, the article outlines the mathematics, engineering, and legal implementation of a possible Confucian algorithm. The proposed Confucian algorithm is based on the idea of making it possible to set an autonomous vehicle to allow an increased level of protection for selected people. It is shown that the proposed algorithm can be implemented alongside other moral algorithms, using either the framework of personal ethics settings or that of mandatory ethics settings.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"52"},"PeriodicalIF":2.7,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11493828/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Rubik's Cube-Inspired Pedagogical Tool for Teaching and Learning Engineering Ethics
Yuqi Peng
Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00506-z
Science and Engineering Ethics 30(6): 50 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486784/pdf/
To facilitate engineering students' understanding of engineering ethics and support instructors in developing course content, this study introduces an innovative educational tool drawing inspiration from the Rubik's Cube metaphor. This Engineering Ethics Knowledge Rubik's Cube (EEKRC) integrates six key aspects (ethical theories, codes of ethics, ethical issues, engineering disciplines, stakeholders, and life cycle) identified through an analysis of engineering ethics textbooks and courses across the United States, Singapore, and China. This analysis underpins the selection of the six aspects, reflecting the shared and unique elements of engineering ethics education in these regions. In an engineering ethics course, the EEKRC serves multiple functions: it provides visual support for grasping engineering ethics concepts, acts as a pedagogical guide for both experienced and inexperienced educators in course design, offers a complementary assessment method for evaluating students' learning outcomes, and serves as a reference for students engaging in ethical analysis.
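Since the six aspects are explicitly enumerated, one can sketch how an instructor might encode them, for instance to track whether a case discussion has touched every face of the cube. The enum and tagging scheme below are illustrative assumptions, not part of the EEKRC itself.

```python
from enum import Enum

class EEKRCAspect(Enum):
    ETHICAL_THEORIES = "ethical theories"
    CODES_OF_ETHICS = "codes of ethics"
    ETHICAL_ISSUES = "ethical issues"
    ENGINEERING_DISCIPLINES = "engineering disciplines"
    STAKEHOLDERS = "stakeholders"
    LIFE_CYCLE = "life cycle"

# Tag a case discussion with the aspects it covered, then report the
# faces of the cube that remain untouched.
covered = {EEKRCAspect.STAKEHOLDERS, EEKRCAspect.ETHICAL_ISSUES}
missing = set(EEKRCAspect) - covered
print(sorted(a.value for a in missing))
```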
{"title":"A Rubik's Cube-Inspired Pedagogical Tool for Teaching and Learning Engineering Ethics.","authors":"Yuqi Peng","doi":"10.1007/s11948-024-00506-z","DOIUrl":"https://doi.org/10.1007/s11948-024-00506-z","url":null,"abstract":"<p><p>To facilitate engineering students' understanding of engineering ethics and support instructors in developing course content, this study introduces an innovative educational tool drawing inspiration from the Rubik's Cube metaphor. This Engineering Ethics Knowledge Rubik's Cube (EEKRC) integrates six key aspects-ethical theories, codes of ethics, ethical issues, engineering disciplines, stakeholders, and life cycle-identified through an analysis of engineering ethics textbooks and courses across the United States, Singapore, and China. This analysis underpins the selection of the six aspects, reflecting the shared and unique elements of engineering ethics education in these regions. In an engineering ethics course, the EEKRC serves multiple functions: it provides visual support for grasping engineering ethics concepts, acts as a pedagogical guide for both experienced and inexperienced educators in course design, offers a complementary assessment method for evaluating students learning outcomes, and assists as a reference for students engaging in ethical analysis.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"50"},"PeriodicalIF":2.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486784/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Patient Preferences Concerning Humanoid Features in Healthcare Robots
Dane Leigh Gogoshin
Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00508-x
Science and Engineering Ethics 30(6): 49 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486771/pdf/
In this paper, I argue that patient preferences concerning human physical attributes associated with race, culture, and gender should be excluded from public healthcare robot design. On one hand, healthcare should be (objective, universal) needs-oriented. On the other hand, patient well-being (the aim of healthcare) is, in concrete ways, tied to preferences, as is patient satisfaction (a core WHO value). The shift toward patient-centered healthcare places patient preferences into the spotlight. Accordingly, the design of healthcare technology cannot simply disregard patient preferences, even those which are potentially morally problematic. A method for handling such preferences at the design level is thus imperative. By way of uncontroversial starting points, I argue that the priority of the public healthcare system is the fulfillment of patients' therapeutic needs, among which certain potentially morally problematic preferences may be counted. There are further ethical considerations, however, which, taken together, suggest that the potential benefits of upholding these preferences are outweighed by the potential harms.
{"title":"Patient Preferences Concerning Humanoid Features in Healthcare Robots.","authors":"Dane Leigh Gogoshin","doi":"10.1007/s11948-024-00508-x","DOIUrl":"https://doi.org/10.1007/s11948-024-00508-x","url":null,"abstract":"<p><p>In this paper, I argue that patient preferences concerning human physical attributes associated with race, culture, and gender should be excluded from public healthcare robot design. On one hand, healthcare should be (objective, universal) needs oriented. On the other hand, patient well-being (the aim of healthcare) is, in concrete ways, tied to preferences, as is patient satisfaction (a core WHO value). The shift toward patient-centered healthcare places patient preferences into the spotlight. Accordingly, the design of healthcare technology cannot simply disregard patient preferences, even those which are potentially morally problematic. A method for handling these at the design level is thus imperative. By way of uncontroversial starting points, I argue that the priority of the public healthcare system is the fulfillment of patients' therapeutic needs, among which certain potentially morally problematic preferences may be counted. There are further ethical considerations, however, which, taken together, suggest that the potential benefits of upholding these preferences are outweighed by the potential harms.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"49"},"PeriodicalIF":2.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486771/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany
Markus Kneer, Markus Christen
Pub Date: 2024-10-17 | DOI: 10.1007/s11948-024-00509-w
Science and Engineering Ethics 30(6): 51 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486783/pdf/
Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness to hold autonomous systems morally responsible, (2) partially exculpate human agents when interacting with such systems, and that, more generally, (3) the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss the implications of these results for the retribution gap and for other positions in the responsibility gap literature.
{"title":"Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.","authors":"Markus Kneer, Markus Christen","doi":"10.1007/s11948-024-00509-w","DOIUrl":"https://doi.org/10.1007/s11948-024-00509-w","url":null,"abstract":"<p><p>Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness to hold autonomous systems morally responsible, (2) partially exculpate human agents when interacting with such systems, and that more generally (3) the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss what these results mean for potential implications of the retribution gap and other positions in the responsibility gap literature.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"51"},"PeriodicalIF":2.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486783/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hidden: A Baker's Dozen Ways in Which Research Reporting is Less Transparent than it Could be and Suggestions for Implementing Einstein's Dictum
Abu Bakkar Siddique, Brian Shaw, Johanna Dwyer, David A Fields, Kevin Fontaine, David Hand, Randy Schekman, Jeffrey Alberts, Julie Locher, David B Allison
Pub Date: 2024-10-16 | DOI: 10.1007/s11948-024-00517-w
Science and Engineering Ethics 30(6): 48 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11485062/pdf/
The tutelage of our mentors as scientists included the analogy that writing a good scientific paper was an exercise in storytelling that omitted unessential details that did not move the story forward or that detracted from the overall message. However, the advice not to get lost in the details had an important flaw. In science, it is the many details of the data themselves and the methods used to generate and analyze them that give conclusions their probative meaning. Facts may sometimes slow or distract from the clarity, tidiness, intrigue, or flow of the narrative, but nevertheless they are important for the assessment of what was done, the trustworthiness of the science, and the meaning of the findings. Nevertheless, many critical elements and facts about research studies may be omitted from the narrative and become hidden from scholarly scrutiny. We describe a "baker's dozen" of shortfalls through which elements pertinent to evaluating the validity of scientific studies are sometimes hidden in reports of the work. Such shortfalls may be intentional, unintentional, or somewhere in between. Additionally, shortfalls may occur at the level of the individual, of an institution, or of the entire system itself. We conclude by proposing countermeasures to these shortfalls.
{"title":"Hidden: A Baker's Dozen Ways in Which Research Reporting is Less Transparent than it Could be and Suggestions for Implementing Einstein's Dictum.","authors":"Abu Bakkar Siddique, Brian Shaw, Johanna Dwyer, David A Fields, Kevin Fontaine, David Hand, Randy Schekman, Jeffrey Alberts, Julie Locher, David B Allison","doi":"10.1007/s11948-024-00517-w","DOIUrl":"10.1007/s11948-024-00517-w","url":null,"abstract":"<p><p>The tutelage of our mentors as scientists included the analogy that writing a good scientific paper was an exercise in storytelling that omitted unessential details that did not move the story forward or that detracted from the overall message. However, the advice to not get lost in the details had an important flaw. In science, it is the many details of the data themselves and the methods used to generate and analyze them that give conclusions their probative meaning. Facts may sometimes slow or distract from the clarity, tidiness, intrigue, or flow of the narrative, but nevertheless they are important for the assessment of what was done, the trustworthiness of the science, and the meaning of the findings. Nevertheless, many critical elements and facts about research studies may be omitted from the narrative and become hidden from scholarly scrutiny. We describe a \"baker's dozen\" shortfalls in which such elements that are pertinent to evaluating the validity of scientific studies are sometimes hidden in reports of the work. Such shortfalls may be intentional or unintentional or lie somewhere in between. Additionally, shortfalls may occur at the level of the individual or an institution or of the entire system itself. We conclude by proposing countermeasures to these shortfalls.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"48"},"PeriodicalIF":2.7,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11485062/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ethical Decision-Making for Self-Driving Vehicles: A Proposed Model & List of Value-Laden Terms that Warrant (Technical) Specification
Franziska Poszler, Maximilian Geisslinger, Christoph Lütge
Pub Date: 2024-10-10 | DOI: 10.1007/s11948-024-00513-0
Science and Engineering Ethics 30(5): 47 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466986/pdf/
Self-driving vehicles (SDVs) will need to make decisions that carry ethical dimensions and are of normative significance. For example, by choosing a specific trajectory, they determine how risks are distributed among traffic participants. Accordingly, policymakers, standardization organizations and scholars have conceptualized what (shall) constitute(s) ethical decision-making for SDVs. Eventually, these conceptualizations must be converted into specific system requirements to ensure proper technical implementation. Therefore, this article aims to translate critical requirements recently formulated in scholarly work, existing standards, regulatory drafts and guidelines into an explicit five-step ethical decision model for SDVs during hazardous situations. This model states a precise sequence of steps, indicates the guiding ethical principles that inform each step and points out a list of terms that demand further investigation and technical specification. By integrating ethical, legal and engineering considerations, we aim to contribute to the scholarly debate on computational ethics (particularly in autonomous driving) while offering practitioners in the automotive sector a decision-making process for SDVs that is technically viable, legally permissible, ethically grounded and adaptable to societal values. In the future, assessing the actual impact, effectiveness and admissibility of the theories and terms sketched here will require empirical evaluation and testing of the overall decision-making model.
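The abstract does not name the five steps, so the staged pipeline below is purely illustrative: a minimal sketch of what a sequenced, principle-guided decision process for hazardous situations might look like. The stage names (candidate generation, legality filtering, risk-distribution scoring, selection, audit logging) are assumptions standing in for the paper's own sequence.

```python
from typing import Dict, List

Trajectory = Dict  # placeholder: a candidate trajectory with risk annotations

def generate_candidates(state: Dict) -> List[Trajectory]:
    """Stage 1 (assumed): enumerate feasible trajectories in the hazard."""
    return state["candidates"]

def filter_hard_constraints(cands: List[Trajectory]) -> List[Trajectory]:
    """Stage 2 (assumed): discard trajectories violating hard legal constraints."""
    return [t for t in cands if t.get("legal", True)]

def risk_distribution_score(t: Trajectory) -> float:
    """Stage 3 (assumed): aggregate how risk is distributed among traffic
    participants; lower is better."""
    return sum(t.get("risks", {}).values())

def select_trajectory(cands: List[Trajectory]) -> Trajectory:
    """Stage 4 (assumed): choose the best-scoring admissible trajectory."""
    return min(cands, key=risk_distribution_score)

def log_for_audit(t: Trajectory) -> Trajectory:
    """Stage 5 (assumed): record the decision so it can be audited later."""
    print("selected trajectory:", t.get("id"))
    return t

def decide(state: Dict) -> Trajectory:
    return log_for_audit(select_trajectory(filter_hard_constraints(generate_candidates(state))))

decide({"candidates": [
    {"id": "A", "legal": True, "risks": {"car2": 0.2, "cyclist": 0.1}},
    {"id": "B", "legal": False, "risks": {"cyclist": 0.05}},
]})  # -> selects "A"; "B" is removed by the legality stage
```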
{"title":"Ethical Decision-Making for Self-Driving Vehicles: A Proposed Model & List of Value-Laden Terms that Warrant (Technical) Specification.","authors":"Franziska Poszler, Maximilian Geisslinger, Christoph Lütge","doi":"10.1007/s11948-024-00513-0","DOIUrl":"https://doi.org/10.1007/s11948-024-00513-0","url":null,"abstract":"<p><p>Self-driving vehicles (SDVs) will need to make decisions that carry ethical dimensions and are of normative significance. For example, by choosing a specific trajectory, they determine how risks are distributed among traffic participants. Accordingly, policymakers, standardization organizations and scholars have conceptualized what (shall) constitute(s) ethical decision-making for SDVs. Eventually, these conceptualizations must be converted into specific system requirements to ensure proper technical implementation. Therefore, this article aims to translate critical requirements recently formulated in scholarly work, existing standards, regulatory drafts and guidelines into an explicit five-step ethical decision model for SDVs during hazardous situations. This model states a precise sequence of steps, indicates the guiding ethical principles that inform each step and points out a list of terms that demand further investigation and technical specification. By integrating ethical, legal and engineering considerations, we aim to contribute to the scholarly debate on computational ethics (particularly in autonomous driving) while offering practitioners in the automotive sector a decision-making process for SDVs that is technically viable, legally permissible, ethically grounded and adaptable to societal values. In the future, assessing the actual impact, effectiveness and admissibility of implementing the here sketched theories, terms and the overall decision process requires an empirical evaluation and testing of the overall decision-making model.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"47"},"PeriodicalIF":2.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466986/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence
Salla Westerstrand
Pub Date: 2024-10-09 | DOI: 10.1007/s11948-024-00507-y
Science and Engineering Ethics 30(5): 46 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464555/pdf/
The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development towards a more ethical direction. The goal is to contribute to the broadening of the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective of societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to the ongoing developments in AI ethics and proposes what principles offering a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.
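One key concept of justice as fairness, the difference principle, is standardly glossed as a maximin rule: judge alternatives by how the worst-off fare under each. The sketch below illustrates only that textbook gloss, not the operationalisation the paper proposes; the function name and data layout are assumptions.

```python
from typing import Dict, List

def maximin_choice(policies: Dict[str, List[float]]) -> str:
    """Return the policy whose worst-off group fares best (maximin)."""
    return max(policies, key=lambda name: min(policies[name]))

# "B" wins: its worst-off group (4.0) does better than A's (1.0),
# even though A has the higher average welfare.
print(maximin_choice({
    "A": [9.0, 1.0],
    "B": [5.0, 4.0],
}))  # -> "B"
```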
{"title":"Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.","authors":"Salla Westerstrand","doi":"10.1007/s11948-024-00507-y","DOIUrl":"10.1007/s11948-024-00507-y","url":null,"abstract":"<p><p>The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack in ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggest, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development towards a more ethical direction. The goal is to contribute to the broadening of the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective of societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to the ongoing developments in AI ethics and gives a proposition of how principles that offer a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"46"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464555/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Rise of Tech Ethics: Approaches, Critique, and Future Pathways
Nina Frahm, Kasper Schiølin
Pub Date: 2024-10-09 | DOI: 10.1007/s11948-024-00510-3
Science and Engineering Ethics 30(5): 45 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464588/pdf/
In this editorial to the Topical Collection "Innovation under Fire: The Rise of Ethics in Tech", we provide an overview of the papers gathered in the collection, reflect on similarities and differences in their analytical angles and methodological approaches, and carve out some of the cross-cutting themes that emerge from research on the production of 'Tech Ethics'. We identify two recurring ways through which 'Tech Ethics' are studied and forms of critique towards them developed, which we argue diverge primarily in their a priori commitments towards what ethical tech is and how it should best be pursued. Beyond these differences, we observe how current research on 'Tech Ethics' evidences a close relationship between public controversies about technological innovation and the rise of ethics discourses and instruments for their settlement, producing legitimacy crises for 'Tech Ethics' in and of itself. 'Tech Ethics' is not only instrumental for governing technoscientific projects in the present but is equally instrumental for the construction of socio-technical imaginaries and the essentialization of technological futures. We suggest that efforts to reach beyond single case-studies are needed and call for collective reflection on joint issues and challenges to advance the critical project of 'Tech Ethics'.
{"title":"The Rise of Tech Ethics: Approaches, Critique, and Future Pathways.","authors":"Nina Frahm, Kasper Schiølin","doi":"10.1007/s11948-024-00510-3","DOIUrl":"10.1007/s11948-024-00510-3","url":null,"abstract":"<p><p>In this editorial to the Topical Collection \"Innovation under Fire: The Rise of Ethics in Tech\", we provide an overview of the papers gathered in the collection, reflect on similarities and differences in their analytical angles and methodological approaches, and carve out some of the cross-cutting themes that emerge from research on the production of 'Tech Ethics'. We identify two recurring ways through which 'Tech Ethics' are studied and forms of critique towards them developed, which we argue diverge primarily in their a priori commitments towards what ethical tech is and how it should best be pursued. Beyond these differences, we observe how current research on 'Tech Ethics' evidences a close relationship between public controversies about technological innovation and the rise of ethics discourses and instruments for their settlement, producing legitimacy crises for 'Tech Ethics' in and of itself. 'Tech Ethics' is not only instrumental for governing technoscientific projects in the present but is equally instrumental for the construction of socio-technical imaginaries and the essentialization of technological futures. We suggest that efforts to reach beyond single case-studies are needed and call for collective reflection on joint issues and challenges to advance the critical project of 'Tech Ethics'.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 5","pages":"45"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464588/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond Anthropocentrism: The Moral and Strategic Philosophy behind Russell and Burch's 3Rs in Animal Experimentation
Nico Dario Müller
Pub Date: 2024-09-11 | DOI: 10.1007/s11948-024-00504-1
Science and Engineering Ethics
The 3Rs framework in animal experimentation – "replace, reduce, refine" – has been alleged to be expressive of anthropocentrism, the view that only humans are directly morally relevant. After all, the 3Rs safeguard animal welfare only as far as given human research objectives permit, effectively prioritizing human use interests over animal interests. This article acknowledges this prioritization, but argues that the characterization as anthropocentric is inaccurate. In fact, the 3Rs prioritize research purposes even more strongly than an ethical anthropocentrist would. Drawing on the writings of Universities Federation for Animal Welfare (UFAW) founder Charles W. Hume, who employed Russell and Burch, it is argued that the 3Rs originally arose from an animal-centered ethic which was, however, restricted by an organizational strategy aiming at the voluntary cooperation of animal researchers. Research purposes thus had to be accepted as given. While this explains why the 3Rs focus narrowly on humane method selection, not on encouraging animal-free question selection in the first place, it suggests that governments should (also) focus on the latter if they recognize animals as deserving protection for their own sake.
{"title":"Beyond Anthropocentrism: The Moral and Strategic Philosophy behind Russell and Burch’s 3Rs in Animal Experimentation","authors":"Nico Dario Müller","doi":"10.1007/s11948-024-00504-1","DOIUrl":"https://doi.org/10.1007/s11948-024-00504-1","url":null,"abstract":"<p>The 3Rs framework in animal experimentation– “replace, reduce, refine” – has been alleged to be expressive of anthropocentrism, the view that only humans are directly morally relevant. After all, the 3Rs safeguard animal welfare only as far as given human research objectives permit, effectively prioritizing human use interests over animal interests. This article acknowledges this prioritization, but argues that the characterization as anthropocentric is inaccurate. In fact, the 3Rs prioritize research purposes even more strongly than an ethical anthropocentrist would. Drawing on the writings of Universities Federation for Animal Welfare (UFAW) founder Charles W. Hume, who employed Russell and Burch, it is argued that the 3Rs originally arose from an animal-centered ethic which was however restricted by an organizational strategy aiming at the voluntary cooperation of animal researchers. Research purposes thus had to be accepted as given. While this explains why the 3Rs focus narrowly on humane method selection, not on encouraging animal-free question selection in the first place, it suggests that governments should (also) focus on the latter if they recognize animals as deserving protection for their own sake.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"389 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}