Pub Date: 2023-04-25 | DOI: 10.1007/s11948-023-00437-1
José Luis Molina, Paola Tubaro, Antonio Casilli, Antonio Santos-Ortega
Scientific research is increasingly reliant on "microwork" or "crowdsourcing" provided by digital platforms to collect new data. Digital platforms connect clients and workers, charging a fee for an algorithmically managed workflow based on Terms of Service agreements. Although these platforms offer a way to make a living or to complement other sources of income, microworkers lack fundamental labor rights and basic safe working conditions, especially in the Global South. We ask how researchers and research institutions address the ethical issues involved in considering microworkers as "human participants." We argue that current scientific research fails to treat microworkers in the same way as in-person human participants, producing de facto a double morality: one applied to people with rights acknowledged by states and international bodies (e.g., the Helsinki Declaration), the other to guest workers of digital autocracies who have almost no rights at all. We illustrate our argument by drawing on 57 interviews conducted with microworkers in Spanish-speaking countries.
"Research Ethics in the Age of Digital Platforms." Science and Engineering Ethics 29(3): 17. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10127972/pdf/
Pub Date: 2023-04-25 | DOI: 10.1007/s11948-023-00434-4
Giovanni Frigo, Christine Milchram, Rafaela Hillerbrand
This article introduces Designing for Care (D4C), a distinctive approach to project management and technological design informed by Care Ethics (CE). We propose to conceptualize "care" as both the foundational value of D4C and its guiding mid-level principle. As a value, care provides moral grounding. As a principle, it equips D4C with moral guidance to enact a caring process. The latter consists of a set of concrete, and often recursive, caring practices. One of the key assumptions of D4C is a relational ontology of individual and group identities, which fosters the actualization of caring practices as essentially relational and (often) reciprocal. Moreover, D4C adopts the "ecological turn" in CE and stresses the ecological situatedness and impact of concrete projects, envisioning an extension of caring from intra-species to inter-species relations. We argue that care and caring can directly influence some of the phases and practices within the management of (energy) projects and the design of sociotechnical (energy) artefacts and systems. When issues related to "value change" emerge as problematic (e.g., value trade-offs, conflicts), the mid-level guiding principle of care helps evaluate and prioritize the different values at stake within specific projects. Although several actors and stakeholders may be involved in project management and technological design, here we focus on the professionals in charge of imagining, designing, and carrying out these processes (i.e., project managers, designers, engineers). We suggest that adopting D4C would improve their ability to capture and assess stakeholders' values, critically reflect on and evaluate their own values, and judge which values to prioritize. Although D4C may be adaptable to different fields and design contexts, we recommend its use especially within small and medium-scale (energy) projects. To show the benefits of adopting it, we envisage the application of D4C within the project management and technological design of a community battery. Adopting D4C can have multiple positive effects: transforming the mentality and practice of managing a project and designing technologies; enhancing caring relationships between managers, designers, and users as well as among users; and achieving better communication, more inclusive participation, and more just decision-making. This is an initial attempt to articulate the structure and procedural character of D4C. Application of D4C in a concrete project is needed to assess its actual impact, benefits, and limitations.
"Designing for Care." Science and Engineering Ethics 29(3): 16. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10129926/pdf/
Pub Date: 2023-04-25 | DOI: 10.1007/s11948-022-00425-x
Mariëtte van den Hoven, Tom Lindemann, Linda Zollitsch, Julia Prieß-Buchheit
Trainers often use information from previous learning sessions to design or redesign a course. Although universities have conducted numerous research integrity trainings in the past decades, information on what works and what does not in research integrity training is still scattered. The latest meta-reviews offer trainers some information about effective teaching and learning activities, yet they lack the information needed to determine which activities are plausible for specific target groups and learning outcomes, and thus do not support course design decisions in the best possible manner. This article aims to change this status quo by outlining an easy-to-use taxonomy for research integrity training, based on Kirkpatrick's four levels of evaluation, to foster mutual exchange and improve research integrity course design. By describing the taxonomy for research integrity training (TRIT) in detail and outlining three European projects, including their intended training effects before the project started, their learning outcomes, their teaching and learning activities, and their assessment instruments, this article introduces a unified approach. It gives practitioners references for identifying didactical interrelations, impacts, and (knowledge) gaps in how to (re-)design a research integrity (RI) course. The suggested taxonomy is easy to use and enables more tailored and evidence-based (re-)designs of research integrity training.
"A Taxonomy for Research Integrity Training: Design, Conduct, and Improvements in Research Integrity Courses." Science and Engineering Ethics 29(3): 14. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10129911/pdf/
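As an illustration only (not from the paper), the kind of Kirkpatrick-level course structure the abstract above describes could be sketched as data. The course title, target group, outcomes, and instruments below are invented; only the four level names follow Kirkpatrick's model.

```python
# Hypothetical sketch of a Kirkpatrick-style taxonomy for research
# integrity (RI) training represented as data. Level names follow
# Kirkpatrick's four evaluation levels; course content is invented.
from dataclasses import dataclass, field

KIRKPATRICK_LEVELS = ("reaction", "learning", "behavior", "results")

@dataclass
class CourseDesign:
    title: str
    target_group: str
    # Maps each evaluation level to (intended outcome, assessment instrument).
    outcomes: dict = field(default_factory=dict)

    def add_outcome(self, level: str, outcome: str, instrument: str) -> None:
        if level not in KIRKPATRICK_LEVELS:
            raise ValueError(f"unknown evaluation level: {level}")
        self.outcomes[level] = (outcome, instrument)

    def gaps(self) -> list:
        """Levels for which the design specifies no outcome yet."""
        return [lvl for lvl in KIRKPATRICK_LEVELS if lvl not in self.outcomes]

course = CourseDesign("RI basics for doctoral students", "PhD candidates")
course.add_outcome("reaction", "trainees rate the course as relevant", "exit survey")
course.add_outcome("learning", "trainees can name questionable research practices", "multiple-choice test")
print(course.gaps())  # ['behavior', 'results']
```

Exposing the unspecified levels as `gaps()` mirrors the article's point that a shared taxonomy helps trainers spot where a course design lacks outcomes or instruments.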
Pub Date: 2023-04-19 | DOI: 10.1007/s11948-023-00436-2
Martin Peterson
The values that will govern choices among future energy systems are unlikely to be the same as the values we embrace today. This paper discusses principles of rational choice for agents expecting future value shifts. How ought we to reason if we believe that some values are likely to change in the future? Are future values more important, equally important, or less important than present ones? To answer this question, I propose and discuss the Expected Center of Gravity Principle, which articulates what I believe to be a reasonable compromise between present and future values.
"Value Change, Energy Systems, and Rational Choice: The Expected Center of Gravity Principle." Science and Engineering Ethics 29(3): 13.
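One possible reading of such a compromise, sketched here purely for illustration (the paper's actual formalization is not given in the abstract), is a probability-weighted average of present and anticipated future value weights. The probabilities and value scores below are invented.

```python
# Minimal numeric sketch of one possible reading of the Expected Center
# of Gravity Principle: weight each candidate value system by the
# probability that the agent will come to hold it, then average the
# value weights. All numbers are invented for illustration.

def expected_center_of_gravity(value_systems):
    """value_systems: list of (probability, {value: weight}) pairs."""
    total_p = sum(p for p, _ in value_systems)
    assert abs(total_p - 1.0) < 1e-9, "probabilities should sum to 1"
    values = {v for _, weights in value_systems for v in weights}
    return {
        v: sum(p * weights.get(v, 0.0) for p, weights in value_systems)
        for v in values
    }

# Today's values vs. a possible future shift toward sustainability.
present = {"affordability": 0.6, "sustainability": 0.4}
future = {"affordability": 0.3, "sustainability": 0.7}
cog = expected_center_of_gravity([(0.5, present), (0.5, future)])
# affordability -> 0.45, sustainability -> 0.55
```

On this toy reading, neither the present nor the anticipated future ranking dominates; the chosen energy system reflects the expected "center of gravity" of both.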
Pub Date: 2023-03-31 | DOI: 10.1007/s11948-023-00435-3
Emilian Mihailov, Cristina Voinea, Constantin Vică
Moral outrage is often characterized as a corrosive emotion, but it can also inspire collective action. In this article, we aim to deepen our understanding of the dual nature of online moral outrage, which both divides people and contributes to inclusivist moral reform. We argue that the specifics of violating different types of moral norms influence the effects of moral outrage: outrage against violations of harm-based norms is less antagonistic than outrage against violations of loyalty and purity/identity norms. We then identify the features of social media platforms that shape our moral lives: connectivity, omniculturalism, online exposure, increased group identification, and the fostering of what we call "expressionist experiences" all change how moral outrage is expressed in the digital realm. Finally, we propose changes to the design of social media platforms and raise the issue of moral disillusion when ample moral protest in the online environment does not have the expected effects on the offline world.
"Is Online Moral Outrage Outrageous? Rethinking the Indignation Machine." Science and Engineering Ethics 29(2): 12.
Pub Date: 2023-03-23 | DOI: 10.1007/s11948-023-00428-2
Richard Volkman, Katleen Gabriels
Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the 'right' answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process of working out, and reflecting on this fact reveals challenges even for auxiliary proposals that eschew the oracular approach. We argue there is nonetheless a substantial role that 'AI mentors' could play in our moral education and training. Expanding on the idea of an AI Socratic Interlocutor, we propose a modular system of multiple AI interlocutors with their own distinct points of view reflecting their training in a diversity of concrete wisdom traditions. This approach minimizes any risk of moral disengagement, while the existence of multiple modules from a diversity of traditions ensures pluralism is preserved. We conclude with reflections on how all this relates to the broader notion of moral transcendence implicated in the project of AI moral enhancement, contending it is precisely the whole concrete socio-technical system of moral engagement that we need to model if we are to pursue moral enhancement.
"AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement." Science and Engineering Ethics 29(2): 11. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10036265/pdf/
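The modular multi-interlocutor architecture the abstract describes could be sketched roughly as follows. This is a hypothetical structural illustration only: the traditions, prompts, and canned responses are placeholders, not anything proposed in the paper or backed by a real model.

```python
# Hedged sketch of a modular panel of AI interlocutors: several mentor
# modules, each voicing a distinct wisdom tradition, respond to the same
# moral question, so no single module acts as an oracle and pluralism is
# preserved. Traditions and responses are invented placeholders.
from typing import Callable

class MentorModule:
    def __init__(self, tradition: str, respond: Callable[[str], str]):
        self.tradition = tradition
        self.respond = respond

class SocraticPanel:
    """Collects an answer from every registered mentor instead of one oracle."""
    def __init__(self):
        self.modules = []

    def register(self, module: MentorModule) -> None:
        self.modules.append(module)

    def consult(self, question: str) -> dict:
        # One perspective per tradition; the user weighs them, not the system.
        return {m.tradition: m.respond(question) for m in self.modules}

panel = SocraticPanel()
panel.register(MentorModule("Stoic", lambda q: f"What is within your control in: {q}?"))
panel.register(MentorModule("Confucian", lambda q: f"What would your role require in: {q}?"))
answers = panel.consult("keeping a promise at personal cost")
print(len(answers))  # 2 — one perspective per registered tradition
```

Returning all perspectives, rather than a single verdict, is the design choice that distinguishes the "mentor" framing from the exhaustive-enhancement oracle the authors reject.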
Pub Date: 2023-03-23 | DOI: 10.1007/s11948-023-00433-5
Gonzalo Génova, Valentín Moreno, M Rosario González
Is ethics a computable function? Can machines learn ethics as humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set of rules, either a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or, more generally, what is a worthy being); what is intentional human action; and how are intentional actions and their consequences to be morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and about what it means to lead a free and responsible ethical life, that is, to be good people beyond merely "following a moral code". In the end, we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step toward recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.
"Machine Ethics: Do Androids Dream of Being Good People?" Science and Engineering Ethics 29(2): 10. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10036453/pdf/
Pub Date: 2023-03-07 | DOI: 10.1007/s11948-023-00432-6
Enrique Asin-Garcia, Zoë Robaey, Linde F C Kampers, Vitor A P Martins Dos Santos
Synthetic biologists design and engineer organisms for a better and more sustainable future. While the manifold prospects are encouraging, concerns about the uncertain risks of genome editing affect public opinion as well as local regulations. As a consequence, biosafety and associated concepts, such as the Safe-by-Design framework and genetic safeguard technologies, have gained prominence and occupy a central position in the conversation about genetically modified organisms. Yet, as regulatory interest and academic research in genetic safeguard technologies advance, implementation in industrial biotechnology, a sector that already employs engineered microorganisms, lags behind. The main goal of this work is to explore the utilization of genetic safeguard technologies for designing biosafety in industrial biotechnology. Based on our results, we posit that biosafety is a case of a changing value, changing by means of further specification of how biosafety is to be realized. Our investigation is inspired by the Value Sensitive Design framework, which investigates scientific and technological choices in their appropriate social context. Our findings discuss stakeholder norms for biosafety, reasonings about genetic safeguards, and how these impact the practice of designing for biosafety. We show that tensions between stakeholders occur at the level of norms, and that prior stakeholder alignment is crucial for value specification to happen in practice. Finally, we elaborate on different reasonings about genetic safeguards for biosafety and conclude that, in the absence of a common multi-stakeholder effort, the differences in informal biosafety norms and the disparity in biosafety thinking could end up leading to design requirements for compliance instead of for safety.
"Exploring the Impact of Tensions in Stakeholder Norms on Designing for Value Change: The Case of Biosafety in Industrial Biotechnology." Science and Engineering Ethics 29(2): 9. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9992083/pdf/
Pub Date: 2023-03-02 | DOI: 10.1007/s11948-023-00430-8
Alison L Antes, Tristan J McIntosh, Stephanie Solomon Cargill, Samuel Bruton, Kari Baldwin
At the onset of the COVID-19 pandemic in the United States, stay-at-home orders disrupted normal research operations. Principal investigators (PIs) had to make decisions about conducting and staffing essential research under unprecedented, rapidly changing conditions. These decisions also had to be made amid other substantial work and life stressors, such as pressure to remain productive and to stay healthy. Using survey methods, we asked PIs funded by the National Institutes of Health and the National Science Foundation (N = 930) to rate how they prioritized different considerations, such as personal risks, risks to research personnel, and career consequences, when making decisions. They also reported how difficult they found these choices and the associated symptoms of stress. Using a checklist, PIs indicated the factors in their research environments that made their decisions easier (i.e., facilitators) or more difficult (i.e., barriers). Finally, PIs indicated how satisfied they were with their decisions and their management of research during the disruption. Descriptive statistics summarize PIs' responses, and inferential tests explore whether responses varied by academic rank or gender. Overall, PIs reported prioritizing the well-being and perspectives of research personnel, and they perceived more facilitators than barriers. Early-career faculty, however, rated concerns about their careers and productivity as higher priorities than did their senior counterparts. Early-career faculty also perceived greater difficulty and stress, more barriers, and fewer facilitators, and were less satisfied with their decisions. Women rated several interpersonal concerns about their research personnel more highly than men and reported greater stress. The experiences and perceptions of researchers during the COVID-19 pandemic can inform policies and practices when planning for future crises and recovering from the pandemic.
{"title":"Principal Investigators' Priorities and Perceived Barriers and Facilitators When Making Decisions About Conducting Essential Research in the COVID-19 Pandemic.","authors":"Alison L Antes, Tristan J McIntosh, Stephanie Solomon Cargill, Samuel Bruton, Kari Baldwin","doi":"10.1007/s11948-023-00430-8","DOIUrl":"10.1007/s11948-023-00430-8","url":null,"abstract":"<p><p>At the onset of the COVID-19 pandemic in the United States, stay-at-home orders disrupted normal research operations. Principal investigators (PIs) had to make decisions about conducting and staffing essential research under unprecedented, rapidly changing conditions. These decisions also had to be made amid other substantial work and life stressors, such as the pressure to be productive and to stay healthy. Using survey methods, we asked PIs funded by the National Institutes of Health and the National Science Foundation (N = 930) to rate how they prioritized different considerations, such as personal risks, risks to research personnel, and career consequences, when making decisions. They also reported how difficult they found these choices and associated symptoms of stress. Using a checklist, PIs indicated those factors in their research environments that made their decisions easier (i.e., facilitators) or more difficult (i.e., barriers) to make. Finally, PIs also indicated how satisfied they were with their decisions and management of research during the disruption. Descriptive statistics summarize PIs' responses, and inferential tests explore whether responses varied by academic rank or gender. PIs overall reported prioritizing the well-being and perspectives of research personnel, and they perceived more facilitators than barriers. Early-career faculty, however, rated concerns about their careers and productivity as higher priorities compared to their senior counterparts. 
Early-career faculty also perceived greater difficulty and stress, more barriers, fewer facilitators, and were less satisfied with their decisions. Women rated several interpersonal concerns about their research personnel more highly than men and reported greater stress. The experience and perceptions of researchers during the COVID-19 pandemic can inform policies and practices when planning for future crises and recovering from the pandemic.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"29 2","pages":"8"},"PeriodicalIF":2.7,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9980856/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9313236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Philosophy","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teaching responsible conduct of research (RCR) to PhD students is crucial for fostering responsible research practice. In this paper, we show how the use of Moral Case Deliberation (a case reflection method used in the Amsterdam UMC RCR PhD course) is particularly valuable for addressing three goals of RCR education: (1) making students aware of, and helping them internalize, RCR principles and values, (2) supporting reflection on good conduct in personal daily practice, and (3) developing students' dialogical attitude and skills so that they can deliberate on RCR issues when they arise. What makes this method relevant for RCR education is the focus on values and personal motivations, the structured reflection on real experiences and dilemmas, and the cultivation of participants' dialogical skills. During these structured conversations, students reflect on the personal motives that drive them to adhere to the principles of good science, thereby building connections between those principles and their personal values and motives. Moreover, by exploring personal questions and dilemmas related to RCR, they learn how to address these with colleagues and supervisors. Reflection on personal experiences with RCR issues and questions, combined with the study of relevant normative frameworks, supports students in acting responsibly and pursuing RCR in their day-to-day research practice despite difficulties and external constraints.
{"title":"The Contribution of Moral Case Deliberation to Teaching RCR to PhD Students.","authors":"Giulia Inguaggiato, Krishma Labib, Natalie Evans, Fenneke Blom, Lex Bouter, Guy Widdershoven","doi":"10.1007/s11948-023-00431-7","DOIUrl":"https://doi.org/10.1007/s11948-023-00431-7","url":null,"abstract":"<p><p>Teaching responsible conduct of research (RCR) to PhD students is crucial for fostering responsible research practice. In this paper, we show how the use of Moral Case Deliberation (a case reflection method used in the Amsterdam UMC RCR PhD course) is particularly valuable for addressing three goals of RCR education: (1) making students aware of, and helping them internalize, RCR principles and values, (2) supporting reflection on good conduct in personal daily practice, and (3) developing students' dialogical attitude and skills so that they can deliberate on RCR issues when they arise. What makes this method relevant for RCR education is the focus on values and personal motivations, the structured reflection on real experiences and dilemmas, and the cultivation of participants' dialogical skills. During these structured conversations, students reflect on the personal motives that drive them to adhere to the principles of good science, thereby building connections between those principles and their personal values and motives. Moreover, by exploring personal questions and dilemmas related to RCR, they learn how to address these with colleagues and supervisors. 
Reflection on personal experiences with RCR issues and questions, combined with the study of relevant normative frameworks, supports students in acting responsibly and pursuing RCR in their day-to-day research practice despite difficulties and external constraints.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"29 2","pages":"7"},"PeriodicalIF":3.7,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9977706/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9312658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Philosophy","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}