The Ethics of Electronic Tracking Devices in Dementia Care: An Interview Study with Developers
Pub Date: 2024-05-08 | DOI: 10.1007/s11948-024-00478-0
Jared Howes, Yvonne Denier, Tijs Vandemeulebroucke, Chris Gastmans
Wandering is a symptom of dementia that can have devastating consequences on the lives of persons living with dementia and their families and caregivers. Increasingly, caregivers are turning towards electronic tracking devices to help manage wandering. Ethical questions have been raised regarding these location-based technologies and although qualitative research has been conducted to gain better insight into various stakeholders' views on the topic, developers of these technologies have been largely excluded. No qualitative research has focused on developers' perceptions of ethics related to electronic tracking devices. To address this, we performed a qualitative semi-structured interview study based on grounded theory. We interviewed 15 developers of electronic tracking devices to better understand how they perceive ethical issues surrounding the design, development, and use of these devices within dementia care. Our results reveal that developers are strongly motivated by moral considerations and believe that including stakeholders throughout the development process is critical for success. Developers felt a strong sense of moral obligation towards topics within their control and a weaker sense of moral obligation towards topics outside their control. This leads to a perceived moral boundary between development and use, where some moral responsibility is shifted to end-users.
{"title":"The Ethics of Electronic Tracking Devices in Dementia Care: An Interview Study with Developers.","authors":"Jared Howes, Yvonne Denier, Tijs Vandemeulebroucke, Chris Gastmans","doi":"10.1007/s11948-024-00478-0","DOIUrl":"10.1007/s11948-024-00478-0","url":null,"abstract":"<p><p>Wandering is a symptom of dementia that can have devastating consequences on the lives of persons living with dementia and their families and caregivers. Increasingly, caregivers are turning towards electronic tracking devices to help manage wandering. Ethical questions have been raised regarding these location-based technologies and although qualitative research has been conducted to gain better insight into various stakeholders' views on the topic, developers of these technologies have been largely excluded. No qualitative research has focused on developers' perceptions of ethics related to electronic tracking devices. To address this, we performed a qualitative semi-structured interview study based on grounded theory. We interviewed 15 developers of electronic tracking devices to better understand how they perceive ethical issues surrounding the design, development, and use of these devices within dementia care. Our results reveal that developers are strongly motivated by moral considerations and believe that including stakeholders throughout the development process is critical for success. Developers felt a strong sense of moral obligation towards topics within their control and a weaker sense of moral obligation towards topics outside their control. This leads to a perceived moral boundary between development and use, where some moral responsibility is shifted to end-users.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"17"},"PeriodicalIF":2.7,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11078786/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140891289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Defining “Ethical Mathematical Practice” Through Engagement with Discipline-Adjacent Practice Standards and the Mathematical Community
Pub Date: 2024-04-30 | DOI: 10.1007/s11948-024-00466-4
Rochelle E. Tractenberg, Victor I. Piercey, Catherine A. Buell
This project explored what constitutes “ethical practice of mathematics”. Thematic analyses of ethical practice standards from mathematics-adjacent disciplines (statistics and computing) were combined with two organizational codes of conduct and community input, resulting in over 100 items. These analyses identified 29 of the 52 items in the 2018 American Statistical Association Ethical Guidelines for Statistical Practice, and 15 of the 24 additional (unique) items from the 2018 Association for Computing Machinery Code of Ethics, for inclusion. Three of the 29 items synthesized from the 2019 American Mathematical Society Code of Ethics, and zero items from the Mathematical Association of America Code of Ethics, were identified as reflective of “ethical mathematical practice” beyond items already identified from the other two codes. The community contributed six unique items. Item stems were standardized to “The ethical mathematics practitioner…”. Invitations to complete the 30-minute online survey were shared nationally (US) via mathematics organization listservs and other widespread emails and announcements. We received 142 individual responses to the national survey; 75% of respondents endorsed 41 of the 52 items, and 90–100% endorsed 20 of the 52 items. Items from different sources were endorsed at both high and low rates. A final thematic analysis yielded 44 items, grouped into “General” (12 items), “Profession” (10 items), and “Scholarship” (11 items). Moreover, for the practitioner in a leader/mentor/supervisor/instructor role, there are an additional 11 items (4 General/7 Professional). These results suggest that the community perceives a much wider range of behaviors by mathematicians to be subject to ethical practice standards than had previously been included in professional organization codes. The results provide evidence against the argument that mathematics practitioners engaged in “pure” or “theoretical” work have minimal, small, or no ethical obligations.
{"title":"Defining “Ethical Mathematical Practice” Through Engagement with Discipline-Adjacent Practice Standards and the Mathematical Community","authors":"Rochelle E. Tractenberg, Victor I. Piercey, Catherine A. Buell","doi":"10.1007/s11948-024-00466-4","DOIUrl":"https://doi.org/10.1007/s11948-024-00466-4","url":null,"abstract":"<p>This project explored what constitutes “ethical practice of mathematics”. Thematic analysis of ethical practice standards from mathematics-adjacent disciplines (statistics and computing), were combined with two organizational codes of conduct and community input resulting in over 100 items. These analyses identified 29 of the 52 items in the 2018 American Statistical Association Ethical Guidelines for Statistical Practice, and 15 of the 24 additional (unique) items from the 2018 Association of Computing Machinery Code of Ethics for inclusion. Three of the 29 items synthesized from the 2019 American Mathematical Society Code of Ethics, and zero of the Mathematical Association of America Code of Ethics, were identified as reflective of “ethical mathematical practice” beyond items already identified from the other two codes. The community contributed six unique items. Item stems were standardized to, “The ethical mathematics practitioner…”. Invitations to complete the 30-min online survey were shared nationally (US) via Mathematics organization listservs and other widespread emails and announcements. We received 142 individual responses to the national survey, 75% of whom endorsed 41/52 items, with 90–100% endorsing 20/52 items on the survey. Items from different sources were endorsed at both high and low rates. A final thematic analysis yielded 44 items, grouped into “General” (12 items), “Profession” (10 items) and “Scholarship” (11 items). Moreover, for the practitioner in a leader/mentor/supervisor/instructor role, there are an additional 11 items (4 General/7 Professional). These results suggest that the community perceives a much wider range of behaviors by mathematicians to be subject to ethical practice standards than had been previously included in professional organization codes. The results provide evidence against the argument that mathematics practitioners engaged in “pure” or “theoretical” work have minimal, small, or no ethical obligations.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"8 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140841920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Employee Grievance Redressal and Corporate Ethics: Lessons from the Boeing 737-MAX Crashes
Pub Date: 2024-04-11 | DOI: 10.1007/s11948-024-00475-3
Shreesh Chary
Two Boeing 737-MAX passenger planes crashed in October 2018 and March 2019, leading to the grounding of all 737-MAX aircraft. The crashes put Boeing’s corporate practices and culture under the spotlight. The main objective of this paper is to use the case of Boeing to highlight the importance of efficient employee grievance redressal mechanisms and an independent external regulator. The methodology adopted is a qualitative analysis of statements by various whistleblowers and by Boeing and Federal Aviation Administration (FAA) stakeholders. It suggests that channels for employee feedback flowing up the chain of command should be more flexible and that such feedback should be taken more seriously. It recommends that companies adopt a cooling-off period or a lifetime restriction for employees who have gone through the revolving door between regulators and the industry. The Boeing 737-MAX case, which emphasizes the ethical obligations of the job, can offer value to engineers, engineering educators, managers, ombudsmen, and human resource professionals.
{"title":"Employee Grievance Redressal and Corporate Ethics: Lessons from the Boeing 737-MAX Crashes","authors":"Shreesh Chary","doi":"10.1007/s11948-024-00475-3","DOIUrl":"https://doi.org/10.1007/s11948-024-00475-3","url":null,"abstract":"<p>Two Boeing 737-MAX passenger planes crashed in October 2018 and March 2019, suspending all 737-MAX aircraft. The crashes put Boeing’s corporate practices and culture under the spotlight. The main objective of this paper is to use the case of Boeing to highlight the importance of efficient employee grievance redressal mechanisms and an independent external regulator. The methodology adopted is a qualitative analysis of statements of various whistleblowers and Boeing and the Federal Aviation Administration (FAA) stakeholders. It suggests that employee feedback flowing up the chain of command should be more flexible and dealt with more seriousness. It recommends that companies adopt a cooling-off period or a lifetime restriction for employees who have gone through the revolving door between regulators and the industry. The Boeing 737-MAX case, which emphasizes the ethical obligations of the job, can offer value to engineers, engineering educators, managers, ombudsmen, and human resource professionals.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"70 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140575176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performing Platform Governance: Facebook and the Stage Management of Data Relations
Pub Date: 2024-04-04 | DOI: 10.1007/s11948-024-00473-5
Karen Huang, P. M. Krafft
Controversies surrounding social media platforms have provided opportunities for institutional reflexivity amongst users and regulators on how to understand and govern platforms. Amidst contestation, platform companies have continued to enact projects that draw upon existing modes of privatized governance. We investigate how social media companies have attempted to achieve closure by continuing to set the terms around platform governance. We investigate two projects implemented by Facebook (Meta)—authenticity regulation and privacy controls—in response to the Russian Interference and Cambridge Analytica controversies surrounding the 2016 U.S. Presidential Election. Drawing on Goffman’s metaphor of stage management, we analyze the techniques deployed by Facebook to reinforce a division between what is visible and invisible to the user experience. These platform governance projects propose to act upon front-stage data relations: information that users can see from other users—whether that is content that users can see from “bad actors”, or information that other users can see about oneself. At the same time, these projects relegate back-stage data relations—information flows between users constituted by recommendation and targeted advertising systems—to invisibility and inaction. As such, Facebook renders the user experience actionable for governance, while foreclosing governance of back-stage data relations central to the economic value of the platform. As social media companies continue to perform platform governance projects following controversies, our paper invites reflection on the politics of these projects. By destabilizing the boundaries drawn by platform companies, we open space for continuous reflexivity on how platforms should be understood and governed.
{"title":"Performing Platform Governance: Facebook and the Stage Management of Data Relations","authors":"Karen Huang, P. M. Krafft","doi":"10.1007/s11948-024-00473-5","DOIUrl":"https://doi.org/10.1007/s11948-024-00473-5","url":null,"abstract":"<p>Controversies surrounding social media platforms have provided opportunities for institutional reflexivity amongst users and regulators on how to understand and govern platforms. Amidst contestation, platform companies have continued to enact projects that draw upon existing modes of privatized governance. We investigate how social media companies have attempted to achieve closure by continuing to set the terms around platform governance. We investigate two projects implemented by Facebook (Meta)—authenticity regulation and privacy controls—in response to the Russian Interference and Cambridge Analytica controversies surrounding the 2016 U.S. Presidential Election. Drawing on Goffman’s metaphor of stage management, we analyze the techniques deployed by Facebook to reinforce a division between what is visible and invisible to the user experience. These platform governance projects propose to act upon <i>front-stage data relations:</i> information that users can see from other users—whether that is content that users can see from “bad actors”, or information that other users can see about oneself. At the same time, these projects relegate <i>back-stage data relations</i>—information flows between users constituted by recommendation and targeted advertising systems—to invisibility and inaction. As such, Facebook renders the user experience actionable for governance, while foreclosing governance of back-stage data relations central to the economic value of the platform. As social media companies continue to perform platform governance projects following controversies, our paper invites reflection on the politics of these projects. By destabilizing the boundaries drawn by platform companies, we open space for continuous reflexivity on how platforms should be understood and governed.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"57 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140575261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigating the Science System: Research Integrity and Academic Survival Strategies
Pub Date: 2024-04-03 | DOI: 10.1007/s11948-024-00467-3
Andrea Reyes Elizondo, Wolfgang Kaltenbrunner
Research Integrity (RI) is high on the agenda of both institutions and science policy. The European Union as well as national ministries of science have launched ambitious initiatives to combat misconduct and breaches of research integrity. Often, such initiatives entail attempts to regulate scientific behavior through guidelines that institutions and academic communities can use to more easily identify and deal with cases of misconduct. Rather than framing misconduct as a result of an information deficit, we instead conceptualize Questionable Research Practices (QRPs) as attempts by researchers to reconcile epistemic and social forms of uncertainty in knowledge production. Drawing on previous literature, we define epistemic uncertainty as the inherent intellectual unpredictability of scientific inquiry, while social uncertainty arises from the human-made conditions for scientific work. Our core argument—developed on the basis of 30 focus group interviews with researchers across different fields and European countries—is that breaches of research integrity can be understood as attempts to loosen overly tight coupling between the two forms of uncertainty. Our analytical approach is not meant to relativize or excuse misconduct, but rather to offer a more fine-grained perspective on what exactly it is that researchers want to accomplish by engaging in it. Based on the analysis, we conclude by proposing some concrete ways in which institutions and academic communities could try to reconcile epistemic and social uncertainties on a more collective level, thereby reducing incentives for researchers to engage in misconduct.
{"title":"Navigating the Science System: Research Integrity and Academic Survival Strategies","authors":"Andrea Reyes Elizondo, Wolfgang Kaltenbrunner","doi":"10.1007/s11948-024-00467-3","DOIUrl":"https://doi.org/10.1007/s11948-024-00467-3","url":null,"abstract":"<p>Research Integrity (RI) is high on the agenda of both institutions and science policy. The European Union as well as national ministries of science have launched ambitious initiatives to combat misconduct and breaches of research integrity. Often, such initiatives entail attempts to regulate scientific behavior through guidelines that institutions and academic communities can use to more easily identify and deal with cases of misconduct. Rather than framing misconduct as a result of an information deficit, we instead conceptualize Questionable Research Practices (QRPs) as attempts by researchers to reconcile epistemic and social forms of uncertainty in knowledge production. Drawing on previous literature, we define epistemic uncertainty as the inherent intellectual unpredictability of scientific inquiry, while social uncertainty arises from the human-made conditions for scientific work. Our core argument—developed on the basis of 30 focus group interviews with researchers across different fields and European countries—is that breaches of research integrity can be understood as attempts to loosen overly tight coupling between the two forms of uncertainty. Our analytical approach is not meant to relativize or excuse misconduct, but rather to offer a more fine-grained perspective on what exactly it is that researchers want to accomplish by engaging in it. Based on the analysis, we conclude by proposing some concrete ways in which institutions and academic communities could try to reconcile epistemic and social uncertainties on a more collective level, thereby reducing incentives for researchers to engage in misconduct.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"69 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140575555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial Intelligence and Agency: Tie-breaking in AI Decision-Making
Pub Date: 2024-03-29 | DOI: 10.1007/s11948-024-00476-2
Danielle Swanepoel, Daniel Corks
Determining the agency-status of machines and AI has never been more pressing. As we progress into a future where humans and machines more closely co-exist, understanding hallmark features of agency affords us the ability to develop policy and narratives which cater to both humans and machines. This paper maintains that decision-making processes largely underpin agential action, and that in most instances, these processes yield good results in terms of making good choices. However, in some instances, when faced with two (or more) choices, an agent may find themselves with equal reasons to choose either, thus being presented with a tie. This paper argues, first, that in the event of a tie the ability to create a voluntarist reason is a hallmark feature of agency and, second, that AI, through its current tie-breaking mechanisms, lacks this ability and thus fails at this particular feature of agency.
{"title":"Artificial Intelligence and Agency: Tie-breaking in AI Decision-Making.","authors":"Danielle Swanepoel, Daniel Corks","doi":"10.1007/s11948-024-00476-2","DOIUrl":"10.1007/s11948-024-00476-2","url":null,"abstract":"<p><p>Determining the agency-status of machines and AI has never been more pressing. As we progress into a future where humans and machines more closely co-exist, understanding hallmark features of agency affords us the ability to develop policy and narratives which cater to both humans and machines. This paper maintains that decision-making processes largely underpin agential action, and that in most instances, these processes yield good results in terms of making good choices. However, in some instances, when faced with two (or more) choices, an agent may find themselves with equal reasons to choose either - thus being presented with a tie. This paper argues that in the event of a tie, the ability to create a voluntarist reason is a hallmark feature of agency, and second, that AI, through current tie-breaking mechanisms does not have this ability, and thus fails at this particular feature of agency.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 2","pages":"11"},"PeriodicalIF":3.7,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10980648/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140327305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technology and the Situationist Challenge to Virtue Ethics
Pub Date: 2024-03-27 | DOI: 10.1007/s11948-024-00474-4
Fabio Tollon
In this paper, I introduce a "promises and perils" framework for understanding the "soft" impacts of emerging technology, and argue for a eudaimonic conception of well-being. This eudaimonic conception of well-being, however, presupposes that we have something like stable character traits. I therefore defend this view from the "situationist challenge" and show that instead of viewing this challenge as a threat to well-being, we can incorporate it into how we think about living well with technology. Human beings are susceptible to situational influences and are often unaware of the ways that their social and technological environment influence not only their ability to do well, but even their ability to know whether they are doing well. Any theory that attempts to describe what it means for us to be doing well, then, needs to take these contextual features into account and bake them into a theory of human flourishing. By paying careful attention to these contextual factors, we can design systems that promote human flourishing.
{"title":"Technology and the Situationist Challenge to Virtue Ethics.","authors":"Fabio Tollon","doi":"10.1007/s11948-024-00474-4","DOIUrl":"10.1007/s11948-024-00474-4","url":null,"abstract":"<p><p>In this paper, I introduce a \"promises and perils\" framework for understanding the \"soft\" impacts of emerging technology, and argue for a eudaimonic conception of well-being. This eudaimonic conception of well-being, however, presupposes that we have something like stable character traits. I therefore defend this view from the \"situationist challenge\" and show that instead of viewing this challenge as a threat to well-being, we can incorporate it into how we think about living well with technology. Human beings are susceptible to situational influences and are often unaware of the ways that their social and technological environment influence not only their ability to do well, but even their ability to know whether they are doing well. Any theory that attempts to describe what it means for us to be doing well, then, needs to take these contextual features into account and bake them into a theory of human flourishing. By paying careful attention to these contextual factors, we can design systems that promote human flourishing.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 2","pages":"10"},"PeriodicalIF":3.7,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10973075/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140307577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mapping Ethical Artificial Intelligence Policy Landscape: A Mixed Method Analysis
Pub Date: 2024-03-07 | DOI: 10.1007/s11948-024-00472-6
Tahereh Saheb, Tayebeh Saheb
As more national governments adopt policies addressing the ethical implications of artificial intelligence, a comparative analysis of policy documents on these topics can provide valuable insights into emerging concerns and areas of shared importance. This study critically examines 57 policy documents pertaining to ethical AI originating from 24 distinct countries, employing a combination of computational text mining methods and qualitative content analysis. The primary objective is to methodically identify common themes throughout these policy documents and perform a comparative analysis of the ways in which various governments give priority to crucial matters. A total of nineteen topics were initially retrieved. Through an iterative coding process, six overarching themes were identified: principles, the protection of personal data, governmental roles and responsibilities, procedural guidelines, governance and monitoring mechanisms, and epistemological considerations. Furthermore, the research revealed 31 ethical dilemmas pertaining to AI that had previously been overlooked but are now emerging. These dilemmas are addressed to differing extents across the policy documents. This research makes a scholarly contribution to the expanding field of technology policy formulation at the national level by analyzing similarities and differences among countries. Furthermore, this analysis has practical ramifications for policymakers who are attempting to comprehend prevailing trends and potentially neglected domains that demand focus in the ever-evolving field of artificial intelligence.
{"title":"Mapping Ethical Artificial Intelligence Policy Landscape: A Mixed Method Analysis.","authors":"Tahereh Saheb, Tayebeh Saheb","doi":"10.1007/s11948-024-00472-6","DOIUrl":"10.1007/s11948-024-00472-6","url":null,"abstract":"<p><p>As more national governments adopt policies addressing the ethical implications of artificial intelligence, a comparative analysis of policy documents on these topics can provide valuable insights into emerging concerns and areas of shared importance. This study critically examines 57 policy documents pertaining to ethical AI originating from 24 distinct countries, employing a combination of computational text mining methods and qualitative content analysis. The primary objective is to methodically identify common themes throughout these policy documents and perform a comparative analysis of the ways in which various governments give priority to crucial matters. A total of nineteen topics were initially retrieved. Through an iterative coding process, six overarching themes were identified: principles, the protection of personal data, governmental roles and responsibilities, procedural guidelines, governance and monitoring mechanisms, and epistemological considerations. Furthermore, the research revealed 31 ethical dilemmas pertaining to AI that had been overlooked previously but are now emerging. These dilemmas have been referred to in different extents throughout the policy documents. This research makes a scholarly contribution to the expanding field of technology policy formulations at the national level by analyzing similarities and differences among countries. Furthermore, this analysis has practical ramifications for policymakers who are attempting to comprehend prevailing trends and potentially neglected domains that demand focus in the ever-evolving field of artificial intelligence.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 2","pages":"9"},"PeriodicalIF":3.7,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10920462/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140050836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing the Role of Values and Ideals in the Development of Energy Systems: How Values, Their Idealizations, and Technologies Shape Political Decision-Making
Pub Date: 2024-02-29 | DOI: 10.1007/s11948-024-00463-7
Joost Alleblas
This study examines an important aspect of energy history and policy: the intertwinement of energy technologies with ideals. Ideals play an important role in energy visions and innovation pathways. Aspirations to realize technical, social, and political ideals indicate a long-term commitment in the design of energy systems, distinguishable from commitment to other abstract goals, such as values. This study offers an analytical scheme that could help to conceptualize these differences and their impact on energy policy. In the proposed model, two spheres of interaction are highlighted: a material sphere in which values and technologies co-evolve, and an imaginary sphere in which ideals interact with idealized technologies. Furthermore, the relation between these two spheres can be understood in different ways. This study examines three cases that are illustrative of the different roles of ideals in the development of energy technologies and visions: (1) the evolution of safety in nuclear reactor design; (2) visions of atomic power in France; (3) the political idealization of a tidal power scheme in the Severn Estuary. Finally, the developed model implies more general insights for the development of sociotechnical systems. Among other things, it shows why certain projects and technologies remain a political, but not a techno-economic, option.
{"title":"Analyzing the Role of Values and Ideals in the Development of Energy Systems: How Values, Their Idealizations, and Technologies Shape Political Decision-Making.","authors":"Joost Alleblas","doi":"10.1007/s11948-024-00463-7","DOIUrl":"10.1007/s11948-024-00463-7","url":null,"abstract":"<p><p>This study examines an important aspect of energy history and policy: the intertwinement of energy technologies with ideals. Ideals play an important role in energy visions and innovation pathways. Aspirations to realize technical, social, and political ideals indicate a long-term commitment in the design of energy systems, distinguishable from commitment to other abstract goals, such as values. This study offers an analytical scheme that could help to conceptualize these differences and their impact on energy policy. In the proposed model, two spheres of interaction are highlighted: a material sphere in which values and technologies co-evolve, and an imaginary sphere in which ideals interact with idealized technologies. Furthermore, the relation between these two spheres can be understood in different ways. This study examines three cases that are illustrative of the different roles of ideals in the development of energy technologies and visions: (1) the evolution of safety in nuclear reactor design; (2) visions of atomic power in France; (3) the political idealization of a tidal power scheme in the Severn Estuary. Finally, the developed model implies more general insights for the development of sociotechnical systems. Amongst others, it shows why certain projects and technologies remain a political, but not a techno-economic option.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 2","pages":"8"},"PeriodicalIF":3.7,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10904412/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139991653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Competition for Funding Impacts Scientific Practice: Building Pre-fab Houses but no Cathedrals
Pub Date: 2024-02-13 | DOI: 10.1007/s11948-024-00465-5
Stephanie Meirmans
In the research integrity literature, funding plays two different roles: it is thought to encourage questionable research practices (QRPs) through perverse incentives, and it is seen as a potential lever for incentivizing research integrity standards. Recent studies, asking funders, have emphasized the importance of the latter. However, the perspective of active researchers on the impact of competitive research funding on science has not yet been explored. Here, I address this issue by conducting a series of group sessions with researchers in two countries with different degrees of competition for funding, from three scientific fields (medical sciences, natural sciences, humanities), and at two different career stages (permanent versus temporary employment). Researchers across all groups experienced that competition for funding shapes science, with many unintended negative consequences. Intriguingly, these consequences had little to do with the type of QRPs typically presented in the research integrity literature. Instead, the researchers pointed out that funding could result in predictable, fashionable, short-sighted, and overpromising science. This was seen as highly problematic: scientists experienced that the 'projectification' of science makes it more and more difficult to do any science of real importance: plunging into the unknown or addressing big issues that need a long-term horizon to mature. They also problematized unintended negative effects of collaboration and strategizing. I suggest it may be time to move away from a focus on QRPs in connection with funding and instead address the real problems. Such a shift may then call for entirely different types of policy actions.
{"title":"How Competition for Funding Impacts Scientific Practice: Building Pre-fab Houses but no Cathedrals.","authors":"Stephanie Meirmans","doi":"10.1007/s11948-024-00465-5","DOIUrl":"10.1007/s11948-024-00465-5","url":null,"abstract":"<p><p>In the research integrity literature, funding plays two different roles: it is thought to elevate questionable research practices (QRPs) due to perverse incentives, and it is a potential actor to incentivize research integrity standards. Recent studies, asking funders, have emphasized the importance of the latter. However, the perspective of active researchers on the impact of competitive research funding on science has not been explored yet. Here, I address this issue by conducting a series of group sessions with researchers in two different countries with different degrees of competition for funding, from three scientific fields (medical sciences, natural sciences, humanities), and in two different career stages (permanent versus temporary employment). Researchers across all groups experienced that competition for funding shapes science, with many unintended negative consequences. Intriguingly, these consequences had little to do with the type of QRPs typically being presented in the research integrity literature. Instead, the researchers pointed out that funding could result in predictable, fashionable, short-sighted, and overpromising science. This was seen as highly problematic: scientists experienced that the 'projectification' of science makes it more and more difficult to do any science of real importance: plunging into the unknown or addressing big issues that need a long-term horizon to mature. They also problematized unintended negative effects from collaboration and strategizing. I suggest it may be time to move away from a focus on QRPs in connection with funding, and rather address the real problems. Such a shift may then call for entirely different types of policy actions.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 1","pages":"6"},"PeriodicalIF":3.7,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10864468/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139724635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}