Pub Date: 2024-05-15 | DOI: 10.1007/s11948-024-00483-3
Yu-Leung Ng, Zhihuai Lin
This study investigated people's ethical concerns about surveillance technology. Adopting the spectrum of technological utopian and dystopian narratives, it explored how people perceive a society constructed through the compulsory use of surveillance technology. The study empirically examined anonymous online expressions of attitudes toward the society-wide, compulsory adoption of a contact tracing app that affected almost every aspect of people's everyday lives. By applying structural topic modeling to comments on four Hong Kong anonymous discussion forums, topics reflecting technological utopian, dystopian, and pragmatic views of the surveillance app were identified. The findings showed that people with a technological utopian view of the app believed that compulsory app use could facilitate social good and maintain social order. In contrast, individuals with a technological dystopian view expressed privacy concerns and distrust of the surveillance technology. Techno-pragmatists took a balanced approach and evaluated its implementation practically.
Title: "Between Technological Utopia and Dystopia: Online Expression of Compulsory Use of Surveillance Technology." Science and Engineering Ethics, 30(3), 19. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11096232/pdf/
Pub Date: 2024-05-08 | DOI: 10.1007/s11948-024-00477-1
Peter van Oossanen, Martin Peterson
Australia II became the first foreign yacht to win the America's Cup in 1983. The boat had a revolutionary wing keel and a better underwater hull form. In official documents, Ben Lexcen is credited with the design. He is also listed as the sole inventor of the wing keel in a patent application submitted on February 5, 1982. However, as reported in the New York Times, the Sydney Morning Herald, and Professional Boatbuilder, the wing keel was in fact designed by engineer Peter van Oossanen at the Netherlands Ship Model Basin in Wageningen, assisted by Dr. Joop Slooff at the National Aerospace Laboratory in Amsterdam. Based on telexes, letters, drawings, and other documents preserved in his personal archive, this paper presents van Oossanen's account of how the revolutionary wing keel was designed. This is followed by an ethical analysis by Martin Peterson, in which he applies the American NSPE and Dutch KIVI codes of ethics to the information provided by van Oossanen. The NSPE and KIVI codes give conflicting advice about the case, and it is not obvious which document is most relevant. This impasse is resolved by applying a method of applied ethics in which similarity-based reasoning is extended to cases that are not fully similar. The key idea, presented in Peterson's book The Ethics of Technology (Peterson, The ethics of technology: A geometric analysis of five moral principles, Oxford University Press, 2017), is to use moral paradigm cases as reference points for constructing a "moral map".
Title: "Australia II: A Case Study in Engineering Ethics." Science and Engineering Ethics, 30(3), 16. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11078783/pdf/
Pub Date: 2024-05-08 | DOI: 10.1007/s11948-024-00478-0
Jared Howes, Yvonne Denier, Tijs Vandemeulebroucke, Chris Gastmans
Wandering is a symptom of dementia that can have devastating consequences on the lives of persons living with dementia and their families and caregivers. Increasingly, caregivers are turning to electronic tracking devices to help manage wandering. Ethical questions have been raised regarding these location-based technologies, and although qualitative research has been conducted to gain better insight into various stakeholders' views on the topic, developers of these technologies have been largely excluded. No qualitative research has focused on developers' perceptions of the ethics of electronic tracking devices. To address this, we performed a qualitative semi-structured interview study based on grounded theory. We interviewed 15 developers of electronic tracking devices to better understand how they perceive ethical issues surrounding the design, development, and use of these devices within dementia care. Our results reveal that developers are strongly motivated by moral considerations and believe that including stakeholders throughout the development process is critical for success. Developers felt a strong sense of moral obligation toward topics within their control and a weaker sense of moral obligation toward topics outside their control. This leads to a perceived moral boundary between development and use, where some moral responsibility is shifted to end-users.
Title: "The Ethics of Electronic Tracking Devices in Dementia Care: An Interview Study with Developers." Science and Engineering Ethics, 30(3), 17. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11078786/pdf/
Pub Date: 2024-04-30 | DOI: 10.1007/s11948-024-00466-4
Rochelle E. Tractenberg, Victor I. Piercey, Catherine A. Buell
This project explored what constitutes “ethical practice of mathematics”. Thematic analysis of ethical practice standards from mathematics-adjacent disciplines (statistics and computing) was combined with two organizational codes of conduct and community input, resulting in over 100 items. These analyses identified for inclusion 29 of the 52 items in the 2018 American Statistical Association Ethical Guidelines for Statistical Practice and 15 of the 24 additional (unique) items from the 2018 Association for Computing Machinery Code of Ethics. Three items synthesized from the 2019 American Mathematical Society Code of Ethics, and none from the Mathematical Association of America Code of Ethics, were identified as reflective of “ethical mathematical practice” beyond items already identified from the other two codes. The community contributed six unique items. Item stems were standardized to “The ethical mathematics practitioner…”. Invitations to complete the 30-minute online survey were shared nationally (US) via mathematics organization listservs and other widespread emails and announcements. We received 142 individual responses to the national survey; 75% of respondents endorsed 41 of the 52 items, and 90–100% endorsed 20 of the 52 items. Items from different sources were endorsed at both high and low rates. A final thematic analysis yielded 44 items, grouped into “General” (12 items), “Profession” (10 items), and “Scholarship” (11 items). Moreover, for the practitioner in a leader/mentor/supervisor/instructor role, there are an additional 11 items (4 General/7 Professional). These results suggest that the community perceives a much wider range of behaviors by mathematicians to be subject to ethical practice standards than had previously been included in professional organization codes. The results provide evidence against the argument that mathematics practitioners engaged in “pure” or “theoretical” work have minimal or no ethical obligations.
Title: "Defining 'Ethical Mathematical Practice' Through Engagement with Discipline-Adjacent Practice Standards and the Mathematical Community." Science and Engineering Ethics.
Pub Date: 2024-04-11 | DOI: 10.1007/s11948-024-00475-3
Shreesh Chary
Two Boeing 737-MAX passenger planes crashed in October 2018 and March 2019, leading to the grounding of all 737-MAX aircraft. The crashes put Boeing’s corporate practices and culture under the spotlight. The main objective of this paper is to use the case of Boeing to highlight the importance of efficient employee grievance redressal mechanisms and an independent external regulator. The methodology adopted is a qualitative analysis of statements by various whistleblowers and by Boeing and Federal Aviation Administration (FAA) stakeholders. The paper suggests that employee feedback flowing up the chain of command should be handled more flexibly and treated with greater seriousness. It recommends that companies adopt a cooling-off period or a lifetime restriction for employees who have gone through the revolving door between regulators and the industry. The Boeing 737-MAX case, which emphasizes the ethical obligations of the job, can offer value to engineers, engineering educators, managers, ombudsmen, and human resource professionals.
Title: "Employee Grievance Redressal and Corporate Ethics: Lessons from the Boeing 737-MAX Crashes." Science and Engineering Ethics.
Pub Date: 2024-04-04 | DOI: 10.1007/s11948-024-00473-5
Karen Huang, P. M. Krafft
Controversies surrounding social media platforms have provided opportunities for institutional reflexivity amongst users and regulators on how to understand and govern platforms. Amidst contestation, platform companies have continued to enact projects that draw upon existing modes of privatized governance. We investigate how social media companies have attempted to achieve closure by continuing to set the terms around platform governance. We investigate two projects implemented by Facebook (Meta)—authenticity regulation and privacy controls—in response to the Russian Interference and Cambridge Analytica controversies surrounding the 2016 U.S. Presidential Election. Drawing on Goffman’s metaphor of stage management, we analyze the techniques deployed by Facebook to reinforce a division between what is visible and invisible to the user experience. These platform governance projects propose to act upon front-stage data relations: information that users can see from other users—whether that is content that users can see from “bad actors”, or information that other users can see about oneself. At the same time, these projects relegate back-stage data relations—information flows between users constituted by recommendation and targeted advertising systems—to invisibility and inaction. As such, Facebook renders the user experience actionable for governance, while foreclosing governance of back-stage data relations central to the economic value of the platform. As social media companies continue to perform platform governance projects following controversies, our paper invites reflection on the politics of these projects. By destabilizing the boundaries drawn by platform companies, we open space for continuous reflexivity on how platforms should be understood and governed.
Title: "Performing Platform Governance: Facebook and the Stage Management of Data Relations." Science and Engineering Ethics.
Pub Date: 2024-04-03 | DOI: 10.1007/s11948-024-00467-3
Andrea Reyes Elizondo, Wolfgang Kaltenbrunner
Research Integrity (RI) is high on the agenda of both institutions and science policy. The European Union as well as national ministries of science have launched ambitious initiatives to combat misconduct and breaches of research integrity. Often, such initiatives entail attempts to regulate scientific behavior through guidelines that institutions and academic communities can use to more easily identify and deal with cases of misconduct. Rather than framing misconduct as a result of an information deficit, we instead conceptualize Questionable Research Practices (QRPs) as attempts by researchers to reconcile epistemic and social forms of uncertainty in knowledge production. Drawing on previous literature, we define epistemic uncertainty as the inherent intellectual unpredictability of scientific inquiry, while social uncertainty arises from the human-made conditions for scientific work. Our core argument—developed on the basis of 30 focus group interviews with researchers across different fields and European countries—is that breaches of research integrity can be understood as attempts to loosen overly tight coupling between the two forms of uncertainty. Our analytical approach is not meant to relativize or excuse misconduct, but rather to offer a more fine-grained perspective on what exactly it is that researchers want to accomplish by engaging in it. Based on the analysis, we conclude by proposing some concrete ways in which institutions and academic communities could try to reconcile epistemic and social uncertainties on a more collective level, thereby reducing incentives for researchers to engage in misconduct.
Title: "Navigating the Science System: Research Integrity and Academic Survival Strategies." Science and Engineering Ethics.
Pub Date: 2024-03-29 | DOI: 10.1007/s11948-024-00476-2
Danielle Swanepoel, Daniel Corks
Determining the agency-status of machines and AI has never been more pressing. As we progress into a future where humans and machines more closely co-exist, understanding the hallmark features of agency affords us the ability to develop policy and narratives that cater to both humans and machines. This paper maintains that decision-making processes largely underpin agential action, and that in most instances these processes yield good results in terms of making good choices. However, in some instances, when faced with two (or more) choices, an agent may find themselves with equal reasons to choose either, thus being presented with a tie. This paper argues, first, that in the event of a tie the ability to create a voluntarist reason is a hallmark feature of agency, and second, that AI, through its current tie-breaking mechanisms, does not have this ability and thus fails at this particular feature of agency.
Title: "Artificial Intelligence and Agency: Tie-breaking in AI Decision-Making." Science and Engineering Ethics, 30(2), 11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10980648/pdf/
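The kind of tie the abstract describes is easy to make concrete. The sketch below is a hypothetical illustration (the option names and scores are invented): two common mechanisms, deterministic argmax and random selection, both resolve the tie arbitrarily, without generating a new reason of the kind the authors call voluntarist.

```python
# Hypothetical illustration of a decision "tie": two options, equal scores.
import random

scores = {"option_a": 0.5, "option_b": 0.5}

best = max(scores.values())
tied = [name for name, score in scores.items() if score == best]

# Mechanism 1: argmax-style tie-breaking. Deterministic but arbitrary:
# the first-listed option wins purely because of ordering, not a reason.
argmax_choice = max(scores, key=scores.get)

# Mechanism 2: randomized tie-breaking. Also arbitrary: chance, not a reason.
random_choice = random.choice(tied)

print(tied)           # -> ['option_a', 'option_b']  (a genuine tie)
print(argmax_choice)  # -> option_a
```

On the paper's view, neither mechanism constitutes creating a voluntarist reason to prefer one option; both merely dissolve the tie procedurally.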
Pub Date: 2024-03-27 | DOI: 10.1007/s11948-024-00474-4
Fabio Tollon
In this paper, I introduce a "promises and perils" framework for understanding the "soft" impacts of emerging technology, and argue for a eudaimonic conception of well-being. This eudaimonic conception of well-being, however, presupposes that we have something like stable character traits. I therefore defend this view from the "situationist challenge" and show that instead of viewing this challenge as a threat to well-being, we can incorporate it into how we think about living well with technology. Human beings are susceptible to situational influences and are often unaware of the ways that their social and technological environment influence not only their ability to do well, but even their ability to know whether they are doing well. Any theory that attempts to describe what it means for us to be doing well, then, needs to take these contextual features into account and bake them into a theory of human flourishing. By paying careful attention to these contextual factors, we can design systems that promote human flourishing.
{"title":"Technology and the Situationist Challenge to Virtue Ethics.","authors":"Fabio Tollon","doi":"10.1007/s11948-024-00474-4","DOIUrl":"10.1007/s11948-024-00474-4","url":null,"abstract":"<p><p>In this paper, I introduce a \"promises and perils\" framework for understanding the \"soft\" impacts of emerging technology, and argue for a eudaimonic conception of well-being. This eudaimonic conception of well-being, however, presupposes that we have something like stable character traits. I therefore defend this view from the \"situationist challenge\" and show that instead of viewing this challenge as a threat to well-being, we can incorporate it into how we think about living well with technology. Human beings are susceptible to situational influences and are often unaware of the ways that their social and technological environment influence not only their ability to do well, but even their ability to know whether they are doing well. Any theory that attempts to describe what it means for us to be doing well, then, needs to take these contextual features into account and bake them into a theory of human flourishing. By paying careful attention to these contextual factors, we can design systems that promote human flourishing.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 2","pages":"10"},"PeriodicalIF":3.7,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10973075/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140307577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-07DOI: 10.1007/s11948-024-00472-6
Tahereh Saheb, Tayebeh Saheb
As more national governments adopt policies addressing the ethical implications of artificial intelligence, a comparative analysis of policy documents on these topics can provide valuable insights into emerging concerns and areas of shared importance. This study critically examines 57 policy documents pertaining to ethical AI originating from 24 distinct countries, employing a combination of computational text mining methods and qualitative content analysis. The primary objective is to methodically identify common themes throughout these policy documents and perform a comparative analysis of the ways in which various governments prioritize crucial matters. A total of nineteen topics were initially retrieved. Through an iterative coding process, six overarching themes were identified: principles, the protection of personal data, governmental roles and responsibilities, procedural guidelines, governance and monitoring mechanisms, and epistemological considerations. Furthermore, the research revealed 31 previously overlooked but now emerging ethical dilemmas pertaining to AI. These dilemmas are referred to to varying extents throughout the policy documents. This research makes a scholarly contribution to the expanding field of national-level technology policy formulation by analyzing similarities and differences among countries. Furthermore, this analysis has practical ramifications for policymakers who are attempting to comprehend prevailing trends and potentially neglected domains that demand focus in the ever-evolving field of artificial intelligence.
{"title":"Mapping Ethical Artificial Intelligence Policy Landscape: A Mixed Method Analysis.","authors":"Tahereh Saheb, Tayebeh Saheb","doi":"10.1007/s11948-024-00472-6","DOIUrl":"10.1007/s11948-024-00472-6","url":null,"abstract":"<p><p>As more national governments adopt policies addressing the ethical implications of artificial intelligence, a comparative analysis of policy documents on these topics can provide valuable insights into emerging concerns and areas of shared importance. This study critically examines 57 policy documents pertaining to ethical AI originating from 24 distinct countries, employing a combination of computational text mining methods and qualitative content analysis. The primary objective is to methodically identify common themes throughout these policy documents and perform a comparative analysis of the ways in which various governments give priority to crucial matters. A total of nineteen topics were initially retrieved. Through an iterative coding process, six overarching themes were identified: principles, the protection of personal data, governmental roles and responsibilities, procedural guidelines, governance and monitoring mechanisms, and epistemological considerations. Furthermore, the research revealed 31 ethical dilemmas pertaining to AI that had been overlooked previously but are now emerging. These dilemmas have been referred to in different extents throughout the policy documents. This research makes a scholarly contribution to the expanding field of technology policy formulations at the national level by analyzing similarities and differences among countries. 
Furthermore, this analysis has practical ramifications for policymakers who are attempting to comprehend prevailing trends and potentially neglected domains that demand focus in the ever-evolving field of artificial intelligence.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 2","pages":"9"},"PeriodicalIF":3.7,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10920462/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140050836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}