From Pixels to Principles: A Decade of Progress and Landscape in Trustworthy Computer Vision
Pub Date: 2024-06-10 | DOI: 10.1007/s11948-024-00480-6
Kexin Huang, Yan Teng, Yang Chen, Yingchun Wang
The rapid development of computer vision technologies and applications has brought forth a range of social and ethical challenges. Due to the unique characteristics of visual technology in terms of data modalities and application scenarios, computer vision poses specific ethical issues. However, the majority of existing literature either addresses artificial intelligence as a whole or pays particular attention to natural language processing, leaving a gap in specialized research on ethical issues and systematic solutions in the field of computer vision. This paper uses bibliometrics and text-mining techniques to quantitatively analyze papers from prominent academic conferences in computer vision over the past decade. It first reveals the development trends and the distribution of attention across trustworthiness aspects in the computer vision field, as well as the inherent connections between ethical dimensions and different stages of visual model development. A life-cycle framework for trustworthy computer vision is then presented that interconnects the relevant trustworthiness issues, the operation pipeline of AI models, and viable technical solutions, providing researchers and policymakers with references and guidance for achieving trustworthy CV. Finally, it discusses particular motivations for conducting trustworthy practices and underscores the consistencies and tensions among various trustworthiness principles and technical attributes.
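As a rough illustration of the kind of bibliometric text mining the abstract describes, the sketch below counts trustworthiness-related terms in conference abstracts by year; the term list, corpus layout, and sample data are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: tally trustworthiness-related terms per year in a
# corpus of (year, abstract) pairs. The term list and data are assumptions,
# not the paper's actual methodology or corpus.
from collections import Counter
import re

TRUST_TERMS = {"fairness", "privacy", "robustness", "transparency", "accountability"}

def term_trends(papers):
    """Map each trustworthiness term to a per-year occurrence count."""
    trends = {term: Counter() for term in TRUST_TERMS}
    for year, abstract in papers:
        for token in re.findall(r"[a-z]+", abstract.lower()):
            if token in TRUST_TERMS:
                trends[token][year] += 1
    return trends

# Hypothetical two-paper corpus, purely for demonstration.
corpus = [
    (2014, "We study the robustness of CNN features under corruption."),
    (2023, "A fairness-aware, privacy-preserving face recognition pipeline."),
]
for term, by_year in sorted(term_trends(corpus).items()):
    print(term, dict(by_year))
```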
{"title":"From Pixels to Principles: A Decade of Progress and Landscape in Trustworthy Computer Vision.","authors":"Kexin Huang, Yan Teng, Yang Chen, Yingchun Wang","doi":"10.1007/s11948-024-00480-6","DOIUrl":"10.1007/s11948-024-00480-6","url":null,"abstract":"<p><p>The rapid development of computer vision technologies and applications has brought forth a range of social and ethical challenges. Due to the unique characteristics of visual technology in terms of data modalities and application scenarios, computer vision poses specific ethical issues. However, the majority of existing literature either addresses artificial intelligence as a whole or pays particular attention to natural language processing, leaving a gap in specialized research on ethical issues and systematic solutions in the field of computer vision. This paper utilizes bibliometrics and text-mining techniques to quantitatively analyze papers from prominent academic conferences in computer vision over the past decade. It first reveals the developing trends and specific distribution of attention regarding trustworthy aspects in the computer vision field, as well as the inherent connections between ethical dimensions and different stages of visual model development. A life-cycle framework regarding trustworthy computer vision is then presented by making the relevant trustworthy issues, the operation pipeline of AI models, and viable technical solutions interconnected, providing researchers and policymakers with references and guidance for achieving trustworthy CV. Finally, it discusses particular motivations for conducting trustworthy practices and underscores the consistency and ambivalence among various trustworthy principles and technical attributes.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"26"},"PeriodicalIF":2.7,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11164730/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141297147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Defending and Defining Environmental Responsibilities for the Health Research Sector
Pub Date: 2024-06-06 | DOI: 10.1007/s11948-024-00487-z
Bridget Pratt
Six planetary boundaries have already been exceeded, including climate change, loss of biodiversity, chemical pollution, and land-system change. The health research sector contributes to the environmental crisis we are facing, though to a lesser extent than the healthcare or agriculture sectors. It could take steps to reduce its environmental impact but generally has not done so, even as the planetary emergency worsens. So far, the normative case for why the health research sector should rectify that failure has not been made. This paper argues that strong philosophical grounds, derived from theories of health and social justice, exist to support the claim that the sector has a duty to avoid or minimise causing or contributing to ecological harms that threaten human health or worsen health inequity. The paper next develops ideas about the duty's content, explaining why it should entail more than reducing carbon emissions, and considers what limits might be placed on the duty.
{"title":"Defending and Defining Environmental Responsibilities for the Health Research Sector.","authors":"Bridget Pratt","doi":"10.1007/s11948-024-00487-z","DOIUrl":"10.1007/s11948-024-00487-z","url":null,"abstract":"<p><p>Six planetary boundaries have already been exceeded, including climate change, loss of biodiversity, chemical pollution, and land-system change. The health research sector contributes to the environmental crisis we are facing, though to a lesser extent than healthcare or agriculture sectors. It could take steps to reduce its environmental impact but generally has not done so, even as the planetary emergency worsens. So far, the normative case for why the health research sector should rectify that failure has not been made. This paper argues strong philosophical grounds, derived from theories of health and social justice, exist to support the claim that the sector has a duty to avoid or minimise causing or contributing to ecological harms that threaten human health or worsen health inequity. The paper next develops ideas about the duty's content, explaining why it should entail more than reducing carbon emissions, and considers what limits might be placed on the duty.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"25"},"PeriodicalIF":2.7,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11156718/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141263338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare
Pub Date: 2024-06-04 | DOI: 10.1007/s11948-024-00486-0
Laura Arbelaez Ossa, Stephen R Milford, Michael Rost, Anja K Leist, David M Shaw, Bernice S Elger
While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.
Comparing First-Year Engineering Student Conceptions of Ethical Decision-Making to Performance on Standardized Assessments of Ethical Reasoning
Pub Date: 2024-06-04 | DOI: 10.1007/s11948-024-00488-y
Richard T Cimino, Scott C Streiner, Daniel D Burkey, Michael F Young, Landon Bassett, Joshua B Reed
The Defining Issues Test 2 (DIT-2) and Engineering Ethical Reasoning Instrument (EERI) are designed to measure the ethical reasoning of general (DIT-2) and engineering-student (EERI) populations. These tools, and the DIT-2 especially, have gained wide usage for assessing the ethical reasoning of undergraduate students. This paper reports on a research study in which the ethical reasoning of first-year undergraduate engineering students at multiple universities was assessed with both of these tools. In addition to these two instruments, students were also asked to create personal concept maps of the phrase "ethical decision-making." It was hypothesized that students whose instrument scores reflected more postconventional levels of moral development and more sophisticated ethical reasoning skills would likewise have richer, more detailed concept maps of ethical decision-making, reflecting a deeper understanding of this topic and its complex of related concepts. In fact, there was no significant correlation between the instrument scores and the concept map scores, suggesting that the way first-year students conceptualize ethical decision-making does not predict how they perform in scenario-based (and perhaps more situated) ethical reasoning. This disparity indicates a need to more precisely quantify engineering ethical reasoning and decision-making if we wish to inform assessment outcomes using the results of such quantitative analyses.
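A minimal sketch of the kind of correlation check the study reports, assuming per-student instrument scores (e.g., a DIT-2 index) paired with concept-map scores; all numbers below are hypothetical placeholders, not study data.

```python
# Hypothetical data only: Pearson correlation between standardized-instrument
# scores and concept-map scores, mirroring the paper's (non-)correlation test.
from scipy.stats import pearsonr

dit2_scores = [34.2, 28.7, 41.5, 22.9, 36.8, 30.1]  # hypothetical instrument scores
concept_map_scores = [12, 14, 11, 13, 15, 10]       # hypothetical map scores

r, p = pearsonr(dit2_scores, concept_map_scores)
print(f"r = {r:.2f}, p = {p:.3f}")  # a small |r| with high p echoes the reported null result
```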
{"title":"Comparing First-Year Engineering Student Conceptions of Ethical Decision-Making to Performance on Standardized Assessments of Ethical Reasoning.","authors":"Richard T Cimino, Scott C Streiner, Daniel D Burkey, Michael F Young, Landon Bassett, Joshua B Reed","doi":"10.1007/s11948-024-00488-y","DOIUrl":"10.1007/s11948-024-00488-y","url":null,"abstract":"<p><p>The Defining Issues Test 2 (DIT-2) and Engineering Ethical Reasoning Instrument (EERI) are designed to measure ethical reasoning of general (DIT-2) and engineering-student (EERI) populations. These tools-and the DIT-2 especially-have gained wide usage for assessing the ethical reasoning of undergraduate students. This paper reports on a research study in which the ethical reasoning of first-year undergraduate engineering students at multiple universities was assessed with both of these tools. In addition to these two instruments, students were also asked to create personal concept maps of the phrase \"ethical decision-making.\" It was hypothesized that students whose instrument scores reflected more postconventional levels of moral development and more sophisticated ethical reasoning skills would likewise have richer, more detailed concept maps of ethical decision-making, reflecting their deeper levels of understanding of this topic and the complex of related concepts. In fact, there was no significant correlation between the instrument scores and concept map scoring, suggesting that the way first-year students conceptualize ethical decision making does not predict the way they behave when performing scenario-based ethical reasoning (perhaps more situated). This disparity indicates a need to more precisely quantify engineering ethical reasoning and decision making, if we wish to inform assessment outcomes using the results of such quantitative analyses.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"23"},"PeriodicalIF":2.7,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11150177/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141238660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rethinking Health Recommender Systems for Active Aging: An Autonomy-Based Ethical Analysis
Pub Date: 2024-05-27 | DOI: 10.1007/s11948-024-00479-z
Simona Tiribelli, Davide Calvaresi
Health Recommender Systems (HRS) are promising Artificial-Intelligence-based tools for supporting healthy lifestyles and therapy adherence in healthcare and medicine. Active aging (AA) is among their most supported areas. However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis. In particular, a brief overview of the technical aspects of HRS allows us to shed light on the ethical risks and challenges they might pose to individuals' well-being as they age. Moreover, the study proposes a categorization and understanding of, and possible preventive/mitigation actions for, the elicited risks and challenges by rethinking autonomy, a core principle of AI ethics. Finally, elaborating on autonomy-related ethical theories, the paper proposes an autonomy-based ethical framework and shows how it can foster the development of autonomy-enabling HRS for AA.
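To make concrete what a recommender in this setting might look like, here is a minimal content-based sketch; the activity features, user profile, and scoring are invented for illustration and are not drawn from the paper or any particular HRS.

```python
# Invented toy example of a content-based health recommender for active aging:
# activities and user preferences share a feature space and are matched by
# cosine similarity. Not the paper's system; illustration only.
import numpy as np

# Assumed feature axes: [mobility, social interaction, cognitive engagement]
ACTIVITIES = {
    "morning walk":      np.array([1.0, 0.2, 0.1]),
    "group tai chi":     np.array([0.8, 0.9, 0.3]),
    "crossword puzzles": np.array([0.0, 0.1, 1.0]),
}

def recommend(user_profile, k=2):
    """Return the k activities whose features best match the user's profile."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    ranked = sorted(ACTIVITIES, key=lambda name: cosine(ACTIVITIES[name], user_profile),
                    reverse=True)
    return ranked[:k]

print(recommend(np.array([0.7, 0.8, 0.2])))  # favours mobile, social activities
```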
Epistemic Trust in Scientific Experts: A Moral Dimension
Pub Date: 2024-05-24 | DOI: 10.1007/s11948-024-00489-x
George Kwasi Barimah
In this paper, I develop and defend a moralized conception of epistemic trust in science against a particular kind of non-moral account defended by John (2015, 2018). I suggest that non-epistemic value considerations, non-epistemic norms of communication, and affective trust properly characterize the relationship of epistemic trust between scientific experts and non-experts. I argue that it is through a moralized account of epistemic trust in science that we can make sense of the deep-seated moral undertones that are often at play when non-experts (dis)trust science.
{"title":"Epistemic Trust in Scientific Experts: A Moral Dimension.","authors":"George Kwasi Barimah","doi":"10.1007/s11948-024-00489-x","DOIUrl":"10.1007/s11948-024-00489-x","url":null,"abstract":"<p><p>In this paper, I develop and defend a moralized conception of epistemic trust in science against a particular kind of non-moral account defended by John (2015, 2018). I suggest that non-epistemic value considerations, non-epistemic norms of communication and affective trust properly characterize the relationship of epistemic trust between scientific experts and non-experts. I argue that it is through a moralized account of epistemic trust in science that we can make sense of the deep-seated moral undertones that are often at play when non-experts (dis)trust science.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"21"},"PeriodicalIF":2.7,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11126506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141094574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Anticipatory Approach to Ethico-Legal Implications of Future Neurotechnology
Pub Date: 2024-05-15 | DOI: 10.1007/s11948-024-00482-4
Stephen Rainey
This paper provides a justificatory rationale for recommending the inclusion of imagined future use cases in neurotechnology development processes, specifically for legal and policy ends. Including detailed imaginative engagement with future applications of neurotechnology can serve to connect ethical, legal, and policy issues potentially arising from the translation of brain stimulation research to the public consumer domain. Futurist scholars have for some time recommended approaches that merge creative arts with scientific development in order to theorise possible futures toward which current trends in technology development might be steered. Taking a creative, imaginative approach like this in the neurotechnology context can help move development processes beyond considerations of device functioning, safety, and compliance with existing regulation, and into an active engagement with potential future dynamics brought about by the emergence of the neurotechnology itself. Imagined scenarios can engage with potential consumer uses of devices that might come to challenge legal or policy contexts. An anticipatory, creative approach can imagine what such uses might consist in, and what they might imply. Justifying this approach also prompts a co-responsibility perspective for policymaking in technology contexts. Overall, this furnishes a mode of neurotechnology's emergence that can avoid crises of confidence in terms of ethico-legal issues, and promote policy responses balanced between knowledge, values, protected innovation potential, and regulatory safeguards.
{"title":"An Anticipatory Approach to Ethico-Legal Implications of Future Neurotechnology.","authors":"Stephen Rainey","doi":"10.1007/s11948-024-00482-4","DOIUrl":"10.1007/s11948-024-00482-4","url":null,"abstract":"<p><p>This paper provides a justificatory rationale for recommending the inclusion of imagined future use cases in neurotechnology development processes, specifically for legal and policy ends. Including detailed imaginative engagement with future applications of neurotechnology can serve to connect ethical, legal, and policy issues potentially arising from the translation of brain stimulation research to the public consumer domain. Futurist scholars have for some time recommended approaches that merge creative arts with scientific development in order to theorise possible futures toward which current trends in technology development might be steered. Taking a creative, imaginative approach like this in the neurotechnology context can help move development processes beyond considerations of device functioning, safety, and compliance with existing regulation, and into an active engagement with potential future dynamics brought about by the emergence of the neurotechnology itself. Imagined scenarios can engage with potential consumer uses of devices that might come to challenge legal or policy contexts. An anticipatory, creative approach can imagine what such uses might consist in, and what they might imply. Justifying this approach also prompts a co-responsibility perspective for policymaking in technology contexts. Overall, this furnishes a mode of neurotechnology's emergence that can avoid crises of confidence in terms of ethico-legal issues, and promote policy responses balanced between knowledge, values, protected innovation potential, and regulatory safeguards.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"18"},"PeriodicalIF":2.7,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11096192/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140923671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Between Technological Utopia and Dystopia: Online Expression of Compulsory Use of Surveillance Technology
Pub Date: 2024-05-15 | DOI: 10.1007/s11948-024-00483-3
Yu-Leung Ng, Zhihuai Lin
This study investigated people's ethical concerns about surveillance technology. Adopting the spectrum of technological utopian and dystopian narratives, it explored how people perceive a society constructed through the compulsory use of surveillance technology. The study empirically examined anonymous online expressions of attitudes toward the society-wide, compulsory adoption of a contact tracing app that affected almost every aspect of people's everyday lives. Applying the structural topic modeling approach to comments on four Hong Kong anonymous discussion forums, it uncovered topics reflecting technological utopian, dystopian, and pragmatic views of the surveillance app. The findings showed that people with a technological utopian view of the app believed that compulsory use can facilitate social good and maintain social order. In contrast, individuals with a technological dystopian view expressed privacy concerns and distrust of the surveillance technology. Techno-pragmatists took a balanced approach and evaluated its implementation practically.
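Structural topic modeling is usually run with R's stm package; as a rough Python stand-in, the sketch below fits plain LDA (which omits stm's document covariates) over invented toy comments, only to show the general shape of the method.

```python
# Rough stand-in for the paper's method: plain LDA via scikit-learn instead of
# structural topic modeling (R's stm), over invented example comments.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [  # hypothetical forum comments, not data from the study
    "the app keeps society safe and helps trace infections quickly",
    "mandatory tracking is surveillance and i do not trust the government",
    "privacy leaks are inevitable once location data is collected",
    "a useful tool if implemented carefully with clear and limited rules",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(doc_term)
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top_terms = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"topic {i}: {top_terms}")
```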
{"title":"Between Technological Utopia and Dystopia: Online Expression of Compulsory Use of Surveillance Technology.","authors":"Yu-Leung Ng, Zhihuai Lin","doi":"10.1007/s11948-024-00483-3","DOIUrl":"10.1007/s11948-024-00483-3","url":null,"abstract":"<p><p>This study investigated people's ethical concerns of surveillance technology. By adopting the spectrum of technological utopian and dystopian narratives, how people perceive a society constructed through the compulsory use of surveillance technology was explored. This study empirically examined the anonymous online expression of attitudes toward the society-wide, compulsory adoption of a contact tracing app that affected almost every aspect of all people's everyday lives at a societal level. By applying the structural topic modeling approach to analyze comments on four Hong Kong anonymous discussion forums, topics concerning the technological utopian, dystopian, and pragmatic views on the surveillance app were discovered. The findings showed that people with a technological utopian view on this app believed that the implementation of compulsory app use can facilitate social good and maintain social order. In contrast, individuals who had a technological dystopian view expressed privacy concerns and distrust of this surveillance technology. Techno-pragmatists took a balanced approach and evaluated its implementation practically.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"19"},"PeriodicalIF":2.7,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11096232/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140923672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Australia II: A Case Study in Engineering Ethics
Pub Date: 2024-05-08 | DOI: 10.1007/s11948-024-00477-1
Peter van Oossanen, Martin Peterson
Australia II became the first foreign yacht to win the America's Cup in 1983. The boat had a revolutionary wing keel and a better underwater hull form. In official documents, Ben Lexcen is credited with the design. He is also listed as the sole inventor of the wing keel in a patent application submitted on February 5, 1982. However, as reported in the New York Times, the Sydney Morning Herald, and Professional Boatbuilder, the wing keel was in fact designed by engineer Peter van Oossanen at the Netherlands Ship Model Basin in Wageningen, assisted by Dr. Joop Slooff at the National Aerospace Laboratory in Amsterdam. Based on telexes, letters, drawings, and other documents preserved in his personal archive, this paper presents van Oossanen's account of how the revolutionary wing keel was designed. This is followed by an ethical analysis by Martin Peterson, in which he applies the American NSPE and Dutch KIVI codes of ethics to the information provided by van Oossanen. The NSPE and KIVI codes give conflicting advice about the case, and it is not obvious which document is most relevant. This impasse is resolved by applying a method of applied ethics in which similarity-based reasoning is extended to cases that are not fully similar. The key idea, presented in Peterson's The Ethics of Technology: A Geometric Analysis of Five Moral Principles (Oxford University Press, 2017), is to use moral paradigm cases as reference points for constructing a "moral map".
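As a toy rendering of that similarity-based method, the sketch below encodes cases as feature vectors, anchors verdicts in paradigm cases, and assigns a new case the verdict of its nearest paradigm; the feature axes, encodings, and cases are invented for illustration and are not Peterson's actual moral map.

```python
# Toy illustration of paradigm-based moral reasoning: a disputed case inherits
# the verdict of the nearest paradigm case in an assumed feature space.
# Features and encodings are invented, not Peterson's actual method or data.
import numpy as np

# Assumed feature axes: [credit claimed honestly, contribution disclosed, agreements honored]
PARADIGMS = {
    "clearly permissible": np.array([1.0, 1.0, 1.0]),
    "clearly wrongful":    np.array([0.0, 0.0, 0.0]),
}

def nearest_paradigm(case):
    """Classify a case by Euclidean distance to the paradigm cases."""
    return min(PARADIGMS, key=lambda name: np.linalg.norm(case - PARADIGMS[name]))

disputed_credit_case = np.array([0.2, 0.1, 0.6])  # hypothetical encoding
print(nearest_paradigm(disputed_credit_case))     # -> clearly wrongful
```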