The Role of Engineering Ethics in Mitigating Corruption in Infrastructure Systems Delivery.
Pub Date: 2024-07-18 | DOI: 10.1007/s11948-024-00494-0
S A Ghahari, C Queiroz, S Labi, S McNeil
Indications that corruption mitigation in infrastructure systems delivery can be effective are found in the literature. However, there is an untapped opportunity to further enhance the efficacy of existing corruption mitigation strategies by placing them explicitly within the larger context of engineering ethics and the relevant policy statements, guidelines, codes, and manuals published by international organizations. An effective matching of these formal statements on ethics to infrastructure systems delivery facilitates the identification of potential corruption hotspots and thus helps establish or strengthen institutional mechanisms that address corruption. This paper reviews professional codes of ethics and relevant literature on corruption mitigation in the context of civil engineering infrastructure development, as a platform for building a structure that connects ethical tenets to mitigation strategies. The paper assesses corruption mitigation strategies against the background of the fundamental canons of practice in civil engineering codes of ethics. As such, the paper's assessment is grounded in the civil engineer's ethical responsibilities (to society, the profession, and peers) and principles (such as safety, health, welfare, respect, and honesty) that are common to professional codes of ethics in engineering practice. Addressing corruption in infrastructure development continues to be imperative for national economic and social development, an exigency underscored by the sheer scale of investments in infrastructure in any country and the billions of dollars lost annually to corruption and fraud.
{"title":"The Role of Engineering Ethics in Mitigating Corruption in Infrastructure Systems Delivery.","authors":"S A Ghahari, C Queiroz, S Labi, S McNeil","doi":"10.1007/s11948-024-00494-0","DOIUrl":"10.1007/s11948-024-00494-0","url":null,"abstract":"<p><p>Indications that corruption mitigation in infrastructure systems delivery can be effective are found in the literature. However, there is an untapped opportunity to further enhance the efficacy of existing corruption mitigation strategies by placing them explicitly within the larger context of engineering ethics, and relevant policy statements, guidelines, codes and manuals published by international organizations. An effective matching of these formal statements on ethics to infrastructure systems delivery facilitates the identification of potential corruption hotspots and thus help establish or strengthen institutional mechanisms that address corruption. This paper reviews professional codes of ethics, and relevant literature on corruption mitigation in the context of civil engineering infrastructure development, as a platform for building a structure that connects ethical tenets and the mitigation strategies. The paper assesses corruption mitigation strategies against the background of the fundamental canons of practice in civil engineering ethical codes. As such, the paper's assessment is grounded in the civil engineer's ethical responsibilities (to society, the profession, and peers) and principles (such as safety, health, welfare, respect, and honesty) that are common to professional codes of ethics in engineering practice. Addressing corruption in infrastructure development continues to be imperative for national economic and social development, and such exigency is underscored by the sheer scale of investments in infrastructure development in any country and the billions of dollars lost annually through corruption and fraud.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"29"},"PeriodicalIF":2.7,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11258101/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141635513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decentralising the Self - Ethical Considerations in Utilizing Decentralised Web Technology for Direct Brain Interfaces.
Pub Date: 2024-07-16 | DOI: 10.1007/s11948-024-00492-2
David M Lyreskog, Hazem Zohny, Sebastian Porsdam Mann, Ilina Singh, Julian Savulescu
The rapidly advancing field of brain-computer interfaces (BCI) and brain-to-brain interfaces (BBI) is stimulating interest across various sectors, including medicine, entertainment, research, and the military. The developers of large-scale brain-computer networks, sometimes dubbed 'Mindplexes' or 'Cloudminds', aim to enhance cognitive functions by distributing them across expansive networks. A key technical challenge is the efficient transmission and storage of information. One proposed solution is employing blockchain technology over Web 3.0 to create decentralised cognitive entities. This paper explores the potential of a decentralised web for coordinating large brain-computer constellations, and its associated benefits, focusing in particular on the conceptual and ethical challenges this innovation may pose pertaining to (1) Identity, (2) Sovereignty (encompassing Autonomy, Authenticity, and Ownership), (3) Responsibility and Accountability, and (4) Privacy, Safety, and Security. We suggest that while a decentralised web can address some concerns and mitigate certain risks, underlying ethical issues persist. Fundamental questions about entity definition within these networks, the distinctions between individuals and collectives, and responsibility distribution within and between networks demand further exploration.
{"title":"Decentralising the Self - Ethical Considerations in Utilizing Decentralised Web Technology for Direct Brain Interfaces.","authors":"David M Lyreskog, Hazem Zohny, Sebastian Porsdam Mann, Ilina Singh, Julian Savulescu","doi":"10.1007/s11948-024-00492-2","DOIUrl":"10.1007/s11948-024-00492-2","url":null,"abstract":"<p><p>The rapidly advancing field of brain-computer (BCI) and brain-to-brain interfaces (BBI) is stimulating interest across various sectors including medicine, entertainment, research, and military. The developers of large-scale brain-computer networks, sometimes dubbed 'Mindplexes' or 'Cloudminds', aim to enhance cognitive functions by distributing them across expansive networks. A key technical challenge is the efficient transmission and storage of information. One proposed solution is employing blockchain technology over Web 3.0 to create decentralised cognitive entities. This paper explores the potential of a decentralised web for coordinating large brain-computer constellations, and its associated benefits, focusing in particular on the conceptual and ethical challenges this innovation may pose pertaining to (1) Identity, (2) Sovereignty (encompassing Autonomy, Authenticity, and Ownership), (3) Responsibility and Accountability, and (4) Privacy, Safety, and Security. We suggest that while a decentralised web can address some concerns and mitigate certain risks, underlying ethical issues persist. Fundamental questions about entity definition within these networks, the distinctions between individuals and collectives, and responsibility distribution within and between networks, demand further exploration.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"28"},"PeriodicalIF":2.7,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11252225/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Owning Decisions: AI Decision-Support and the Attributability-Gap.
Pub Date: 2024-06-18 | DOI: 10.1007/s11948-024-00485-1
Jannik Zeiser
Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.
{"title":"Owning Decisions: AI Decision-Support and the Attributability-Gap.","authors":"Jannik Zeiser","doi":"10.1007/s11948-024-00485-1","DOIUrl":"10.1007/s11948-024-00485-1","url":null,"abstract":"<p><p>Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call \"decision ownership\": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"27"},"PeriodicalIF":2.7,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11189344/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141421621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From Pixels to Principles: A Decade of Progress and Landscape in Trustworthy Computer Vision.
Pub Date: 2024-06-10 | DOI: 10.1007/s11948-024-00480-6
Kexin Huang, Yan Teng, Yang Chen, Yingchun Wang
The rapid development of computer vision technologies and applications has brought forth a range of social and ethical challenges. Due to the unique characteristics of visual technology in terms of data modalities and application scenarios, computer vision poses specific ethical issues. However, the majority of existing literature either addresses artificial intelligence as a whole or pays particular attention to natural language processing, leaving a gap in specialized research on ethical issues and systematic solutions in the field of computer vision. This paper utilizes bibliometrics and text-mining techniques to quantitatively analyze papers from prominent academic conferences in computer vision over the past decade. It first reveals the developing trends and the distribution of attention across trustworthiness aspects in the computer vision field, as well as the inherent connections between ethical dimensions and different stages of visual model development. A life-cycle framework for trustworthy computer vision is then presented that interconnects the relevant trustworthiness issues, the operation pipeline of AI models, and viable technical solutions, providing researchers and policymakers with references and guidance for achieving trustworthy computer vision (CV). Finally, it discusses particular motivations for conducting trustworthy practices and underscores the consistency and ambivalence among various trustworthiness principles and technical attributes.
{"title":"From Pixels to Principles: A Decade of Progress and Landscape in Trustworthy Computer Vision.","authors":"Kexin Huang, Yan Teng, Yang Chen, Yingchun Wang","doi":"10.1007/s11948-024-00480-6","DOIUrl":"10.1007/s11948-024-00480-6","url":null,"abstract":"<p><p>The rapid development of computer vision technologies and applications has brought forth a range of social and ethical challenges. Due to the unique characteristics of visual technology in terms of data modalities and application scenarios, computer vision poses specific ethical issues. However, the majority of existing literature either addresses artificial intelligence as a whole or pays particular attention to natural language processing, leaving a gap in specialized research on ethical issues and systematic solutions in the field of computer vision. This paper utilizes bibliometrics and text-mining techniques to quantitatively analyze papers from prominent academic conferences in computer vision over the past decade. It first reveals the developing trends and specific distribution of attention regarding trustworthy aspects in the computer vision field, as well as the inherent connections between ethical dimensions and different stages of visual model development. A life-cycle framework regarding trustworthy computer vision is then presented by making the relevant trustworthy issues, the operation pipeline of AI models, and viable technical solutions interconnected, providing researchers and policymakers with references and guidance for achieving trustworthy CV. Finally, it discusses particular motivations for conducting trustworthy practices and underscores the consistency and ambivalence among various trustworthy principles and technical attributes.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"26"},"PeriodicalIF":2.7,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11164730/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141297147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Defending and Defining Environmental Responsibilities for the Health Research Sector.
Pub Date: 2024-06-06 | DOI: 10.1007/s11948-024-00487-z
Bridget Pratt
Six planetary boundaries have already been exceeded, including climate change, loss of biodiversity, chemical pollution, and land-system change. The health research sector contributes to the environmental crisis we are facing, though to a lesser extent than the healthcare or agriculture sectors. It could take steps to reduce its environmental impact but generally has not done so, even as the planetary emergency worsens. So far, the normative case for why the health research sector should rectify that failure has not been made. This paper argues that strong philosophical grounds, derived from theories of health and social justice, exist to support the claim that the sector has a duty to avoid or minimise causing or contributing to ecological harms that threaten human health or worsen health inequity. The paper next develops ideas about the duty's content, explaining why it should entail more than reducing carbon emissions, and considers what limits might be placed on the duty.
{"title":"Defending and Defining Environmental Responsibilities for the Health Research Sector.","authors":"Bridget Pratt","doi":"10.1007/s11948-024-00487-z","DOIUrl":"10.1007/s11948-024-00487-z","url":null,"abstract":"<p><p>Six planetary boundaries have already been exceeded, including climate change, loss of biodiversity, chemical pollution, and land-system change. The health research sector contributes to the environmental crisis we are facing, though to a lesser extent than healthcare or agriculture sectors. It could take steps to reduce its environmental impact but generally has not done so, even as the planetary emergency worsens. So far, the normative case for why the health research sector should rectify that failure has not been made. This paper argues strong philosophical grounds, derived from theories of health and social justice, exist to support the claim that the sector has a duty to avoid or minimise causing or contributing to ecological harms that threaten human health or worsen health inequity. The paper next develops ideas about the duty's content, explaining why it should entail more than reducing carbon emissions, and considers what limits might be placed on the duty.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"25"},"PeriodicalIF":2.7,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11156718/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141263338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare.
Pub Date: 2024-06-04 | DOI: 10.1007/s11948-024-00486-0
Laura Arbelaez Ossa, Stephen R Milford, Michael Rost, Anja K Leist, David M Shaw, Bernice S Elger
While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and into how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.
{"title":"AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare.","authors":"Laura Arbelaez Ossa, Stephen R Milford, Michael Rost, Anja K Leist, David M Shaw, Bernice S Elger","doi":"10.1007/s11948-024-00486-0","DOIUrl":"10.1007/s11948-024-00486-0","url":null,"abstract":"<p><p>While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between its textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals to AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and how guidelines influence the practice and alignment of AI with ethical, legal, and societal values expected to shape AI in healthcare.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"24"},"PeriodicalIF":2.7,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11150179/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141238656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing First-Year Engineering Student Conceptions of Ethical Decision-Making to Performance on Standardized Assessments of Ethical Reasoning.
Pub Date: 2024-06-04 | DOI: 10.1007/s11948-024-00488-y
Richard T Cimino, Scott C Streiner, Daniel D Burkey, Michael F Young, Landon Bassett, Joshua B Reed
The Defining Issues Test 2 (DIT-2) and the Engineering Ethical Reasoning Instrument (EERI) are designed to measure the ethical reasoning of general (DIT-2) and engineering-student (EERI) populations. These tools, and the DIT-2 especially, have gained wide usage for assessing the ethical reasoning of undergraduate students. This paper reports on a research study in which the ethical reasoning of first-year undergraduate engineering students at multiple universities was assessed with both of these tools. In addition to these two instruments, students were also asked to create personal concept maps of the phrase "ethical decision-making." It was hypothesized that students whose instrument scores reflected more postconventional levels of moral development and more sophisticated ethical reasoning skills would likewise have richer, more detailed concept maps of ethical decision-making, reflecting deeper understanding of this topic and the complex of related concepts. In fact, there was no significant correlation between the instrument scores and the concept map scores, suggesting that the way first-year students conceptualize ethical decision-making does not predict how they perform scenario-based ethical reasoning (which may be more situated). This disparity indicates a need to quantify engineering ethical reasoning and decision-making more precisely if we wish to inform assessment outcomes using the results of such quantitative analyses.
{"title":"Comparing First-Year Engineering Student Conceptions of Ethical Decision-Making to Performance on Standardized Assessments of Ethical Reasoning.","authors":"Richard T Cimino, Scott C Streiner, Daniel D Burkey, Michael F Young, Landon Bassett, Joshua B Reed","doi":"10.1007/s11948-024-00488-y","DOIUrl":"10.1007/s11948-024-00488-y","url":null,"abstract":"<p><p>The Defining Issues Test 2 (DIT-2) and Engineering Ethical Reasoning Instrument (EERI) are designed to measure ethical reasoning of general (DIT-2) and engineering-student (EERI) populations. These tools-and the DIT-2 especially-have gained wide usage for assessing the ethical reasoning of undergraduate students. This paper reports on a research study in which the ethical reasoning of first-year undergraduate engineering students at multiple universities was assessed with both of these tools. In addition to these two instruments, students were also asked to create personal concept maps of the phrase \"ethical decision-making.\" It was hypothesized that students whose instrument scores reflected more postconventional levels of moral development and more sophisticated ethical reasoning skills would likewise have richer, more detailed concept maps of ethical decision-making, reflecting their deeper levels of understanding of this topic and the complex of related concepts. In fact, there was no significant correlation between the instrument scores and concept map scoring, suggesting that the way first-year students conceptualize ethical decision making does not predict the way they behave when performing scenario-based ethical reasoning (perhaps more situated). This disparity indicates a need to more precisely quantify engineering ethical reasoning and decision making, if we wish to inform assessment outcomes using the results of such quantitative analyses.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"23"},"PeriodicalIF":2.7,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11150177/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141238660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rethinking Health Recommender Systems for Active Aging: An Autonomy-Based Ethical Analysis.
Pub Date: 2024-05-27 | DOI: 10.1007/s11948-024-00479-z
Simona Tiribelli, Davide Calvaresi
Health Recommender Systems (HRS) are promising Artificial-Intelligence-based tools for supporting healthy lifestyles and therapy adherence in healthcare and medicine. Among the areas they most support, active aging (AA) is worth mentioning. However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis. In particular, a brief overview of the technical aspects of HRS allows us to shed light on the ethical risks and challenges they might pose to individuals' well-being as they age. Moreover, the study proposes a categorization and understanding of, and possible preventive/mitigation actions for, the elicited risks and challenges by rethinking autonomy, a core principle of AI ethics. Finally, elaborating on autonomy-related ethical theories, the paper proposes an autonomy-based ethical framework and shows how it can foster the development of autonomy-enabling HRS for AA.
{"title":"Rethinking Health Recommender Systems for Active Aging: An Autonomy-Based Ethical Analysis.","authors":"Simona Tiribelli, Davide Calvaresi","doi":"10.1007/s11948-024-00479-z","DOIUrl":"10.1007/s11948-024-00479-z","url":null,"abstract":"<p><p>Health Recommender Systems are promising Articial-Intelligence-based tools endowing healthy lifestyles and therapy adherence in healthcare and medicine. Among the most supported areas, it is worth mentioning active aging. However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis. In particular, a brief overview of the HRS' technical aspects allows us to shed light on the ethical risks and challenges they might raise on individuals' well-being as they age. Moreover, the study proposes a categorization, understanding, and possible preventive/mitigation actions for the elicited risks and challenges through rethinking the AI ethics core principle of autonomy. Finally, elaborating on autonomy-related ethical theories, the paper proposes an autonomy-based ethical framework and how it can foster the development of autonomy-enabling HRS for AA.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"22"},"PeriodicalIF":2.7,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11129984/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141155519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Epistemic Trust in Scientific Experts: A Moral Dimension.
Pub Date: 2024-05-24 | DOI: 10.1007/s11948-024-00489-x
George Kwasi Barimah
In this paper, I develop and defend a moralized conception of epistemic trust in science against a particular kind of non-moral account defended by John (2015, 2018). I suggest that non-epistemic value considerations, non-epistemic norms of communication and affective trust properly characterize the relationship of epistemic trust between scientific experts and non-experts. I argue that it is through a moralized account of epistemic trust in science that we can make sense of the deep-seated moral undertones that are often at play when non-experts (dis)trust science.
{"title":"Epistemic Trust in Scientific Experts: A Moral Dimension.","authors":"George Kwasi Barimah","doi":"10.1007/s11948-024-00489-x","DOIUrl":"10.1007/s11948-024-00489-x","url":null,"abstract":"<p><p>In this paper, I develop and defend a moralized conception of epistemic trust in science against a particular kind of non-moral account defended by John (2015, 2018). I suggest that non-epistemic value considerations, non-epistemic norms of communication and affective trust properly characterize the relationship of epistemic trust between scientific experts and non-experts. I argue that it is through a moralized account of epistemic trust in science that we can make sense of the deep-seated moral undertones that are often at play when non-experts (dis)trust science.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"21"},"PeriodicalIF":2.7,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11126506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141094574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}