Life-Suspending Technologies, Cryonics, and Catastrophic Risks.
Pub Date: 2024-08-09 | DOI: 10.1007/s11948-024-00498-w
Andrea Sauchelli
I defend the claim that life-suspending technologies can constitute a catastrophic and existential security factor for risks structurally similar to those related to climate change. The gist of the argument is that, under certain conditions, life-suspending technologies such as cryonics can give self-interested actors incentives to tackle such risks efficiently; in particular, they provide reasons to overcome certain manifestations of generational egoism, itself a risk factor for several catastrophic and existential risks. Provided we have reasons to decrease catastrophic and existential risks such as climate change, we also have a (defeasible) reason to invest in developing life-suspending technologies and making them (more) widespread.
{"title":"Life-Suspending Technologies, Cryonics, and Catastrophic Risks.","authors":"Andrea Sauchelli","doi":"10.1007/s11948-024-00498-w","DOIUrl":"10.1007/s11948-024-00498-w","url":null,"abstract":"<p><p>I defend the claim that life-suspending technologies can constitute a catastrophic and existential security factor for risks structurally similar to those related to climate change. The gist of the argument is that, under certain conditions, life-suspending technologies such as cryonics can provide self-interested actors with incentives to efficiently tackle such risks-in particular, they provide reasons to overcome certain manifestations of generational egoism, a risk factor of several catastrophic and existential risks. Provided we have reasons to decrease catastrophic and existential risks such as climate change, we also have a (defeasible) reason for investing in developing and making life-suspending technologies (more) widespread.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"37"},"PeriodicalIF":2.7,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11315739/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141908118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Promoting Data Sharing: The Moral Obligations of Public Funding Agencies.
Pub Date: 2024-08-06 | DOI: 10.1007/s11948-024-00491-3
Christian Wendelborn, Michael Anger, Christoph Schickhardt
Sharing research data has great potential to benefit science and society. However, data sharing is still not common practice. Since public research funding agencies have a particular impact on research and researchers, the question arises: Are public funding agencies morally obligated to promote data sharing? We argue from a research ethics perspective that public funding agencies have several pro tanto obligations requiring them to promote data sharing. However, there are also pro tanto obligations that speak against promoting data sharing in general, as well as against particular instruments of such promotion. We examine and weigh these obligations and conclude that, all things considered, funders ought to promote the sharing of data. Even the instrument of mandatory data sharing policies can be justified under certain conditions.
{"title":"Promoting Data Sharing: The Moral Obligations of Public Funding Agencies.","authors":"Christian Wendelborn, Michael Anger, Christoph Schickhardt","doi":"10.1007/s11948-024-00491-3","DOIUrl":"10.1007/s11948-024-00491-3","url":null,"abstract":"<p><p>Sharing research data has great potential to benefit science and society. However, data sharing is still not common practice. Since public research funding agencies have a particular impact on research and researchers, the question arises: Are public funding agencies morally obligated to promote data sharing? We argue from a research ethics perspective that public funding agencies have several pro tanto obligations requiring them to promote data sharing. However, there are also pro tanto obligations that speak against promoting data sharing in general as well as with regard to particular instruments of such promotion. We examine and weigh these obligations and conclude that all things considered funders ought to promote the sharing of data. Even the instrument of mandatory data sharing policies can be justified under certain conditions.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"35"},"PeriodicalIF":2.7,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11303567/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach
Pub Date: 2024-08-01 | DOI: 10.1007/s11948-024-00501-4
Brandon Ferlito, Seppe Segers, Michiel De Proost, Heidi Mertes
Due to its enormous potential, artificial intelligence (AI) can transform healthcare on a seemingly infinite scale. However, as we continue to explore the immense potential of AI, it is vital to consider the ethical concerns associated with its development and deployment. One specific concern that has been flagged in the literature is the responsibility gap (RG) due to the introduction of AI in healthcare. When the use of an AI algorithm or system results in a negative outcome for a patient or patients, to whom can or should responsibility for that outcome be assigned? Although the concept of the RG was introduced in Anglo-American and European philosophy, this paper aims to broaden the debate by providing an Ubuntu-inspired perspective on the RG. Ubuntu, deeply rooted in African philosophy, calls for collective responsibility and offers a uniquely forward-looking approach to addressing the alleged RG caused by AI in healthcare. An Ubuntu-inspired perspective can serve as a valuable guide and tool when addressing the alleged RG. Incorporating Ubuntu into the AI ethics discourse can contribute to a more ethical and responsible integration of AI in healthcare.
{"title":"Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach","authors":"Brandon Ferlito, Seppe Segers, Michiel De Proost, Heidi Mertes","doi":"10.1007/s11948-024-00501-4","DOIUrl":"https://doi.org/10.1007/s11948-024-00501-4","url":null,"abstract":"<p>Due to its enormous potential, artificial intelligence (AI) can transform healthcare on a seemingly infinite scale. However, as we continue to explore the immense potential of AI, it is vital to consider the ethical concerns associated with its development and deployment. One specific concern that has been flagged in the literature is the responsibility gap (RG) due to the introduction of AI in healthcare. When the use of an AI algorithm or system results in a negative outcome for a patient(s), to whom can or should responsibility for that outcome be assigned? Although the concept of the RG was introduced in Anglo-American and European philosophy, this paper aims to broaden the debate by providing an Ubuntu-inspired perspective on the RG. Ubuntu, deeply rooted in African philosophy, calls for collective responsibility, and offers a uniquely forward-looking approach to address the alleged RG caused by AI in healthcare. An Ubuntu-inspired perspective can serve as a valuable guide and tool when addressing the alleged RG. Incorporating Ubuntu into the AI ethics discourse can contribute to a more ethical and responsible integration of AI in healthcare.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"59 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141864782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Responsibility for the Environmental Impact of Data-Intensive Research: An Exploration of UK Health Researchers.
Pub Date: 2024-07-25 | DOI: 10.1007/s11948-024-00495-z
Gabrielle Samuel
Concerns about research's environmental impacts have been articulated in the research arena, but questions remain about what types of role responsibilities, if any, are appropriate to place on researchers. The research question of this paper is: what are the views of UK health researchers who use data-intensive methods on their responsibilities to consider the environmental impacts of their research? Twenty-six interviews were conducted with UK health researchers using data-intensive methods. Participants expressed a desire to take responsibility for the environmental impacts of their research; however, they were often unable to follow through because obstacles prevented them from taking on such role responsibilities. They suggested strategies to address this, predominantly related to the need for regulation to monitor their own behaviour. Drawing on a neo-liberal critique, this paper discusses the implications of adopting such a regulatory approach as a mechanism to promote researchers' role responsibilities.
{"title":"Responsibility for the Environmental Impact of Data-Intensive Research: An Exploration of UK Health Researchers.","authors":"Gabrielle Samuel","doi":"10.1007/s11948-024-00495-z","DOIUrl":"10.1007/s11948-024-00495-z","url":null,"abstract":"<p><p>Concerns about research's environmental impacts have been articulated in the research arena, but questions remain about what types of role responsibilities are appropriate to place on researchers, if any. The research question of this paper is: what are the views of UK health researchers who use data-intensive methods on their responsibilities to consider the environmental impacts of their research? Twenty-six interviews were conducted with UK health researchers using data-intensive methods. Participants expressed a desire to take responsibility for the environmental impacts of their research, however, they were unable to consolidate this because there were often obstacles that prevented them from taking such role responsibilities. They suggested strategies to address this, predominantly related to the need for regulation to monitor their own behaviour. This paper discusses the implications of adopting such a regulatory approach as a mechanism to promote researchers' role responsibilities using a neo-liberal critique.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"33"},"PeriodicalIF":2.7,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11281977/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141767882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What does the Thinking about Relationalism and Humanness in African Philosophy imply for Different Modes of Being Present in the Metaverse?
Pub Date: 2024-07-23 | DOI: 10.1007/s11948-024-00496-y
Cornelius Ewuoso
In this article, I interrogate whether the deployment and development of the Metaverse should take into account African values and modes of knowing to foster the uptake of this hyped technology in Africa. Specifically, I draw on the moral norms arising from the components of communal interactions and humanness in Afro-communitarianism to contend that the deployment of the Metaverse and its development ought to reflect core African moral values to foster its uptake in the region. To adequately align the Metaverse with African core values and thus foster its uptake among Africans, significant technological advancement that makes simulating genuine human experiences possible must occur. Additionally, it would be necessary for the developers and deployers to ensure that higher forms of spiritual activities can be had in the Metaverse to foster its uptake in Africa. Finally, I justify why the preceding points do not necessarily imply that the Metaverse will have a higher moral status than real life on the moral scale that can be grounded in Afro-communitarianism.
{"title":"What does the Thinking about Relationalism and Humanness in African Philosophy imply for Different Modes of Being Present in the Metaverse?","authors":"Cornelius Ewuoso","doi":"10.1007/s11948-024-00496-y","DOIUrl":"10.1007/s11948-024-00496-y","url":null,"abstract":"<p><p>In this article, I interrogate whether the deployment and development of the Metaverse should take into account African values and modes of knowing to foster the uptake of this hyped technology in Africa. Specifically, I draw on the moral norms arising from the components of communal interactions and humanness in Afro-communitarianism to contend that the deployment of the Metaverse and its development ought to reflect core African moral values to foster its uptake in the region. To adequately align the Metaverse with African core values and thus foster its uptake among Africans, significant technological advancement that makes simulating genuine human experiences possible must occur. Additionally, it would be necessary for the developers and deployers to ensure that higher forms of spiritual activities can be had in the Metaverse to foster its uptake in Africa. Finally, I justify why the preceding points do not necessarily imply that the Metaverse will have a higher moral status than real life on the moral scale that can be grounded in Afro-communitarianism.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"31"},"PeriodicalIF":2.7,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11266389/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact and Assessment of Research Integrity Teaching: A Systematic Literature Review.
Pub Date: 2024-07-23 | DOI: 10.1007/s11948-024-00493-1
Daniel Crean, Bert Gordijn, Alan J Kearns
Presented here is a systematic literature review of what the academic literature asserts about: (1) the stages of the ethical decision-making process (i.e. awareness, reasoning, motivation, and action) that are claimed to be improved or not improved by research integrity (RI) teaching, and whether these claims are supported by evidence; (2) the measurements used to determine the effectiveness of RI teaching; and (3) the stage(s) of the ethical decision-making process that are difficult to assess. Regarding (1), awareness was the stage most often claimed to be amenable to improvement following RI teaching, while motivation was the stage rarely addressed in the academic literature. A few sources claimed that RI teaching cannot improve specific stages, with behaviour (action) being the stage most often cited as not amenable to improvement, albeit in only 9% of the total sources. Finally, most claims were supported by empirical evidence. Regarding (2), the measures most frequently used are custom in-house surveys, alongside some validated measures. Additionally, there is much debate in the literature regarding the adequacy, and even the absence, of current assessment measures in RI teaching. Such debate warrants caution when considering the empirical evidence supplied to support claims that RI teaching does or does not improve a specific stage of the decision-making process. Regarding (3), only behaviour was discussed as being difficult, if not impossible, to assess. In our discussion section we contextualise these results and then derive some recommendations for relevant stakeholders in RI teaching.
{"title":"Impact and Assessment of Research Integrity Teaching: A Systematic Literature Review.","authors":"Daniel Crean, Bert Gordijn, Alan J Kearns","doi":"10.1007/s11948-024-00493-1","DOIUrl":"10.1007/s11948-024-00493-1","url":null,"abstract":"<p><p>Presented here is a systematic literature review of what the academic literature asserts about: (1) the stages of the ethical decision-making process (i.e. awareness, reasoning, motivation, and action) that are claimed to be improved or not improved by RI teaching and whether these claims are supported by evidence; (2) the measurements used to determine the effectiveness of RI teaching; and (3) the stage/s of the ethical decision-making process that are difficult to assess. Regarding (1), awareness was the stage most claimed to be amenable to improvement following RI teaching, and with motivation being the stage that is rarely addressed in the academic literature. While few, some sources claimed RI teaching cannot improve specific stages. With behaviour (action) being the stage referenced most, albeit in only 9% of the total sources, for not being amenable to improvement following RI teaching. Finally, most claims were supported by empirical evidence. Regarding (2), measures most frequently used are custom in-house surveys and some validated measures. Additionally, there is much debate in the literature regarding the adequacy of current assessment measures in RI teaching, and even their absence. Such debate warrants caution when we are considering the empirical evidence supplied to support that RI teaching does or does not improve a specific stage of the decision-making process. Regarding (3), only behaviour was discussed as being difficult to assess, if not impossible. In our discussion section we contextualise these results, and following this we derive some recommendations for relevant stakeholders in RI teaching.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"30"},"PeriodicalIF":2.7,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11266247/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141749452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What Western Philosophers of Technology Might Learn from Li Bocong's Philosophy of Engineering.
Pub Date: 2024-07-23 | DOI: 10.1007/s11948-024-00490-4
Nan Wang, Carl Mitcham
This essay aims to rectify a failure on the part of Western philosophers of technology to attend to the creative philosophical work of Li Bocong at the University of Chinese Academy of Sciences in Beijing. After a brief account of Li Bocong's personal contacts with the West and some remarks on his relationship to Marxism, we take up three aspects of his philosophy that can contribute to enlarging Western philosophical thinking about engineering and technology: (1) Li's analysis of engineering as more than design, (2) his argument for the relevance of the sociology of engineering, and (3) his conceptualization of engineering ethics as more than professional ethics.
{"title":"What Western Philosophers of Technology Might Learn from Li Bocong's Philosophy of Engineering.","authors":"Nan Wang, Carl Mitcham","doi":"10.1007/s11948-024-00490-4","DOIUrl":"10.1007/s11948-024-00490-4","url":null,"abstract":"<p><p>This essay aims to rectify a failure on the part of Western philosophers of technology to attend to the creative philosophical work of Li Bocong at the University of Chinese Academy of Sciences in Beijing. After a brief account of Li Bocong's personal contacts with the West and some remarks on his relationship to Marxism, we take up three aspects of his philosophy that can contribute to enlarging Western philosophical thinking about engineering and technology: (1) Li's analysis of engineering as more than design, (2) his argument for the relevance of the sociology of engineering, and (3) his conceptualization of engineering ethics as more than professional ethics.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"32"},"PeriodicalIF":2.7,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11266239/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Role of Engineering Ethics in Mitigating Corruption in Infrastructure Systems Delivery.
Pub Date: 2024-07-18 | DOI: 10.1007/s11948-024-00494-0
S A Ghahari, C Queiroz, S Labi, S McNeil
Indications that corruption mitigation in infrastructure systems delivery can be effective are found in the literature. However, there is an untapped opportunity to further enhance the efficacy of existing corruption mitigation strategies by placing them explicitly within the larger context of engineering ethics and the relevant policy statements, guidelines, codes, and manuals published by international organizations. An effective matching of these formal statements on ethics to infrastructure systems delivery facilitates the identification of potential corruption hotspots and thus helps establish or strengthen institutional mechanisms that address corruption. This paper reviews professional codes of ethics and the relevant literature on corruption mitigation in the context of civil engineering infrastructure development, as a platform for building a structure that connects ethical tenets with mitigation strategies. The paper assesses corruption mitigation strategies against the background of the fundamental canons of practice in civil engineering ethical codes. As such, the paper's assessment is grounded in the civil engineer's ethical responsibilities (to society, the profession, and peers) and principles (such as safety, health, welfare, respect, and honesty) that are common to professional codes of ethics in engineering practice. Addressing corruption in infrastructure development continues to be imperative for national economic and social development, and this exigency is underscored by the sheer scale of investments in infrastructure development in any country and the billions of dollars lost annually through corruption and fraud.
{"title":"The Role of Engineering Ethics in Mitigating Corruption in Infrastructure Systems Delivery.","authors":"S A Ghahari, C Queiroz, S Labi, S McNeil","doi":"10.1007/s11948-024-00494-0","DOIUrl":"10.1007/s11948-024-00494-0","url":null,"abstract":"<p><p>Indications that corruption mitigation in infrastructure systems delivery can be effective are found in the literature. However, there is an untapped opportunity to further enhance the efficacy of existing corruption mitigation strategies by placing them explicitly within the larger context of engineering ethics, and relevant policy statements, guidelines, codes and manuals published by international organizations. An effective matching of these formal statements on ethics to infrastructure systems delivery facilitates the identification of potential corruption hotspots and thus help establish or strengthen institutional mechanisms that address corruption. This paper reviews professional codes of ethics, and relevant literature on corruption mitigation in the context of civil engineering infrastructure development, as a platform for building a structure that connects ethical tenets and the mitigation strategies. The paper assesses corruption mitigation strategies against the background of the fundamental canons of practice in civil engineering ethical codes. As such, the paper's assessment is grounded in the civil engineer's ethical responsibilities (to society, the profession, and peers) and principles (such as safety, health, welfare, respect, and honesty) that are common to professional codes of ethics in engineering practice. Addressing corruption in infrastructure development continues to be imperative for national economic and social development, and such exigency is underscored by the sheer scale of investments in infrastructure development in any country and the billions of dollars lost annually through corruption and fraud.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"29"},"PeriodicalIF":2.7,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11258101/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141635513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decentralising the Self - Ethical Considerations in Utilizing Decentralised Web Technology for Direct Brain Interfaces.
Pub Date: 2024-07-16 | DOI: 10.1007/s11948-024-00492-2
David M Lyreskog, Hazem Zohny, Sebastian Porsdam Mann, Ilina Singh, Julian Savulescu
The rapidly advancing field of brain-computer interfaces (BCI) and brain-to-brain interfaces (BBI) is stimulating interest across various sectors, including medicine, entertainment, research, and the military. The developers of large-scale brain-computer networks, sometimes dubbed 'Mindplexes' or 'Cloudminds', aim to enhance cognitive functions by distributing them across expansive networks. A key technical challenge is the efficient transmission and storage of information. One proposed solution is employing blockchain technology over Web 3.0 to create decentralised cognitive entities. This paper explores the potential of a decentralised web for coordinating large brain-computer constellations, and its associated benefits, focusing in particular on the conceptual and ethical challenges this innovation may pose pertaining to (1) Identity, (2) Sovereignty (encompassing Autonomy, Authenticity, and Ownership), (3) Responsibility and Accountability, and (4) Privacy, Safety, and Security. We suggest that while a decentralised web can address some concerns and mitigate certain risks, underlying ethical issues persist. Fundamental questions about entity definition within these networks, the distinctions between individuals and collectives, and responsibility distribution within and between networks demand further exploration.
{"title":"Decentralising the Self - Ethical Considerations in Utilizing Decentralised Web Technology for Direct Brain Interfaces.","authors":"David M Lyreskog, Hazem Zohny, Sebastian Porsdam Mann, Ilina Singh, Julian Savulescu","doi":"10.1007/s11948-024-00492-2","DOIUrl":"10.1007/s11948-024-00492-2","url":null,"abstract":"<p><p>The rapidly advancing field of brain-computer (BCI) and brain-to-brain interfaces (BBI) is stimulating interest across various sectors including medicine, entertainment, research, and military. The developers of large-scale brain-computer networks, sometimes dubbed 'Mindplexes' or 'Cloudminds', aim to enhance cognitive functions by distributing them across expansive networks. A key technical challenge is the efficient transmission and storage of information. One proposed solution is employing blockchain technology over Web 3.0 to create decentralised cognitive entities. This paper explores the potential of a decentralised web for coordinating large brain-computer constellations, and its associated benefits, focusing in particular on the conceptual and ethical challenges this innovation may pose pertaining to (1) Identity, (2) Sovereignty (encompassing Autonomy, Authenticity, and Ownership), (3) Responsibility and Accountability, and (4) Privacy, Safety, and Security. We suggest that while a decentralised web can address some concerns and mitigate certain risks, underlying ethical issues persist. Fundamental questions about entity definition within these networks, the distinctions between individuals and collectives, and responsibility distribution within and between networks, demand further exploration.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"28"},"PeriodicalIF":2.7,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11252225/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Owning Decisions: AI Decision-Support and the Attributability-Gap.
Pub Date: 2024-06-18 | DOI: 10.1007/s11948-024-00485-1
Jannik Zeiser
Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.
{"title":"Owning Decisions: AI Decision-Support and the Attributability-Gap.","authors":"Jannik Zeiser","doi":"10.1007/s11948-024-00485-1","DOIUrl":"10.1007/s11948-024-00485-1","url":null,"abstract":"<p><p>Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call \"decision ownership\": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"27"},"PeriodicalIF":2.7,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11189344/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141421621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}