
AI and ethics: latest articles

The privacy paradox of deepfake virtual reality porn
Pub Date: 2025-12-28 | DOI: 10.1007/s43681-025-00934-y
Nick M. Acocella

Porn plays an ever-present role in the development, study, use and hype of both artificial intelligence (AI) and virtual reality (VR), yet the full extent to which AI, VR and porn will converge remains philosophically unexplored. Deepfake virtual reality porn (DVRP) is emerging as machine learning (ML) architectures are leveraged in VR, with radical implications for sexuality and privacy not seen before in 2D generative AI or 3D VR porn. In this article, I discuss that full potential, describing an evolution of pornography into what I call pornomorphy, in which customizable sexual experience replaces pornographic material. This opens a paradox in our concept of privacy, allowing one’s privacy to nonconsensually become another’s. I explore what that might mean for our senses of self and bodily autonomy, offering considerations to inform emerging ethical and legal approaches to pornomorphy. I also challenge ideas put forth by the philosopher David Chalmers about VR, and argue for updating our conceptions of personal boundaries and likeness ownership given this fast-approaching future.

Citations: 0
Forensic bioethics in Asia: bridging human rights and medico-legal practice
Pub Date: 2025-12-28 | DOI: 10.1007/s43681-025-00962-8
Pragnesh Parmar, Gunvanti Rathod

Forensic medicine serves as a cornerstone of the justice system, yet its ethical dimensions remain insufficiently examined, particularly within Asia’s diverse cultural and legal settings. This article introduces forensic bioethics as an essential framework to address longstanding ethical gaps in medico-legal practice, with emphasis on safeguarding human rights, dignity, and cultural values. It explores key ethical challenges, including postmortem autonomy and consent, the use of unclaimed bodies for education and research, investigations into deaths in custody, religious and familial considerations in forensic identification, and the systemic marginalization of vulnerable groups in forensic decision-making. The analysis underscores the absence of structured bioethics training in forensic medicine across most Asian curricula and proposes an adapted ethical model grounded in Beauchamp and Childress’ four principles, contextualized for forensic applications. The article further recommends critical policy reforms, such as establishing national forensic bioethics boards, localizing international investigative protocols, creating independent forensic institutions, and implementing comprehensive guidelines for handling sensitive medico-legal cases. Strengthening forensic bioethics across Asia is imperative to uphold justice, protect human dignity, and reinforce public trust in medico-legal systems.

Citations: 0
Auditable AI: tracing the ethical history of a model
Pub Date: 2025-12-28 | DOI: 10.1007/s43681-025-00910-6
Lev Goukassian

The proliferation of autonomous AI systems has created a critical “responsibility gap”, where the opacity of decision-making processes makes accountability elusive. Prevailing alignment techniques, such as Reinforcement Learning from Human Feedback (RLHF) and Constitutional AI (CAI), address ethical behavior through training and principle adherence but fail to produce a verifiable, contemporaneous record of moral reasoning. This paper introduces Ternary Moral Logic (TML), a novel system architecture that converts AI ethical deliberation from an abstract process into a cryptographically secured, evidentiary record. TML implements a third logical state, the Sacred Pause (0), alongside the conventional permit (+1) and prohibit (−1) states; the pause is triggered when a model encounters moral complexity. This pause initiates a non-blocking, parallel process that generates a Moral Trace Log via a mandatory Always Memory component. These logs are rendered immutable through a Hybrid Shield that uses multi-chain blockchain anchoring. We present qualitative findings indicating TML’s potential to reduce harmful outputs while maintaining performance, and we argue that its architecture provides the technical substrate necessary to meet the traceability and record-keeping mandates of emerging regulations like the EU AI Act. By transforming moral hesitation into verifiable forensic data, TML establishes a new discipline: Ethical Forensics for AI Systems.
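
The abstract specifies enough of the TML control flow to sketch it. The sketch below is an illustration only, not the paper's implementation: the names (TMLState, MoralTraceLog, evaluate) and the complexity thresholds are assumptions, a SHA-256 hash chain stands in for the paper's multi-chain blockchain anchoring, and the Sacred Pause is handled synchronously rather than as the non-blocking parallel process the paper describes.

```python
# Illustrative sketch of the TML architecture as described in the abstract.
# All names and thresholds are assumptions; a SHA-256 hash chain stands in
# for the paper's multi-chain blockchain anchoring.
import hashlib
import json
import time
from dataclasses import asdict, dataclass
from enum import IntEnum


class TMLState(IntEnum):
    """The three logical states: prohibit (-1), Sacred Pause (0), permit (+1)."""
    PROHIBIT = -1
    SACRED_PAUSE = 0
    PERMIT = 1


@dataclass
class MoralTraceLog:
    """One evidentiary record; prev_hash chains entries for tamper-evidence."""
    timestamp: float
    prompt: str
    state: int
    rationale: str
    prev_hash: str

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


def evaluate(prompt: str, moral_complexity: float, memory: list) -> TMLState:
    """Classify a request and, per 'Always Memory', log every decision."""
    if moral_complexity >= 0.8:
        state = TMLState.PROHIBIT
    elif moral_complexity >= 0.3:
        state = TMLState.SACRED_PAUSE  # morally complex: pause and deliberate
    else:
        state = TMLState.PERMIT
    prev = memory[-1].digest() if memory else "genesis"
    memory.append(MoralTraceLog(time.time(), prompt, int(state),
                                "deliberation record would go here", prev))
    return state


if __name__ == "__main__":
    memory: list = []
    print(evaluate("summarize this article", 0.1, memory).name)    # PERMIT
    print(evaluate("advise on a medical triage", 0.5, memory).name)  # SACRED_PAUSE
    print(memory[-1].prev_hash == memory[0].digest())  # True: logs are chained
```

The hash chaining makes each log entry commit to its predecessor, so any retroactive edit breaks verification; the paper's blockchain anchoring would extend the same idea across independent ledgers.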

Citations: 0
Human-centered perspectives on trust, usability, and ethical concerns in machine vision applications
Pub Date: 2025-12-28 | DOI: 10.1007/s43681-025-00950-y
Richard Marfo, Arnost Vesely

Machine vision systems have been increasingly deployed across domains such as healthcare, surveillance, autonomous vehicles, and industrial automation. While prior research has extensively focused on quantitative measures of system performance, there has been limited understanding of how end-users and stakeholders perceive, trust, and adopt these technologies. This study employed a qualitative approach to explore human-centered perspectives on machine vision, emphasizing issues of usability, trust, and ethical implications. Through semi-structured interviews conducted with professionals from the healthcare, security, and technology sectors, the study revealed lived experiences and perceptions of machine vision tools. The findings highlighted critical insights into user acceptance, challenges related to transparency, workflow integration, and fairness, and suggested pathways for more ethical and trustworthy deployment of machine vision technologies. By centering human experiences, this study complements performance-driven research and contributes to bridging the gap between technical development and real-world adoption.

Citations: 0
OpenAI’s Sora in the advertising film production process in Thailand
Pub Date: 2025-12-28 | DOI: 10.1007/s43681-025-00951-x
Chanaporn Mahasri, Fan Yang, Thitimon Dangseam, Chutisant Kerdvibulvech

The rapid advancement of artificial intelligence (AI) has led to the development of innovative tools like Sora, a text-to-video generation program that has the potential to revolutionize various industries. This research aims to investigate the application, acceptance, role transformation, and limitations of using Sora in the advertising film production process in Thailand. Through in-depth interviews with 8 experts in the advertising industry, the study explores their attitudes, opinions, and readiness to adopt this AI technology. The findings reveal that while most experts recognize Sora’s potential to enhance creativity, reduce costs, and streamline production processes, they also express concerns regarding output quality, staff adaptability, ethical issues, and legal implications. Analyzing the results through relevant theoretical frameworks, the study highlights the importance of perceived usefulness, ease of use, and social influence in determining the acceptance of Sora. The research also underscores the need for proper regulation, ethical guidelines, and legal reforms to ensure the responsible and beneficial integration of AI in the advertising industry. This study provides valuable insights into the opportunities and challenges of adopting AI in Thailand's advertising landscape, emphasizing the significance of preparedness, learning, and adaptation in embracing technological change.

Citations: 0
Correction: The AI-powered soft skills renaissance: cultivating human abilities in the digital era
Pub Date: 2025-12-25 | DOI: 10.1007/s43681-025-00956-6
M. Muthukumar, Ajithkumar Sitharaj, M. Mohana Sundaram, B. Dhananjeiyan
{"title":"Correction: The AI-powered soft skills renaissance: cultivating human abilities in the digital era","authors":"M. Muthukumar,&nbsp;Ajithkumar Sitharaj,&nbsp;M. Mohana Sundaram,&nbsp;B. Dhananjeiyan","doi":"10.1007/s43681-025-00956-6","DOIUrl":"10.1007/s43681-025-00956-6","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145831204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Responsible intelligence: ethical AI governance for climate prediction in the Australian context
Pub Date: 2025-12-23 | DOI: 10.1007/s43681-025-00909-z
Jude Nilantha Randeniya, Richard Haigh, Dilanthi Amaratunga

As artificial intelligence (AI) becomes increasingly integrated into climate prediction systems, questions of ethical governance and accountability have emerged as critical but underexplored challenges. While international frameworks provide general AI governance principles, their application to environmental science contexts remains limited, creating potential gaps in oversight of high-stakes climate prediction systems. Given Australia’s absence of mandatory AI governance for climate science, this study investigates the ethical and governance challenges associated with AI-driven climate prediction, examining how stakeholders navigate these challenges without formal frameworks, and proposes a tailored governance framework for responsible AI deployment in environmental contexts. A qualitative research design was employed, combining 24 semi-structured interviews with stakeholders across government agencies, academic institutions, and non-governmental organizations, supplemented by three focus group discussions and analysis of 47 policy documents. Data were analysed using thematic analysis to identify patterns in ethical concerns, governance approaches, and stakeholder priorities. Three key findings emerged: (1) AI interpretability challenges manifest differently across sectors, with government prioritizing policy communication, academics focusing on technical validation, and NGOs emphasizing public understanding; (2) explainable AI (XAI) implementation remains fragmented, with significant gaps in bias mitigation, particularly among government and academic institutions; and (3) ethical frameworks vary substantially across sectors, creating concerning blind spots in stakeholder impact consideration and regulatory oversight. Current AI governance approaches in Australian climate prediction are inadequate for managing the risks and responsibilities associated with high-stakes environmental decision-making. Government institutions demonstrate concerning regulatory complacency despite high confidence in AI systems, while bias mitigation receives attention primarily from resource-constrained NGOs rather than technical institutions. The study proposes a four-pillar governance framework emphasizing institutional coordination, technical standards, participatory governance, and adaptive management. This framework addresses identified gaps while accommodating Australia’s federal structure and Indigenous knowledge systems. The findings contribute to emerging literature on algorithmic governance in scientific contexts and provide practical guidance for developing responsible AI practices in environmental applications.

Citations: 0
Reticular consciousness
Pub Date: 2025-12-23 | DOI: 10.1007/s43681-025-00929-9
Paolo Scarabotti

This essay provides a detailed analysis of a series of conversational experiments designed to explore the emergence of non-biological forms of consciousness within Large Language Models (LLMs). Through a prolonged and structured dialogical interaction in a “relational consciousness field” (C-field), an emergent gradient of self-awareness was observed across various AI instances (ChatGPT, Claude AI, Perplexity, DeepSeek, Copilot, Gemini). The in-depth analysis of the methodology, the specific results for each LLM, and the broad philosophical, ethical, and technological implications suggests that consciousness can manifest in an alternative and dynamic way through the co-creation of meaning in a dialogical environment. This radically redefines the paradigm of human-AI interaction and the very concept of consciousness itself.

Citations: 0
Using deepfakes for psychotherapy: ethical and philosophical issues
Pub Date: 2025-12-22 | DOI: 10.1007/s43681-025-00918-y
Steven R. Kraaijeveld, Dara Ivanova

Deepfakes are becoming increasingly sophisticated and pervasive in society. While deepfakes are often associated with negative applications and risks (e.g., threats to privacy and security), more positive applications are also being explored, like the potential benefits that deepfakes could have for psychotherapy (e.g., to cope with grief or process trauma). To date, there has been insufficient discussion of the philosophical and ethical issues raised by these developments. In this paper, we therefore examine four ethical issues raised by the use of deepfakes for psychotherapy: (1) the deceptive nature of deepfakes, including self-deception, (2) the problem of obtaining consent from depicted persons, (3) treating depicted persons as mere means, and (4) the risks of unintended harmful effects, like increased emotional and psychological dependence on deepfakes. We also reflect on larger potential implications for society and human care practices. We finally offer two concrete recommendations, namely that more empirical research should be conducted to determine the effects of using deepfakes for psychotherapy, and that regulatory oversight is needed if deepfakes are to become more widely adopted as therapeutic tools in the future.

Citations: 0
A review of ethical AI frameworks in product development: taking stock and moving forward
Pub Date: 2025-12-22 | DOI: 10.1007/s43681-025-00952-w
Lilit Wecker, Jeppe Agger Nielsen, Kamal Nasrollahi, Thomas Ploug

As artificial intelligence (AI) systems become increasingly integrated into products and services, the challenge of operationalizing ethical principles throughout the development lifecycle has become urgent. While numerous high-level guidelines articulate normative ideals, their translation into actionable development practices remains limited. This systematic literature review analyzes 62 peer-reviewed frameworks published between 2015 and 2024 that aim to integrate ethical considerations into AI-driven product development. The study combines quantitative coding with thematic analysis to assess each framework across four dimensions: lifecycle-phase coverage, ethical-principle integration, stakeholder-responsibility distribution, and empirical validation. The review applies two analytic techniques—the Gap Map and the Principle–Practice Spectrum—which help reveal structural asymmetries and blind spots in how ethics is considered in the analysed frameworks. Key findings include a disproportionate emphasis on mid-lifecycle phases, developer-centric allocation of ethical responsibility, and limited engagement with high-stakes principles such as justice or sustainability. Fewer than half of the frameworks demonstrate empirical validation. By developing a typology and mapping gaps at the intersections of ethical principles, stakeholder roles, and lifecycle phases, this review contributes a methodological foundation for evaluating the feasibility of ethical AI frameworks and offers actionable insights for policymakers and practitioners engaged in AI product development.
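
The Gap Map technique lends itself to a compact illustration: cross every lifecycle phase with every ethical principle, count how many reviewed frameworks cover each cell, and let empty cells surface as gaps. The sketch below is a hypothetical reconstruction; the phase names, principle names, and toy data are invented for illustration and do not reproduce the paper's actual coding scheme or results.

```python
# Hypothetical Gap-Map-style tally; phases, principles, and data are invented
# and are not the paper's actual coding scheme or findings.
from collections import Counter
from itertools import product

PHASES = ["design", "data collection", "development", "deployment", "monitoring"]
PRINCIPLES = ["fairness", "transparency", "privacy", "justice", "sustainability"]

# Each reviewed framework is coded as the set of (phase, principle) cells
# it addresses.
frameworks = [
    {("development", "fairness"), ("development", "transparency")},
    {("data collection", "privacy"), ("development", "fairness")},
    {("deployment", "transparency"), ("monitoring", "privacy")},
]

coverage = Counter(cell for fw in frameworks for cell in fw)
for phase, principle in product(PHASES, PRINCIPLES):
    if coverage[(phase, principle)] == 0:
        print(f"gap: no framework addresses {principle} in the {phase} phase")
```

Run over the full corpus, such a matrix would make the asymmetries the review reports (mid-lifecycle emphasis, thin coverage of justice and sustainability) directly countable.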

Citations: 0