Persona ex machina: personalist environmental ethics in the age of artificial intelligence
Pub Date: 2026-01-12 | DOI: 10.1007/s43681-025-00974-4
Ivan Efreaim Gozum, Blaise Ringor, Dennis Ian Sy
The rapid advancement of artificial intelligence (AI) presents both opportunities and challenges for environmental ethics. While AI has the potential to enhance ecological sustainability through data-driven solutions, it also risks depersonalizing ethical decision-making and reinforcing a technocratic paradigm that prioritizes efficiency over human dignity and environmental stewardship. This paper explores how Karol Wojtyła’s personalist philosophy provides a sound ethical framework for addressing these concerns. Personalism, which emphasizes the irreducibility of the human person, responsibility, and relationality, offers a foundation for rethinking environmental ethics in the AI era. This study supports a person-centered approach to environmental decision-making by drawing on Wojtyła’s ideas on human agency, the common good, and ecological responsibility. Such an approach resists the reduction of human moral agency to algorithmic processes while fostering solidarity, subsidiarity, and ecological justice. Ultimately, this paper argues that a personalist environmental ethic can guide technological development toward serving humanity and the natural world, ensuring that AI remains a tool for sustainable and ethical progress rather than an autonomous arbiter of ecological fate.
{"title":"Persona ex machina: personalist environmental ethics in the age of artificial intelligence","authors":"Ivan Efreaim Gozum, Blaise Ringor, Dennis Ian Sy","doi":"10.1007/s43681-025-00974-4","DOIUrl":"10.1007/s43681-025-00974-4","url":null,"abstract":"<div><p>Artificial intelligence (AI)’s rapid advancement poses opportunities and challenges for environmental ethics. While AI has the potential to enhance ecological sustainability through data-driven solutions, it also risks depersonalizing ethical decision-making and reinforcing a technocratic paradigm that prioritizes efficiency over human dignity and environmental stewardship. This paper explores how Karol Wojtyła’s personalist philosophy provides a sound ethical framework for addressing these concerns. Personalism, which emphasizes the irreducibility of the human person, responsibility, and relationality, offers a foundation for rethinking environmental ethics in the AI era. This study supports a person-centered approach to environmental decision-making by including Wojtyła’s ideas on human agency, the common good, and ecological responsibility. Such an approach resists the reduction of human moral agency to algorithmic processes while fostering solidarity, subsidiarity, and ecological justice. Ultimately, this paper argues that a personalist environmental ethic can guide technological development toward serving humanity and the natural world, ensuring that AI remains a tool for sustainable and ethical progress rather than an autonomous arbiter of ecological fate.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward an artifact that designs itself: generative design science research approach
Pub Date: 2026-01-12 | DOI: 10.1007/s43681-025-00965-5
Dhruv Verma, Vagan Terziyan, Tuure Tuunanen, Amit K. Shukla
The rapid advancement of artificial intelligence (AI) has introduced profound societal and ethical challenges, necessitating a paradigm shift in AI system design. This paper introduces a novel framework that enables AI systems to design, audit, and evolve themselves ethically through an adaptation of the echeloned design science research (eDSR) methodology. Such AI systems are envisioned as evolving beyond mere tools to design, refine, and govern themselves within ethical constraints. The framework embeds four core principles: responsible autonomy, where AI systems self-regulate their decisions within ethical boundaries; AI self-explainability, enabling AI-to-AI transparency and internal decision auditing; AI bootstrapping, supporting iterative self-enhancement; and knowledge-informed machine learning (KIML), which integrates domain expertise for context-aware learning. We extend the concept of AI-as-a-User-of-AI, wherein autonomous AI agents behave as collaborative entities that engage in structured dialogues to refine decisions and enforce ethical alignment. Unlike traditional systems that rely on human-in-the-loop oversight or post-hoc explanations, our framework allows AI to monitor and evolve its reasoning in real time. By embedding ethical reasoning, self-explanation, and learning directly into the system architecture through modular design echelons, the proposed generative eDSR (GeDSR) framework combines eDSR’s structured, multi-phased approach with AI-to-AI collaboration, enabling scalability, adaptability, and ethical alignment across diverse applications. By embedding ethical reasoning and iterative learning at the architectural level, the proposed framework promotes the development of self-improving AI systems aligned with human values, thus laying the groundwork for a shift from human-dependent oversight to a resilient, AI-centric ecosystem.
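A minimal, purely illustrative sketch of what the AI-as-a-User-of-AI dialogue described in this abstract might look like in code; it is not taken from the paper, and all names (DesignerAgent, EthicsAuditor, structured_dialogue, risk_threshold) are hypothetical. It assumes one agent proposes decisions with an attached self-explanation and a second agent audits them against an explicit ethical boundary, iterating until the proposal passes.

```python
# Hypothetical sketch (not the paper's implementation) of an AI-to-AI
# proposal/audit loop: a designer agent proposes, an auditor agent checks the
# proposal against an ethical risk boundary, and objections drive revisions.
from dataclasses import dataclass, field


@dataclass
class Proposal:
    action: str
    rationale: str       # self-explanation attached to every decision
    risk_score: float    # 0.0 (benign) .. 1.0 (high risk), assumed metric


@dataclass
class DesignerAgent:
    """Proposes design decisions and revises them when the auditor objects."""

    def propose(self, objective: str) -> Proposal:
        return Proposal(action=f"deploy model for {objective}",
                        rationale="maximizes predictive accuracy",
                        risk_score=0.8)

    def revise(self, proposal: Proposal, objection: str) -> Proposal:
        # Naive revision: add a mitigation and lower the estimated risk.
        return Proposal(action=proposal.action + " with human-readable audit log",
                        rationale=proposal.rationale + f"; addresses: {objection}",
                        risk_score=max(0.0, proposal.risk_score - 0.3))


@dataclass
class EthicsAuditor:
    """Second agent that audits proposals against an explicit ethical boundary."""
    risk_threshold: float = 0.5
    log: list = field(default_factory=list)  # AI-to-AI transparency trail

    def audit(self, proposal: Proposal) -> tuple[bool, str]:
        approved = proposal.risk_score <= self.risk_threshold
        objection = "" if approved else "risk exceeds agreed ethical boundary"
        self.log.append((proposal.action, proposal.risk_score, approved))
        return approved, objection


def structured_dialogue(designer: DesignerAgent, auditor: EthicsAuditor,
                        objective: str, max_rounds: int = 5):
    """Iterate propose -> audit -> revise until approval or the round budget ends."""
    proposal = designer.propose(objective)
    for _ in range(max_rounds):
        approved, objection = auditor.audit(proposal)
        if approved:
            return proposal
        proposal = designer.revise(proposal, objection)
    return None  # no ethically acceptable proposal found within budget


if __name__ == "__main__":
    print(structured_dialogue(DesignerAgent(), EthicsAuditor(), "loan approval"))
```

In this toy run the first proposal is rejected for exceeding the risk threshold, the designer revises it, and the auditor's log retains the full decision trail, loosely mirroring the abstract's notions of responsible autonomy and internal decision auditing.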
{"title":"Toward an artifact that designs itself: generative design science research approach","authors":"Dhruv Verma, Vagan Terziyan, Tuure Tuunanen, Amit K. Shukla","doi":"10.1007/s43681-025-00965-5","DOIUrl":"10.1007/s43681-025-00965-5","url":null,"abstract":"<div><p>The rapid advancement of artificial intelligence (AI) has introduced profound societal and ethical challenges, necessitating a paradigm shift in AI system design. This paper introduces a novel framework that enables AI systems to design, audit, and evolve themselves ethically, through an adaptation of the echeloned design science research (eDSR) methodology. These AI systems will certainly evolve beyond mere tools to design, refine, and govern themselves within ethical constraints. The framework embeds four core principles: responsible autonomy, where AI systems self-regulate their decisions within ethical boundaries; AI self-explainability, enabling AI-to-AI transparency and internal decision auditing; AI bootstrapping, supporting iterative self-enhancement; and knowledge-informed machine learning (KIML), which integrates domain expertise for context-aware learning. We extend the concept of AI-as-a-User-of-AI, wherein autonomous AI agents behave as collaborative entities that engage in structured dialogues to refine decisions and enforce ethical alignment<b>.</b> Unlike traditional systems that rely on human-in-the-loop oversight or post-hoc explanations, our framework allows AI to monitor and evolve its reasoning in real time. By embedding ethical reasoning, self-explanation, and learning directly into system architecture through modular design echelons, the proposed generative eDSR (GeDSR) framework combines eDSR’s structured and multi-phased approach with AI-to-AI collaboration, which enables scalability, adaptability, and ethical alignment across diverse applications. By embedding ethical reasoning and iterative learning at the architectural level, the proposed framework promotes the development of self-improving AI systems aligned with human values, thus laying the groundwork for a shift from human-dependent oversight to a resilient, AI-centric ecosystem.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00965-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human resource development in the age of artificial intelligence: a theoretical synthesis
Pub Date: 2026-01-12 | DOI: 10.1007/s43681-025-00944-w
Caleb Bennett, Jeremy Bennett
The integration of artificial intelligence (AI) into human resource development (HRD) demands a re-examination of learning theories, developmental practices, and ethical frameworks. This integrative review synthesizes sociotechnical systems theory, adult learning models, augmentation strategies, technology adoption frameworks, and ethical considerations to build a comprehensive conceptual model of AI-enhanced HRD. Key themes include the joint optimization of human and machine capabilities, the personalization and critical interpretation of AI-mediated learning, and the proactive stewardship of fairness, transparency, and inclusion. Drawing upon contemporary studies (2021–2025) and emerging empirical evidence, this paper offers an integrative model that links technical, social, and ethical subsystems within HRD. Cross-cultural dimensions and methodological innovations for studying AI-HRD dynamics are also discussed. The paper offers theoretical contributions to HRD scholarship and practical recommendations for designing adaptive, ethical, and human-centered learning ecosystems in the age of intelligent technologies. Ultimately, the model situates HRD as an active agent in shaping responsible AI futures that enhance, rather than erode, human learning and development.
{"title":"Human resource development in the age of artificial intelligence: a theoretical synthesis","authors":"Caleb Bennett, Jeremy Bennett","doi":"10.1007/s43681-025-00944-w","DOIUrl":"10.1007/s43681-025-00944-w","url":null,"abstract":"<div><p>The integration of artificial intelligence (AI) into human resource development (HRD) demands a re-examination of learning theories, developmental practices, and ethical frameworks. This integrative review synthesizes sociotechnical systems theory, adult learning models, augmentation strategies, technology adoption frameworks, and ethics considerations to build a comprehensive conceptual model of AI-enhanced HRD. Key themes include the joint optimization of human and machine capabilities, the personalization and critical interpretation of AI-mediated learning, and the proactive stewardship of fairness, transparency, and inclusion. Drawing upon contemporary studies (2021–2025) and emerging empirical evidence, this paper offers an integrative model that links technical, social, and ethical subsystems within HRD. Cross-cultural dimensions and methodological innovations for studying AI-HRD dynamics are also discussed. The paper offers theoretical contributions to HRD scholarship and practical recommendations for designing adaptive, ethical, and human-centered learning ecosystems in the age of intelligent technologies. Ultimately, the model situates HRD as an active agent in shaping responsible AI futures that enhance, rather than erode, human learning and development.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}