Responsibility Gaps, LLMs & Organisations: Many Agents, Many Levels, and Many Interactions.
Mihaela Constantinescu, Muel Kaptein
Pub Date: 2025-11-13  DOI: 10.1007/s11948-025-00560-1
Science and Engineering Ethics, 31(6), 36. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12615531/pdf/
Compliance with Clinical Guidelines and AI-Based Clinical Decision Support Systems: Implications for Ethics and Trust.
Éric Pardoux, Angeliki Kerasidou
Pub Date: 2025-11-13  DOI: 10.1007/s11948-025-00562-z
Artificial intelligence (AI) is gradually transforming healthcare. However, despite its promised benefits, AI in healthcare also raises a number of ethical, legal and social concerns. Compliance by design (CbD) has been proposed as one way of addressing some of these concerns. In the context of healthcare, CbD efforts could focus on building compliance with existing clinical guidelines (CGs), given that they provide the best practices identified according to evidence-based medicine. In this paper we use the example of AI-based clinical decision support systems (CDSS) to theoretically examine whether medical AI tools could be designed to be inherently compliant with CGs, and the implications for ethics and trust. We argue that AI-based CDSS that systematically comply with CGs when applied to specific patient cases are not desirable, as CGs, despite their usefulness in guiding medical decision-making, are only recommendations on how to diagnose and treat medical conditions. We thus propose a new understanding of CbD for CGs as a sociotechnical program supported by AI that applies to the whole clinical decision-making process, rather than understanding CbD for CGs as a process located only within the AI tool. This implies taking into account emerging knowledge from actual clinical practices to put CGs in perspective, reflexivity from users regarding the information needed for decision-making, as well as a shift in the design culture, from AI as a stand-alone tool to AI as an in-situ service located within particular healthcare settings.
Science and Engineering Ethics, 31(6), 34. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12615534/pdf/
Filling the Responsibility Gap: Agency and Responsibility in the Technological Age.
Yong-Hong Xia
Pub Date: 2025-11-05  DOI: 10.1007/s11948-025-00561-0
Science and Engineering Ethics, 31(6), 33. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12589311/pdf/
Hidden Agents, Explicit Obligations: A Linguistic Analysis of AI Ethics Guidelines.
Tricia A Griffin, Roos Goorman, Brian P Green, Jos V M Welie
Pub Date: 2025-10-29  DOI: 10.1007/s11948-025-00559-8
Science and Engineering Ethics, 31(6), 32. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12572076/pdf/
Big Data, Machine Learning, and Personalization in Health Systems: Ethical Issues and Emerging Trade-Offs.
Stefano Canali, Alessandro Falcetta, Massimo Pavan, Manuel Roveri, Viola Schiaffonati
Pub Date: 2025-10-13  DOI: 10.1007/s11948-025-00552-1
The use of big data and machine learning has been discussed in an expanding literature, detailing concerns about ethical issues and societal implications. In this paper we focus on big data and machine learning in the context of health systems, with the specific purpose of personalization. While personalization is considered very promising in this context, by focusing on concrete uses of personalized models for glucose monitoring and anomaly detection we identify issues that emerge with personalized models and show that personalization is not necessarily, nor always, a positive development. We argue that there is a new problem of trade-offs between the expected benefits of personalization and new and exacerbated issues - results that have serious implications for mitigation strategies and ethical concerns about big data and machine learning.
Science and Engineering Ethics, 31(5), 29. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12518398/pdf/
From Nano to Quantum: Ethics Through a Lens of Continuity.
Clare Shelley-Egan, Eline De Jong
Pub Date: 2025-10-13  DOI: 10.1007/s11948-025-00557-w
A significant amount of scholarship and funding has been dedicated to ethical and social studies of new and emerging science and technology (NEST), from nanotechnology to synthetic biology and Artificial Intelligence. Quantum technologies comprise the latest NEST attracting interest from scholarship in the social sciences and humanities. While a small community is now emerging around broader discussion of quantum technologies in society, the concepts of ethics of quantum technologies and responsible innovation are still fluid. In this article, we argue that lessons from previous instances of NEST can offer important insights into the early stages of quantum technology discourse and development. In the embryonic stages of discourse around NEST, there is often an undue emphasis on the novelty of ethical issues, leading to speculation and misplaced resources and energy. Using a lens of continuity, we revisit experiences and lessons from nanotechnology discourse. Zooming in on key characteristics of the nanoethics discourse, we use these features as analytical tools with which to assess and analyse emerging discourse around quantum technologies. We point to continuities between nano and quantum discourse, including the focus on 'responsible' or 'good' technology; the intensification of ethical issues brought about by enabling technologies; the limitations and risks of speculative ethics; the effects of ambivalence on the framing of ethics; and the importance of paying attention to the present. These issues are taken forward to avoid 'reinventing the wheel' and to offer guidance in shaping the ethics discourse around quantum technologies into a more focused and effective debate.
Science and Engineering Ethics, 31(5), 31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12518464/pdf/
Ethics Readiness of Technology: The Case for Aligning Ethical Approaches with Technological Maturity.
Eline de Jong
Pub Date: 2025-10-13  DOI: 10.1007/s11948-025-00556-x
The ethics of emerging technologies faces an anticipation dilemma: engaging too early risks overly speculative concerns, while engaging too late may forfeit the chance to shape a technology's trajectory. Despite various methods to address this challenge, no framework exists to assess their suitability across different stages of technological development. This paper proposes such a framework. I conceptualise two main ethical approaches: outcomes-oriented ethics, which assesses the potential consequences of a technology's materialisation, and meaning-oriented ethics, which examines how (social) meaning is attributed to a technology. I argue that the strengths and limitations of outcomes- and meaning-oriented ethics depend on the uncertainties surrounding a technology, which shift as it matures. To capture this evolution, I introduce the concept of ethics readiness: the readiness of a technology to undergo detailed ethical scrutiny. Building on the widely known Technology Readiness Levels (TRLs), I propose Ethics Readiness Levels (ERLs) to illustrate how the suitability of ethical approaches evolves with a technology's development. At lower ERLs, where uncertainties are most pronounced, meaning-oriented ethics proves more effective, while at higher ERLs, as impacts become clearer, outcomes-oriented ethics gains relevance. By linking Ethics Readiness to Technology Readiness, this framework underscores that the appropriateness of ethical approaches evolves alongside technological maturity, ensuring scrutiny remains grounded and relevant. Finally, I demonstrate the practical value of this framework by applying it to quantum technologies, showing how Ethics Readiness can guide effective ethical engagement.
Science and Engineering Ethics, 31(5), 30. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12518457/pdf/
Principles and Framework for the Operationalisation of Meaningful Human Control Over Autonomous Systems.
Simeon C Calvert
Pub Date: 2025-09-24  DOI: 10.1007/s11948-025-00554-z
Science and Engineering Ethics, 31(5), 27. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12460435/pdf/
Who Cares for Space Debris? Conflicting Logics of Security and Sustainability in Space Situational Awareness Practices.
Nina Klimburg-Witjes, Kai Strycker, Vitali Braun
Pub Date: 2025-09-24  DOI: 10.1007/s11948-025-00550-3
Satellites and space technologies enable global communication, navigation, and weather forecasting, and are vital for financial systems, disaster management, climate monitoring, military missions, and more. Yet, decades of spaceflight activities have left an ever-growing formation of debris - rocket parts, defunct satellites, propellant residues, and more - in Earth's orbits. A congested outer space has now taken the shape of a haunting specter. Hurtling through space at incredibly high velocities, space debris has become a risk for active satellites and space infrastructures alike. This article offers a novel perspective on the security legacies and infrastructures of space debris mitigation and how these affect current and future space debris detection, knowledge production, and mitigation practices. Acknowledging that space debris is not just a technical challenge but an ethico-political problem, we develop a transdisciplinary approach that links social science to aerospace engineering and to practical insights and experiences from the European Space Agency's (ESA) Space Debris Office. Specifically, we examine the role of secrecy and (mis)trust between international space agencies and how these complicate space situational awareness practices. Attending to the "mundane" practices through which space debris experts cope with uncertainty and security logics offers a crucial starting point for developing an ethical approach that prioritizes care and responsibility for innovation over ever more technological fixes to socio-political problems. Space debris encapsulates our historical and cultural value constellations, prompting us to reflect on sustainability and responsibility for Earth-Space systems in the future.
Science and Engineering Ethics, 31(5), 28. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12460384/pdf/
After Harm: A Plea for Moral Repair after Algorithms Have Failed.
Pak-Hang Wong, Gernot Rieder
Pub Date: 2025-09-18  DOI: 10.1007/s11948-025-00555-y
In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have mainly focused on identifying risks and implementing safeguards against harmful consequences, with regulations seeking to ensure that systems are secure, trustworthy, and ethical. This preventative approach, however, rests on the assumption that algorithmic harm can essentially be avoided by specifying rules and requirements that protect against potential dangers. Consequently, comparatively little attention has been given to post-harm scenarios, i.e. cases and situations where individuals have already been harmed by an algorithmic system. We contend that this inattention to the aftermath of harm constitutes a major blind spot in AI ethics and governance and propose the notion of algorithmic imprint as a sensitizing concept for understanding both the nature and potential longer-term effects of algorithmic harm. Arguing that neither the decommissioning of harmful systems nor the reversal of damaging decisions is sufficient to fully address these effects, we suggest that a more comprehensive response to algorithmic harm requires engaging in discussions on moral repair, and we offer directions on what such a plea for moral repair ultimately entails.
Science and Engineering Ethics, 31(5), 26. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12446399/pdf/