Latest Publications in Science and Engineering Ethics

Responsibility Gaps, LLMs & Organisations: Many Agents, Many Levels, and Many Interactions.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-11-13 · DOI: 10.1007/s11948-025-00560-1
Mihaela Constantinescu, Muel Kaptein
Citations: 0
Compliance with Clinical Guidelines and AI-Based Clinical Decision Support Systems: Implications for Ethics and Trust.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-11-13 · DOI: 10.1007/s11948-025-00562-z
Éric Pardoux, Angeliki Kerasidou

Artificial intelligence (AI) is gradually transforming healthcare. However, despite its promised benefits, AI in healthcare also raises a number of ethical, legal and social concerns. Compliance by design (CbD) has been proposed as one way of addressing some of these concerns. In the context of healthcare, CbD efforts could focus on building compliance with existing clinical guidelines (CGs), given that they provide the best practices identified according to evidence-based medicine. In this paper we use the example of AI-based clinical decision support systems (CDSS) to theoretically examine whether medical AI tools could be designed to be inherently compliant with CGs, and the implications for ethics and trust. We argue that AI-based CDSS systematically complying with CGs when applied to specific patient cases are not desirable, as CGs, despite their usefulness in guiding medical decision-making, are only recommendations on how to diagnose and treat medical conditions. We thus propose a new understanding of CbD for CGs as a sociotechnical program supported by AI that applies to the whole clinical decision-making process rather than just understanding CbD for CGs as a process located only within the AI tool. This implies taking into account emerging knowledge from actual clinical practices to put CGs in perspective, reflexivity from users regarding the information needed for decision-making, as well as a shift in the design culture, from AI as a stand-alone tool to AI as an in-situ service located within particular healthcare settings.

Citations: 0
Filling the Responsibility Gap: Agency and Responsibility in the Technological Age.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-11-05 · DOI: 10.1007/s11948-025-00561-0
Yong-Hong Xia
Citations: 0
Hidden Agents, Explicit Obligations: A Linguistic Analysis of AI Ethics Guidelines.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-10-29 · DOI: 10.1007/s11948-025-00559-8
Tricia A Griffin, Roos Goorman, Brian P Green, Jos V M Welie
Citations: 0
Big Data, Machine Learning, and Personalization in Health Systems: Ethical Issues and Emerging Trade-Offs.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-10-13 · DOI: 10.1007/s11948-025-00552-1
Stefano Canali, Alessandro Falcetta, Massimo Pavan, Manuel Roveri, Viola Schiaffonati

The use of big data and machine learning has been discussed in an expanding literature, detailing concerns on ethical issues and societal implications. In this paper we focus on big data and machine learning in the context of health systems and with the specific purpose of personalization. Whilst personalization is considered very promising in this context, by focusing on concrete uses of personalized models for glucose monitoring and anomaly detection we identify issues that emerge with personalized models and show that personalization is not necessarily nor always a positive development. We argue that there is a new problem of trade-offs between the expected benefits of personalization and new and exacerbated issues - results that have serious implications for strategies of mitigation and ethical concerns on big data and machine learning.

Citations: 0
From Nano to Quantum: Ethics Through a Lens of Continuity.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-10-13 · DOI: 10.1007/s11948-025-00557-w
Clare Shelley-Egan, Eline De Jong

A significant amount of scholarship and funding has been dedicated to ethical and social studies of new and emerging science and technology (NEST), from nanotechnology to synthetic biology, and Artificial Intelligence. Quantum technologies comprise the latest NEST attracting interest from scholarship in the social sciences and humanities. While there is a small community now emerging around broader discussion of quantum technologies in society, the concepts of ethics of quantum technologies and responsible innovation are still fluid. In this article, we argue that lessons from previous instances of NEST can offer important insights into the early stages of quantum technology discourse and development. In the embryonic stages of discourse around NEST, there is often an undue emphasis on the novelty of ethical issues, leading to speculation and misplaced resources and energy. Using a lens of continuity, we revisit experiences and lessons from nanotechnology discourse. Zooming in on key characteristics of the nanoethics discourse, we use these features as analytical tools with which to assess and analyse emerging discourse around quantum technologies. We point to continuities between nano and quantum discourse, including the focus on 'responsible' or 'good' technology; the intensification of ethical issues brought about by enabling technologies; the limitations and risks of speculative ethics; the effects of ambivalence on the framing of ethics; and the importance of paying attention to the present. These issues are taken forward to avoid 'reinventing the wheel' and to offer guidance in shaping the ethics discourse around quantum technologies into a more focused and effective debate.

Citations: 0
Ethics Readiness of Technology: The Case for Aligning Ethical Approaches with Technological Maturity.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-10-13 · DOI: 10.1007/s11948-025-00556-x
Eline de Jong

The ethics of emerging technologies faces an anticipation dilemma: engaging too early risks overly speculative concerns, while engaging too late may forfeit the chance to shape a technology's trajectory. Despite various methods to address this challenge, no framework exists to assess their suitability across different stages of technological development. This paper proposes such a framework. I conceptualise two main ethical approaches: outcomes-oriented ethics, which assesses the potential consequences of a technology's materialisation, and meaning-oriented ethics, which examines how (social) meaning is attributed to a technology. I argue that the strengths and limitations of outcomes- and meaning-oriented ethics depend on the uncertainties surrounding a technology, which shift as it matures. To capture this evolution, I introduce the concept of ethics readiness-the readiness of a technology to undergo detailed ethical scrutiny. Building on the widely known Technology Readiness Levels (TRLs), I propose Ethics Readiness Levels (ERLs) to illustrate how the suitability of ethical approaches evolves with a technology's development. At lower ERLs, where uncertainties are most pronounced, meaning-oriented ethics proves more effective; while at higher ERLs, as impacts become clearer, outcomes-oriented ethics gains relevance. By linking Ethics Readiness to Technology Readiness, this framework underscores that the appropriateness of ethical approaches evolves alongside technological maturity, ensuring scrutiny remains grounded and relevant. Finally, I demonstrate the practical value of this framework by applying it to quantum technologies, showing how Ethics Readiness can guide effective ethical engagement.

Citations: 0
Principles and Framework for the Operationalisation of Meaningful Human Control Over Autonomous Systems.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-09-24 · DOI: 10.1007/s11948-025-00554-z
Simeon C Calvert
Citations: 0
Who Cares for Space Debris? Conflicting Logics of Security and Sustainability in Space Situational Awareness Practices.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-09-24 · DOI: 10.1007/s11948-025-00550-3
Nina Klimburg-Witjes, Kai Strycker, Vitali Braun

Satellites and space technologies enable global communication, navigation, and weather forecasting, and are vital for financial systems, disaster management, climate monitoring, military missions and many more. Yet, decades of spaceflight activities have left an ever-growing debris formation - rocket parts, defunct satellites, propellant residues and more - in Earth's orbits. A congested outer space has now taken the shape of a haunting specter. Hurtling through space at incredibly high velocities, space debris has become a risk for active satellites and space infrastructures alike. This article offers a novel perspective on the security legacies and infrastructures of space debris mitigation and how these affect current and future space debris detection, knowledge production, and mitigation practices. Acknowledging that space debris is not just a technical challenge, but an ethico-political problem, we develop a transdisciplinary approach that links social science to aerospace engineering and practical insights and experiences from the European Space Agency (ESA) Space Debris Office. Specifically, we examine the role of secrecy and (mis)trust between international space agencies and how these complicate space situational awareness practices. Attending to the "mundane" practices of how space debris experts cope with uncertainty and security logics offers a crucial starting point to developing an ethical approach that prioritizes care and responsibility for innovation over ever more technological fixes to socio-political problems. Space debris encapsulates our historical and cultural value constellations, prompting us to reflect on sustainability and responsibility for Earth-Space systems in the future.

Citations: 0
After Harm: A Plea for Moral Repair after Algorithms Have Failed.
IF 3.0 · CAS Tier 2 (Philosophy) · Q1 (Engineering, Multidisciplinary) · Pub Date: 2025-09-18 · DOI: 10.1007/s11948-025-00555-y
Pak-Hang Wong, Gernot Rieder

In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have mainly focused on identifying risks and implementing safeguards against harmful consequences, with regulations seeking to ensure that systems are secure, trustworthy, and ethical. This preventative approach, however, rests on the assumption that algorithmic harm can essentially be avoided by specifying rules and requirements that protect against potential dangers. Consequently, comparatively little attention has been given to post-harm scenarios, i.e. cases and situations where individuals have already been harmed by an algorithmic system. We contend that this inattention to the aftermath of harm constitutes a major blind spot in AI ethics and governance and propose the notion of algorithmic imprint as a sensitizing concept for understanding both the nature and potential longer-term effects of algorithmic harm. Arguing that neither the decommissioning of harmful systems nor the reversal of damaging decisions is sufficient to fully address these effects, we suggest that a more comprehensive response to algorithmic harm requires engaging in discussions on moral repair, offering directions on what such a plea for moral repair ultimately entails.

Citations: 0