Does Accountability Require Agency? Comment on Responsibility and Accountability in the Algorithmic Society
Tillmann Vierkant
Philosophy and Technology 39(1): 13. Pub Date: 2026-01-01; Epub Date: 2026-01-12. DOI: 10.1007/s13347-025-01014-z. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795937/pdf/

In their intriguing paper Responsibility and Accountability in an Algorithmic Society (2025), the authors argue that the debate on how to deal with responsibility-related issues involving algorithmic agents requires a distinction between responsibility and accountability. In this comment on their paper, it is argued that while the notion of accountability as understood by the authors brings significant benefits, it is also ambiguous in an important way. Accountability could be understood as purely instrumental with regard to generally morally desirable consequences, or as necessarily containing an element of scaffolding for the agent who is held to account. The comment develops both options and discusses the consequences of choosing either of them.

Privacy and Human-AI Relationships
Christopher Register, Maryam Ali Khan, Alberto Giubilini, Brian David Earp, Julian Savulescu
Philosophy and Technology 38(4). Pub Date: 2025-10-18; eCollection Date: 2025-12-01. DOI: 10.1007/s13347-025-00978-2. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618715/pdf/

Artificial intelligence (AI) agents such as chatbots and personal AI assistants are increasingly popular. These technologies raise new privacy concerns beyond those posed by other AI systems or information technologies. For example, anthropomorphic features of AI chatbots may invite users to disclose more information to these systems than they otherwise would, especially when users interact with chatbots in relationship-like ways. In this paper, we aim to develop a framework for assessing the distinctive privacy ramifications of AI agents, especially as humans begin to interact with them in relationship-like ways. In particular, we draw on prominent theories of privacy and results from human relational psychology to better understand how AI agents may affect human behavior and the flow of personal information. We then assess how these effects could bear on eight distinct values of privacy, such as autonomy, the value of forming and maintaining relationships, and security from harm.

Vertical Technologies and Relational Values: Rethinking Ethics of Technology in an Age of Extractivism
Jeroen Hopster
Philosophy and Technology 38(3): 124. Pub Date: 2025-01-01; Epub Date: 2025-08-30. DOI: 10.1007/s13347-025-00962-w. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398441/pdf/

Critical reflection on the material, environmental, and social conditions underlying technology remains peripheral to the field of technology ethics. In this commentary, I underwrite the diagnosis by Vandemeulebroucke et al. (2025) that the field suffers from an "extractivist blindspot", but propose a somewhat different cure. First, rather than focusing on the material ontogenesis of technical artefacts, a more radical turn away from artefacts is called for, towards layered socio-technical systems as the field's core object of analysis. Second, notwithstanding the merits of their intercultural proposal, I argue that in overcoming extractivism the conceptual resources of more adjacent philosophical traditions should not be overlooked.

Human Life as Terra Nullius: Socially Blind Engineering in Facebook's Foundational Technologies
João C Magalhães, Nick Couldry
Philosophy and Technology 38(4): 140. Pub Date: 2025-01-01; Epub Date: 2025-10-14. DOI: 10.1007/s13347-025-00971-9. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12521260/pdf/

Critical platform scholars have long suggested, if indirectly, that social media power is somehow akin to social engineering. This article argues that the parallel is analytically productive, but for reasons that are more complex than has previously been appreciated. By examining Facebook's foundational technologies, as described in patents that sought to protect the company's early innovations, we argue that, unlike previous technocratic attempts to reconstruct society, the platform's equally consequential rendering of social reality into a legible and controllable social graph involved no substantive vision of the social world at all. Rather, the company engaged in a form of socially blind engineering, misrecognizing the actual social world as a terra nullius, as if it had no inhabitants who needed to be taken into account, and so was a domain from which profit could be extracted with relative impunity. In so doing, we develop a conceptual vocabulary to understand the widely criticised recklessness that, notwithstanding some more charitable recent readings, marked the early Facebook - and that might still influence the tech sector as a whole.

What Will Happen to Humanity in a Million Years? Gilbert Hottois and the Temporality of Technoscience
Massimiliano Simons
Philosophy and Technology 38(2): 58. Pub Date: 2025-01-01; Epub Date: 2025-04-29. DOI: 10.1007/s13347-025-00887-4. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12041151/pdf/

This article provides an overview of the philosophy of Gilbert Hottois, who is usually credited with popularizing the concept of technoscience. Hottois starts from a metaphilosophy of language that diagnoses twentieth-century philosophy as fixated on language at the expense of technology. As an alternative, he developed a philosophy of technoscience that reinterprets science as primarily an intervening and technical activity rather than a contemplative and theoretical one. As I will argue, Hottois articulates the nature of this technicity through a philosophy of time, reflecting on the specific temporality of technoscience as distinct from human history. This temporality of technoscience provoked the need for ethical reflection, since technoscience is constantly changing and transforming the world. This led to Hottois's engagement with bioethics, in which he sought to develop a framework capable of "guiding" technoscience. Aiming to avoid both total symbolic closure and total technical openness, this guidance is concerned with the preservation of diversity, especially the human capacity for ethics, ethicity. This idea of guidance was later taken up by Dutch philosophers such as Hans Achterhuis and Peter-Paul Verbeek, inspiring their empirical turn in the philosophy of technology. What remains missing in this framework, however, is Hottois's critical analysis of the different temporalities at work in technology and culture.

Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains
Matthieu Queloz
Philosophy and Technology 38(1): 34. Pub Date: 2025-01-01; Epub Date: 2025-03-13. DOI: 10.1007/s13347-025-00864-x. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906541/pdf/

A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in accurately and comprehensively modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but coherent, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might in principle rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and coherence promise to facilitate progress towards comprehensiveness in an LLM's representation of the world. However, philosophers have identified compelling reasons to doubt that the truth is systematic across all domains of thought, arguing that in normative domains, in particular, the truth is largely asystematic. I argue that insofar as the truth in normative domains is asystematic, this renders it correspondingly harder for LLMs to make progress, because they cannot then leverage the systematicity of truth. And the less LLMs can rely on the systematicity of truth, the less we can rely on them to do our practical deliberation for us, because the very asystematicity of normative domains requires human agency to play a greater role in practical thought.

The Three Social Dimensions of Chatbot Technology
Mauricio Figueroa-Torres
Philosophy and Technology 38(1): 1. Pub Date: 2025-01-01; Epub Date: 2024-12-16. DOI: 10.1007/s13347-024-00826-9. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12234634/pdf/

The development and deployment of chatbot technology, while spanning decades and employing different techniques, require innovative frameworks to understand and interrogate their functionality and implications. A merely technocentric account of the evolution of chatbot technology does not fully illuminate how conversational systems are embedded in societal dynamics. This study presents a structured examination of chatbots across three societal dimensions, highlighting their roles as objects of scientific research, commercial instruments, and agents of intimate interaction. By furnishing a dimensional framework for the evolution of conversational systems - from laboratories to marketplaces to private lives - this article contributes to the wider scholarly inquiry into chatbot technology and its impact on lived human experience and dynamics.

Digital Emotion Detection, Privacy, and the Law
Leonhard Menges, Eva Weber-Guskar
Philosophy and Technology 38(2): 77. Pub Date: 2025-01-01; Epub Date: 2025-05-27. DOI: 10.1007/s13347-025-00895-4. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12106471/pdf/

Intuitively, it seems reasonable to prefer that not everyone knows about all our emotions, for example, who we are in love with, who we are angry with, and what we are ashamed of. Moreover, prominent examples in the philosophical discussion of privacy include emotions. Finally, empirical studies show that a significant number of people in the UK and US are uncomfortable with digital emotion detection. In light of this, it may be surprising to learn that current data protection laws in Europe, which are designed to protect privacy, do not specifically address data about emotions. Understanding and discussing this incongruity is the subject of this paper. We will argue for two main claims: first, that anonymous emotion data does not need special legal protection, and second, that there are very good moral reasons to provide non-anonymous emotion data with special legal protection.

What is the Point of Social Media? Corporate Purpose and Digital Democratization
Ugur Aytac
Philosophy and Technology 38(1): 26. Pub Date: 2025-01-01; Epub Date: 2025-02-20. DOI: 10.1007/s13347-025-00855-y. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11842518/pdf/

This paper proposes a new normative framework for thinking about Big Tech reform. Focusing on the case of digital communication, I argue that rethinking the corporate purpose of social media companies is a distinctive entry point to the debate on how to render the powers of tech corporations democratically legitimate. I contend that we need to strive for a reform that redefines the corporate purpose of social media companies. On this view, their purpose should be to create and maintain a free, egalitarian, and democratic public sphere rather than to seek profit. This political reform democratically contains corporate power in two ways. First, the legally enforceable fiduciary duties of corporate boards are reconceptualized in relation to democratic purposes rather than shareholder interests. Second, corporate governance structures should be redesigned to ensure that the abstract purpose is realized through representatives whose incentives align with the existence of a democratic public sphere. My argument complements radical proposals such as platform socialism by drawing a connection between democratizing social media governance and identifying the proper purpose of social media companies.

The Designer of a Robot Determines Its Position Within the Moral Circle
Kamil Mamak
Philosophy and Technology 38(2): 66. Pub Date: 2025-01-01; Epub Date: 2025-05-15. DOI: 10.1007/s13347-025-00898-1. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12081538/pdf/