
Latest publications in Philosophy and Technology

Does Accountability Require Agency? Comment on Responsibility and Accountability in the Algorithmic Society.
Q1 Arts and Humanities Pub Date: 2026-01-01 Epub Date: 2026-01-12 DOI: 10.1007/s13347-025-01014-z
Tillmann Vierkant

In their intriguing paper Responsibility and Accountability in an Algorithmic Society (2025), the authors argue that the debate on how to deal with responsibility-related issues raised by algorithmic agents requires a distinction between responsibility and accountability. In this comment on their paper, it is argued that while the notion of accountability as the authors understand it brings significant benefits, it is also ambiguous in an important way. Accountability could be understood as purely instrumental with regard to generally morally desirable consequences, or as necessarily containing an element of scaffolding for the agent who is held to account. The comment develops both options and discusses the consequences of choosing either.

Citations: 0
Privacy and Human-AI Relationships.
Q1 Arts and Humanities Pub Date: 2025-10-18 eCollection Date: 2025-12-01 DOI: 10.1007/s13347-025-00978-2
Christopher Register, Maryam Ali Khan, Alberto Giubilini, Brian David Earp, Julian Savulescu

Artificial intelligence (AI) agents such as chatbots and personal AI assistants are increasingly popular. These technologies raise new privacy concerns beyond those posed by other AI systems or information technologies. For example, anthropomorphic features of AI chatbots may invite users to disclose more information with these systems than they would otherwise, especially when users interact with chatbots in relationship-like ways. In this paper, we aim to develop a framework for assessing the distinctive privacy ramifications of AI agents, especially as humans begin to interact with them in relationship-like ways. In particular, we draw from prominent theories of privacy and results from human relational psychology to better understand how AI agents may affect human behavior and the flow of personal information. We then assess how these effects could bear on eight distinct values of privacy, such as autonomy, the value of forming and maintaining relationships, security from harm, and more.

Citations: 0
Vertical Technologies and Relational Values: Rethinking Ethics of Technology in an Age of Extractivism.
Q1 Arts and Humanities Pub Date: 2025-01-01 Epub Date: 2025-08-30 DOI: 10.1007/s13347-025-00962-w
Jeroen Hopster

Critical reflection on the material, environmental, and social conditions underlying technology remains peripheral to the field of technology ethics. In this commentary, I underwrite the diagnosis by Vandemeulebroucke et al. (2025) that the field suffers from an "extractivist blindspot", but propose a somewhat different cure. First, rather than focusing on the material ontogenesis of technical artefacts, a more radical turn away from artefacts is called for, towards layered socio-technical systems as the field's core object of analysis. Second, notwithstanding the merits of their intercultural proposal, I argue that in overcoming extractivism the conceptual resources of more adjacent philosophical traditions should not be overlooked.

Citations: 0
Human Life as Terra Nullius: Socially Blind Engineering in Facebook's Foundational Technologies.
Q1 Arts and Humanities Pub Date: 2025-01-01 Epub Date: 2025-10-14 DOI: 10.1007/s13347-025-00971-9
João C Magalhães, Nick Couldry

Critical platform scholars have long suggested, if indirectly, that social media power is somehow akin to social engineering. This article argues that the parallel is analytically productive, but for reasons that are more complex than has previously been appreciated. By examining Facebook's foundational technologies, as described in patents that sought to protect the company's early innovations, we argue that, unlike previous technocratic attempts to reconstruct society, the platform's equally consequential rendering of social reality into a legible and controllable social graph involved no substantive vision of the social world at all. Rather, the company engaged in a form of socially blind engineering, misrecognizing the actual social world as a terra nullius, as if it had no inhabitants who needed to be taken into account, and so was a domain from which profit could be extracted with relative impunity. In so doing, we develop a conceptual vocabulary to understand the widely-criticised recklessness that, notwithstanding some more charitable recent readings, marked the early Facebook - and that might still influence the tech sector as a whole.

Citations: 0
What Will Happen to Humanity in a Million Years? Gilbert Hottois and the Temporality of Technoscience.
Q1 Arts and Humanities Pub Date: 2025-01-01 Epub Date: 2025-04-29 DOI: 10.1007/s13347-025-00887-4
Massimiliano Simons

This article provides an overview of the philosophy of Gilbert Hottois, who is usually credited with popularizing the concept of technoscience. Hottois starts from a metaphilosophy of language that diagnoses twentieth-century philosophy as fixated on language at the expense of technology. As an alternative, he developed a philosophy of technoscience that reinterprets science as primarily an intervening and technical activity rather than a contemplative and theoretical one. As I will argue, Hottois articulates the nature of this technicity through a philosophy of time, reflecting on the specific temporality of technoscience as distinct from human history. This temporality of technoscience provoked the need for ethical reflection, since technoscience is constantly changing and transforming the world. This led to Hottois's engagement with bioethics, in which he sought to develop a framework capable of "guiding" technoscience. Aiming to avoid both total symbolic closure and total technical openness, this guidance is concerned with the preservation of diversity, especially the human capacity for ethics, ethicity. This idea of guidance was later taken up by Dutch philosophers such as Hans Achterhuis and Peter-Paul Verbeek, inspiring their empirical turn in the philosophy of technology. What remains missing in this framework, however, is Hottois's critical analysis of the different temporalities at work in technology and culture.

Citations: 0
Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.
Q1 Arts and Humanities Pub Date: 2025-01-01 Epub Date: 2025-03-13 DOI: 10.1007/s13347-025-00864-x
Matthieu Queloz

A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in accurately and comprehensively modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but coherent, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might in principle rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and coherence promise to facilitate progress towards comprehensiveness in an LLM's representation of the world. However, philosophers have identified compelling reasons to doubt that the truth is systematic across all domains of thought, arguing that in normative domains, in particular, the truth is largely asystematic. I argue that insofar as the truth in normative domains is asystematic, this renders it correspondingly harder for LLMs to make progress, because they cannot then leverage the systematicity of truth. And the less LLMs can rely on the systematicity of truth, the less we can rely on them to do our practical deliberation for us, because the very asystematicity of normative domains requires human agency to play a greater role in practical thought.

Citations: 0
The Three Social Dimensions of Chatbot Technology.
Q1 Arts and Humanities Pub Date: 2025-01-01 Epub Date: 2024-12-16 DOI: 10.1007/s13347-024-00826-9
Mauricio Figueroa-Torres

The development and deployment of chatbot technology, while spanning decades and employing different techniques, require innovative frameworks to understand and interrogate their functionality and implications. A mere technocentric account of the evolution of chatbot technology does not fully illuminate how conversational systems are embedded in societal dynamics. This study presents a structured examination of chatbots across three societal dimensions, highlighting their roles as objects of scientific research, commercial instruments, and agents of intimate interaction. Through furnishing a dimensional framework for the evolution of conversational systems - from laboratories to marketplaces to private lives - this article contributes to the wider scholarly inquiry of chatbot technology and its impact in lived human experiences and dynamics.

Citations: 0
Digital Emotion Detection, Privacy, and the Law.
Q1 Arts and Humanities Pub Date: 2025-01-01 Epub Date: 2025-05-27 DOI: 10.1007/s13347-025-00895-4
Leonhard Menges, Eva Weber-Guskar

Intuitively, it seems reasonable to prefer that not everyone knows about all our emotions, for example, who we are in love with, who we are angry with, and what we are ashamed of. Moreover, prominent examples in the philosophical discussion of privacy include emotions. Finally, empirical studies show that a significant number of people in the UK and US are uncomfortable with digital emotion detection. In light of this, it may be surprising to learn that current data protection laws in Europe, which are designed to protect privacy, do not specifically address data about emotions. Understanding and discussing this incongruity is the subject of this paper. We will argue for two main claims: first, that anonymous emotion data does not need special legal protection, and second, that there are very good moral reasons to provide non-anonymous emotion data with special legal protection.

Citations: 0
What is the Point of Social Media? Corporate Purpose and Digital Democratization.
Q1 Arts and Humanities Pub Date: 2025-01-01 Epub Date: 2025-02-20 DOI: 10.1007/s13347-025-00855-y
Ugur Aytac

This paper proposes a new normative framework to think about Big Tech reform. Focusing on the case of digital communication, I argue that rethinking the corporate purpose of social media companies is a distinctive entry point to the debate on how to render the powers of tech corporations democratically legitimate. I contend that we need to strive for a reform that redefines the corporate purpose of social media companies. In this view, their purpose should be to create and maintain a free, egalitarian, and democratic public sphere rather than profit seeking. This political reform democratically contains corporate power in two ways: first, the legally enforceable fiduciary duties of corporate boards are reconceptualized in relation to democratic purposes rather than shareholder interests. Second, corporate governance structures should be redesigned to ensure that the abstract purpose is realized through representatives whose incentives align with the existence of a democratic public sphere. My argument complements radical proposals such as platform socialism by drawing a connection between democratizing social media governance and identifying the proper purpose of social media companies.

Citations: 0
The Designer of A Robot Determines Its Position Within The Moral Circle.
Q1 Arts and Humanities Pub Date: 2025-01-01 Epub Date: 2025-05-15 DOI: 10.1007/s13347-025-00898-1
Kamil Mamak
Citations: 0