
Philosophy and Technology: Latest Publications

What is the Point of Social Media? Corporate Purpose and Digital Democratization.
Q1 Arts and Humanities Pub Date: 2025-01-01 Epub Date: 2025-02-20 DOI: 10.1007/s13347-025-00855-y
Ugur Aytac

This paper proposes a new normative framework to think about Big Tech reform. Focusing on the case of digital communication, I argue that rethinking the corporate purpose of social media companies is a distinctive entry point to the debate on how to render the powers of tech corporations democratically legitimate. I contend that we need to strive for a reform that redefines the corporate purpose of social media companies. In this view, their purpose should be to create and maintain a free, egalitarian, and democratic public sphere rather than profit seeking. This political reform democratically contains corporate power in two ways: first, the legally enforceable fiduciary duties of corporate boards are reconceptualized in relation to democratic purposes rather than shareholder interests. Second, corporate governance structures should be redesigned to ensure that the abstract purpose is realized through representatives whose incentives align with the existence of a democratic public sphere. My argument complements radical proposals such as platform socialism by drawing a connection between democratizing social media governance and identifying the proper purpose of social media companies.

Citations: 0
Where Technology Leads, the Problems Follow. Technosolutionism and the Dutch Contact Tracing App.
Q1 Arts and Humanities Pub Date: 2024-01-01 Epub Date: 2024-10-28 DOI: 10.1007/s13347-024-00807-y
Lotje E Siffels, Tamar Sharon

In April 2020, in the midst of its first pandemic lockdown, the Dutch government announced plans to develop a contact tracing app to help contain the spread of the coronavirus - the Coronamelder. Originally intended to address the problem of the overburdening of manual contact tracers, by the time the app was released six months later, the problem it sought to solve had drastically changed, without the solution undergoing any modification, making it a prime example of technosolutionism. While numerous critics have mobilised the concept of technosolutionism, the questions of how technosolutionism works in practice and which specific harms it can provoke have been understudied. In this paper we advance a thick conception of technosolutionism which, drawing on Evgeny Morozov, distinguishes it from the notion of technological fix, and, drawing on constructivism, emphasizes its constructivist dimension. Using this concept, we closely follow the problem that the Coronamelder aimed to solve and how it shifted over time to fit the Coronamelder solution, rather than the other way around. We argue that, although problems are always constructed, technosolutionist problems are badly constructed, insofar as the careful and cautious deliberation which should accompany problem construction in public policy is absent in the case of technosolutionism. This can lead to three harms: a subversion of democratic decision-making; the presence of powerful new actors in the public policy context - here Big Tech; and the creation of "orphan problems", whereby the initial problems that triggered the need to develop a (techno)solution are left behind. We question whether the most popular form of technology ethics today, which focuses predominantly on the design of technology, is well-equipped to address these technosolutionist harms, insofar as such a focus may preclude critical thinking about whether or not technology should be the solution in the first place.

Citations: 0
Track Thyself? The Value and Ethics of Self-knowledge Through Technology.
Q1 Arts and Humanities Pub Date: 2024-01-01 Epub Date: 2024-01-27 DOI: 10.1007/s13347-024-00704-4
Muriel Leuenberger

Novel technological devices, applications, and algorithms can provide us with a vast amount of personal information about ourselves. Given that we have ethical and practical reasons to pursue self-knowledge, should we use technology to increase our self-knowledge? And which ethical issues arise from the pursuit of technologically sourced self-knowledge? In this paper, I explore these questions in relation to bioinformation technologies (health and activity trackers, DTC genetic testing, and DTC neurotechnologies) and algorithmic profiling used for recommender systems, targeted advertising, and technologically supported decision-making. First, I distinguish between impersonal, critical, and relational self-knowledge. Relational self-knowledge is a so far neglected dimension of self-knowledge which is introduced in this paper. Next, I investigate the contribution of these technologies to the three types of self-knowledge and uncover the connected ethical concerns. Technology can provide a lot of impersonal self-knowledge, but we should focus on the quality of the information which tends to be particularly insufficient for marginalized groups. In terms of critical self-knowledge, the nature of technologically sourced personal information typically impedes critical engagement. The value of relational self-knowledge speaks in favour of transparency of information technology, notably for algorithms that are involved in decision-making about individuals. Moreover, bioinformation technologies and digital profiling shape the concepts and norms that define us. We should ensure they not only serve commercial interests but our identity and self-knowledge interests.

Citations: 0
Moderating Synthetic Content: the Challenge of Generative AI.
Q1 Arts and Humanities Pub Date: 2024-01-01 Epub Date: 2024-11-13 DOI: 10.1007/s13347-024-00818-9
Sarah A Fisher, Jeffrey W Howard, Beatriz Kira

Artificially generated content threatens to seriously disrupt the public sphere. Generative AI massively facilitates the production of convincing portrayals of fabricated events. We have already begun to witness the spread of synthetic misinformation, political propaganda, and non-consensual intimate deepfakes. Malicious uses of the new technologies can only be expected to proliferate over time. In the face of this threat, social media platforms must surely act. But how? While it is tempting to think they need new sui generis policies targeting synthetic content, we argue that the challenge posed by generative AI should be met through the enforcement of general platform rules. We demonstrate that the threat posed to individuals and society by AI-generated content is no different in kind from that of ordinary harmful content, a threat which is already well recognised. Generative AI massively increases the problem but, ultimately, it requires the same approach. Therefore, platforms do best to double down on improving and enforcing their existing rules, regardless of whether the content they are dealing with was produced by humans or machines.

Citations: 0
The Incalculability of the Generated Text.
Q1 Arts and Humanities Pub Date: 2024-01-01 Epub Date: 2024-02-17 DOI: 10.1007/s13347-024-00708-0
Alžbeta Kuchtová

In this paper, I explore Derrida's concept of exteriorization in relation to texts generated by machine learning. I first discuss Heidegger's view of machine creation and then present Derrida's criticism of Heidegger. I explain the concept of iterability, which is the central notion on which Derrida's criticism is based. The thesis defended in the paper is that Derrida's account of iterability provides a helpful framework for understanding the phenomenon of machine learning-generated literature. His account of textuality highlights the incalculability and mechanical elements characteristic of all texts, including machine-generated texts. By applying Derrida's concept to the phenomenon of machine creation, we can deconstruct the distinction between human and non-human creation. As I propose in the conclusion to this paper, this provides a basis on which to consider potential positive uses of machine learning.

Citations: 0
Authorship and ChatGPT: a Conservative View.
Q1 Arts and Humanities Pub Date: 2024-01-01 Epub Date: 2024-02-26 DOI: 10.1007/s13347-024-00715-1
René van Woudenberg, Chris Ranalli, Daniel Bracker

Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and cannot take responsibility for the texts it produces. Three perspectives are compared: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT's authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT.

Citations: 0
Breaking the Wheel, Credibility, and Hermeneutical Injustice: A Response to Harris.
Q1 Arts and Humanities Pub Date: 2024-01-01 Epub Date: 2024-11-29 DOI: 10.1007/s13347-024-00828-7
Taylor Matthews

In this short paper, I respond to Keith Raymond Harris' paper "Synthetic Media, The Wheel, and the Burden of Proof". In particular, I examine his arguments against two prominent approaches employed to deal with synthetic media such as deepfakes and other GenAI content, namely, the "reactive" and "proactive" approaches. In the first part, I raise a worry about the problem Harris levels at the reactive approach, before providing a constructive way of expanding his worry regarding the proactive approach.

Citations: 0
Technology and Neutrality
Q1 Arts and Humanities Pub Date: 2023-11-09 DOI: 10.1007/s13347-023-00672-1
Sybren Heyndels
Abstract This paper clarifies and answers the following question: is technology morally neutral? It is argued that the debate between proponents and opponents of the Neutrality Thesis depends on different underlying assumptions about the nature of technological artifacts. My central argument centres around the claim that a mere physicalistic vocabulary does not suffice in characterizing technological artifacts as artifacts, and that the concepts of function and intention are necessary to describe technological artifacts at the right level of description. Once this has been established, I demystify talk about the possible value-ladenness of technological artifacts by showing how these values can be empirically identified. I draw from examples in biology and the social sciences to show that there is a non-mysterious sense in which functions and values can be empirically identified. I conclude from this that technology can be value-laden and that its value-ladenness can both derive from the intended functions as well as the harmful non-intended functions of technological artifacts.
Citations: 0
Commentary on Artificial Intelligence (AI) in Islamic Ethics: Towards Pluralist Ethical Benchmarking for AI
Q1 Arts and Humanities Pub Date: 2023-11-07 DOI: 10.1007/s13347-023-00677-w
Amana Raquib
Citations: 0
Artificial Intelligence (AI) in Islamic Ethics: Towards Pluralist Ethical Benchmarking for AI
Q1 Arts and Humanities Pub Date: 2023-11-01 DOI: 10.1007/s13347-023-00668-x
Ezieddin Elmahjub
Abstract This paper explores artificial intelligence (AI) ethics from an Islamic perspective at a critical time for AI ethical norm-setting. It advocates for a pluralist approach to ethical AI benchmarking. As rapid advancements in AI technologies pose challenges surrounding autonomy, privacy, fairness, and transparency, the prevailing ethical discourse has been predominantly Western or Eurocentric. To address this imbalance, this paper delves into the Islamic ethical traditions to develop a framework that contributes to the global debate on optimal norm setting for designing and using AI technologies. The paper outlines Islamic parameters for ethical values and moral actions in the context of AI's ethical uncertainties. It emphasizes the significance of both textual and non-textual Islamic sources in addressing these uncertainties while placing a strong emphasis on the notion of "good" or "maṣlaḥa" as a normative guide for AI's ethical evaluation. Defining maṣlaḥa as an ethical state of affairs in harmony with divine will, the paper highlights the coexistence of two interpretations of maṣlaḥa: welfarist/utility-based and duty-based. Islamic jurisprudence allows for arguments supporting ethical choices that prioritize building the technical infrastructure for AI to maximize utility. Conversely, it also supports choices that reject consequential utility calculations as the sole measure of value in determining ethical responses to AI advancements.
Citations: 0