What is the Point of Social Media? Corporate Purpose and Digital Democratization.
Pub Date: 2025-01-01 | Epub Date: 2025-02-20 | DOI: 10.1007/s13347-025-00855-y
Ugur Aytac
This paper proposes a new normative framework to think about Big Tech reform. Focusing on the case of digital communication, I argue that rethinking the corporate purpose of social media companies is a distinctive entry point to the debate on how to render the powers of tech corporations democratically legitimate. I contend that we need to strive for a reform that redefines the corporate purpose of social media companies. In this view, their purpose should be to create and maintain a free, egalitarian, and democratic public sphere rather than profit seeking. This political reform democratically contains corporate power in two ways. First, the legally enforceable fiduciary duties of corporate boards are reconceptualized in relation to democratic purposes rather than shareholder interests. Second, corporate governance structures should be redesigned to ensure that the abstract purpose is realized through representatives whose incentives align with the existence of a democratic public sphere. My argument complements radical proposals such as platform socialism by drawing a connection between democratizing social media governance and identifying the proper purpose of social media companies.
{"title":"What is the Point of Social Media? Corporate Purpose and Digital Democratization.","authors":"Ugur Aytac","doi":"10.1007/s13347-025-00855-y","DOIUrl":"10.1007/s13347-025-00855-y","url":null,"abstract":"<p><p>This paper proposes a new normative framework to think about Big Tech reform. Focusing on the case of digital communication, I argue that rethinking the corporate purpose of social media companies is a distinctive entry point to the debate on how to render the powers of tech corporations democratically legitimate. I contend that we need to strive for a reform that redefines the corporate purpose of social media companies. In this view, their purpose should be to create and maintain a free, egalitarian, and democratic public sphere rather than profit seeking. This political reform democratically contains corporate power in two ways: first, the legally enforceable fiduciary duties of corporate boards are reconceptualized in relation to democratic purposes rather than shareholder interests. Second, corporate governance structures should be redesigned to ensure that the abstract purpose is realized through representatives whose incentives align with the existence of a democratic public sphere. My argument complements radical proposals such as platform socialism by drawing a connection between democratizing social media governance and identifying the proper purpose of social media companies.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 1","pages":"26"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11842518/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143484260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Where Technology Leads, the Problems Follow. Technosolutionism and the Dutch Contact Tracing App.
Pub Date: 2024-01-01 | Epub Date: 2024-10-28 | DOI: 10.1007/s13347-024-00807-y
Lotje E Siffels, Tamar Sharon
In April 2020, in the midst of its first pandemic lockdown, the Dutch government announced plans to develop a contact tracing app to help contain the spread of the coronavirus: the Coronamelder. Originally intended to address the problem of the overburdening of manual contact tracers, by the time the app was released six months later, the problem it sought to solve had drastically changed, without the solution undergoing any modification, making it a prime example of technosolutionism. While numerous critics have mobilised the concept of technosolutionism, the questions of how technosolutionism works in practice and which specific harms it can provoke have been understudied. In this paper we advance a thick conception of technosolutionism which, drawing on Evgeny Morozov, distinguishes it from the notion of technological fix, and, drawing on constructivism, emphasizes its constructivist dimension. Using this concept, we closely follow the problem that the Coronamelder aimed to solve and how it shifted over time to fit the Coronamelder solution, rather than the other way around. We argue that, although problems are always constructed, technosolutionist problems are badly constructed, insofar as the careful and cautious deliberation which should accompany problem construction in public policy is absent in the case of technosolutionism. This can lead to three harms: a subversion of democratic decision-making; the presence of powerful new actors in the public policy context (here, Big Tech); and the creation of "orphan problems", whereby the initial problems that triggered the need to develop a (techno)solution are left behind. We question whether the most popular form of technology ethics today, which focuses predominantly on the design of technology, is well-equipped to address these technosolutionist harms, insofar as such a focus may preclude critical thinking about whether or not technology should be the solution in the first place.
{"title":"Where Technology Leads, the Problems Follow. Technosolutionism and the Dutch Contact Tracing App.","authors":"Lotje E Siffels, Tamar Sharon","doi":"10.1007/s13347-024-00807-y","DOIUrl":"10.1007/s13347-024-00807-y","url":null,"abstract":"<p><p>In April 2020, in the midst of its first pandemic lockdown, the Dutch government announced plans to develop a contact tracing app to help contain the spread of the coronavirus - the <i>Coronamelder.</i> Originally intended to address the problem of the overburdening of manual contract tracers, by the time the app was released six months later, the problem it sought to solve had drastically changed, without the solution undergoing any modification, making it a prime example of technosolutionism. While numerous critics have mobilised the concept of technosolutionism, the questions of how technosolutionism works in practice and which specific harms it can provoke have been understudied. In this paper we advance a thick conception of technosolutionism which, drawing on Evgeny Morozov, distinguishes it from the notion of technological fix, and, drawing on constructivism, emphasizes its constructivist dimension. Using this concept, we closely follow the problem that the Coronamelder aimed to solve and how it shifted over time to fit the Coronamelder solution, rather than the other way around. We argue that, although problems are always constructed, technosolutionist problems are <i>badly</i> constructed, insofar as the careful and cautious deliberation which should accompany problem construction in public policy is absent in the case of technosolutionism. This can lead to three harms: a subversion of democratic decision-making; the presence of powerful new actors in the public policy context - here Big Tech; and the creation of \"orphan problems\", whereby the initial problems that triggered the need to develop a (techno)solution are left behind. We question whether the most popular form of technology ethics today, which focuses predominantly on the <i>design</i> of technology, is well-equipped to address these technosolutionist harms, insofar as such a focus may preclude critical thinking about whether or not technology should be the solution in the first place.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 4","pages":"125"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11519188/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Track Thyself? The Value and Ethics of Self-knowledge Through Technology.
Pub Date: 2024-01-01 | Epub Date: 2024-01-27 | DOI: 10.1007/s13347-024-00704-4
Muriel Leuenberger
Novel technological devices, applications, and algorithms can provide us with a vast amount of personal information about ourselves. Given that we have ethical and practical reasons to pursue self-knowledge, should we use technology to increase our self-knowledge? And which ethical issues arise from the pursuit of technologically sourced self-knowledge? In this paper, I explore these questions in relation to bioinformation technologies (health and activity trackers, DTC genetic testing, and DTC neurotechnologies) and algorithmic profiling used for recommender systems, targeted advertising, and technologically supported decision-making. First, I distinguish between impersonal, critical, and relational self-knowledge. Relational self-knowledge, introduced in this paper, is a so-far neglected dimension of self-knowledge. Next, I investigate the contribution of these technologies to the three types of self-knowledge and uncover the connected ethical concerns. Technology can provide a great deal of impersonal self-knowledge, but we should attend to the quality of the information, which tends to be particularly poor for marginalized groups. In terms of critical self-knowledge, the nature of technologically sourced personal information typically impedes critical engagement. The value of relational self-knowledge speaks in favour of transparency in information technology, notably for algorithms involved in decision-making about individuals. Moreover, bioinformation technologies and digital profiling shape the concepts and norms that define us. We should ensure that they serve not only commercial interests but also our identity and self-knowledge interests.
{"title":"Track Thyself? The Value and Ethics of Self-knowledge Through Technology.","authors":"Muriel Leuenberger","doi":"10.1007/s13347-024-00704-4","DOIUrl":"10.1007/s13347-024-00704-4","url":null,"abstract":"<p><p>Novel technological devices, applications, and algorithms can provide us with a vast amount of personal information about ourselves. Given that we have ethical and practical reasons to pursue self-knowledge, should we use technology to increase our self-knowledge? And which ethical issues arise from the pursuit of technologically sourced self-knowledge? In this paper, I explore these questions in relation to bioinformation technologies (health and activity trackers, DTC genetic testing, and DTC neurotechnologies) and algorithmic profiling used for recommender systems, targeted advertising, and technologically supported decision-making. First, I distinguish between impersonal, critical, and relational self-knowledge. Relational self-knowledge is a so far neglected dimension of self-knowledge which is introduced in this paper. Next, I investigate the contribution of these technologies to the three types of self-knowledge and uncover the connected ethical concerns. Technology can provide a lot of impersonal self-knowledge, but we should focus on the quality of the information which tends to be particularly insufficient for marginalized groups. In terms of critical self-knowledge, the nature of technologically sourced personal information typically impedes critical engagement. The value of relational self-knowledge speaks in favour of transparency of information technology, notably for algorithms that are involved in decision-making about individuals. Moreover, bioinformation technologies and digital profiling shape the concepts and norms that define us. We should ensure they not only serve commercial interests but our identity and self-knowledge interests.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 1","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10821817/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139576841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moderating Synthetic Content: the Challenge of Generative AI.
Pub Date: 2024-01-01 | Epub Date: 2024-11-13 | DOI: 10.1007/s13347-024-00818-9
Sarah A Fisher, Jeffrey W Howard, Beatriz Kira
Artificially generated content threatens to seriously disrupt the public sphere. Generative AI massively facilitates the production of convincing portrayals of fabricated events. We have already begun to witness the spread of synthetic misinformation, political propaganda, and non-consensual intimate deepfakes. Malicious uses of the new technologies can only be expected to proliferate over time. In the face of this threat, social media platforms must surely act. But how? While it is tempting to think they need new sui generis policies targeting synthetic content, we argue that the challenge posed by generative AI should be met through the enforcement of general platform rules. We demonstrate that the threat posed to individuals and society by AI-generated content is no different in kind from that of ordinary harmful content: a threat which is already well recognised. Generative AI massively increases the problem but, ultimately, it requires the same approach. Therefore, platforms do best to double down on improving and enforcing their existing rules, regardless of whether the content they are dealing with was produced by humans or machines.
{"title":"Moderating Synthetic Content: the Challenge of Generative AI.","authors":"Sarah A Fisher, Jeffrey W Howard, Beatriz Kira","doi":"10.1007/s13347-024-00818-9","DOIUrl":"https://doi.org/10.1007/s13347-024-00818-9","url":null,"abstract":"<p><p>Artificially generated content threatens to seriously disrupt the public sphere. Generative AI massively facilitates the production of convincing portrayals of fabricated events. We have already begun to witness the spread of synthetic misinformation, political propaganda, and non-consensual intimate deepfakes. Malicious uses of the new technologies can only be expected to proliferate over time. In the face of this threat, social media platforms must surely act. But how? While it is tempting to think they need new sui generis policies targeting synthetic content, we argue that the challenge posed by generative AI should be met through the enforcement of general platform rules. We demonstrate that the threat posed to individuals and society by AI-generated content is no different in kind from that of ordinary harmful content-a threat which is already well recognised. Generative AI massively increases the problem but, ultimately, it requires the same approach. Therefore, platforms do best to double down on improving and enforcing their existing rules, regardless of whether the content they are dealing with was produced by humans or machines.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 4","pages":"133"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11561028/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142649217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Incalculability of the Generated Text.
Pub Date: 2024-01-01 | Epub Date: 2024-02-17 | DOI: 10.1007/s13347-024-00708-0
Alžbeta Kuchtová
In this paper, I explore Derrida's concept of exteriorization in relation to texts generated by machine learning. I first discuss Heidegger's view of machine creation and then present Derrida's criticism of Heidegger. I explain the concept of iterability, which is the central notion on which Derrida's criticism is based. The thesis defended in the paper is that Derrida's account of iterability provides a helpful framework for understanding the phenomenon of machine learning-generated literature. His account of textuality highlights the incalculability and mechanical elements characteristic of all texts, including machine-generated texts. By applying Derrida's concept to the phenomenon of machine creation, we can deconstruct the distinction between human and non-human creation. As I propose in the conclusion to this paper, this provides a basis on which to consider potential positive uses of machine learning.
{"title":"The Incalculability of the Generated Text.","authors":"Alžbeta Kuchtová","doi":"10.1007/s13347-024-00708-0","DOIUrl":"10.1007/s13347-024-00708-0","url":null,"abstract":"<p><p>In this paper, I explore Derrida's concept of exteriorization in relation to texts generated by machine learning. I first discuss Heidegger's view of machine creation and then present Derrida's criticism of Heidegger. I explain the concept of iterability, which is the central notion on which Derrida's criticism is based. The thesis defended in the paper is that Derrida's account of iterability provides a helpful framework for understanding the phenomenon of machine learning-generated literature. His account of textuality highlights the incalculability and mechanical elements characteristic of all texts, including machine-generated texts. By applying Derrida's concept to the phenomenon of machine creation, we can deconstruct the distinction between human and non-human creation. As I propose in the conclusion to this paper, this provides a basis on which to consider potential positive uses of machine learning.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 1","pages":"25"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10874339/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139906570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authorship and ChatGPT: a Conservative View.
Pub Date: 2024-01-01 | Epub Date: 2024-02-26 | DOI: 10.1007/s13347-024-00715-1
René van Woudenberg, Chris Ranalli, Daniel Bracker
Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and cannot take responsibility for the texts it produces. Three perspectives are compared: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT's authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT.
{"title":"Authorship and ChatGPT: a Conservative View.","authors":"René van Woudenberg, Chris Ranalli, Daniel Bracker","doi":"10.1007/s13347-024-00715-1","DOIUrl":"10.1007/s13347-024-00715-1","url":null,"abstract":"<p><p>Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and cannot take responsibility for the texts it produces. Three perspectives are compared: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT's authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 1","pages":"34"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10896910/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139991438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breaking the Wheel, Credibility, and Hermeneutical Injustice: A Response to Harris.
Pub Date: 2024-01-01 | Epub Date: 2024-11-29 | DOI: 10.1007/s13347-024-00828-7
Taylor Matthews
In this short paper, I respond to Keith Raymond Harris' paper "Synthetic Media, The Wheel, and the Burden of Proof". In particular, I examine his arguments against two prominent approaches employed to deal with synthetic media such as deepfakes and other GenAI content, namely, the "reactive" and "proactive" approaches. In the first part, I raise a worry about the problem Harris levels at the reactive approach, before providing a constructive way of expanding his worry regarding the proactive approach.
{"title":"Breaking the Wheel, Credibility, and Hermeneutical Injustice: A Response to Harris.","authors":"Taylor Matthews","doi":"10.1007/s13347-024-00828-7","DOIUrl":"https://doi.org/10.1007/s13347-024-00828-7","url":null,"abstract":"<p><p>In this short paper, I respond to Keith Raymond Harris' paper \"Synthetic Media, The Wheel, and the Burden of Proof\". In particular, I examine his arguments against two prominent approaches employed to deal with synthetic media such as deepfakes and other GenAI content, namely, the \"reactive\" and \"proactive\" approaches. In the first part, I raise a worry about the problem Harris levels at the reactive approach, before providing a constructive way of expanding his worry regarding the proactive approach.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 4","pages":"138"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607036/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142773321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technology and Neutrality
Pub Date: 2023-11-09 | DOI: 10.1007/s13347-023-00672-1
Sybren Heyndels
This paper clarifies and answers the following question: is technology morally neutral? It is argued that the debate between proponents and opponents of the Neutrality Thesis depends on different underlying assumptions about the nature of technological artifacts. My central argument centres on the claim that a merely physicalistic vocabulary does not suffice to characterize technological artifacts as artifacts, and that the concepts of function and intention are necessary to describe technological artifacts at the right level of description. Once this has been established, I demystify talk about the possible value-ladenness of technological artifacts by showing how these values can be empirically identified. I draw on examples from biology and the social sciences to show that there is a non-mysterious sense in which functions and values can be empirically identified. I conclude from this that technology can be value-laden and that its value-ladenness can derive both from the intended functions and from the harmful non-intended functions of technological artifacts.
{"title":"Technology and Neutrality","authors":"Sybren Heyndels","doi":"10.1007/s13347-023-00672-1","DOIUrl":"https://doi.org/10.1007/s13347-023-00672-1","url":null,"abstract":"Abstract This paper clarifies and answers the following question: is technology morally neutral? It is argued that the debate between proponents and opponents of the Neutrality Thesis depends on different underlying assumptions about the nature of technological artifacts. My central argument centres around the claim that a mere physicalistic vocabulary does not suffice in characterizing technological artifacts as artifacts, and that the concepts of function and intention are necessary to describe technological artifacts at the right level of description. Once this has been established, I demystify talk about the possible value-ladenness of technological artifacts by showing how these values can be empirically identified. I draw from examples in biology and the social sciences to show that there is a non-mysterious sense in which functions and values can be empirically identified. I conclude from this that technology can be value-laden and that its value-ladenness can both derive from the intended functions as well as the harmful non-intended functions of technological artifacts.","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":" 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135290695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Commentary on Artificial Intelligence (AI) in Islamic Ethics: Towards Pluralist Ethical Benchmarking for AI
Pub Date: 2023-11-07 | DOI: 10.1007/s13347-023-00677-w
Amana Raquib
{"title":"Commentary on Artificial Intelligence (AI) in Islamic Ethics: Towards Pluralist Ethical Benchmarking for AI","authors":"Amana Raquib","doi":"10.1007/s13347-023-00677-w","DOIUrl":"https://doi.org/10.1007/s13347-023-00677-w","url":null,"abstract":"","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"50 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135432592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial Intelligence (AI) in Islamic Ethics: Towards Pluralist Ethical Benchmarking for AI
Pub Date: 2023-11-01 | DOI: 10.1007/s13347-023-00668-x
Ezieddin Elmahjub
This paper explores artificial intelligence (AI) ethics from an Islamic perspective at a critical time for AI ethical norm-setting. It advocates for a pluralist approach to ethical AI benchmarking. As rapid advancements in AI technologies pose challenges surrounding autonomy, privacy, fairness, and transparency, the prevailing ethical discourse has been predominantly Western or Eurocentric. To address this imbalance, this paper delves into the Islamic ethical traditions to develop a framework that contributes to the global debate on optimal norm-setting for designing and using AI technologies. The paper outlines Islamic parameters for ethical values and moral actions in the context of AI's ethical uncertainties. It emphasizes the significance of both textual and non-textual Islamic sources in addressing these uncertainties, giving particular weight to the notion of "good" or "maṣlaḥa" as a normative guide for AI's ethical evaluation. Defining maṣlaḥa as an ethical state of affairs in harmony with divine will, the paper highlights the coexistence of two interpretations of maṣlaḥa: welfarist/utility-based and duty-based. Islamic jurisprudence allows for arguments supporting ethical choices that prioritize building the technical infrastructure for AI to maximize utility. Conversely, it also supports choices that reject consequential utility calculations as the sole measure of value in determining ethical responses to AI advancements.
{"title":"Artificial Intelligence (AI) in Islamic Ethics: Towards Pluralist Ethical Benchmarking for AI","authors":"Ezieddin Elmahjub","doi":"10.1007/s13347-023-00668-x","DOIUrl":"https://doi.org/10.1007/s13347-023-00668-x","url":null,"abstract":"Abstract This paper explores artificial intelligence (AI) ethics from an Islamic perspective at a critical time for AI ethical norm-setting. It advocates for a pluralist approach to ethical AI benchmarking. As rapid advancements in AI technologies pose challenges surrounding autonomy, privacy, fairness, and transparency, the prevailing ethical discourse has been predominantly Western or Eurocentric. To address this imbalance, this paper delves into the Islamic ethical traditions to develop a framework that contributes to the global debate on optimal norm setting for designing and using AI technologies. The paper outlines Islamic parameters for ethical values and moral actions in the context of AI's ethical uncertainties. It emphasizes the significance of both textual and non-textual Islamic sources in addressing these uncertainties while placing a strong emphasis on the notion of \"good\" or \" maṣlaḥa \" as a normative guide for AI's ethical evaluation. Defining maṣlaḥa as an ethical state of affairs in harmony with divine will, the paper highlights the coexistence of two interpretations of maṣlaḥa : welfarist/utility-based and duty-based. Islamic jurisprudence allows for arguments supporting ethical choices that prioritize building the technical infrastructure for AI to maximize utility. Conversely, it also supports choices that reject consequential utility calculations as the sole measure of value in determining ethical responses to AI advancements.","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"69 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135222065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}