
AI and ethics: Latest publications

Exploring the mutations of society in the era of generative AI
Pub Date : 2025-01-13 DOI: 10.1007/s43681-024-00632-1
Hubert Etienne, Brent Mittelstadt, Rob Reich, John Basl, Jeff Behrends, Dominique Lestel, Chloé Bakalar, Geoff Keeling, Giada Pistilli, Marta Cantero Gamito
{"title":"Exploring the mutations of society in the era of generative AI","authors":"Hubert Etienne, Brent Mittelstadt, Rob Reich, John Basl, Jeff Behrends, Dominique Lestel, Chloé Bakalar, Geoff Keeling, Giada Pistilli, Marta Cantero Gamito","doi":"10.1007/s43681-024-00632-1","DOIUrl":"10.1007/s43681-024-00632-1","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"1 - 1"},"PeriodicalIF":0.0,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The need for an empirical research program regarding human–AI relational norms
Pub Date : 2025-01-09 DOI: 10.1007/s43681-024-00631-2
Madeline G. Reinecke, Andreas Kappes, Sebastian Porsdam Mann, Julian Savulescu, Brian D. Earp

As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people’s cooperative expectations. In the case of human–human dyads, different relationships are governed by different norms: For example, how two strangers—versus two friends or colleagues—should interact when faced with a similar coordination problem often differs. How will the rise of ‘social’ artificial intelligence (and ultimately, superintelligent AI) complicate people’s expectations about the cooperative norms that should govern different types of relationships, whether human–human or human–AI? Do people expect AI to adhere to the same cooperative dynamics as humans when in a given social role? Conversely, will they begin to expect humans in certain types of relationships to act more like AI? Here, we consider how people’s cooperative expectations may pull apart between human–human and human–AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types. We see the data resulting from our proposal as relevant for understanding people’s relationship–specific cooperative expectations in an age of social AI, which may also forecast potential resistance towards AI systems occupying certain social roles. Finally, these data can form the basis for ethical evaluations: What relationship–specific cooperative norms we should adopt for human–AI interactions, or reinforce through responsible AI design, depends partly on empirical facts about what norms people find intuitive for such interactions (along with the costs and benefits of maintaining these). Toward the end of the paper, we discuss how these relational norms may change over time and consider the implications of this for the proposed research program.

{"title":"The need for an empirical research program regarding human–AI relational norms","authors":"Madeline G. Reinecke,&nbsp;Andreas Kappes,&nbsp;Sebastian Porsdam Mann,&nbsp;Julian Savulescu,&nbsp;Brian D. Earp","doi":"10.1007/s43681-024-00631-2","DOIUrl":"10.1007/s43681-024-00631-2","url":null,"abstract":"<div><p>As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people’s cooperative expectations. In the case of human–human dyads, different relationships are governed by different norms: For example, how two strangers—versus two friends or colleagues—should interact when faced with a similar coordination problem often differs. How will the rise of ‘social’ artificial intelligence (and ultimately, superintelligent AI) complicate people’s expectations about the cooperative norms that should govern different types of relationships, whether human–human or human–AI? Do people expect AI to adhere to the same cooperative dynamics as humans when in a given social role? Conversely, will they begin to expect humans in certain types of relationships to act more like AI? Here, we consider how people’s cooperative expectations may pull apart between human–human and human–AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types. We see the data resulting from our proposal as relevant for understanding people’s relationship–specific cooperative expectations in an age of social AI, which may also forecast potential resistance towards AI systems occupying certain social roles. Finally, these data can form the basis for ethical evaluations: What relationship–specific cooperative norms we should adopt for human–AI interactions, or reinforce through responsible AI design, depends partly on empirical facts about what norms people find intuitive for such interactions (along with the costs and benefits of maintaining these). Toward the end of the paper, we discuss how these relational norms may change over time and consider the implications of this for the proposed research program.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"71 - 80"},"PeriodicalIF":0.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00631-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AI to renew public employment services? Explanation and trust of domain experts
Pub Date : 2025-01-09 DOI: 10.1007/s43681-024-00629-w
Thomas Souverain

It is often assumed in the explainable AI (XAI) literature that explaining AI predictions will enhance users' trust, yet this assumption has rarely been tested empirically. To bridge this research gap, we explored trust in XAI in the context of public policy. The French Employment Agency has deployed neural networks since 2021 to help job counsellors reject illegal employment offers. Digging into that case, we adopted a philosophical lens on trust in AI that is also compatible with measurement, distinguishing demonstrated from perceived trust. We performed a three-month experimental study combining sociological and psychological methods. Qualitative (S1): Relying on sociological fieldwork methods, we conducted one-hour semi-structured interviews with job counsellors; across 5 regional agencies, we asked 18 counsellors to describe their work practices with AI warnings. Quantitative (S2): Having gathered agents' perceptions, we quantified the reasons to trust AI. We administered a questionnaire comparing three homogeneous cohorts of 100 counsellors each, given different information about the AI. We tested the impact of two local XAI formats: a general rule and a counterfactual rewording. Our survey provided empirical evidence for the link between XAI and trust, but it also showed that different XAI formats appeal to rationality in different ways. The rule helps advisors verify that the criteria motivating AI predictions comply with the law, whereas the counterfactual raises doubts about the offer's quality. While XAI enhanced both demonstrated and perceived trust, our study also revealed limits to full adoption depending on experts' profiles. XAI could trigger trust more efficiently, but only when it addresses personal beliefs or when work conditions are rearranged to give experts time to understand the AI.
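To make the two explanation formats concrete, the sketch below renders a "general rule" and a "counterfactual rewording" for a flagged job offer. The feature names, legal criteria, and wording are assumptions introduced for illustration; the abstract does not publish the agency's model or explanation templates.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    # Hypothetical features an illegality classifier might rely on.
    salary_listed: bool
    contract_type_stated: bool
    employer_registered: bool

def rule_explanation(offer: Offer) -> str:
    """General-rule XAI: restate the (assumed) legal criteria behind the prediction."""
    missing = [
        name for name, ok in [
            ("a stated salary", offer.salary_listed),
            ("a stated contract type", offer.contract_type_stated),
            ("a registered employer", offer.employer_registered),
        ] if not ok
    ]
    return ("Offers must state a salary, a contract type, and a registered employer. "
            f"This offer is flagged because it lacks: {', '.join(missing) or 'nothing'}.")

def counterfactual_explanation(offer: Offer) -> str:
    """Counterfactual rewording: state the minimal change that would flip the prediction."""
    if not offer.salary_listed:
        return "If the offer stated a salary, it would no longer be flagged."
    if not offer.contract_type_stated:
        return "If the offer stated a contract type, it would no longer be flagged."
    return "If the employer were registered, the offer would no longer be flagged."

offer = Offer(salary_listed=False, contract_type_stated=True, employer_registered=True)
print(rule_explanation(offer))
print(counterfactual_explanation(offer))
```

The contrast the study reports maps onto this sketch: the rule points the counsellor to the criteria themselves, while the counterfactual draws attention to what the offer would have to change, and hence to its quality.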

{"title":"AI to renew public employment services? Explanation and trust of domain experts","authors":"Thomas Souverain","doi":"10.1007/s43681-024-00629-w","DOIUrl":"10.1007/s43681-024-00629-w","url":null,"abstract":"<div><p>It is often assumed in explainable AI (XAI) literature that explaining AI predictions will enhance trust of users. To bridge this research gap, we explored trust in XAI on public policies. The French Employment Agency deploys neural networks since 2021 to help job counsellors reject the illegal employment offers. Digging into that case, we adopted philosophical lens on trust in AI which is also compatible with measurements, on demonstrated and perceived trust. We performed a three-months experimental study, joining sociological and psychological methods: Qualitative (S1): Relying on sociological field work methods, we conducted 1 h semi-structured interviews with job counsellors. On 5 regional agencies, we asked 18 counsellors to describe their work practices with AI warnings. Quantitative (S2): Having gathered agents' perceptions, we quantified the reasons to trust AI. We administered a questionnaire, comparing three homogeneous cohorts of 100 counsellors each with different information on AI. We tested the impact of two local XAI, general rule and counterfactual rewording. Our survey provided empirical evidence for the link between XAI and trust, but it also stressed that XAI supports differently appeal to rationality. The rule helps advisors to be sure that criteria motivating AI predictions comply with the law, whereas counterfactual raises doubts on the offer’s quality. Whereas XAI enhanced both demonstrated and perceived trust, our study also revealed limits to full adoption, based on profiles of experts. XAI could more efficiently trigger trust, but only when addressing personal beliefs, or rearranging work conditions to let experts the time to understand AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"55 - 70"},"PeriodicalIF":0.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Waging warfare against states: the deployment of artificial intelligence in cyber espionage
Pub Date : 2025-01-08 DOI: 10.1007/s43681-024-00628-x
Wan Rosalili Wan Rosli

Cyber espionage has increasingly been viewed as a risk to nation-states, especially in the area of security and protection of Critical National Infrastructures. The race towards digitisation has also raised concerns about how emerging technologies are defining the ways cyber activities are linked to waging warfare between States. Real-world crimes have since found a place in cyberspace, and high connectivity has exposed various actors to a range of risks and vulnerabilities, including cyber espionage. Cyber espionage has always been a national security issue, as it not only targets States but also affects public–private networks, corporations and individuals. The challenge of crimes committed within the cyber realm is that the nature of cybercrimes distorts the dichotomy of state responsibility in responding to cyber threats and vulnerabilities. Furthermore, the veil of anonymity and emerging technologies such as artificial intelligence have provided opportunities for such crimes to have a larger-scale impact on the state. The imminent threat of cyber espionage is affecting economic and political interactions between nation-states and changing the nature of modern conflict. In light of these implications, this paper discusses the current legal landscape governing cyber espionage and the impact of the use of artificial intelligence in the commission of such crimes.

{"title":"Waging warfare against states: the deployment of artificial intelligence in cyber espionage","authors":"Wan Rosalili Wan Rosli","doi":"10.1007/s43681-024-00628-x","DOIUrl":"10.1007/s43681-024-00628-x","url":null,"abstract":"<div><p>Cyber espionage has significantly been viewed as a risk towards nation-states, especially in the area of security and protection of Critical National Infrastructures. The race against digitisation has also raised concerns about how emerging technologies are defining how cyber activities are linked to waging warfare between States. Real-world crimes have since found a place in cyberspace, and with high connectivity, has exposed various actors to various risks and vulnerabilities, including cyber espionage. Cyber espionage has always been a national security issue as it does not only target States but also affects public–private networks, corporations and individuals. The challenge of crimes committed within the cyber realm is how the nature of cybercrimes distorts the dichotomy of state responsibility in responding to cyber threats and vulnerabilities. Furthermore, the veil of anonymity and emerging technologies such as artificial intelligence have further provided opportunities for a larger scale impact on the state for such crime. The imminent threat of cyber espionage is impacting the economic and political interactions between nation-states and changing the nature of modern conflict. Due to these implications, this paper will discuss the current legal landscape governing cyber espionage and the impact of the use of artificial intelligence in the commission of such crimes.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"47 - 53"},"PeriodicalIF":0.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00628-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Technology, liberty, and guardrails
Pub Date : 2024-12-21 DOI: 10.1007/s43681-024-00625-0
Kevin Mills

Technology companies are increasingly being asked to take responsibility for the technologies they create. Many of them are rising to the challenge. One way they do this is by implementing “guardrails”: restrictions on functionality that prevent people from misusing their technologies (per some standard of misuse). While there can be excellent reasons for implementing guardrails (and doing so is sometimes morally obligatory), I argue that the unrestricted authority to implement guardrails is incompatible with proper respect for user freedom, and is not something we should welcome. I argue instead that guardrails should be implemented for only two reasons: to prevent accidental misuse of the technology, and as a proportionate means of preventing people from using the technology to violate other people’s rights. If I’m right, then we may have to get more comfortable with developers releasing technologies that can, and to some extent inevitably will, be misused; people using technologies in ways we disagree with is one of the costs of liberty, but it is a cost we have excellent reasons to bear.

{"title":"Technology, liberty, and guardrails","authors":"Kevin Mills","doi":"10.1007/s43681-024-00625-0","DOIUrl":"10.1007/s43681-024-00625-0","url":null,"abstract":"<div><p>Technology companies are increasingly being asked to take responsibility for the technologies they create. Many of them are rising to the challenge. One way they do this is by implementing “guardrails”: restrictions on functionality that prevent people from misusing their technologies (per some standard of misuse). While there can be excellent reasons for implementing guardrails (and doing so is sometimes morally obligatory), I argue that the unrestricted authority to implement guardrails is incompatible with proper respect for user freedom, and is not something we should welcome. I argue instead that guardrails should be implemented for only two reasons: to prevent accidental misuse of the technology, and as a proportionate means of preventing people from using the technology to violate other people’s rights. If I’m right, then we may have to get more comfortable with developers releasing technologies that can, and to some extent inevitably will, be misused; people using technologies in ways we disagree with is one of the costs of liberty, but it is a cost we have excellent reasons to bear.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"39 - 46"},"PeriodicalIF":0.0,"publicationDate":"2024-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unmasking camouflage: exploring the challenges of large language models in deciphering African American language & online performativity
Pub Date : 2024-12-10 DOI: 10.1007/s43681-024-00623-2
Shana Kleiner, Jessica A. Grieser, Shug Miller, James Shepard, Javier Garcia-Perez, Nick Deas, Desmond U. Patton, Elsbeth Turcan, Kathleen McKeown

The growing accessibility of large language models (LLMs) has raised many questions about the reliability of probabilistically generated natural language responses. While researchers have documented how bias in the training data leads to biased and ethically problematic output, little attention has been paid to the problems that arise from the varieties of language on which these models are trained. In particular, certain kinds of expressive and performative language use are more common among African American social media users than in the naturalistic speech of African Americans, a discrepancy which models may fail to take into account when easily scraped data are treated during training as representative of African American speech. Because LLM training data is generally proprietary, in this work we simulate the training data using a collected dataset of 274 posts from Twitter, Reddit, and Hip-Hop lyrics and analyze how LLMs interpret their meaning. We highlight the difficulties LLMs, including GPT-3 and GPT-4, have in understanding performative African American Language (AAL), examine how camouflaging and performativity are addressed (or not) by LLMs, and demonstrate the harmful implications of misinterpreting online performance.
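The kind of evaluation the abstract describes (asking a model what a post means and comparing its reading against an annotator's gloss) can be sketched roughly as follows. The dataset file, column names, prompt wording, model name, and use of the OpenAI Python client are assumptions for illustration; the authors' actual pipeline and prompts are not given in the abstract.

```python
import csv
from openai import OpenAI   # assumes the openai>=1.0 Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("Explain in one sentence what the author of this social media post "
          "means, in plain language:\n\n{post}")

def interpret(post: str, model: str = "gpt-4o-mini") -> str:
    """Ask a chat model for its reading of a single post (hypothetical setup)."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(post=post)}],
    )
    return resp.choices[0].message.content.strip()

# posts.csv is a stand-in for the collected posts: columns "text" and "gloss",
# where "gloss" is an annotator's paraphrase of the intended meaning.
with open("posts.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for row in rows[:5]:                     # small sample to keep the sketch cheap
    model_reading = interpret(row["text"])
    print("POST:  ", row["text"])
    print("GLOSS: ", row["gloss"])
    print("MODEL: ", model_reading)
    print("-" * 40)
```

Comparing the model's reading with the gloss (manually or with a second annotation pass) is where misreadings of performative or camouflaged language would surface.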

{"title":"Unmasking camouflage: exploring the challenges of large language models in deciphering African American language & online performativity","authors":"Shana Kleiner,&nbsp;Jessica A. Grieser,&nbsp;Shug Miller,&nbsp;James Shepard,&nbsp;Javier Garcia-Perez,&nbsp;Nick Deas,&nbsp;Desmond U. Patton,&nbsp;Elsbeth Turcan,&nbsp;Kathleen McKeown","doi":"10.1007/s43681-024-00623-2","DOIUrl":"10.1007/s43681-024-00623-2","url":null,"abstract":"<div><p>The growing accessibility of large language models (LLMs) has raised many questions about the reliability of probabilistically generated natural language responses. While researchers have documented how bias in the training data leads to biased and ethically problematic output, little attention has been paid to the problems which arise from the nature of the varieties of language on which these models are trained. In particular, certain kinds of expressive and performative language use are more common among African American social media users than they occur in the naturalistic speech of African Americans, a discrepancy which models may fail to take into account when they are training on easily-scraped data as being representative of African American speech. Because LLM training data is generally proprietary, in this work we simulate the training data using a collected dataset consisting of 274 posts from Twitter, Reddit, and Hip-Hop lyrics and analyze how LLMs interpreted their meaning. We highlight the difficulties LLMs, including GPT-3 and GPT-4, have in understanding performative AAL and examine how camouflaging and performativity are addressed (or not) by LLMs and demonstrate the harmful implications of misinterpreting online performance.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"29 - 37"},"PeriodicalIF":0.0,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00623-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143422986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From human-system interaction to human-system co-action and back: ethical assessment of generative AI and mutual theory of mind
Pub Date : 2024-12-04 DOI: 10.1007/s43681-024-00626-z
Florian Richter

Human-machine ethics has emerged as a rapidly growing research field in recent years. However, it seems that Generative Artificial Intelligence (AI) leads to a paradigm shift from human-machine interaction to co-action. The ethical assessment of such relationships is still in the making and needs further scrutiny. First, studies about the influence of technology in human-system interactions and manipulation are reviewed. Second, the “mutual theory of mind” approach is critically examined to identify its shortcomings. Third, the creation of user models is reconstructed to demonstrate the strategies of such systems. Finally, use cases are discussed and assessed to outline ethical implications.

{"title":"From human-system interaction to human-system co-action and back: ethical assessment of generative AI and mutual theory of mind","authors":"Florian Richter","doi":"10.1007/s43681-024-00626-z","DOIUrl":"10.1007/s43681-024-00626-z","url":null,"abstract":"<div><p>Human-machine ethics has emerged as a rapidly growing research field in recent years. However, it seems that Generative Artificial Intelligence (AI) leads to a paradigm shift from human-machine interaction to co-action. The ethical assessment of such relationships is still in the making and needs further scrutiny. First, studies about the influence of technology in human-system interactions and manipulation are reviewed. Second, the “mutual theory of mind” approach is critically examined to identify its shortcomings. Third, creating user models is reconstruced to demonstrate the strategies of systems. Finally, use cases are discussed and assessed to outline ethical implications.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"19 - 28"},"PeriodicalIF":0.0,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00626-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Democratizing value alignment: from authoritarian to democratic AI ethics
Pub Date : 2024-12-02 DOI: 10.1007/s43681-024-00624-1
Linus Ta-Lun Huang, Gleb Papyshev, James K. Wong

Value alignment is essential for ensuring that AI systems act in ways that are consistent with human values. Existing approaches, such as reinforcement learning with human feedback and constitutional AI, however, exhibit power asymmetries and lack transparency. These “authoritarian” approaches fail to adequately accommodate a broad array of human opinions, raising concerns about whose values are being prioritized. In response, we introduce the Dynamic Value Alignment approach, theoretically grounded in the principles of parallel constraint satisfaction, which models moral reasoning as a dynamic process that balances multiple value principles. Our approach also enhances users’ moral and epistemic agency by granting users greater control over the values that influence AI behavior. As a more user-centric, transparent, and participatory framework for AI ethics, our approach not only addresses the democratic deficits inherent in current practices but also ensures that AI systems are flexibly aligned with a diverse array of human values.
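The abstract names parallel constraint satisfaction as the theoretical basis; a minimal sketch of that idea is given below. Value principles become nodes whose activations are updated in parallel under supporting and conflicting links until the network settles. The principle names, weights, damping factor, and update rule are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Minimal parallel-constraint-satisfaction sketch (illustrative only):
# each node is a value principle; links encode support (+) or tension (-).
principles = ["privacy", "transparency", "helpfulness", "harm_avoidance"]

# Symmetric constraint matrix; weights invented for illustration.
W = np.array([
    [ 0.0, -0.4,  0.1,  0.5],   # privacy
    [-0.4,  0.0,  0.3,  0.2],   # transparency
    [ 0.1,  0.3,  0.0, -0.3],   # helpfulness
    [ 0.5,  0.2, -0.3,  0.0],   # harm_avoidance
])

# External input, e.g. how strongly a given user endorses each principle.
user_input = np.array([0.8, 0.2, 0.6, 0.9])

a = np.zeros(len(principles))          # activations start neutral
for _ in range(200):                   # iterate until the network settles
    net = W @ a + user_input           # support from neighbours plus user endorsement
    a_new = np.clip(a + 0.1 * (net - a), -1.0, 1.0)   # damped, bounded update
    if np.max(np.abs(a_new - a)) < 1e-4:
        a = a_new
        break
    a = a_new

# The settled activations indicate how the principles are balanced for this user.
for name, act in zip(principles, a):
    print(f"{name}: {act:+.2f}")
```

Letting the user set the external input vector is one simple way to read the "greater control over the values that influence AI behavior" that the abstract emphasizes; how the settled pattern would then steer an actual system is left open here.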

{"title":"Democratizing value alignment: from authoritarian to democratic AI ethics","authors":"Linus Ta-Lun Huang,&nbsp;Gleb Papyshev,&nbsp;James K. Wong","doi":"10.1007/s43681-024-00624-1","DOIUrl":"10.1007/s43681-024-00624-1","url":null,"abstract":"<div><p>Value alignment is essential for ensuring that AI systems act in ways that are consistent with human values. Existing approaches, such as reinforcement learning with human feedback and constitutional AI, however, exhibit power asymmetries and lack transparency. These “authoritarian” approaches fail to adequately accommodate a broad array of human opinions, raising concerns about whose values are being prioritized. In response, we introduce the Dynamic Value Alignment approach, theoretically grounded in the principles of parallel constraint satisfaction, which models moral reasoning as a dynamic process that balances multiple value principles. Our approach also enhances users’ moral and epistemic agency by granting users greater control over the values that influence AI behavior. As a more user-centric, transparent, and participatory framework for AI ethics, our approach not only addresses the democratic deficits inherent in current practices but also ensures that AI systems are flexibly aligned with a diverse array of human values.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"11 - 18"},"PeriodicalIF":0.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00624-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The prospects for digital democracy
Pub Date : 2024-11-27 DOI: 10.1007/s43681-024-00627-y
Ivan Mladenović

This paper aims to answer a basic question: is it possible to forge democratic citizenship through various online tools that are already available? To answer this question, I introduce the conception of digital political identities, i.e., the ways in which online environments contribute to creating, maintaining, and changing political identities. Because the proper functioning of democracy rests on citizens with the ability to make informed decisions, vote, and engage in public deliberation, this paper looks for new and innovative online tools for participating in meaningful online deliberation, acquiring accurate information in the digital space, and making informed voting decisions. By introducing the conception of digital political identities and linking it to online tools that can improve democracy and citizen engagement, I aim to make further progress in cutting-edge research on the relationship between digital technologies and democracy. In a nutshell, I am mainly concerned with proposing and defending a normative framework for the use of various online tools that could foster digital democracy.

{"title":"The prospects for digital democracy","authors":"Ivan Mladenović","doi":"10.1007/s43681-024-00627-y","DOIUrl":"10.1007/s43681-024-00627-y","url":null,"abstract":"<div><p>This paper aims to answer a basic question: is it possible to forge democratic citizenship through various online tools that are already available? To answer this question, I introduce the conception of <i>digital political identities</i>, i.e., the ways in which online environments contribute to creating, maintaining, and changing political identities. Because the well-functioning of democracy rests on citizens with the ability to make informed decisions, vote, and engage in public deliberation, this paper is looking for new and innovative online tools for participating in meaningful online deliberation, acquiring accurate information in the digital space, and making informed voting decisions. By introducing the conception of digital political identities and linking it to online tools that can improve democracy and citizen engagement, I aim to make further progress in cutting edge research on the relationship between digital technologies and democracy. In a nutshell, I am mainly concerned with proposing and defending a normative framework for the use of various online tools that could foster digital democracy.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"3 - 9"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Opinion piece: on the ethics of a pending AI crisis in business
Pub Date : 2024-08-19 DOI: 10.1007/s43681-024-00551-1
David De Cremer

Because of a fear of missing out, organizations today rush to adopt AI without understanding what the technology stands for or how to deploy it most effectively. Blindly trusting the promises of AI as the ultimate value-creator, business leaders are unclear about their own roles in making AI work for the organization and therefore delegate responsibility for the adoption process entirely to tech experts. In this opinion paper, I argue that this situation breeds fertile ground for a pending AI crisis, as organizations will fail to align AI deployment with organizational purpose and, in doing so, fail to put AI to use in socially responsible and ethical ways. As a result, no real gains are achieved when adopting AI, while threats and potential harm to society and humanity in general are fostered.

{"title":"Opinion piece: on the ethics of a pending AI crisis in business","authors":"David De Cremer","doi":"10.1007/s43681-024-00551-1","DOIUrl":"10.1007/s43681-024-00551-1","url":null,"abstract":"<div><p>Because of a fear of missing out, organizations today rush out to adopt AI while not understanding what the technology stands for and how to deploy it most effectively. Trusting blindly the promises of AI as the ultimate value-creator, business leaders are unclear about their roles in making AI work for the organization and therefore delegate responsibility of the adoption process entirely to tech experts. In this opinion paper, I argue that this situation breeds fertile ground for a pending AI crisis as organizations will fail to align AI deployment with organizational purpose and in doing so fail to put AI to use in socially responsible and ethical ways. As a result, no real gains are achieved when adopting AI while threats and potential harm to society and humanity in general are fostered.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"101 - 104"},"PeriodicalIF":0.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0