Latest Publications in Ethics and Information Technology

Should we embrace “Big Sister”? Smart speakers as a means to combat intimate partner violence
CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-11-04 · DOI: 10.1007/s10676-023-09727-5
Robert Sparrow, Mark Andrejevic, Bridget Harris
Abstract: It is estimated that one in three women experience intimate partner violence (IPV) across the course of their life. The popular uptake of “smart speakers” powered by sophisticated AI means that surveillance of the domestic environment is increasingly possible. Correspondingly, there are various proposals to use smart speakers to detect or report IPV. In this paper, we clarify what might be possible when it comes to combatting IPV using existing or near-term technology and begin the task of evaluating such proposals both ethically and politically. We argue that the ethical landscape looks different depending on whether one is considering the decision to develop the technology or the decision to use it once it has been developed. If activists and governments wish to avoid the privatisation of responses to IPV, ubiquitous surveillance of domestic spaces, increasing the risk posed to members of minority communities by police responses to IPV, and the danger that more powerful smart speakers will be co-opted by men to control and abuse women, then they should resist the development of this technology rather than wait until these systems are developed. If it is judged that the moral urgency of IPV justifies exploring what might be possible by developing this technology, even in the face of these risks, then it will be imperative that victim-survivors from a range of demographics, as well as government and non-government stakeholders, are engaged in shaping this technology and the legislation and policies needed to regulate it.
Citations: 0
Generative AI models should include detection mechanisms as a condition for public release
CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-10-28 · DOI: 10.1007/s10676-023-09728-4
Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell, Yoshua Bengio
Abstract: The new wave of ‘foundation models’—general-purpose generative AI models, for production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool’s design, and summarize a number of points where further input from policymakers and researchers would be required.
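The query tool the abstract describes can be pictured as a provider-side registry plus a public membership check. The sketch below is purely illustrative and is not the authors' proposal: the class name, the exact-match hashing scheme, and the example strings are all assumptions made here for concreteness.

```python
import hashlib

class GenerationRegistry:
    """Hypothetical provider-side registry: records a fingerprint of every
    item the model generates, and answers public provenance queries."""

    def __init__(self):
        self._fingerprints = set()

    @staticmethod
    def _fingerprint(content: str) -> str:
        # Exact-match hashing for illustration only; a deployable system
        # would need robust fingerprints or watermark detection so that
        # lightly edited content is still recognized.
        return hashlib.sha256(content.encode("utf-8")).hexdigest()

    def record(self, content: str) -> None:
        """Called by the provider at generation time."""
        self._fingerprints.add(self._fingerprint(content))

    def was_generated(self, content: str) -> bool:
        """Public query: was this item produced by the model?"""
        return self._fingerprint(content) in self._fingerprints

registry = GenerationRegistry()
registry.record("The quick brown fox jumps over the lazy dog.")
print(registry.was_generated("The quick brown fox jumps over the lazy dog."))  # True
print(registry.was_generated("An unrelated sentence."))  # False
```

The exact-match weakness flagged in the comment is one of the design points the paper leaves open for policymakers and researchers.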
Citations: 0
The landscape of data and AI documentation approaches in the European policy context
CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-10-28 · DOI: 10.1007/s10676-023-09725-7
Marina Micheli, Isabelle Hupont, Blagoj Delipetrev, Josep Soler-Garrido
Abstract: Nowadays, Artificial Intelligence (AI) is present in all sectors of the economy. Consequently, both data (the raw material used to build AI systems) and AI have an unprecedented impact on society, and there is a need to ensure that they work for its benefit. For this reason, the European Union has put data and trustworthy AI at the center of recent legislative initiatives. An important element in these regulations is transparency, understood as the provision of information to relevant stakeholders to support their understanding of AI systems and data throughout their lifecycle. In recent years, an increasing number of approaches for documenting AI and datasets have emerged, both within academia and the private sector. In this work, we identify the 36 most relevant ones from more than 2200 papers related to trustworthy AI. We assess their relevance from the angle of European regulatory objectives, their coverage of AI technologies and economic sectors, and their suitability to address the specific needs of multiple stakeholders. Finally, we discuss the main documentation gaps found, including the need to better address data innovation practices (e.g. data sharing, data reuse) and large-scale algorithmic systems (e.g. those used in online platforms), and to widen the focus from algorithms and data to AI systems as a whole.
Citations: 0
Person, thing, Robot: a moral and legal ontology for the 21st century and beyond, by David Gunkel
CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-10-24 · DOI: 10.1007/s10676-023-09731-9
Abootaleb Safdari
Citations: 1
Smart cities as a testbed for experimenting with humans? Applying psychological ethical guidelines to smart city interventions
CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-10-24 · DOI: 10.1007/s10676-023-09729-3
Verena Zimmermann
Abstract: Smart Cities consist of a multitude of interconnected devices and services to, among others, enhance efficiency, comfort, and safety. To achieve these aims, smart cities rely on an interplay of measures including the deployment of interventions targeted to foster certain human behaviors, such as saving energy, or collecting and exchanging sensor and user data. Both aspects have ethical implications, e.g., when it comes to intervention design or the handling of privacy-related data such as personal information, user preferences or geolocations. Resulting concerns must be taken seriously, as they reduce user acceptance and can even lead to the abolition of otherwise promising Smart City projects. Established guidelines for ethical research and practice from the psychological sciences provide a useful framework for the kinds of ethical issues raised when designing human-centered interventions or dealing with user-generated data. This article thus reviews relevant psychological guidelines and discusses their applicability to the Smart City context. A special focus is on the guidelines’ implications and resulting challenges for certain Smart City applications. Additionally, potential gaps in current guidelines and the limits of applicability are reflected upon.
Citations: 0
Violent video games: content, attitudes, and norms
CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-10-16 · DOI: 10.1007/s10676-023-09726-6
Alexander Andersson, Per-Erik Milam
Abstract: Violent video games (VVGs) are a source of serious and continuing controversy. They are not unique in this respect, though. Other entertainment products have been criticized on moral grounds, from pornography to heavy metal, horror films, and Harry Potter books. Some of these controversies have fizzled out over time and have come to be viewed as cases of moral panic. Others, including moral objections to VVGs, have persisted. The aim of this paper is to determine which, if any, of the concerns raised about VVGs are legitimate. We argue that common moral objections to VVGs are unsuccessful, but that a plausible critique can be developed that captures the insights of these objections while avoiding their pitfalls. Our view suggests that the moral badness of a game depends on how well its internal logic expresses or encourages the players’ objectionable attitudes. This allows us to recognize that some games are morally worse than others—and that it can be morally wrong to design and play some VVGs—but that the moral badness of these games is not necessarily dependent on how violent they are.
Citations: 0
The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition
CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-10-06 · DOI: 10.1007/s10676-023-09724-8
Ludovico Giacomo Conti, Peter Seele
Abstract: The recent proliferation of AI scandals led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation—criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have proposed fixes that do not consider its systemic character and are based on a top-down, expert-centric governance. To fill this gap, we propose to make use of qualified informed lotteries: a two-step model that transposes the documented benefits of the ancient practice of sortition into the selection of AI ethics boards’ members and combines them with the advantages of a stakeholder-driven, participative, and deliberative bottom-up process typical of Citizens’ Assemblies. The model permits increasing the public’s legitimacy and participation in the decision-making process and its deliverables, curbing the industry’s over-influence and lobbying, and diminishing the instrumentalisation of ethics boards. We suggest that this sortition-based approach may provide a sound base for both public and private organisations in smart societies for constructing a decentralised, bottom-up, participative digital democracy.
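The selection step of a sortition-based board can be illustrated as a stratified random draw. The sketch below is an invented toy, not the authors' model: the function name, the stakeholder groups, and the seat quotas are all assumptions made here, and the paper's second "informed" step (briefings and deliberation) is a human process that no code captures.

```python
import random

def sortition(pool, seats_per_group, seed=None):
    """Draw board members at random, stratified by stakeholder group,
    so the board mirrors the intended mix of stakeholders."""
    rng = random.Random(seed)  # seeded for reproducible, auditable draws
    board = []
    for group, seats in seats_per_group.items():
        candidates = [person for person, g in pool if g == group]
        board.extend(rng.sample(candidates, seats))
    return board

# Toy candidate pool of (name, stakeholder group) pairs.
pool = [
    ("Ada", "civil society"), ("Ben", "civil society"), ("Cleo", "civil society"),
    ("Dev", "industry"), ("Eve", "industry"),
    ("Fay", "academia"), ("Gus", "academia"), ("Hana", "academia"),
]
board = sortition(pool, {"civil society": 2, "industry": 1, "academia": 2}, seed=7)
print(board)
```

The quota dictionary is where the "qualified" part of the model would bite: whoever sets it decides which stakeholder groups are guaranteed seats, which is itself a governance question the paper assigns to a bottom-up process.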
Citations: 0
Empathy training through virtual reality: moral enhancement with the freedom to fall?
CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-09-26 · DOI: 10.1007/s10676-023-09723-9
Anda Zahiu, Emilian Mihailov, Brian D. Earp, Kathryn B. Francis, Julian Savulescu
Citations: 0
Melting contestation: insurance fairness and machine learning
CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-09-20 · DOI: 10.1007/s10676-023-09720-y
Laurence Barry, Arthur Charpentier
Citations: 0
Cognitive warfare: an ethical analysis
IF 3.6 · CAS Tier 2, Philosophy · Q1 (ETHICS) · Pub Date: 2023-09-01 · DOI: 10.1007/s10676-023-09717-7
Seumas Miller
Citations: 0