
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

Does AI Qualify for the Job?: A Bidirectional Model Mapping Labour and AI Intensities
Pub Date: 2020-01-23 DOI: 10.1145/3375627.3375831
Fernando Martínez-Plumed, Songül Tolan, Annarosa Pesole, J. Hernández-Orallo, Enrique Fernández-Macías, Emilia Gómez
In this paper we present a setting for examining the relation between the distribution of research intensity in AI research and the relevance for a range of work tasks (and occupations) in current and simulated scenarios. We perform a mapping between labour and AI using a set of cognitive abilities as an intermediate layer. This setting favours a two-way interpretation to analyse (1) what impact current or simulated AI research activity has or would have on labour-related tasks and occupations, and (2) what areas of AI research activity would be responsible for a desired or undesired effect on specific labour tasks and occupations. Concretely, in our analysis we map 59 generic labour-related tasks from several worker surveys and databases to 14 cognitive abilities from the cognitive science literature, and these to a comprehensive list of 328 AI benchmarks used to evaluate progress in AI techniques. We provide this model and its implementation as a tool for simulations. We also show the effectiveness of our setting with some illustrative examples.
Citations: 13
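The paper's two-layer mapping is essentially linear, which a small sketch can make concrete. Below is a minimal Python illustration, assuming toy matrices and invented task, ability, and benchmark names; the actual model uses 59 tasks, 14 cognitive abilities, and 328 benchmarks, with weights drawn from worker surveys and the AI evaluation literature.

```python
# A minimal sketch of the two-layer labour <-> AI mapping, with toy data.
# All names and weights below are illustrative, not the paper's.
import numpy as np

tasks = ["reading documents", "advising customers", "operating machinery"]
abilities = ["comprehension", "communication", "sensorimotor"]
benchmarks = ["SQuAD", "WMT", "Atari"]

# T[i, j]: how much task i relies on ability j (each row sums to 1).
T = np.array([[0.8, 0.2, 0.0],
              [0.3, 0.7, 0.0],
              [0.1, 0.0, 0.9]])

# B[j, k]: how much of ability j's evaluation effort sits in benchmark k.
B = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.8, 0.0],
              [0.0, 0.0, 1.0]])

# Forward reading: research intensity per benchmark -> AI exposure per task.
research_intensity = np.array([0.6, 0.3, 0.1])  # e.g. share of papers per benchmark
task_exposure = T @ B @ research_intensity
for task, score in zip(tasks, task_exposure):
    print(f"{task}: {score:.2f}")

# Backward reading: which benchmarks matter most for a given task?
i = tasks.index("advising customers")
print(dict(zip(benchmarks, (T[i] @ B).round(2))))
```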
Activism by the AI Community: Analysing Recent Achievements and Future Prospects
Pub Date: 2020-01-17 DOI: 10.1145/3375627.3375814
Haydn Belfield
The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks drawing upon the literature on epistemic communities, and worker organising and bargaining; and explore what they imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture, and high bargaining power due to the high demand for a limited supply of AI 'talent'. Both are crucial to the future of AI activism and worthy of sustained attention.
Citations: 32
Monitoring Misuse for Accountable 'Artificial Intelligence as a Service'
Pub Date: 2020-01-14 DOI: 10.1145/3375627.3375873
S. A. Javadi, Richard Cloete, Jennifer Cobbe, M. S. Lee, Jatinder Singh
AI is increasingly being offered 'as a service' (AIaaS). This entails service providers offering customers access to pre-built AI models and services, for tasks such as object recognition, text translation, text-to-voice conversion, and facial recognition, to name a few. The offerings enable customers to easily integrate a range of powerful AI-driven capabilities into their applications. Customers access these models through the provider's APIs, sending particular data to which models are applied, the results of which are returned. However, there are many situations in which the use of AI can be problematic. AIaaS services typically represent generic functionality, available 'at a click'. Providers may therefore, for reasons of reputation or responsibility, seek to ensure that the AIaaS services they offer are being used by customers for 'appropriate' purposes. This paper introduces and explores the concept whereby AIaaS providers uncover situations of possible service misuse by their customers. Illustrated through topical examples, we consider the technical usage patterns that could signal situations warranting scrutiny, and raise some of the legal and technical challenges of monitoring for misuse. In all, by introducing this concept, we indicate a potential area for further inquiry from a range of perspectives.
Citations: 16
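To make "technical usage patterns that could signal situations warranting scrutiny" concrete, here is a minimal sketch of one such heuristic over a hypothetical API-call log. The log format, endpoint name, and thresholds are all illustrative assumptions, not taken from the paper or any real provider.

```python
# A sketch of a misuse-signal heuristic for an AIaaS provider: flag customers
# whose face-recognition usage looks like bulk identification (high volume
# across many distinct individuals). Field names and thresholds are invented.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ApiCall:
    customer: str
    endpoint: str    # e.g. "face_recognition"
    subject_id: str  # hypothetical: an opaque identifier for the matched person

def flag_possible_misuse(calls, volume_threshold=10_000, distinct_threshold=1_000):
    face_calls = [c for c in calls if c.endpoint == "face_recognition"]
    volumes = Counter(c.customer for c in face_calls)
    flagged = []
    for customer, volume in volumes.items():
        distinct = len({c.subject_id for c in face_calls if c.customer == customer})
        if volume >= volume_threshold and distinct >= distinct_threshold:
            flagged.append((customer, volume, distinct))
    return flagged

calls = [ApiCall("acme", "face_recognition", f"person-{i}") for i in range(12_000)]
print(flag_possible_misuse(calls))  # [('acme', 12000, 12000)]
```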
Social and Governance Implications of Improved Data Efficiency
Pub Date: 2020-01-14 DOI: 10.1145/3375627.3375863
Aaron David Tucker, Markus Anderljung, A. Dafoe
Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the social-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency -- as more actors gain access to any level of capability -- the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the "AI production function", will be key to understanding the development of the AI industry and its societal impacts.
Citations: 12
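A toy model can show why the barriers-to-entry intuition is only partially correct. The sketch below assumes a power-law learning curve, an assumption of this illustration rather than a result from the paper: a tenfold efficiency gain lifts a data-poor entrant toward usable accuracy, yet the data-rich incumbent still ends up ahead.

```python
# Toy illustration (not from the paper): accuracy under an assumed power-law
# learning curve, where data efficiency multiplies the effective dataset size.
def accuracy(n_examples, efficiency=1.0, ceiling=0.99, scale=1e4, exponent=0.5):
    effective = n_examples * efficiency
    return ceiling * (1 - (1 + effective / scale) ** -exponent)

for efficiency in (1.0, 10.0):
    entrant = accuracy(1e3, efficiency)    # data-poor firm
    incumbent = accuracy(1e7, efficiency)  # data-rich firm
    print(f"efficiency x{efficiency:4.0f}: entrant={entrant:.3f}, "
          f"incumbent={incumbent:.3f}, gap={incumbent - entrant:.3f}")
```

Under these assumptions the accuracy gap narrows (roughly 0.91 to 0.69) but does not close, consistent with the paper's finding that data efficiency helps entrants while incumbents still benefit.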
Robot Rights?: Let's Talk about Human Welfare Instead
Pub Date: 2020-01-14 DOI: 10.1145/3375627.3375855
A. Birhane, J. V. Dijk
The 'robot rights' debate, and its related question of 'robot responsibility', invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with human beings, others, in stark opposition, argue that robots are not deserving of rights but are objects that should be our slaves. Grounded in post-Cartesian philosophical foundations, we argue not just to deny robots 'rights', but to deny that robots, as artifacts emerging out of and mediating human being, are the kinds of things that could be granted rights in the first place. Once we see robots as mediators of human being, we can understand how the 'robots rights' debate is focused on first world problems, at the expense of urgent ethical concerns, such as machine bias, machine-elicited human labour exploitation, and erosion of privacy, all impacting society's least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of taking responsibility by people designing, selling and deploying such machines, remain the most pressing ethical discussion in AI.
Citations: 61
Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making
Pub Date: 2020-01-13 DOI: 10.1145/3375627.3375870
Lok Chan, Kenzie Doyle, Duncan C. McElfresh, Vincent Conitzer, John P. Dickerson, Jana Schaich Borg, Walter Sinnott-Armstrong
Given AI's growing role in modeling and improving decision-making, how and when to present users with feedback is an urgent topic to address. We empirically examined the effect of feedback from false AI on moral decision-making about donor kidney allocation. We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback about participants' own decision-making perceived to be given by AI, even if the feedback is entirely random. We also discovered different effects between assessments presented as being from human experts and assessments presented as being from AI.
Citations: 4
Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society
Pub Date: 2020-01-13 DOI: 10.1145/3375627.3375803
Carina E. A. Prunkl, Jess Whittlestone
One way of carving up the broad 'AI ethics and society' research space that has emerged in recent years is to distinguish between 'near-term' and 'long-term' research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed. We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.
Citations: 21
Should Artificial Intelligence Governance be Centralised?: Design Lessons from History
Pub Date: 2020-01-10 DOI: 10.1145/3375627.3375857
P. Cihon, M. Maas, Luke Kemp
Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty in securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking-in an inadequate structure may pose a fate worse than fragmentation. Second, for now fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate.
Citations: 28
Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification
Pub Date: 2020-01-09 DOI: 10.1145/3375627.3375875
Chris Dulhanty, A. Wong
Modern face recognition systems leverage datasets containing images of hundreds of thousands of specific individuals' faces to train deep convolutional neural networks to learn an embedding space that maps an arbitrary individual's face to a vector representation of their identity. The performance of a face recognition system in face verification (1:1) and face identification (1:N) tasks is directly related to the ability of an embedding space to discriminate between identities. Recently, there has been significant public scrutiny into the source and privacy implications of large-scale face recognition training datasets such as MS-Celeb-1M and MegaFace, as many people are uncomfortable with their face being used to train dual-use technologies that can enable mass surveillance. However, the impact of an individual's inclusion in training data on a derived system's ability to recognize them has not previously been studied. In this work, we audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images. We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present. This modest difference in accuracy demonstrates that face recognition systems using deep learning work better for individuals they are trained on, which has serious privacy implications when one considers all major open source face recognition training datasets do not obtain informed consent from individuals during their collection.
Citations: 7
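The 1:N identification task at the centre of the audit reduces to a nearest-neighbour search over face embeddings. The sketch below uses random stand-in vectors in place of real ArcFace outputs and a toy-sized gallery (the audit used more than one million distractors), assuming cosine similarity as the comparison metric.

```python
# A minimal sketch of the Rank-1 face identification (1:N) protocol,
# with random stand-in embeddings rather than real model outputs.
import numpy as np

rng = np.random.default_rng(0)

# Toy gallery of unit-normalised embeddings; the audit's gallery held >1M distractors.
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
gallery_labels = np.arange(len(gallery))

# A probe: a noisy view of identity 42, also unit-normalised.
probe = gallery[42] + 0.1 * rng.normal(size=128)
probe /= np.linalg.norm(probe)

# Rank-1 identification succeeds iff the most similar gallery embedding
# (cosine similarity = dot product of unit vectors) carries the true label.
scores = gallery @ probe
predicted = gallery_labels[np.argmax(scores)]
print("rank-1 hit:", predicted == 42)
```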
Algorithmic Fairness from a Non-ideal Perspective
Pub Date: 2020-01-08 DOI: 10.1145/3375627.3375828
S. Fazelpour, Zachary Chase Lipton
Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In the hopes of mitigating these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might hope to observe in a fair world, offering a variety of algorithms that attempt to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and ideal worlds. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of their actions, naive applications of ideal thinking can lead to misguided policies. In this paper, we demonstrate a connection between the recent literature on fair machine learning and the ideal approach in political philosophy, and show that some recently uncovered shortcomings in proposed algorithms reflect broader troubles faced by the ideal approach. We work this analysis through for different formulations of fairness and conclude with a critical discussion of real-world impacts and directions for new research.
Citations: 68
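As one concrete instance of the "statistical parities" the paper refers to, the sketch below computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups, on illustrative data; many proposed fairness metrics are variations on comparisons of this kind.

```python
# Demographic parity difference: |P(yhat=1 | group=0) - P(yhat=1 | group=1)|.
# A value of 0 means the classifier's positive rate is identical across groups.
import numpy as np

def demographic_parity_difference(y_pred, group):
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]  # illustrative classifier decisions
group  = [0, 0, 0, 0, 1, 1, 1, 1]  # illustrative group membership
print(demographic_parity_difference(y_pred, group))  # 0.5: far from parity
```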