Modelling and Influencing the AI Bidding War: A Research Agenda

T. A. Han, L. M. Pereira, T. Lenaerts
{"title":"Modelling and Influencing the AI Bidding War: A Research Agenda","authors":"H. Anh, L. Pereira, T. Lenaerts","doi":"10.1145/3306618.3314265","DOIUrl":null,"url":null,"abstract":"A race for technological supremacy in AI could lead to serious negative consequences, especially whenever ethical and safety procedures are underestimated or even ignored, leading potentially to the rejection of AI in general. For all to enjoy the benefits provided by safe, ethical and trustworthy AI systems, it is crucial to incentivise participants with appropriate strategies that ensure mutually beneficial normative behaviour and safety-compliance from all parties involved. Little attention has been given to understanding the dynamics and emergent behaviours arising from this AI bidding war, and moreover, how to influence it to achieve certain desirable outcomes (e.g. AI for public good and participant compliance). To bridge this gap, this paper proposes a research agenda to develop theoretical models that capture key factors of the AI race, revealing which strategic behaviours may emerge and hypothetical scenarios therein. Strategies from incentive and agreement modelling are directly applicable to systematically analyse how different types of incentives (namely, positive vs. negative, peer vs. institutional, and their combinations) influence safety-compliant behaviours over time, and how such behaviours should be configured to ensure desired global outcomes, studying at the same time how these mechanisms influence AI development. This agenda will provide actionable policies, showing how they need to be employed and deployed in order to achieve compliance and thereby avoid disasters as well as loosing confidence and trust in AI in general.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"29","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3306618.3314265","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 29

Abstract

A race for technological supremacy in AI could lead to serious negative consequences, especially whenever ethical and safety procedures are underestimated or even ignored, potentially leading to the rejection of AI in general. For all to enjoy the benefits provided by safe, ethical and trustworthy AI systems, it is crucial to incentivise participants with appropriate strategies that ensure mutually beneficial normative behaviour and safety-compliance from all parties involved. Little attention has been given to understanding the dynamics and emergent behaviours arising from this AI bidding war, and moreover, how to influence it to achieve certain desirable outcomes (e.g. AI for public good and participant compliance). To bridge this gap, this paper proposes a research agenda to develop theoretical models that capture key factors of the AI race, revealing which strategic behaviours may emerge and in which hypothetical scenarios. Strategies from incentive and agreement modelling can be directly applied to systematically analyse how different types of incentives (namely, positive vs. negative, peer vs. institutional, and their combinations) influence safety-compliant behaviours over time, and how such behaviours should be configured to ensure desired global outcomes, studying at the same time how these mechanisms influence AI development. This agenda will provide actionable policies, showing how they need to be employed and deployed in order to achieve compliance and thereby avoid disasters as well as the loss of confidence and trust in AI in general.
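The abstract does not fix a formal model, but the kind of incentive analysis it proposes can be illustrated with a minimal evolutionary-game sketch: two developer strategies, SAFE (follows safety procedures) and UNSAFE (cuts corners for speed), competing under replicator dynamics, with an institutional sanction levied on unsafe play. Everything below — the two-strategy payoff structure and the parameters `b` (prize), `s` (speed advantage), `p` (disaster probability), and `sanction` — is an illustrative assumption for this sketch, not the model proposed in the paper.

```python
# Illustrative sketch only: a two-strategy "AI race" under replicator
# dynamics. The payoff structure and all parameter values are assumptions
# made for this example, not taken from the paper.

import numpy as np

def payoff_matrix(b=4.0, s=2.0, p=0.1, sanction=0.0):
    """2x2 payoff matrix over strategies [SAFE, UNSAFE] (all values assumed).

    b        -- prize for winning a development round
    s        -- speed advantage gained by skipping safety procedures
    p        -- probability that an UNSAFE win causes a disaster, voiding the prize
    sanction -- institutional fine imposed on UNSAFE players
    """
    safe_vs_safe     = b / 2.0                   # equals split the prize
    safe_vs_unsafe   = b / (1.0 + s)             # faster rival wins more often
    unsafe_vs_safe   = (1.0 - p) * b * s / (1.0 + s) - sanction
    unsafe_vs_unsafe = (1.0 - p) * b / 2.0 - sanction
    return np.array([[safe_vs_safe,   safe_vs_unsafe],
                     [unsafe_vs_safe, unsafe_vs_unsafe]])

def safe_fraction(x0, A, steps=500, dt=0.05):
    """Evolve the population share x of SAFE players by the replicator equation."""
    x = x0
    for _ in range(steps):
        pop = np.array([x, 1.0 - x])
        fit = A @ pop                        # expected payoff of SAFE and UNSAFE
        x += dt * x * (fit[0] - pop @ fit)   # dx/dt = x * (f_SAFE - f_avg)
        x = min(max(x, 0.0), 1.0)
    return x

if __name__ == "__main__":
    for sanction in (0.0, 0.5, 1.0):
        x = safe_fraction(0.5, payoff_matrix(sanction=sanction))
        print(f"sanction={sanction:.1f} -> long-run SAFE share ~ {x:.2f}")
```

With these assumed numbers, UNSAFE dominates when the sanction is zero and the population shifts to SAFE once the fine outweighs the speed advantage — the qualitative question the agenda poses: which incentive configurations (positive vs. negative, peer vs. institutional) steer the race toward compliance.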