The Complex Economics of Artificial Intelligence

J. Mateos-Garcia
{"title":"The Complex Economics of Artificial Intelligence","authors":"J. Mateos-Garcia","doi":"10.2139/ssrn.3294552","DOIUrl":null,"url":null,"abstract":"Artificial Intelligence (AI) systems help organisations manage complexity: they reduce the cost of predictions and hold the promise of more, better and faster decisions that enhance productivity and innovation. However, their deployment increases complexity at all levels of the economy, and with it, the risk of undesirable outcomes. Organisationally, uncertainty about how to adopt fallible AI systems could create AI divides between sectors and organisations. Transactionally, pervasive information asymmetries in AI markets could lead to unsafe, abusive and mediocre applications. Societally, individuals might opt for extreme levels of AI deployment in other sectors in exchange for lower prices and more convenience, creating disruption and inequality. Temporally, scientific, technological and market inertias could lock society into AI trajectories that are found to be inferior to alternative paths. New Sciences (and Policies) of the Artificial are needed to understand and manage the new economic complexities that AI brings, acknowledging that AI technologies are not neutral and can be steered in societally beneficial directions guided by the principles of experimentation and evidence to discover where and how to apply AI, transparency and compliance to remove information asymmetries and increase safety in AI markets, social solidarity to share the benefits and costs of AI deployment, and diversity in the AI trajectories that are explored and pursued and the perspectives that guide this process. This will involve an explicit elucidation of human and social goals and values, a mirror of the Turing test where different societies learn about themselves through their responses to the opportunities and challenges that powerful AI technologies pose.","PeriodicalId":448105,"journal":{"name":"ERN: Productivity (Topic)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ERN: Productivity (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3294552","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 14

Abstract

Artificial Intelligence (AI) systems help organisations manage complexity: they reduce the cost of predictions and hold the promise of more, better and faster decisions that enhance productivity and innovation. However, their deployment increases complexity at all levels of the economy, and with it, the risk of undesirable outcomes. Organisationally, uncertainty about how to adopt fallible AI systems could create AI divides between sectors and organisations. Transactionally, pervasive information asymmetries in AI markets could lead to unsafe, abusive and mediocre applications. Societally, individuals might opt for extreme levels of AI deployment in other sectors in exchange for lower prices and more convenience, creating disruption and inequality. Temporally, scientific, technological and market inertias could lock society into AI trajectories that are found to be inferior to alternative paths. New Sciences (and Policies) of the Artificial are needed to understand and manage the new economic complexities that AI brings, acknowledging that AI technologies are not neutral and can be steered in societally beneficial directions guided by the principles of experimentation and evidence to discover where and how to apply AI, transparency and compliance to remove information asymmetries and increase safety in AI markets, social solidarity to share the benefits and costs of AI deployment, and diversity in the AI trajectories that are explored and pursued and the perspectives that guide this process. This will involve an explicit elucidation of human and social goals and values, a mirror of the Turing test where different societies learn about themselves through their responses to the opportunities and challenges that powerful AI technologies pose.