Artificial intelligence, inattention and liability rules

IF 0.9 | CAS Tier 3 (Sociology) | JCR Q3 (Economics) | International Review of Law and Economics | Pub Date: 2024-06-28 | DOI: 10.1016/j.irle.2024.106211
Marie Obidzinski , Yves Oytana
Citations: 0

Abstract


We characterize the socially optimal liability sharing rule in a situation where a manufacturer develops an artificial intelligence (AI) system that is then used by a human operator (or user). First, the manufacturer invests to increase the autonomy of the AI (i.e., the set of situations that the AI can handle without human intervention) and sets a selling price. The user then decides whether or not to buy the AI. Since the autonomy of the AI remains limited, the human operator must sometimes intervene even when the AI is in use. Our main assumptions relate to behavioral inattention. Behavioral inattention reduces the effectiveness of user intervention and increases the expected harm. Only some users are aware of their own attentional limits. Under the assumption that AI outperforms users, we show that policymakers may face a trade-off when choosing how to allocate liability between the manufacturer and the user. Indeed, the manufacturer may underinvest in the autonomy of the AI. If this is the case, the policymaker can incentivize the manufacturer to invest more by increasing its share of liability. On the other hand, increasing the liability of the manufacturer may come at the cost of slowing down the diffusion of AI technology.
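The trade-off the abstract describes (a larger manufacturer liability share induces more investment in autonomy, but may slow diffusion) can be illustrated with a toy numeric sketch. Everything below is an assumption invented for illustration, not the paper's model: the harm function, the cost parameters, and the full pass-through of investment and liability costs into the selling price.

```python
# Toy numeric sketch of the trade-off described in the abstract.
# All functional forms and parameter values below are illustrative
# assumptions made for this sketch; none come from the paper itself.

def expected_harm(autonomy):
    """Expected harm falls as the AI handles more situations alone (assumption)."""
    return 1.0 / (1.0 + autonomy)

def optimal_autonomy(mfr_share, unit_cost=0.3):
    """Manufacturer picks autonomy to minimize investment cost plus
    its share of expected harm (coarse grid search over [0, 3])."""
    grid = [i / 100 for i in range(301)]
    return min(grid, key=lambda a: unit_cost * a + mfr_share * expected_harm(a))

def ai_price(mfr_share, unit_cost=0.3, markup=0.2):
    """Assume the manufacturer passes investment and liability costs
    through to the selling price (full pass-through is an assumption)."""
    a = optimal_autonomy(mfr_share, unit_cost)
    return unit_cost * a + mfr_share * expected_harm(a) + markup

# A larger manufacturer liability share induces more investment in autonomy...
assert optimal_autonomy(0.9) > optimal_autonomy(0.1)
# ...but the cost pass-through raises the price, which can slow diffusion.
assert ai_price(0.9) > ai_price(0.1)
```

Under these toy assumptions the manufacturer's interior optimum has a closed form, a* = sqrt(s/c) - 1 for liability share s and unit cost c, which the grid search approximates; the mechanism, not the numbers, is the point.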

Source journal metrics:
CiteScore: 2.60
Self-citation rate: 18.20%
Articles per year: 38
Review time: 48 days
Journal overview: The International Review of Law and Economics provides a forum for interdisciplinary research at the interface of law and economics. IRLE is international in scope and audience and particularly welcomes both theoretical and empirical papers on comparative law and economics, globalization and legal harmonization, and the endogenous emergence of legal institutions, in addition to more traditional legal topics.
Latest articles in this journal:
Estimating the effect of concealed carry laws on murder: A response to Bondy, et al.
The broken-windows theory of crime: A Bayesian approach
Workload, legal doctrine, and judicial review in an authoritarian regime: A study of expropriation judgments in China
Illicit enrichment in Germany: An evaluation of the reformed asset recovery regime's ability to confiscate proceeds of crime
On the strategic choice of overconfident lawyers