An FDA for Algorithms

A. Tutt
DOI: 10.2139/ssrn.2747994
Journal: Information Privacy Law eJournal, published 2016-03-15
Citations: 136

Abstract

The rise of increasingly complex algorithms calls for critical thought about how best to prevent, deter, and compensate for the harms that they cause. This paper argues that the criminal law and tort regulatory systems will prove no match for the difficult regulatory puzzles algorithms pose. Algorithmic regulation will require federal uniformity, expert judgment, political independence, and pre-market review to prevent the introduction of unacceptably dangerous algorithms into the market without stifling innovation. This paper proposes that a new specialist regulatory agency should be created to regulate algorithmic safety: an FDA for algorithms.

Such a federal consumer protection agency should have three powers. First, it should have the power to organize and classify algorithms into regulatory categories by their design, complexity, and potential for harm (in both ordinary use and through misuse). Second, it should have the power to prevent the introduction of algorithms into the market until their safety and efficacy have been proven through evidence-based pre-market trials. Third, the agency should have broad authority to impose disclosure requirements and usage restrictions to prevent algorithms’ harmful misuse.

To explain why a federal agency will be necessary, this paper proceeds in three parts. First, it explains the diversity of algorithms that already exist and that are soon to come. In the future many algorithms will be “trained,” not “designed.” That means that the operation of many algorithms will be opaque and difficult to predict in border cases, and responsibility for their harms will be diffuse and difficult to assign. Moreover, although “designed” algorithms already play important roles in many life-or-death situations (from emergency landings to automated braking systems), increasingly “trained” algorithms will be deployed in these mission-critical applications.

Second, this paper explains why other possible regulatory schemes, such as state tort and criminal law or regulation through subject-matter regulatory agencies, will not be as desirable as the creation of a centralized federal regulatory agency for the administration of algorithms as a category. For consumers, tort and criminal law are unlikely to efficiently counter the harms from algorithms: harms traceable to algorithms may frequently be diffuse and difficult to detect, human responsibility and liability for such harms will be difficult to establish, and narrowly tailored usage restrictions may be difficult to enforce through indirect regulation. For innovators, federal preemption of local and ex-post liability is likely to be desirable.

Third, this paper explains that the concerns driving the regulation of food, drugs, and cosmetics closely resemble the concerns that should drive the regulation of algorithms. For many drugs, the precise mechanisms by which they produce their benefits and harms are not well understood; the same will soon be true of many of the most important (and potentially dangerous) future algorithms. Drawing on lessons from the fitful growth and development of the FDA, the paper proposes that the FDA’s regulatory scheme is an appropriate model from which to design an agency charged with algorithmic regulation.

The paper closes by emphasizing the need to think proactively about the potential dangers algorithms pose. The United States created the FDA and expanded its regulatory reach only after several serious tragedies revealed its necessity. If we fail to anticipate the trajectory of modern algorithmic technology, history may repeat itself.