Regulating Gatekeeper Artificial Intelligence and Data: Transparency, Access and Fairness under the Digital Markets Act, the General Data Protection Regulation and Beyond

Journal: European Journal of Risk Regulation (IF 1.8, Q1, LAW)
Pub Date: 2023-12-13
DOI: 10.1017/err.2023.81
Philipp Hacker, Johann Cordes, Janina Rochon
{"title":"Regulating Gatekeeper Artificial Intelligence and Data: Transparency, Access and Fairness under the Digital Markets Act, the General Data Protection Regulation and Beyond","authors":"Philipp Hacker, Johann Cordes, Janina Rochon","doi":"10.1017/err.2023.81","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) is not only increasingly being used in business and administration contexts, but a race for its regulation is also underway, with the European Union (EU) spearheading the efforts. Contrary to existing literature, this article suggests that the most far-reaching and effective EU rules for AI applications in the digital economy will not be contained in the proposed AI Act, but in the Digital Markets Act (DMA). We analyse the impact of the DMA and related EU acts on AI models and underlying data across four key areas: disclosure requirements; the regulation of AI training data; access rules; and the regime for fair rankings. We demonstrate that fairness, under the DMA, goes beyond traditionally protected categories of non-discrimination law on which scholarship at the intersection of AI and law has focused on. Rather, we draw on competition law and the FRAND criteria known from intellectual property law to interpret and refine the DMA provisions on fair rankings. Moreover, we show how, based on Court of Justice of the European Union jurisprudence, a coherent interpretation of the concept of non-discrimination in both traditional non-discrimination and competition law may be found. The final section sketches out proposals for a comprehensive framework of transparency, access and fairness under the DMA and beyond.","PeriodicalId":46207,"journal":{"name":"European Journal of Risk Regulation","volume":"1 1","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Risk Regulation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/err.2023.81","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0

Abstract

Artificial intelligence (AI) is not only increasingly being used in business and administration contexts, but a race for its regulation is also underway, with the European Union (EU) spearheading the efforts. Contrary to existing literature, this article suggests that the most far-reaching and effective EU rules for AI applications in the digital economy will not be contained in the proposed AI Act, but in the Digital Markets Act (DMA). We analyse the impact of the DMA and related EU acts on AI models and underlying data across four key areas: disclosure requirements; the regulation of AI training data; access rules; and the regime for fair rankings. We demonstrate that fairness, under the DMA, goes beyond the traditionally protected categories of non-discrimination law on which scholarship at the intersection of AI and law has focused. Rather, we draw on competition law and the FRAND criteria known from intellectual property law to interpret and refine the DMA provisions on fair rankings. Moreover, we show how, based on Court of Justice of the European Union jurisprudence, a coherent interpretation of the concept of non-discrimination in both traditional non-discrimination and competition law may be found. The final section sketches out proposals for a comprehensive framework of transparency, access and fairness under the DMA and beyond.
Source journal metrics:
CiteScore: 6.10
Self-citation rate: 0.00%
Articles published: 34
About the journal: European Journal of Risk Regulation is an interdisciplinary forum bringing together legal practitioners, academics, risk analysts and policymakers in a dialogue on how risks to individuals’ health, safety and the environment are regulated across policy domains globally. The journal’s wide scope encourages exploration of public health, safety and environmental aspects of pharmaceuticals, food and other consumer products alongside a wider interpretation of risk, which includes financial regulation, technology-related risks, natural disasters and terrorism.
Latest articles in this journal:
Management and Enforcement Theories for Compliance with the Rule of Law
A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities
Standards for Including Scientific Evidence in Restrictions on Freedom of Movement: The Case of EU Covid Certificates Scheme
Collaborative Governance Structures for Interoperability in the EU’s new data acts
Dangerous Legacy of Food Contact Materials on the EU Market: Recall of Products Containing PFAS