{"title":"Three pathways for standardisation and ethical disclosure by default under the European Union Artificial Intelligence Act","authors":"Johann Laux , Sandra Wachter , Brent Mittelstadt","doi":"10.1016/j.clsr.2024.105957","DOIUrl":null,"url":null,"abstract":"<div><p>Under its proposed Artificial Intelligence Act (‘AIA’), the European Union seeks to develop harmonised standards involving abstract normative concepts such transparency, fairness, and accountability. Applying such concepts inevitably requires answering hard normative questions. Considering this challenge, we argue that there are three possible pathways for future standardisation under the AIA. First, European standard-setting organisations (‘SSOs’) could answer hard normative questions themselves. This approach would raise concerns about its democratic legitimacy. Standardisation is a technical discourse and tends to exclude non-expert stakeholders and the public at large. Second, instead of passing their own normative judgments, SSOs could track the normative consensus they find available. By analysing the standard-setting history of one major SSO, we show that such consensus tracking has historically been its pathway of choice. If standardisation under the AIA took the same route, we demonstrate how this would lead to a false sense of safety as the process is not infallible. Consensus tracking would furthermore push the need to solve unavoidable normative problems down the line. Instead of regulators, AI developers and/or users could define what, for example, fairness requires. By the institutional design of its AIA, the European Commission would have essentially kicked the ‘AI Ethics’ can down the road. We thus suggest a third pathway which aims to avoid the pitfalls of the previous two: SSOs should create standards which require “ethical disclosure by default.” These standards will specify minimum technical testing, documentation, and public reporting requirements to shift ethical decision-making to local stakeholders and limit provider discretion in answering hard normative questions in the development of AI products and services. Our proposed pathway is about putting the right information in the hands of the people with the legitimacy to make complex normative decisions at a local, context-sensitive level.</p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"53 ","pages":"Article 105957"},"PeriodicalIF":3.3000,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0267364924000244/pdfft?md5=04b22c11bc630a648f5dc35efe33f508&pid=1-s2.0-S0267364924000244-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Law & Security Review","FirstCategoryId":"90","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0267364924000244","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Abstract
Under its proposed Artificial Intelligence Act (‘AIA’), the European Union seeks to develop harmonised standards involving abstract normative concepts such as transparency, fairness, and accountability. Applying such concepts inevitably requires answering hard normative questions. Considering this challenge, we argue that there are three possible pathways for future standardisation under the AIA. First, European standard-setting organisations (‘SSOs’) could answer hard normative questions themselves. This approach would raise concerns about its democratic legitimacy. Standardisation is a technical discourse and tends to exclude non-expert stakeholders and the public at large. Second, instead of passing their own normative judgments, SSOs could track the normative consensus they find available. By analysing the standard-setting history of one major SSO, we show that such consensus tracking has historically been its pathway of choice. If standardisation under the AIA took the same route, we demonstrate how this would lead to a false sense of safety, as the process is not infallible. Consensus tracking would furthermore push the need to solve unavoidable normative problems down the line. Instead of regulators, AI developers and/or users could define what, for example, fairness requires. By the institutional design of its AIA, the European Commission would have essentially kicked the ‘AI Ethics’ can down the road. We thus suggest a third pathway which aims to avoid the pitfalls of the previous two: SSOs should create standards which require “ethical disclosure by default.” These standards will specify minimum technical testing, documentation, and public reporting requirements to shift ethical decision-making to local stakeholders and limit provider discretion in answering hard normative questions in the development of AI products and services. Our proposed pathway is about putting the right information in the hands of the people with the legitimacy to make complex normative decisions at a local, context-sensitive level.
Journal description
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security, and many others. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions in both Europe and the Pacific Rim. It seeks papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development, going beyond mere description of the subject area, however accurate that may be.