{"title":"在降低风险和执行劳工权利之间:从比较角度评估跨大西洋治理人工智能驱动决策的竞赛","authors":"Antonio Aloisi, V. De Stefano","doi":"10.1177/20319525231167982","DOIUrl":null,"url":null,"abstract":"In this article, we provide an overview of efforts to regulate the various phases of the artificial intelligence (AI) life cycle. In doing so, we examine whether—and, if so, to what extent—highly fragmented legal frameworks are able to provide safeguards capable of preventing the dangers that stem from AI- and algorithm-driven organisational practices. We critically analyse related developments at the European Union (EU) level, namely the General Data Protection Regulation, the draft AI Regulation, and the proposal for a Directive on improving working conditions in platform work. We also consider bills and regulations proposed or adopted in the United States and Canada via a transatlantic comparative approach, underlining analogies and variations between EU and North American attitudes towards the risk assessment and management of AI systems. We aim to answer the following questions: Is the widely adopted risk-based approach fit for purpose? Is it consistent with the actual enforcement of fundamental rights at work, such as privacy, human dignity, equality and collective rights? To answer these questions, in section 2 we unpack the various, often ambiguous, facets of the notion(s) of ‘risk’—that is, the common denominator with the EU and North American legal instruments. Here, we determine that a scalable, decentralised framework is not appropriate for ensuring the enforcement of constitutional labour-related rights. In addition to presenting the key provisions of existing schemes in the EU and North America, in section 3 we disentangle the consistencies and tensions between the frameworks that regulate AI and constrain how it must be handled in specific contexts, such as work environments and platform-orchestrated arrangements. Paradoxically, the frenzied race to regulate AI-driven decision-making could exacerbate the current legal uncertainty and pave the way for regulatory arbitrage. Such a scenario would slow technological innovation and egregiously undermine labour rights. Thus, in section 4 we advocate for the adoption of a dedicated legal instrument at the supranational level to govern technologies that manage people in workplaces. Given the high stakes involved, we conclude by stressing the salience of a multi-stakeholder AI governance framework.","PeriodicalId":41157,"journal":{"name":"European Labour Law Journal","volume":null,"pages":null},"PeriodicalIF":1.1000,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Between risk mitigation and labour rights enforcement: Assessing the transatlantic race to govern AI-driven decision-making through a comparative lens\",\"authors\":\"Antonio Aloisi, V. De Stefano\",\"doi\":\"10.1177/20319525231167982\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this article, we provide an overview of efforts to regulate the various phases of the artificial intelligence (AI) life cycle. In doing so, we examine whether—and, if so, to what extent—highly fragmented legal frameworks are able to provide safeguards capable of preventing the dangers that stem from AI- and algorithm-driven organisational practices. 
We critically analyse related developments at the European Union (EU) level, namely the General Data Protection Regulation, the draft AI Regulation, and the proposal for a Directive on improving working conditions in platform work. We also consider bills and regulations proposed or adopted in the United States and Canada via a transatlantic comparative approach, underlining analogies and variations between EU and North American attitudes towards the risk assessment and management of AI systems. We aim to answer the following questions: Is the widely adopted risk-based approach fit for purpose? Is it consistent with the actual enforcement of fundamental rights at work, such as privacy, human dignity, equality and collective rights? To answer these questions, in section 2 we unpack the various, often ambiguous, facets of the notion(s) of ‘risk’—that is, the common denominator with the EU and North American legal instruments. Here, we determine that a scalable, decentralised framework is not appropriate for ensuring the enforcement of constitutional labour-related rights. In addition to presenting the key provisions of existing schemes in the EU and North America, in section 3 we disentangle the consistencies and tensions between the frameworks that regulate AI and constrain how it must be handled in specific contexts, such as work environments and platform-orchestrated arrangements. Paradoxically, the frenzied race to regulate AI-driven decision-making could exacerbate the current legal uncertainty and pave the way for regulatory arbitrage. Such a scenario would slow technological innovation and egregiously undermine labour rights. Thus, in section 4 we advocate for the adoption of a dedicated legal instrument at the supranational level to govern technologies that manage people in workplaces. Given the high stakes involved, we conclude by stressing the salience of a multi-stakeholder AI governance framework.\",\"PeriodicalId\":41157,\"journal\":{\"name\":\"European Labour Law Journal\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2023-04-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Labour Law Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/20319525231167982\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Labour Law Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/20319525231167982","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"LAW","Score":null,"Total":0}
Between risk mitigation and labour rights enforcement: Assessing the transatlantic race to govern AI-driven decision-making through a comparative lens
In this article, we provide an overview of efforts to regulate the various phases of the artificial intelligence (AI) life cycle. In doing so, we examine whether—and, if so, to what extent—highly fragmented legal frameworks are able to provide safeguards capable of preventing the dangers that stem from AI- and algorithm-driven organisational practices. We critically analyse related developments at the European Union (EU) level, namely the General Data Protection Regulation, the draft AI Regulation, and the proposal for a Directive on improving working conditions in platform work. We also consider bills and regulations proposed or adopted in the United States and Canada via a transatlantic comparative approach, underlining analogies and variations between EU and North American attitudes towards the risk assessment and management of AI systems. We aim to answer the following questions: Is the widely adopted risk-based approach fit for purpose? Is it consistent with the actual enforcement of fundamental rights at work, such as privacy, human dignity, equality and collective rights? To answer these questions, in section 2 we unpack the various, often ambiguous, facets of the notion(s) of ‘risk’—that is, the common denominator of the EU and North American legal instruments. Here, we determine that a scalable, decentralised framework is not appropriate for ensuring the enforcement of constitutional labour-related rights. In addition to presenting the key provisions of existing schemes in the EU and North America, in section 3 we disentangle the consistencies and tensions between the frameworks that regulate AI and those that constrain how it must be handled in specific contexts, such as work environments and platform-orchestrated arrangements. Paradoxically, the frenzied race to regulate AI-driven decision-making could exacerbate the current legal uncertainty and pave the way for regulatory arbitrage. Such a scenario would slow technological innovation and egregiously undermine labour rights. Thus, in section 4 we advocate for the adoption of a dedicated legal instrument at the supranational level to govern technologies that manage people in workplaces. Given the high stakes involved, we conclude by stressing the salience of a multi-stakeholder AI governance framework.