Pub Date: 2024-11-12 | DOI: 10.1016/j.clsr.2024.106079
Shuai Guo, Xiang Li
This article examines China's latest developments in the governance of cross-border data flow. Under the general framework of the Cyber Security Law, the Data Security Law, and the Personal Information Protection Law, China has established its own regime for cross-border data flow. In recent years, contrary to the general international perception that China imposes strict restrictions, especially due to national security concerns, China has de facto been relaxing its regulation of cross-border data flow, especially for digital trade. This article suggests three underlying incentives. First, China has a growing need to generate economic growth through international trade and investment. Second, China intends to compete in technology development and take the lead in shaping international rules on data governance. Third, China seeks to adhere to international standards, particularly those prescribed in international free trade agreements. This article further submits that this paradigm shift has international implications. First, China's practices need to be examined under the domestic regulatory frameworks of international free trade agreements. Second, China's current legislative and judicial practices are multifaceted, taking into account various factors, including international business, national security, and data protection, which may contribute to the further development of international rules on cross-border data flow.
Title: "Cross-border data flow in China: Shifting from restriction to relaxation?" (Computer Law & Security Review, vol. 56, Article 106079)
Pub Date: 2024-11-09 | DOI: 10.1016/j.clsr.2024.106069
Leon Y. Xiao
Loot boxes are gambling-like products inside video games that can be bought with real-world money to obtain random rewards. They are widely available to children, and stakeholders are concerned about potential harms, e.g., overspending. UK advertising must disclose, if relevant, that a game contains (i) any in-game purchases and (ii) loot boxes specifically. An empirical examination of relevant adverts on Meta-owned platforms (i.e., Facebook, Instagram, and Messenger) and TikTok revealed that only about 7 % disclosed loot box presence. The vast majority of social media advertising (93 %) was therefore non-compliant with UK advertising regulations and also EU consumer protection law. In the UK alone, the 93 most viewed TikTok adverts failing to disclose loot box presence were watched 292,641,000 times total or approximately 11 impressions per active user. Many people have therefore been repeatedly exposed to prohibited and socially irresponsible advertising that failed to provide important and mandated information. Implementation deficiencies with ad repositories, which must comply with transparency obligations imposed by the EU Digital Services Act, are also highlighted, e.g., not disclosing the beneficiary. How data access empowered by law can and should be used by researchers is practically demonstrated. Policymakers should consider enabling more such opportunities for the public benefit.
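The headline figures above can be sanity-checked with simple arithmetic. The sketch below assumes (an assumption not stated in the abstract) that the "approximately 11 impressions per active user" figure is an average over the same UK user base that produced the total view count:

```python
# Back-of-the-envelope check of the quoted figures (illustrative only).
total_views = 292_641_000        # total views of the 93 most viewed non-compliant adverts
impressions_per_user = 11        # approximate impressions per active user (quoted)

# Implied size of the active user base, under the averaging assumption above.
implied_active_users = total_views / impressions_per_user  # roughly 26.6 million users
```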
Title: "Illegal loot box advertising on social media? An empirical study using the Meta and TikTok ad transparency repositories" (Computer Law & Security Review, vol. 56, Article 106069)
Pub Date: 2024-11-09 | DOI: 10.1016/j.clsr.2024.106071
Lee A. Bygrave
Using a series of metaphors, this opinion piece charts patterns, trends and obstacles shaping the development of EU cybersecurity law over the last three decades. It shows that this development is more than simply a function of the EU's increasing regulatory capacity. It argues that, to a large degree, the development has been a reactive, gap-filling process, which is partly due to the piecemeal character of the regulatory areas in which the EU legislates, combined with smouldering ‘turf wars’ over regulatory competence. An overarching point is that EU cybersecurity law is far from reminiscent of a well-kempt forest; rather, it resembles a sprawling jungle of regulatory instruments interacting in complex, confusing and sometimes disjointed ways. Thus, this field of regulation underlines the fact that increased regulatory capacity does not necessarily beget optimal regulatory coherence. Nonetheless, the paper also identifies multiple positive traits in the legislative development—traits that signal Brussels’ ability to learn from weaknesses with previous regulatory instruments.
Title: "The emergence of EU cybersecurity law: A tale of lemons, angst, turf, surf and grey boxes" (Computer Law & Security Review, vol. 56, Article 106071)
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106068
Ahmed Ragib Chowdhury
Lawrence Lessig, in "Code: Version 2.0", presents "code" as the new law and regulator of cyberspace. Previously, techno-authoritarianism denoted state-sponsored authoritarian use of the internet and digital technologies. It has now experienced a takeover by private entities such as social media platforms, which exercise extensive control over the platforms and how users interact with them. Code, akin to the law of cyberspace, emboldens social media platforms to administer it according to their own agenda; the terms of use of such platforms are one example. The terms of use, which are also clickwrap agreements, are imposed unilaterally on users without scope for negotiation, essentially amounting to unconscionable contracts of adhesion. This paper focuses on one specific aspect of the impact of the terms of use: user-generated content on social media platforms and users' copyright-related rights. It doctrinally assesses the impact that the terms of use of social media platforms have on user-generated content from a copyright law perspective, and considers whether the terms amount to unconscionable contracts of adhesion. The paper revisits, or reimagines, this problem surrounding the copyrightability of user-generated content and social media platform terms of use through the lens of techno-authoritarianism and the influence of code.
Title: "Techno-authoritarianism & copyright issues of user-generated content on social media" (Computer Law & Security Review, vol. 55, Article 106068)
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106075
Ekaterina Martynova, Andrey Shcherbovich
The paper outlines core aspects of the digital transformation process in Russia since the early 2000s, as well as recent legislative initiatives and practices at the federal level. It considers the digitalization of public services, efforts towards 'sovereignization' of the Russian segment of the Internet, and the current focus on cybersecurity and the development of artificial intelligence. The paper highlights a tendency to give greater weight to the protection of state interests and national security, alongside control over citizens' online activities, in contrast with the initial understanding of digital transformation as a human-oriented process aimed at increasing the accessibility and convenience of public services. This change in the goals and methods of digital transformation can be seen as one manifestation of a broader political, social and cultural process of separation, primarily from the West, that Russian society is currently undergoing, amidst a growing official narrative of threats from both external and internal forces that require greater independence and increased vigilance, including in the digital domain.
Title: "Digital transformation in Russia: Turning from a service model to ensuring technological sovereignty" (Computer Law & Security Review, vol. 55, Article 106075)
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106064
Luca Belli, Larissa Galdino de Magalhães Santos
Title: "Editorial: Toward a BRICS stack? Leveraging digital transformation to construct digital sovereignty in the BRICS countries" (Computer Law & Security Review, vol. 55, Article 106064)
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106062
Nick Pantlin
This article tracks developments at the national level in key European countries in the area of IT and communications, providing a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. Part of its purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening "on the ground" at a national level in implementing EU-level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106073
Chuyue Zhang, Yuchen Meng
The integration of artificial intelligence into the field of law has penetrated the underlying logic of legal operations. Currently, legal AI systems face difficulties in representing legal knowledge, exhibit insufficient legal reasoning capabilities, have poor explainability, and are inefficient in handling causal inference and uncertainty. In legal practice, various legal reasoning methods (deductive reasoning, inductive reasoning, abductive reasoning, etc.) are often intertwined and used in combination. However, the reasoning modes employed by current legal AI systems are inadequate. Identifying AI models that are better suited to legal reasoning is crucial for advancing the development of legal AI systems.
In contrast to the currently high-profile large language models, we believe that Bayesian reasoning is highly compatible with legal reasoning, as it can perform abductive reasoning, excels at causal inference, and admits the "defeasibility" of reasoning conclusions, which is consistent with the cognitive development of legal professionals from the a priori to the a posteriori. AI models based on Bayesian methods can also become the main technological support for legal AI systems. Bayesian neural networks have advantages in uncertainty modelling, avoiding overfitting, and explainability. Legal AI systems based on Bayesian deep learning frameworks can combine the advantages of deep learning and probabilistic graphical models, facilitating the exchange and supplementation of information between perception tasks and reasoning tasks. In this paper, we take perpetrator prediction systems and legal judgment prediction systems as examples to discuss the construction and basic operation modes of the Bayesian deep learning framework. Bayesian deep learning can enhance reasoning ability, improve the explainability of models, and make the reasoning process more transparent and visualizable. Furthermore, the Bayesian deep learning framework is well suited to human-machine collaborative tasks, enabling the complementary strengths of humans and machines.
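The defeasible, evidence-driven updating that the abstract attributes to Bayesian reasoning can be illustrated with a minimal Bayes'-rule sketch. This is not the paper's framework: the function and all probabilities below are invented for illustration.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / P(E), with P(E) by total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

prior = 0.30                                   # initial belief in hypothesis H
# Supporting evidence E (P(E|H)=0.9, P(E|not H)=0.2) raises belief in H.
p_after_evidence = posterior(prior, 0.9, 0.2)
# Rebutting evidence R (P(R|H)=0.1, P(R|not H)=0.6) later revises the
# conclusion downward -- the "defeasibility" of Bayesian inference.
p_after_rebuttal = posterior(p_after_evidence, 0.1, 0.6)
```

The same two-step update is what distinguishes this style of reasoning from a one-shot classification: conclusions remain revisable as new evidence arrives.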
Title: "Bayesian deep learning: An enhanced AI framework for legal reasoning alignment" (Computer Law & Security Review, vol. 55, Article 106073)
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106066
Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato, Luciano Floridi
The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and shortcomings in the EU legislative framework and proposes recommendations to ensure the safe and compliant deployment of generative models.
Title: "Generative AI in EU law: Liability, privacy, intellectual property, and cybersecurity" (Computer Law & Security Review, vol. 55, Article 106066)
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106070
Trang Anh MAC
In 2020, four social workers were heavily criticised for alleged wilful and gross negligence: back in 2013 they failed to notice and report the risks to an eight-year-old boy's life from violent abuse by his mother and her boyfriend, which ultimately led to his death. The 2020 documentary, The Trials of Gabriel Fernandez, discussed the Allegheny Family Screening Tool (AFST), implemented by Allegheny County, US since 2016 to predict involvement with the social services system. Rhema Vaithianathan, co-director of the Centre for Social Data Analytics, and members of the Children's Data Network, together with Emily Putnam-Hornstein, built the screening tool, which integrates and analyses enormous amounts of data, housed in the DHS Data Warehouse, about persons allegedly connected to injustice against children. They considered that it might be a solution to the failure of overwhelmed manual administrative systems. However, like other applications of AI in the public sector, algorithmic decision-making and support systems are also denounced for data and algorithmic bias. This topic has been debated for several years but has not yet been settled. This research is therefore a survey of the problem: bias and discrimination in AI-based administrative decision-making and support systems.
First, I define bias and discrimination and the blurred boundary between the two concepts from a legal perspective, then examine the causes of bias at each stage of AI system development, mainly the results of biased data sources and past human decisions, social and political contexts, and developers' ethics. In the same chapter, I present the non-discrimination legal framework, including its application and convergence with administrative law as regards automated decision-making and support systems, as well as the role of ethics and of regulations on personal data protection. In the next chapter, I outline new proposals for potential solutions from both legal and technical perspectives. For the former, my focus is fairness definitions and other options currently available to developers, for example toolkits, benchmark datasets, debiased data, etc. For the latter, I report strategies and new proposals governing datasets and AI system development and implementation in the near future.
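As a hypothetical illustration of the kind of technical option the abstract mentions (fairness definitions and developer toolkits), one of the simplest checks is demographic parity: comparing positive-decision rates across groups. The function, group labels, and data below are invented for this sketch and are not from the paper.

```python
# Minimal demographic-parity check for an automated decision system.
def demographic_parity_diff(decisions, groups, a="A", b="B"):
    """Difference in positive-decision rates between groups a and b.

    decisions: iterable of 0/1 outcomes (1 = positive/flagged decision)
    groups:    iterable of group labels, aligned with decisions
    """
    def rate(g):
        pairs = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(pairs) / len(pairs)
    return rate(a) - rate(b)

# Toy data: group A is flagged far more often than group B.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_diff(decisions, groups)  # 0.75 - 0.25 = 0.5
```

A gap near zero is what "demographic parity" demands; real toolkits compute this alongside other, often mutually incompatible, fairness metrics.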
Title: "Bias and discrimination in ML-based systems of administrative decision-making and support" (Computer Law & Security Review, vol. 55, Article 106070)