Digital transformation in Russia: Turning from a service model to ensuring technological sovereignty
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106075
Ekaterina Martynova, Andrey Shcherbovich
The paper outlines core aspects of the digital transformation process in Russia since the early 2000s, as well as recent legislative initiatives and practices at the federal level. It considers the digitalization of public services, efforts towards ‘sovereignization’ of the Russian segment of the Internet, and the current focus on cybersecurity and the development of artificial intelligence. The paper highlights a tendency to give greater weight to the protection of state interests and national security, together with control over citizens' online activities, in contrast to the initial understanding of digital transformation as a human-oriented process aimed at increasing the accessibility and convenience of public services. This change in the goals and methods of digital transformation can be read as one manifestation of a broader political, social and cultural process of separation, primarily from the West, that Russian society is currently undergoing, amidst a growing official narrative of threats from both external and internal forces that require greater independence and increased vigilance, including in the digital domain.
{"title":"Digital transformation in Russia: Turning from a service model to ensuring technological sovereignty","authors":"Ekaterina Martynova , Andrey Shcherbovich","doi":"10.1016/j.clsr.2024.106075","DOIUrl":"10.1016/j.clsr.2024.106075","url":null,"abstract":"<div><div>The paper outlines core aspects of the digital transformation process in Russia since the early 2000s, as well as recent legislative initiatives and practices at the federal level. It considers the digitalization of public services, efforts towards ‘sovereignization’ of the Russian segment of the Internet, and the current focus on cybersecurity and the development of artificial intelligence. The paper highlights the tendency to strengthen the factor of protection of state interests and national security alongside control over online activities of citizens in comparison with the initial understanding of digital transformation as a human-oriented process aimed at increasing the accessibility and convenience of public services. It can be assumed that this change in the goals and methods of digital transformation is one of the manifestations of a broader political, social and cultural process of separation, primarily from the West, that Russian society is currently undergoing, amidst a growing official narrative of threats from both external and internal forces that require greater independence and increased vigilance, including in the digital domain.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106075"},"PeriodicalIF":3.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Toward a BRICS stack? Leveraging digital transformation to construct digital sovereignty in the BRICS countries
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106064
Luca Belli, Larissa Galdino de Magalhães Santos
{"title":"Editorial: Toward a BRICS stack? Leveraging digital transformation to construct digital sovereignty in the BRICS countries","authors":"Luca Belli, Larissa Galdino de Magalhães Santos","doi":"10.1016/j.clsr.2024.106064","DOIUrl":"10.1016/j.clsr.2024.106064","url":null,"abstract":"","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106064"},"PeriodicalIF":3.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106062
Nick Pantlin
This article tracks developments at the national level in key European countries in the area of IT and communications and provides a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. Part of the column's purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening “on the ground” at a national level in implementing EU level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.
Bayesian deep learning: An enhanced AI framework for legal reasoning alignment
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106073
Chuyue Zhang, Yuchen Meng
The integration of artificial intelligence into the field of law has penetrated the underlying logic of legal operations. Currently, legal AI systems face difficulties in representing legal knowledge, exhibit insufficient legal reasoning capabilities, have poor explainability, and are inefficient in handling causal inference and uncertainty. In legal practice, various legal reasoning methods (deductive reasoning, inductive reasoning, abductive reasoning, etc.) are often intertwined and used comprehensively. However, the reasoning modes employed by current legal AI systems are inadequate. Identifying AI models that are more suitable for legal reasoning is crucial for advancing the development of legal AI systems.
In contrast to the current high-profile large language models, we believe that Bayesian reasoning is highly compatible with legal reasoning, as it can perform abductive reasoning, excels at causal inference, and admits the "defeasibility" of reasoning conclusions, which is consistent with the cognitive development pattern of legal professionals from the a priori to the a posteriori. AI models based on Bayesian methods can also become the main technological support for legal AI systems. Bayesian neural networks have advantages in uncertainty modeling, avoiding overfitting, and explainability. Legal AI systems based on Bayesian deep learning frameworks can combine the advantages of deep learning and probabilistic graphical models, facilitating the exchange and supplementation of information between perception tasks and reasoning tasks. In this paper, we take perpetrator prediction systems and legal judgment prediction systems as examples to discuss the construction and basic operation modes of the Bayesian deep learning framework. Bayesian deep learning can enhance reasoning ability, improve the explainability of models, and make the reasoning process more transparent and visualizable. Furthermore, the Bayesian deep learning framework is well-suited for human-machine collaborative tasks, enabling the complementary strengths of humans and machines.
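To make the claimed affinity concrete, the following minimal Python sketch (illustrative only, not drawn from the paper; all hypothesis names, priors and likelihoods are invented) shows Bayesian updating over two competing legal hypotheses. It demonstrates the "defeasibility" the abstract describes: a conclusion favoured after one piece of evidence is revised when later evidence cuts the other way.

# Illustrative sketch only - priors and likelihoods are fabricated, not taken
# from the paper. One Bayes step per piece of evidence:
# posterior ∝ prior × likelihood, renormalised over the hypotheses.
likelihoods = {
    "fingerprint_match": {"liable": 0.70, "not_liable": 0.10},
    "verified_alibi":    {"liable": 0.05, "not_liable": 0.60},
}

def update(beliefs, evidence):
    unnorm = {h: p * likelihoods[evidence][h] for h, p in beliefs.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

beliefs = {"liable": 0.5, "not_liable": 0.5}  # the a priori starting point
for ev in ("fingerprint_match", "verified_alibi"):
    beliefs = update(beliefs, ev)
    print(ev, {h: round(p, 3) for h, p in beliefs.items()})
# fingerprint_match pushes "liable" to ~0.875; verified_alibi then defeats
# that tentative conclusion, dropping it to ~0.368: the posterior is revisable.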
{"title":"Bayesian deep learning: An enhanced AI framework for legal reasoning alignment","authors":"Chuyue Zhang, Yuchen Meng","doi":"10.1016/j.clsr.2024.106073","DOIUrl":"10.1016/j.clsr.2024.106073","url":null,"abstract":"<div><div>The integration of artificial intelligence into the field of law has penetrated the underlying logic of legal operations. Currently, legal AI systems face difficulties in representing legal knowledge, exhibit insufficient legal reasoning capabilities, have poor explainability, and are inefficient in handling causal inference and uncertainty. In legal practice, various legal reasoning methods (deductive reasoning, inductive reasoning, abductive reasoning, etc.) are often intertwined and used comprehensively. However, the reasoning modes employed by current legal AI systems are inadequate. Identifying AI models that are more suitable for legal reasoning is crucial for advancing the development of legal AI systems.</div><div>Distinguished from the current high-profile large language models, we believe that Bayesian reasoning is highly compatible with legal reasoning, as it can perferm abductive reasoning, excel at causal inference, and admits the \"defeasibility\" of reasoning conclusions, which is consistent with the cognitive development pattern of legal professionals from apriori to posteriori. AI models based on Bayesian methods can also become the main technological support for legal AI systems. Bayesian neural networks have advantages in uncertainty modeling, avoiding overfitting, and explainability. Legal AI systems based on Bayesian deep learning frameworks can combine the advantages of deep learning and probabilistic graphical models, facilitating the exchange and supplementation of information between perception tasks and reasoning tasks. In this paper, we take perpetrator prediction systems and legal judegment prediction systems as examples to discuss the construction and basic operation modes of the Bayesian deep learning framework. Bayesian deep learning can enhance reasoning ability, improve the explainability of models, and make the reasoning process more transparent and visualizable. Furthermore, Bayesian deep learning framework is well-suited for human-machine collaborative tasks, enabling the complementary strengths of humans and machines.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106073"},"PeriodicalIF":3.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142572607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative AI in EU law: Liability, privacy, intellectual property, and cybersecurity
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106066
Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato, Luciano Floridi
The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and shortcomings in the EU legislative framework and proposes recommendations to ensure the safe and compliant deployment of generative models.
{"title":"Generative AI in EU law: Liability, privacy, intellectual property, and cybersecurity","authors":"Claudio Novelli , Federico Casolari , Philipp Hacker , Giorgio Spedicato , Luciano Floridi","doi":"10.1016/j.clsr.2024.106066","DOIUrl":"10.1016/j.clsr.2024.106066","url":null,"abstract":"<div><div>The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and shortcomings in the EU legislative framework and proposes recommendations to ensure the safe and compliant deployment of generative models.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106066"},"PeriodicalIF":3.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bias and discrimination in ML-based systems of administrative decision-making and support
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106070
Trang Anh MAC
In 2020, the alleged wilful and gross negligence of four social workers, who did not notice and failed to report the risks to an eight-year-old boy's life from the violent abuse by his mother and her boyfriend back in 2013, ultimately leading to his death, was heavily criticised. The 2020 documentary The Trials of Gabriel Fernandez discussed the Allegheny Family Screening Tool (AFST), implemented by Allegheny County, US since 2016 to predict involvement with the social services system. Rhema Vaithianathan, co-director of the Centre for Social Data Analytics, and members of the Children's Data Network, together with Emily Putnam-Hornstein, built the screening tool, which integrates and analyses enormous amounts of data, housed in the DHS Data Warehouse, about persons allegedly connected to injustice against children. They considered that it might be a solution to the failures of overwhelmed manual administrative systems. However, like other applications of AI in the public sector, algorithmic decision-making and support systems are also denounced for data and algorithmic bias. This topic has been debated for the last few years but has not yet been settled. This research is therefore a survey of those problems: bias and discrimination in AI-based administrative decision-making and support systems. First, I define bias and discrimination and the blurred boundary between the two concepts from a legal perspective, then detail the causes of bias at each stage of AI system development, mainly the results of biased data sources and past human decisions, social and political contexts, and the developers' ethics. In the same chapter, I present the non-discrimination legal framework, including its application and convergence with administrative law as regards automated decision-making and support systems, as well as the role of ethics and regulations on personal data protection. In the next chapter, I outline new proposals for potential solutions from both legal and technical perspectives. For the former, my focus is fairness definitions and other options currently available to developers, for example toolkits, benchmark datasets, debiased data, etc. For the latter, I report the strategies and new proposals governing datasets and AI system development and their implementation in the near future.
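As a concrete instance of the "fairness definitions" and developer toolkits the abstract points to, here is a minimal Python sketch (illustrative only; the decision records are fabricated toy data, not drawn from the AFST or the paper) computing per-group selection rates and the disparate-impact ratio used in the US "four-fifths rule":

# Toy data, invented for illustration: (protected_group_member, positive_outcome)
decisions = [
    (True, False), (True, True), (True, False), (True, False),
    (False, True), (False, True), (False, False), (False, True),
]

def selection_rate(records, group):
    # Fraction of positive outcomes among records belonging to `group`.
    outcomes = [out for g, out in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_protected = selection_rate(decisions, True)   # 1/4 = 0.25
rate_reference = selection_rate(decisions, False)  # 3/4 = 0.75
ratio = rate_protected / rate_reference            # ≈ 0.33
print(f"disparate-impact ratio: {ratio:.2f} (four-fifths rule flags < 0.80)")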
{"title":"Bias and discrimination in ML-based systems of administrative decision-making and support","authors":"Trang Anh MAC","doi":"10.1016/j.clsr.2024.106070","DOIUrl":"10.1016/j.clsr.2024.106070","url":null,"abstract":"<div><div>In 2020, the alleged wilful and gross negligence of four social workers, who did not notice and failed to report the risks to an eight-year-old boy's life from the violent abuses by his mother and her boyfriend back in 2013, ultimately leading to his death, had been heavily criticised.<span><span><sup>1</sup></span></span> The documentary, Trials of Gabriel Fernandez in 2020,<span><span><sup>2</sup></span></span> has discussed the Allegheny Family Screening Tool (AFST<span><span><sup>3</sup></span></span>), implemented by Allegheny County, US since 2016 to foresee involvement with the social services system. Rhema Vaithianathan<span><span><sup>4</sup></span></span>, the Centre for Social Data Analytics co-director, and the Children's Data Network<span><span><sup>5</sup></span></span> members, with Emily Putnam-Hornstein<span><span><sup>6</sup></span></span>, established the exemplary and screening tool, integrating and analysing enormous amounts of data details of the person allegedly associating to injustice to children, housed in DHS Data Warehouse<span><span><sup>7</sup></span></span>. They considered that may be the solution for the failure of the overwhelmed manual administrative systems. However, like other applications of AI in our modern world, in the public sector, Algorithmic Decisions Making and Support systems, it is also denounced because of the data and algorithmic bias.<span><span><sup>8</sup></span></span> This topic has been weighed up for the last few years but not has been put to an end yet. Therefore, this humble research is a glance through the problems - the bias and discrimination of AI based Administrative Decision Making and Support systems. At first, I determined the bias and discrimination, their blur boundary between two definitions from the legal perspective, then went into the details of the causes of bias in each stage of AI system development, mainly as the results of bias data sources and human decisions in the past, society and political contexts, and the developers’ ethics. In the same chapter, I presented the non-discrimination legal framework, including their application and convergence with the administration laws in regard to the automated decision making and support systems, as well as the involvement of ethics and regulations on personal data protection. In the next chapter, I tried to outline new proposals for potential solutions from both legal and technical perspectives. In respect to the former, my focus was fairness definitions and other current options for the developers, for example, the toolkits, benchmark datasets, debiased data, etc. 
For the latter, I reported the strategies and new proposals governing the datasets and AI systems development, implementation in the near future.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106070"},"PeriodicalIF":3.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142593453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | DOI: 10.1016/j.clsr.2024.106039
Nick Pantlin
This article tracks developments at the national level in key European countries in the area of IT and communications and provides a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. Part of the column's purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening “on the ground” at a national level in implementing EU level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.
For whom is privacy policy written? A new understanding of privacy policies
Pub Date: 2024-10-28 | DOI: 10.1016/j.clsr.2024.106072
Xiaodong Ding, Hao Huang
This article examines two types of privacy policies required by the GDPR and the PIPL. It argues that even if privacy policies fail to effectively assist data subjects in giving informed consent but still facilitate private and public enforcement, it does not follow that privacy policies should exclusively serve one category of their readers. The article argues that, considering the scope and meaning of the transparency value protected by data privacy laws, the role of privacy policies must be repositioned to reduce the costs of obtaining and understanding information for all readers of privacy policies.
{"title":"For whom is privacy policy written? A new understanding of privacy policies","authors":"Xiaodong Ding , Hao Huang","doi":"10.1016/j.clsr.2024.106072","DOIUrl":"10.1016/j.clsr.2024.106072","url":null,"abstract":"<div><div>This article examines two types of privacy policies required by the GDPR and the PIPL. It argues that even if privacy policies fail to effectively assist data subjects in making informed consent but still facilitate private and public enforcement, it does not mean that privacy policies should exclusively serve one category of its readers. The article argues that, considering the scope and meaning of the transparency value protected by data privacy laws, the role of privacy policies must be repositioned to reduce costs of obtaining and understanding information for all readers of privacy policies.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106072"},"PeriodicalIF":3.3,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Addressing the risks of generative AI for the judiciary: The accountability framework(s) under the EU AI Act
Pub Date: 2024-10-28 | DOI: 10.1016/j.clsr.2024.106067
Irina Carnat
The rapid advancements in natural language processing, particularly the development of generative large language models (LLMs), have renewed interest in using artificial intelligence (AI) for judicial decision-making. While these technological breakthroughs present new possibilities for legal automation, they also raise concerns about over-reliance and automation bias. Drawing insights from the COMPAS case, this paper examines the implications of deploying generative LLMs in the judicial domain. It identifies the persistent factors that contributed to an accountability gap when AI systems were previously used for judicial decision-making. To address these risks, the paper analyses the relevant provisions of the EU Artificial Intelligence Act, outlining a comprehensive accountability framework based on the regulation's risk-based approach. The paper concludes that the successful integration of generative LLMs in judicial decision-making requires a holistic approach addressing cognitive biases. By emphasising shared responsibility and the imperative of AI literacy across the AI value chain, the regulatory framework can help mitigate the risks of automation bias and preserve the rule of law.
{"title":"Addressing the risks of generative AI for the judiciary: The accountability framework(s) under the EU AI Act","authors":"Irina Carnat","doi":"10.1016/j.clsr.2024.106067","DOIUrl":"10.1016/j.clsr.2024.106067","url":null,"abstract":"<div><div>The rapid advancements in natural language processing, particularly the development of generative large language models (LLMs), have renewed interest in using artificial intelligence (AI) for judicial decision-making. While these technological breakthroughs present new possibilities for legal automation, they also raise concerns about over-reliance and automation bias. Drawing insights from the COMPAS case, this paper examines the implications of deploying generative LLMs in the judicial domain. It identifies the persistent factors that contributed to an accountability gap when AI systems were previously used for judicial decision-making. To address these risks, the paper analyses the relevant provisions of the EU Artificial Intelligence Act, outlining a comprehensive accountability framework based on the regulation's risk-based approach. The paper concludes that the successful integration of generative LLMs in judicial decision-making requires a holistic approach addressing cognitive biases. By emphasising shared responsibility and the imperative of AI literacy across the AI value chain, the regulatory framework can help mitigate the risks of automation bias and preserve the rule of law.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106067"},"PeriodicalIF":3.3,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Procedural fairness in automated asylum procedures: Fundamental rights for fundamental challenges
Pub Date: 2024-10-17 | DOI: 10.1016/j.clsr.2024.106065
Francesca Palmiotto
In response to the increasing digitalization of asylum procedures, this paper examines the legal challenges surrounding the use of automated tools in refugee status determination (RSD). Focusing on the European Union (EU) context, where interoperable databases and advanced technologies are employed to streamline asylum processes, the paper asks how EU fundamental rights can address the challenges that automation raises. Through a comprehensive analysis of EU law and several real-life cases, the paper focuses on the relationship between procedural fairness and the use of automated tools to provide evidence in RSD. The paper illustrates what standards apply to automated systems based on a legal doctrinal analysis of EU primary and secondary law and emerging case law from national courts and the CJEU. The article contends that the rights to privacy and data protection enhance procedural fairness in asylum procedures and shows how they can be leveraged for increased protection of asylum seekers and refugees. Moreover, the paper also claims that asylum authorities carry a new pivotal responsibility as the medium between the technologies, asylum seekers and their rights.
{"title":"Procedural fairness in automated asylum procedures: Fundamental rights for fundamental challenges","authors":"Francesca Palmiotto","doi":"10.1016/j.clsr.2024.106065","DOIUrl":"10.1016/j.clsr.2024.106065","url":null,"abstract":"<div><div>In response to the increasing digitalization of asylum procedures, this paper examines the legal challenges surrounding the use of automated tools in refugee status determination (RSD). Focusing on the European Union (EU) context, where interoperable databases and advanced technologies are employed to streamline asylum processes, the paper asks how EU fundamental rights can address the challenges that automation raises. Through a comprehensive analysis of EU law and several real-life cases, the paper focuses on the relationship between procedural fairness and the use of automated tools to provide evidence in RSD. The paper illustrates what standards apply to automated systems based on a legal doctrinal analysis of EU primary and secondary law and emerging case law from national courts and the CJEU. The article contends that the rights to privacy and data protection enhance procedural fairness in asylum procedures and shows how they can be leveraged for increased protection of asylum seekers and refugees. Moreover, the paper also claims that asylum authorities carry a new pivotal responsibility as the medium between the technologies, asylum seekers and their rights.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106065"},"PeriodicalIF":3.3,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142444763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}