
Computer Law & Security Review: Latest Publications

Digital transformation in Russia: Turning from a service model to ensuring technological sovereignty
IF 3.3 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2024-11-01 DOI: 10.1016/j.clsr.2024.106075
Ekaterina Martynova , Andrey Shcherbovich
The paper outlines core aspects of the digital transformation process in Russia since the early 2000s, as well as recent legislative initiatives and practices at the federal level. It considers the digitalization of public services, efforts towards ‘sovereignization’ of the Russian segment of the Internet, and the current focus on cybersecurity and the development of artificial intelligence. The paper highlights the tendency to strengthen the factor of protection of state interests and national security alongside control over online activities of citizens in comparison with the initial understanding of digital transformation as a human-oriented process aimed at increasing the accessibility and convenience of public services. It can be assumed that this change in the goals and methods of digital transformation is one of the manifestations of a broader political, social and cultural process of separation, primarily from the West, that Russian society is currently undergoing, amidst a growing official narrative of threats from both external and internal forces that require greater independence and increased vigilance, including in the digital domain.
Citations: 0
Editorial: Toward a BRICS stack? Leveraging digital transformation to construct digital sovereignty in the BRICS countries
IF 3.3 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2024-11-01 DOI: 10.1016/j.clsr.2024.106064
Luca Belli, Larissa Galdino de Magalhães Santos
Citations: 0
European National News
IF 3.3 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2024-11-01 DOI: 10.1016/j.clsr.2024.106062
Nick Pantlin
This article tracks developments at the national level in key European countries in the area of IT and communications and provides a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. This column provides a concise alerting service of important national developments in key European countries. Part of its purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening “on the ground” at a national level in implementing EU level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.
© 2024 Herbert Smith Freehills LLP. Published by Elsevier Ltd. All rights reserved.
Citations: 0
Bayesian deep learning: An enhanced AI framework for legal reasoning alignment
IF 3.3 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2024-11-01 DOI: 10.1016/j.clsr.2024.106073
Chuyue Zhang, Yuchen Meng
The integration of artificial intelligence into the field of law has penetrated the underlying logic of legal operations. Currently, legal AI systems face difficulties in representing legal knowledge, exhibit insufficient legal reasoning capabilities, have poor explainability, and are inefficient in handling causal inference and uncertainty. In legal practice, various legal reasoning methods (deductive reasoning, inductive reasoning, abductive reasoning, etc.) are often intertwined and used comprehensively. However, the reasoning modes employed by current legal AI systems are inadequate. Identifying AI models that are more suitable for legal reasoning is crucial for advancing the development of legal AI systems.
Distinguished from the currently high-profile large language models, we believe that Bayesian reasoning is highly compatible with legal reasoning, as it can perform abductive reasoning, excels at causal inference, and admits the "defeasibility" of reasoning conclusions, which is consistent with the cognitive development pattern of legal professionals from the a priori to the a posteriori. AI models based on Bayesian methods can also become the main technological support for legal AI systems. Bayesian neural networks have advantages in uncertainty modeling, avoiding overfitting, and explainability. Legal AI systems based on Bayesian deep learning frameworks can combine the advantages of deep learning and probabilistic graphical models, facilitating the exchange and supplementation of information between perception tasks and reasoning tasks. In this paper, we take perpetrator prediction systems and legal judgment prediction systems as examples to discuss the construction and basic operation modes of the Bayesian deep learning framework. Bayesian deep learning can enhance reasoning ability, improve the explainability of models, and make the reasoning process more transparent and visualizable. Furthermore, the Bayesian deep learning framework is well suited to human-machine collaborative tasks, enabling the complementary strengths of humans and machines.
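The "defeasibility" the abstract attributes to Bayesian reasoning can be illustrated with a toy posterior update: a conclusion supported by one piece of evidence is revised when later evidence points the other way. This is a minimal sketch with invented priors and likelihoods, not an implementation from the paper.

```python
# Toy Bayesian update over two legal hypotheses; all numbers are
# illustrative assumptions, not taken from the paper.

def update(prior, likelihood):
    """Posterior P(H|E) proportional to P(E|H) * P(H), normalized."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hypotheses: H0 = defendant not involved, H1 = involved.
prior = [0.5, 0.5]

# Evidence 1 (say, a witness statement) favours H1 ...
post1 = update(prior, [0.2, 0.8])   # P(H1 | E1) = 0.8

# ... but evidence 2 (say, an alibi) later favours H0, and the
# earlier conclusion is overturned: the reasoning is defeasible.
post2 = update(post1, [0.9, 0.1])   # P(H1 | E1, E2) drops below 0.5
```

The same mechanism is what lets a Bayesian system report calibrated uncertainty rather than a single hard verdict, which is the explainability advantage the abstract highlights.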
Citations: 0
Generative AI in EU law: Liability, privacy, intellectual property, and cybersecurity
IF 3.3 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2024-11-01 DOI: 10.1016/j.clsr.2024.106066
Claudio Novelli , Federico Casolari , Philipp Hacker , Giorgio Spedicato , Luciano Floridi
The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and shortcomings in the EU legislative framework and proposes recommendations to ensure the safe and compliant deployment of generative models.
Citations: 0
Bias and discrimination in ML-based systems of administrative decision-making and support
IF 3.3 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2024-11-01 DOI: 10.1016/j.clsr.2024.106070
Trang Anh MAC
In 2020, the alleged wilful and gross negligence of four social workers, who failed to notice and report the risks to an eight-year-old boy's life from violent abuse by his mother and her boyfriend back in 2013, ultimately leading to his death, was heavily criticised.1 The 2020 documentary Trials of Gabriel Fernandez2 discussed the Allegheny Family Screening Tool (AFST3), implemented by Allegheny County, US since 2016 to predict involvement with the social services system. Rhema Vaithianathan4, co-director of the Centre for Social Data Analytics, and members of the Children's Data Network5, together with Emily Putnam-Hornstein6, built this screening tool, which integrates and analyses enormous amounts of data, housed in the DHS Data Warehouse7, on individuals allegedly associated with injustice to children. They considered that it might be a solution to the failure of overwhelmed manual administrative systems. However, like other applications of AI in our modern world, in the public sector, Algorithmic Decision-Making and Support systems are also denounced because of data and algorithmic bias.8 This topic has been weighed up for the last few years but has not yet been settled. This research is therefore a glance through the problem: bias and discrimination in AI-based Administrative Decision-Making and Support systems. First, I examined bias and discrimination and the blurred boundary between the two definitions from a legal perspective, then went into the details of the causes of bias at each stage of AI system development, mainly as the result of biased data sources and past human decisions, social and political contexts, and the developers' ethics.
In the same chapter, I presented the non-discrimination legal framework, including its application and convergence with administrative law in regard to automated decision-making and support systems, as well as the role of ethics and of regulations on personal data protection. In the next chapter, I tried to outline new proposals for potential solutions from both legal and technical perspectives. With respect to the former, my focus was fairness definitions and other options currently available to developers, for example toolkits, benchmark datasets, debiased data, etc. For the latter, I reported on strategies and new proposals governing datasets and the development and implementation of AI systems in the near future.
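The fairness toolkits and benchmark datasets mentioned above typically boil down to computing group-level disparity metrics over a system's decisions. As a hypothetical sketch of one common metric (demographic parity difference), with invented screening decisions rather than any real AFST data:

```python
# Hypothetical example: demographic parity difference, one of the
# disparity metrics fairness toolkits report. Data is invented.

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between two groups."""
    rates = {}
    for g in (0, 1):
        flagged = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(flagged) / len(flagged)
    return abs(rates[0] - rates[1])

# Screening decisions (1 = flagged for review) for two demographic groups.
decisions = [1, 0, 1, 1, 0, 0, 0, 1]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_difference(decisions, groups)  # 0.75 - 0.25 = 0.5
```

A gap of zero means both groups are flagged at the same rate; auditing pipelines usually track such metrics alongside accuracy before deployment.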
Citations: 0
European National News
IF 3.3 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2024-11-01 DOI: 10.1016/j.clsr.2024.106039
Nick Pantlin
This article tracks developments at the national level in key European countries in the area of IT and communications and provides a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. This column provides a concise alerting service of important national developments in key European countries. Part of its purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening “on the ground” at a national level in implementing EU level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.
© 2024 Herbert Smith Freehills LLP. Published by Elsevier Ltd. All rights reserved.
Citations: 0
For whom is privacy policy written? A new understanding of privacy policies
IF 3.3 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2024-10-28 DOI: 10.1016/j.clsr.2024.106072
Xiaodong Ding , Hao Huang
This article examines two types of privacy policies required by the GDPR and the PIPL. It argues that even if privacy policies fail to effectively assist data subjects in making informed consent but still facilitate private and public enforcement, it does not mean that privacy policies should exclusively serve one category of its readers. The article argues that, considering the scope and meaning of the transparency value protected by data privacy laws, the role of privacy policies must be repositioned to reduce costs of obtaining and understanding information for all readers of privacy policies.
Citations: 0
Addressing the risks of generative AI for the judiciary: The accountability framework(s) under the EU AI Act
IF 3.3 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2024-10-28 DOI: 10.1016/j.clsr.2024.106067
Irina Carnat
The rapid advancements in natural language processing, particularly the development of generative large language models (LLMs), have renewed interest in using artificial intelligence (AI) for judicial decision-making. While these technological breakthroughs present new possibilities for legal automation, they also raise concerns about over-reliance and automation bias. Drawing insights from the COMPAS case, this paper examines the implications of deploying generative LLMs in the judicial domain. It identifies the persistent factors that contributed to an accountability gap when AI systems were previously used for judicial decision-making. To address these risks, the paper analyses the relevant provisions of the EU Artificial Intelligence Act, outlining a comprehensive accountability framework based on the regulation's risk-based approach. The paper concludes that the successful integration of generative LLMs in judicial decision-making requires a holistic approach addressing cognitive biases. By emphasising shared responsibility and the imperative of AI literacy across the AI value chain, the regulatory framework can help mitigate the risks of automation bias and preserve the rule of law.
引用次数: 0
Procedural fairness in automated asylum procedures: Fundamental rights for fundamental challenges
IF 3.3 · Tier 3 (Sociology) · Q1 (LAW) · Pub Date: 2024-10-17 · DOI: 10.1016/j.clsr.2024.106065
Francesca Palmiotto
In response to the increasing digitalization of asylum procedures, this paper examines the legal challenges surrounding the use of automated tools in refugee status determination (RSD). Focusing on the European Union (EU) context, where interoperable databases and advanced technologies are employed to streamline asylum processes, the paper asks how EU fundamental rights can address the challenges that automation raises. Through a comprehensive analysis of EU law and several real-life cases, the paper focuses on the relationship between procedural fairness and the use of automated tools to provide evidence in RSD. The paper illustrates what standards apply to automated systems based on a legal doctrinal analysis of EU primary and secondary law and emerging case law from national courts and the CJEU. The article contends that the rights to privacy and data protection enhance procedural fairness in asylum procedures and shows how they can be leveraged for increased protection of asylum seekers and refugees. Moreover, the paper also claims that asylum authorities carry a new pivotal responsibility as the medium between the technologies, asylum seekers and their rights.
Francesca Palmiotto, "Procedural fairness in automated asylum procedures: Fundamental rights for fundamental challenges", Computer Law & Security Review, vol. 55 (2024), Article 106065, DOI: 10.1016/j.clsr.2024.106065. Published 2024-10-17.
Citations: 0