
Latest Publications in Computer Law & Security Review

Anonymising personal data under the data legislative acquis established by the Data Governance Act
IF 3.2 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2026-01-29 DOI: 10.1016/j.clsr.2026.106261
Emanuela Podda , Daniela Spajic , Pierangela Samarati
The re-identification risk test, established under Recital 26 of the General Data Protection Regulation (GDPR), constitutes a milestone in assessing the efficiency of personal data anonymisation. Its interpretation and implementation have been largely discussed by scholars and practitioners. This article illustrates the challenges to the plausibility of the anonymisation risk test, especially considering the most recent European jurisprudence on data anonymisation, and the recent Digital Omnibus proposal. Although this regulatory proposal aims at repealing the Data Governance Act, it transposes its data governance model into the Data Act.
With the aim of fostering regulatory and scholarly debate on this new proposal, our work puts forward a new perspective on data anonymisation within the framework of the DGA, considering its potential to reduce legal uncertainty in the application of anonymisation through data intermediaries. Specifically, our work investigates how a Data Intermediation Service Provider (DISP) could support data holders in anonymising data, with implications for accountability when providing access to and sharing of data. Although the involvement of DISPs may raise additional questions about their responsibilities, this article outlines the pertinent rules established by the DGA to analyse and discuss such potential responsibilities and to elaborate on their possible contractual consequences with respect to data anonymisation.
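The abstract treats the re-identification risk test as a legal question, but in practice such assessments are often operationalised through metrics like k-anonymity: the size of the smallest group of records that share the same quasi-identifier values. The sketch below is purely illustrative (the function, field names, and toy records are our own, not drawn from the article):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity level of a dataset: the size of the
    smallest group of records sharing identical quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy dataset: ZIP code and birth year act as quasi-identifiers.
records = [
    {"zip": "1000", "year": 1980, "diagnosis": "A"},
    {"zip": "1000", "year": 1980, "diagnosis": "B"},
    {"zip": "2000", "year": 1975, "diagnosis": "C"},
]
print(k_anonymity(records, ["zip", "year"]))  # the lone "2000" record yields k = 1
```

A value of k = 1 means at least one individual is uniquely identifiable from the quasi-identifiers alone, which is the kind of residual risk the Recital 26 test asks controllers to reason about.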
Computer Law & Security Review, Vol. 60, Article 106261
Citations: 0
Incorporating AI incident reporting into telecommunications law and policy: Insights from India
IF 3.2 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2026-01-27 DOI: 10.1016/j.clsr.2026.106263
Avinash Agarwal , Manisha J. Nene
The integration of artificial intelligence (AI) into telecommunications infrastructure introduces novel risks, such as algorithmic bias and unpredictable system behavior, that fall outside the scope of traditional cybersecurity and data protection frameworks. This paper introduces a precise definition and a detailed typology of telecommunications AI incidents, establishing them as a distinct category of risk that extends beyond conventional cybersecurity and data protection breaches. It argues for their recognition as a distinct regulatory concern. Using India as a case study for jurisdictions that lack a horizontal AI law, the paper analyzes the country’s key digital regulations. The analysis reveals that India’s existing legal instruments, including the Telecommunications Act, 2023, the CERT-In Rules, and the Digital Personal Data Protection Act, 2023, focus on cybersecurity and data breaches, creating a significant regulatory gap for AI-specific operational incidents, such as performance degradation and algorithmic bias. The paper also examines structural barriers to disclosure and the limitations of existing AI incident repositories. Based on these findings, the paper proposes targeted policy recommendations centered on integrating AI incident reporting into India’s existing telecom governance. Key proposals include mandating reporting for high-risk AI failures, designating an existing government body as a nodal agency to manage incident data, and developing standardized reporting frameworks. These recommendations aim to enhance regulatory clarity and strengthen long-term resilience, offering a pragmatic and replicable blueprint for other nations seeking to govern AI risks within their existing sectoral frameworks.
Computer Law & Security Review, Vol. 60, Article 106263
Citations: 0
If it ain’t broke, don’t fix it? Ten improvements for the upcoming tenth anniversary of the General Data Protection Regulation
IF 3.2 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2026-01-23 DOI: 10.1016/j.clsr.2025.106251
Dariusz Kloza (ed.) , Laura Drechsler (ed.) , Elora Fernandes (ed.) , Arian Birth , Julien Rossi , Pierre Dewitte , Jarosław Greser , Lisette Mustert , Gianclaudio Malgieri , Heidi Beate Bentzen
As the General Data Protection Regulation (GDPR) approaches its tenth anniversary, the European legislator is considering reforms thereto. This article offers a set of research-based suggestions for what such reforms could look like, based on two assumptions. First, that the GDPR is overall a solid piece of legislation that upholds the enduring objectives and principles of data protection law. Second, that any improvement cannot compromise the level of protection of fundamental rights currently offered. To this end, ten scholars from across Europe were invited to choose a provision of the GDPR, write about what works well and what does not, and why, as well as to suggest a solution for a concrete amendment of the text. The resulting wish-list discussing ten provisions (i.e., those concerning conditions for consent, children’s consent, automated decision-making, data protection by design, data security, data protection impact assessment and prior consultation, derogations for data transfers, dispute resolution by the European Data Protection Board, representation of data subjects and processing for scientific purposes) is necessarily random and far from exhaustive. However, it lays the groundwork for a constructive debate, and we invite others to build on the list with their own proposals.
Computer Law & Security Review, Vol. 60, Article 106251
Citations: 0
Artificial intelligence and adjudication: A new pathway to justice in China?
IF 3.2 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2026-01-19 DOI: 10.1016/j.clsr.2026.106260
Yi Chen
The application of artificial intelligence (AI) in the judicial system has become an inevitable trend in the digital era. This article analyzes the institutional background and practical drivers behind the rapid development and wide application of AI in China’s judicial system, examines its concrete use in adjudication, and critically discusses its limitations and potential challenges to due process, the right to a fair trial, and judicial independence. It argues that the rapid growth and deep integration of AI in the judicial field result from both top-level policy design and the judiciary’s need to address structural pressures. While AI may accelerate case handling and improve efficiency, its application is constrained by algorithmic limitations and institutional conditions, introducing risks to procedural fairness, substantive justice, and judges’ discretion and independence. Despite the Chinese judiciary's emphasis on AI as a merely auxiliary tool, judicial performance evaluations and accountability mechanisms may still encourage judges to rely on it excessively, leading to mechanical adjudication and neglect of case-specific circumstances. In light of these concerns, this article concludes that, beyond technical safeguards, institutional measures are also required to protect fair trial rights and judicial independence, ensuring that AI enhances rather than undermines justice.
Computer Law & Security Review, Vol. 60, Article 106260
Citations: 0
Mapping the meaning of human dignity at the European Court of Human Rights: An unsupervised learning approach
IF 3.2 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2026-01-17 DOI: 10.1016/j.clsr.2025.106253
Gustavo Arosemena , Foivos Ioannis Tzavellos , Rohan Nanda
This paper applies topic modeling techniques to trace the legal concept of human dignity in the jurisprudence of the European Court of Human Rights. Using unsupervised learning methods, we aim to detect recurring topics and themes in the case law on human dignity, offering a data-driven perspective to a long-standing theoretical debate. We implemented a specific preprocessing pipeline to prepare the dataset for analysis and employed several state-of-the-art topic modeling algorithms, including LDA, LSI, NMF, and BERTopic. To evaluate the results, we used a measure of topic quality, combining topic coherence and topic diversity. Our findings suggest that a coherent ‘substantive’ notion of dignity can indeed be inferred from the Court’s case law with some degree of consistency. This paper contributes both methodologically, by demonstrating the efficacy of different approaches to topic modeling in legal contexts, and substantively, by deepening the understanding of how the concept of human dignity is interpreted in human rights law.
Computer Law & Security Review, Vol. 60, Article 106253
Citations: 0
Regulatory regimes for disruptive IT: A framework for their design and evaluation
IF 3.2 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2026-01-08 DOI: 10.1016/j.clsr.2025.106231
Roger Clarke
The pervasiveness and the impactfulness of information technology (IT) have been growing steeply for decades. Recent forms of IT are highly obscure in their operation. At the same time, IT-based systems are being permitted greater freedom to draw inferences, make decisions, and even act in the real world, without meaningful supervision. There are prospects of serious harm arising from misconceived, mis-designed or misimplemented projects.
Organisations developing and applying IT need to be subject to obligations, prior to deploying impactful initiatives, to take degrees of care commensurate with the risks involved. They also need to be subject to accountability mechanisms that act as strong disincentives against reckless behaviour by executives and professionals alike. This article presents a framework whereby practitioners can evaluate the efficacy of regulatory regimes for impactful IT-based systems, design new regimes, and adapt existing ones. The author has matured the framework over several decades, applied early variants of it in multiple contexts, and published articles on many of those projects.
The article commences by defining regulation and the kinds of entities and behaviour to which it is applied, and identifying the criteria for an effective regulatory mechanism. This is followed by presentation of models of the layers of regulatory measures from which regimes are constructed, and the players in the processes of regime formation and operation. Observations are also provided concerning the nature of the principles and rules that need to be established in order to provide substance within the regulatory frame. An evaluation form is provided as an Appendix. Also provided as Appendices are pilot applications of the evaluation form in several diverse contexts. A companion article (Clarke 2025b) applies the framework to a technology of current concern.
Computer Law & Security Review, Vol. 60, Article 106231
Citations: 0
Leveraging textual content, citational aspects and dissenting opinions through a multi-view contrastive learning methodology for legal precedent analysis
IF 3.2 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2026-01-02 DOI: 10.1016/j.clsr.2025.106257
Graziella De Martino , Piero Marra , Annunziata D’Aversa , Lorenzo Pulito , Antonio Pellicani , Gianvito Pio , Michelangelo Ceci
Artificial Intelligence is transforming the digital justice field by introducing technologies to automate document review, predict case outcomes, and perform legal research tasks. While offering significant benefits, these systems appear to prioritize decision-making patterns that are simply repeated over time, thus neglecting the importance of a dynamic evolution and potentially leading to the risk of stagnation of case law.
To mitigate this risk, this paper proposes ContraLEX, a methodology based on a multi-view contrastive learning framework to compare legal judgments, considering those from the European Court of Human Rights as an emblematic case study. Methodologically, our goal is to capture the positive influence on the similarity, provided by both textual content and citations of precedents, and the negative influence of dissenting opinions, by relying on a contrastive learning approach. We argue that our methodology can enhance legal analysis by creating a proper representation of case law to prevent the stagnation of legal precedents and promote their evolution over time. A case study on ECtHR data empirically demonstrated that the proposed pipeline is very promising for properly supporting legal precedent analysis.
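The contrastive idea the abstract describes, pulling views of the same judgment (text, citations) together while pushing dissent-linked judgments apart, can be sketched with an InfoNCE-style loss. This is a sketch under our own assumptions (NumPy, random toy embeddings), not the authors' ContraLEX implementation:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding:
    low when the positive view is close and negatives are far."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits /= temperature
    exp = np.exp(logits - logits.max())          # softmax, numerically stable
    return -np.log(exp[0] / exp.sum())

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)                      # e.g. textual view of a judgment
positive = anchor + 0.05 * rng.normal(size=8)    # e.g. citation view of the same judgment
negatives = [rng.normal(size=8) for _ in range(4)]  # e.g. dissent-linked judgments
print(float(info_nce(anchor, positive, negatives)))
```

Minimising this loss over many anchor/positive/negative triples yields embeddings in which similarity reflects both shared content and citation links, while dissent acts as a repulsive signal.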
Computer Law & Security Review, Vol. 60, Article 106257
Citations: 0
Privacy as institutional design: A legal-technological analysis of CBDC governance and compliance
IF 3.2 CAS Tier 3 (Sociology) Q1 LAW Pub Date: 2025-12-27 DOI: 10.1016/j.clsr.2025.106258
Ammar Zafar
CBDCs reconfigure the relationship between public money, institutional authority, and informational power. While privacy in CBDC systems is often seen as a technical issue of cryptography and compliance, this paper argues that it is primarily an institutional design challenge: who may access transactional data, under what legal authority, and subject to which constraints. Using Sweden’s e-Krona pilot and the emerging digital euro framework as comparative references, the analysis demonstrates how identical privacy-enhancing technologies can produce different outcomes depending on how central banks, intermediaries, and supervisory bodies allocate visibility, responsibility, and access. The paper also highlights the limitations of pilot environments, which cannot replicate behavioural diversity, fraud incentives, or the governance frictions typical of live monetary systems. Furthermore, it examines how cross-border legal fragmentation hampers the feasibility of privacy-preserving interoperability, even when technical standards seem compatible. The findings suggest that lasting privacy in CBDCs cannot rely solely on PETs; it requires institutional restraint, legally defined access rights, and governance structures capable of maintaining credible limits on informational power.
{"title":"Privacy as institutional design: A legal-technological analysis of CBDC governance and compliance","authors":"Ammar Zafar","doi":"10.1016/j.clsr.2025.106258","DOIUrl":"10.1016/j.clsr.2025.106258","url":null,"abstract":"<div><div>CBDCs reconfigure the relationship between public money, institutional authority, and informational power. While privacy in CBDC systems is often seen as a technical issue of cryptography and compliance, this paper argues that it is primarily an institutional design challenge: who may access transactional data, under what legal authority, and subject to which constraints. Using Sweden’s e-Krona pilot and the emerging digital euro framework as comparative references, the analysis demonstrates how identical privacy-enhancing technologies can produce different outcomes depending on how central banks, intermediaries, and supervisory bodies allocate visibility, responsibility, and access. The paper also highlights the limitations of pilot environments, which cannot replicate behavioural diversity, fraud incentives, or the governance frictions typical of live monetary systems. Furthermore, it examines how cross-border legal fragmentation hampers the feasibility of privacy-preserving interoperability, even when technical standards seem compatible. 
The findings suggest that lasting privacy in CBDCs cannot rely solely on PETs; it requires institutional restraint, legally defined access rights, and governance structures capable of maintaining credible limits on informational power.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"60 ","pages":"Article 106258"},"PeriodicalIF":3.2,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
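The claim that identical privacy-enhancing technologies can produce different outcomes depending on how visibility and access are allocated can be sketched in a few lines. The pseudonymization scheme, role names, and warrant check below are invented for illustration; they do not describe the e-Krona pilot, the digital euro, or any real CBDC design.

```python
import hashlib

def pseudonymize(account_id: str, salt: str) -> str:
    """The PET itself: the same cryptographic primitive in every variant."""
    return hashlib.sha256((salt + account_id).encode()).hexdigest()[:12]

class AccessPolicy:
    """An institutional design choice: which body holds re-identification
    authority, and whether a legally defined condition (a warrant) gates it."""

    def __init__(self, key_holder: str, requires_warrant: bool):
        self.key_holder = key_holder
        self.requires_warrant = requires_warrant

    def may_reidentify(self, requester: str, has_warrant: bool) -> bool:
        # Only the designated key holder can ever reverse a pseudonym.
        if requester != self.key_holder:
            return False
        return has_warrant or not self.requires_warrant

# Identical PET, two different allocations of informational power:
centralised = AccessPolicy(key_holder="central_bank", requires_warrant=False)
constrained = AccessPolicy(key_holder="supervisory_court", requires_warrant=True)
```

Both designs run the same PET; what differs is the institutional allocation of re-identification authority, which is the governance layer the article argues lasting privacy ultimately rests on.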
From the law of everything to a system that works: why recalibrating personal data enables, rather than undermines, digital protection (A response to Professor Nadezhda Purtova)
IF 3.2, CAS Tier 3 (Sociology), Q1 LAW. Pub Date: 2025-12-26. DOI: 10.1016/j.clsr.2025.106256
M.R. Leiser
{"title":"From the law of everything to a system that works: why recalibrating personal data enables, rather than undermines, digital protection (A response to Professor Nadezhda Purtova)","authors":"M.R. Leiser","doi":"10.1016/j.clsr.2025.106256","DOIUrl":"10.1016/j.clsr.2025.106256","url":null,"abstract":"","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"60 ","pages":"Article 106256"},"PeriodicalIF":3.2,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Engineering the law-machine learning translation problem: developing legally aligned models
IF 3.2, CAS Tier 3 (Sociology), Q1 LAW. Pub Date: 2025-12-25. DOI: 10.1016/j.clsr.2025.106252
Mathias Hanson, Gregory Lewkowicz, Sam Verboven
Organizations developing machine learning-based (ML) technologies face the complex challenge of achieving high predictive performance while respecting the law. This intersection between ML and the law creates new complexities. As ML model behavior is inferred from training data, legal obligations cannot be operationalized in source code directly. Rather, legal obligations require "indirect" operationalization. However, choosing context-appropriate operationalizations presents two compounding challenges: (1) laws often permit multiple valid operationalizations for a given legal obligation, each with varying degrees of legal adequacy; and (2) each operationalization creates unpredictable trade-offs among the different legal obligations and with predictive performance. Evaluating these trade-offs requires metrics (or heuristics), which are in turn difficult to validate against legal obligations. Current methodologies fail to fully address these interwoven challenges, as they either focus on legal compliance for traditional software or on ML model development without adequately considering legal complexities. In response, we introduce a five-stage interdisciplinary framework that integrates legal and ML-technical analysis during ML model development. This framework facilitates designing ML models in a legally aligned way and identifying high-performing models that are legally justifiable. Legal reasoning guides choices for operationalizations and evaluation metrics, while ML experts ensure technical feasibility, performance optimization, and an accurate interpretation of metric values. This framework bridges the gap between more conceptual analysis of law and ML models’ need for deterministic specifications. We illustrate its application using a case study in the context of anti-money laundering.
Computer Law & Security Review, vol. 60, Article 106252.
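The two compounding challenges the abstract names, several valid operationalizations per obligation and unpredictable trade-offs with performance, can be made concrete with a toy sketch. The metrics (a demographic-parity gap and an equal-opportunity gap), the data, and the anti-money-laundering framing below are hypothetical illustrations, not the paper's own operationalizations.

```python
# Toy alert decisions for two candidate models on the same held-out data.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]   # 1 = confirmed money-laundering case
group  = [0, 0, 0, 0, 1, 1, 1, 1]   # protected-attribute group per record
pred_a = [1, 1, 0, 0, 1, 0, 0, 0]   # model A alerts
pred_b = [1, 1, 1, 0, 1, 1, 0, 0]   # model B alerts

def accuracy(y, p):
    return sum(t == q for t, q in zip(y, p)) / len(y)

def selection_rate(p, g, which):
    alerts = [q for q, gr in zip(p, g) if gr == which]
    return sum(alerts) / len(alerts)

def demographic_parity_gap(p, g):
    # Operationalization A: groups must be alerted at similar rates.
    return abs(selection_rate(p, g, 0) - selection_rate(p, g, 1))

def true_positive_rate(y, p, g, which):
    hits = [q for t, q, gr in zip(y, p, g) if gr == which and t == 1]
    return sum(hits) / len(hits)

def equal_opportunity_gap(y, p, g):
    # Operationalization B: actual launderers must be caught at similar rates.
    return abs(true_positive_rate(y, p, g, 0) - true_positive_rate(y, p, g, 1))
```

On this toy data the two models tie on accuracy and on operationalization A, yet only model B satisfies operationalization B: the choice of metric decides which model counts as legally adequate, which is exactly the kind of legally consequential choice the framework is meant to surface.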