Pub Date: 2025-09-25 | DOI: 10.1016/j.clsr.2025.106208
Sheng Zhang , Henry Gao
Despite the rapid expansion of the digital economy, the global regulatory framework for data flows remains fragmented, with countries adopting divergent approaches shaped by their own regulatory priorities. As a key player in the Internet economy, China’s approach to cross-border data flows (CBDF) not only defines its domestic digital landscape but also influences emerging global norms. This paper takes a comprehensive view of the evolution of China’s CBDF regime, examining its development through both domestic and international lenses. Domestically, China’s regulation of CBDF has evolved from a security-first approach to one that seeks to balance security with economic development; the paper examines the economic, political, and international drivers behind this shift. It also compares the approaches of China and the United States to CBDF, in light of the recent tightening of US restrictions, from both technical and geopolitical perspectives. At the technical level, recent policy trends in both countries reveal notable similarities. At the geopolitical level, however, the divergence between the two frameworks is not only significant but continues to widen. The paper concludes by examining the broader implications for global data governance and offering recommendations to bridge digital divides and promote a more inclusive international framework.
Bridging the Great Wall: China’s Evolving Cross-Border Data Flow Policies and Implications for Global Data Governance. Computer Law & Security Review, vol. 59, Article 106208.
Pub Date: 2025-09-23 | DOI: 10.1016/j.clsr.2025.106209
Fabian Teichmann , Bruno S. Sergi
This article advances a governance-theoretical account of the EU Cyber Resilience Act (CRA) as a form of hybrid regulation that combines command-and-control duties with risk-based calibration, co-regulation through European harmonized standards, and enforced self-regulation by firms. The central research question is: how does the CRA’s hybrid design reallocate regulatory functions between public authorities and private actors along the digital-product lifecycle, and with what compliance and enforcement consequences? Methodologically, the paper doctrinally analyses the CRA’s core provisions and situates them in the New Legislative Framework (NLF) for product regulation, the legal regime for standards under Regulation (EU) No 1025/2012 and Court of Justice of the European Union (CJEU) case law, and adjacent EU instruments (NIS2; Cybersecurity Act). It further offers a concise comparative sidebar on the United States and the United Kingdom to contrast policy trajectories. The contribution is threefold: (i) it clarifies the legal status and governance role of harmonized standards within CRA conformity assessment; (ii) it analytically distinguishes external obligations from firm-internal “meta-regulation”; and (iii) it maps institutional interfaces with NIS2 and the Cybersecurity Act, highlighting pathways for dynamic escalation (including mandatory certification). The analysis yields implications for corporate compliance design, market surveillance, and future rule updates via delegated acts.
The EU Cyber Resilience Act: Hybrid governance, compliance, and cybersecurity regulation in the digital ecosystem. Computer Law & Security Review, vol. 59, Article 106209.
Pub Date: 2025-09-19 | DOI: 10.1016/j.clsr.2025.106196
Giancarlo Frosio , Faith Obafemi
This article examines regulated data access (RDA) in the metaverse—an interconnected and immersive digital ecosystem comprising virtual, augmented, and hyper-physical realities. We organise the argument across taxonomy (Section 2), Digital Services Act (DSA)-anchored doctrine (Section 3), implementation challenges (Section 4), platform practices (Section 5), and a global blueprint (Section 6). Building on the European Union’s DSA, particularly Article 40, the analysis evaluates whether metaverse platforms qualify as Very Large Online Platforms or Very Large Online Search Engines and thus fall within the DSA’s data access rules. Drawing comparative insights from the UK’s Online Safety Act and the United States’ proposed Platform Accountability and Transparency Act, the article highlights differing global approaches to data sharing and the significant governance gaps that persist.
This article categorizes metaverse-native data, including spatial, biometric, and eye-tracking data, into personal and non-personal types, stressing the heightened complexity of governing immersive, multidimensional information flows. While existing legal frameworks offer a starting point, the metaverse’s novel data practices demand targeted adaptations to address challenges like decentralised governance, user consent in real-time environments, and the integration of privacy-enhancing technologies. Through an examination of data access regimes across selected metaverse platforms, the article identifies a lack of uniform, transparent processes for external researchers.
In this context, the article highlights RDA's broader public-interest function, facilitating external scrutiny of platform activities and ensuring service providers are held accountable. The absence of consistent RDA frameworks obstructs systemic risk research, undermining both risk assessment and mitigation efforts while leaving user rights vulnerable to opaque platform governance. To address these gaps, the article advances a set of policy recommendations aimed at strengthening RDA in the metaverse—adapting regulatory strategies to its evolving, decentralised architecture. By tailoring regulatory strategies to the metaverse’s dynamic nature, policymakers can foster accountability, innovation, and trust—both domestically (in jurisdictions like the UK, where data access provisions remain underdeveloped) and internationally. The analysis extends beyond mere applications to metaverse platforms, providing insights that can be applied to the online platform ecosystem in its entirety. Ultimately, this article charts a path toward harmonized, future-ready data governance frameworks—one that integrates RDA as a core regulatory mechanism for ‘augmented accountability’, essential for safeguarding user rights and enabling independent risk assessment in the metaverse.
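The VLOP/VLOSE qualification question the article raises is, at bottom, a quantitative one. As a minimal illustrative sketch (the 45-million figure comes from Article 33 DSA; the platform names and user counts below are hypothetical):

```python
# Article 33 DSA designates platforms averaging 45 million or more monthly
# active recipients in the Union as Very Large Online Platforms (VLOPs),
# which in turn triggers obligations such as Article 40 data access.
VLOP_THRESHOLD = 45_000_000

# Hypothetical metaverse platforms and their EU monthly active users.
platforms = {
    "MetaWorldX": 52_000_000,
    "NicheVerse": 3_000_000,
}

def is_vlop(avg_monthly_active_eu_users: int) -> bool:
    # True when the platform meets the DSA designation threshold.
    return avg_monthly_active_eu_users >= VLOP_THRESHOLD

designated = sorted(name for name, users in platforms.items() if is_vlop(users))
print(designated)  # ['MetaWorldX']
```

Only the platform crossing the threshold would fall within the DSA's data access rules; smaller immersive platforms would escape them, which is one source of the governance gaps the article identifies.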
Augmented accountability: Data access in the metaverse. Computer Law & Security Review, vol. 59, Article 106196.
Pub Date: 2025-09-17 | DOI: 10.1016/j.clsr.2025.106182
Alessandro Ortalda , Stefano Leucci , Gabriele Rizzo
The pace of technological progress has been increasing in recent years. As novel technologies arise or existing ones develop further, it becomes increasingly challenging to balance leveraging these advancements with safeguarding personal data. Relying on firsthand accounts of professionals in the field, the paper identifies how these challenges, which appear to apply to data controllers and Data Protection Authorities alike, are substantially connected with ensuring a sound interpretation of the law over time.
The paper examines the leading foresight and anticipation techniques and explores their possible data protection applications by reviewing existing initiatives that attempt to implement foresight in the context of data protection.
Section 2 delves into the evolving regulatory landscape, emphasising the need for a foresight-based approach to tackle the complexities arising from data-intensive technologies and the changing European regulatory framework. Section 3 introduces foresight as a discipline, its history and evolution, and leading techniques. Section 4 presents practical examples of foresight in data protection, detailing initiatives by the authors and other actors in the data protection space.
In conclusion, the paper underscores the initial consensus on the benefits of anticipatory approaches in addressing current data protection challenges. Anticipation techniques are flexible and can be tailored to meet the needs of various stakeholders, fostering a collaborative and practical approach to data protection. However, a gap in consolidated methodologies persists, necessitating further research to design and implement practical foresight approaches.
Anticipating compliance. An exploration of foresight initiatives in data protection. Computer Law & Security Review, vol. 59, Article 106182.
Pub Date: 2025-09-17 | DOI: 10.1016/j.clsr.2025.106206
Xuechen Chen , Lu Xu
This study challenges the prevailing perception of China's AI governance as a monolithic, state-driven model and instead presents a nuanced analysis of its complex governance landscape. Utilizing governance theories, we develop an analytical framework examining key governing nodes, tools, actors, and norms. Through case studies on minor protection and content regulation, this study demonstrates that Chinese AI governance involves a diverse array of stakeholders—including the state, private sector, and society—who co-produce norms and regulatory mechanisms. Contrary to conventional narratives, China's governance approach adapts existing regulatory tools to meet new challenges, balancing political, social, and economic interests. This study highlights how China has rapidly formalized AI regulations, in areas such as minor protection and content regulation, setting a precedent in global AI governance. The findings contribute to a broader understanding of AI regulation beyond ideological binaries and offer insights relevant to international AI policy discussions.
State, society, and market: Interpreting the norms and dynamics of China's AI governance. Computer Law & Security Review, vol. 59, Article 106206.
Pub Date: 2025-09-16 | DOI: 10.1016/j.clsr.2025.106204
Pratham Ajmera
The European cybersecurity regulation framework, not unlike European regulatory initiatives in general, has often been criticized as fragmented and divided among industry sectors. However, the past few years have seen legislative initiatives aimed at harmonizing cybersecurity across the EU, the most recent being the newly adopted Cyber-Resilience Act. The Act attempts to harmonize cybersecurity from the product side, establishing minimum requirements that must be met before digital products are brought into the Union market. It marks the initial foray of the EU's framework for product regulation (i.e., the New Legislative Framework or NLF) into the realm of cybersecurity regulation. Consistent with the NLF, the Cyber-Resilience Act provides for high-level cybersecurity requirements for all digital products, with conformity demonstrable through multiple avenues, including international and industrial standards adopted by European Standardization Organizations. However, unlike conventional product regulation, the Cyber-Resilience Act attempts to fulfil its objectives as part of an overarching framework of multiple harmonization legislations geared towards enhancing cybersecurity in the European Union. This article examines the Cyber-Resilience Act and its interplay with other harmonizing legislations in the EU cybersecurity regulatory regime, and raises critical challenges and questions arising from the trends identified in that interplay.
Cybersecurity in the Internet of Things: Trends and challenges in a nascent field. Computer Law & Security Review, vol. 59, Article 106204.
Pub Date: 2025-09-12 | DOI: 10.1016/j.clsr.2025.106205
Desara Dushi , Nertil Berdufi , Anastasia Karagianni
Generative AI has only gained public prominence in the past two years, yet instances of AI-generated CSAM videos have already been observed. It is foreseeable that in the next five years, these videos and images will become more realistic and widespread. In the United States, the FBI is already handling its first cases involving the generation of AI CSAM. This paper employs a comprehensive legal analysis of existing EU laws, including the AI Act, the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), the proposed Child Sexual Abuse Regulation (CSAR), and the Child Sexual Abuse Directive, to address the critical question of whether generative AI can be effectively policed to prevent the creation of deepfakes involving children. While EU legislation is promising, it remains limited, in particular regarding the regulation of training data used by generative AI technologies. To comprehensively address AI-generated CSAM, proactive, effective regulation and a holistic approach are required, ensuring that child protection against online CSAM is integrated into the guidelines, codes of conduct, and technical standards that bring these legal instruments to life.
The legal framework and legal gaps for AI-generated child sexual abuse material. Computer Law & Security Review, vol. 59, Article 106205.
Pub Date: 2025-09-10 | DOI: 10.1016/j.clsr.2025.106194
Fahimeh Abedi, Abbas Rajabifard, Davood Shojaei
Land, as a fundamental resource, holds immense importance in meeting human needs and driving economic prosperity, but often becomes a focal point for disputes. Resolving these disputes poses challenges stemming from inadequate laws, complexities in land administration systems, and limited judicial capacity. Recognising the importance of strong legal rights and efficient dispute resolution in fostering economic development, this paper explores the role of technology, specifically Online Dispute Resolution (ODR), in addressing land and property disputes and protecting land rights. ODR systems have revolutionised traditional approaches to conflict resolution, offering a novel and accessible method for resolving disputes, reducing costs, and eliminating the need for physical presence. The integration of Artificial Intelligence (AI) into ODR platforms further enhances these benefits by streamlining case management and improving decision-making processes. AI can analyse large volumes of data, predict outcomes, and offer insights that aid in dispute resolution. The widespread adoption of ODR platforms globally underscores their potential to enhance access to justice, while AI technologies promise to refine and expedite these systems. Through a comprehensive examination, this paper delves into the intricate landscape of land and property disputes, emphasising the significance of technology-driven solutions. The potential applications of AI-ODR in mitigating complexities associated with land disputes offer promising avenues for progress in ensuring accountable land governance, sustainable development, and the protection of human rights. This research aims to contribute to the ongoing discourse on advancing legal empowerment and access to justice, particularly in the area of land and property rights and disputes.
Enhancing access to justice for land and property disputes through online dispute resolution and artificial intelligence. Computer Law & Security Review, vol. 59, Article 106194.
Pub Date: 2025-09-09. DOI: 10.1016/j.clsr.2025.106173
Nils Holzenberger, Winston Maxwell
This article examines two tests from the European General Data Protection Regulation (GDPR): (1) the test for anonymisation (the “anonymisation test”), and (2) the test for applying “appropriate technical and organisational measures” to protect personal data (the “ATOM test”). Both tests depend on vague legal standards and have given rise to legal disputes and differing interpretations among data protection authorities and courts, including in the context of machine learning. Under the anonymisation test, data are sufficiently anonymised when the risk of identification is “insignificant” taking into account “all means reasonably likely to be used” by an attacker. Under the ATOM test, measures to protect personal data must be “appropriate” with regard to the risks of data loss. Here, we use methods from law and economics to transform these two qualitative tests into quantitative approaches that can be visualised on a graph. For the anonymisation test, we chart different attack efforts and identification probabilities, and propose this as a methodology to help stakeholders discuss what attack efforts are “reasonably likely” to be deployed and their likelihood of success. For the ATOM test, we use the Learned Hand formula from law and economics to chart the incremental costs and benefits of privacy protection measures to identify the point where those measures maximise social welfare. The Hand formula permits the negative effects of privacy protection measures, such as the loss of data utility and negative impacts on model fairness, to be taken into account when defining what level of protection is “appropriate”. We apply our proposed framework to several scenarios, applying the anonymisation test to a Large Language Model, and the ATOM test to a database protected with differential privacy.
Title: "A quantitative approach to the GDPR’s anonymisation and “appropriate technical and organisational measures” tests"
Authors: Nils Holzenberger, Winston Maxwell
Computer Law & Security Review, vol. 59, Article 106173. DOI: 10.1016/j.clsr.2025.106173
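The Learned Hand formula invoked in the abstract above holds that a precaution is economically justified when its burden B is less than the probability P of harm times the magnitude of the loss L. The following is a minimal sketch of how that cost-benefit comparison could be made quantitative for privacy protection measures; the candidate measures, costs, and residual breach probabilities are entirely hypothetical and not drawn from the article.

```python
# Hypothetical illustration of the Learned Hand formula (compare burden B
# against expected harm P*L) for choosing a privacy protection level.
# All numbers below are invented for illustration.

L = 1_000_000  # assumed loss if a data breach occurs (e.g. in euros)

# Each candidate measure: (name, burden B of adopting it,
#                          residual probability P of a breach once adopted)
measures = [
    ("none",                       0, 0.050),
    ("access controls",       20_000, 0.020),
    ("encryption at rest",    60_000, 0.005),
    ("differential privacy", 150_000, 0.001),
]

def total_expected_cost(burden, p_breach, loss=L):
    # Total social cost = cost of the measure + expected harm it leaves behind.
    return burden + p_breach * loss

# The welfare-maximising measure minimises total expected cost.
best = min(measures, key=lambda m: total_expected_cost(m[1], m[2]))

for name, b, p in measures:
    print(f"{name:22s} B={b:>7,} P*L={p * L:>8,.0f} "
          f"total={total_expected_cost(b, p):>9,.0f}")
print("welfare-maximising choice:", best[0])
```

Under these invented numbers the marginal cost of moving beyond access controls exceeds the marginal reduction in expected harm, so the optimum stops there; the article's point is that such a chart also lets negative side effects (lost data utility, fairness impacts) be folded into B.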
Pub Date: 2025-09-09. DOI: 10.1016/j.clsr.2025.106195
Yu Liu
Jurisdictional conflicts in SEP litigation have intensified as both SEP holders and implementers increasingly resort to antisuit injunctions (ASIs) and retaliatory anti-antisuit injunctions (AASIs). This article contends that a stricter interpretation of two particular requirements for granting ASIs—the “dispositive” and “vexatious or oppressive” requirements—offers the most viable short-term strategy for de-escalating this global procedural arms race. First, courts should resist the assumption that resolution of a breach of FRAND obligation claim necessarily disposes of foreign SEP infringement actions brought by the SEP holder. Second, the assessment of whether a foreign parallel proceeding is vexatious or oppressive should be grounded in the doctrine of forum non conveniens.
Title: "Before the first shots are fired: A guide to granting antisuit injunctions in SEP litigation"
Author: Yu Liu
Computer Law & Security Review, vol. 59, Article 106195. DOI: 10.1016/j.clsr.2025.106195