Unpacking copyright infringement issues in the GenAI development lifecycle and a peek into the future
Pub Date: 2025-06-23 | DOI: 10.1016/j.clsr.2025.106163 | Computer Law & Security Review, Vol. 58, Article 106163
Cheng L. SAW, Bryan Zhi Yang TAN
Generative AI (“GAI”) refers to deep learning models that ingest input data and “learn” to produce output that mimics such data when duly prompted. This feature, however, has given rise to numerous claims of infringement by the owners of copyright in the training material. Relevantly, three questions have emerged for the law of copyright: (1) whether prima facie acts of infringement are disclosed at each stage of the GAI development lifecycle; (2) whether such acts fall within the scope of the text and data mining (“TDM”) exceptions; and (3) whether (and, if so, how successfully) the fair use exception may be invoked by GAI developers as a defence to infringement claims. This paper critically examines these questions in turn and considers, in particular, their interplay with the so-called “memorisation” phenomenon. It is argued that although infringing acts might occur in the process of downloading in-copyright training material and training the GAI model in question, TDM and fair use exceptions (where available) may yet exonerate developers from copyright liability under the right conditions.
{"title":"Unpacking copyright infringement issues in the GenAI development lifecycle and a peek into the future","authors":"Cheng L. SAW, Bryan Zhi Yang TAN","doi":"10.1016/j.clsr.2025.106163","DOIUrl":"10.1016/j.clsr.2025.106163","url":null,"abstract":"<div><div>Generative AI (“GAI”) refers to deep learning models that ingest input data and “learn” to produce output that mimics such data when duly prompted. This feature, however, has given rise to numerous claims of infringement by the owners of copyright in the training material. Relevantly, three questions have emerged for the law of copyright: (1) whether <em>prima facie</em> acts of infringement are disclosed at each stage of the GAI development lifecycle; (2) whether such acts fall within the scope of the text and data mining (“TDM”) exceptions; and (3) whether (and, if so, how successfully) the fair use exception may be invoked by GAI developers as a defence to infringement claims. This paper critically examines these questions in turn and considers, in particular, their interplay with the so-called “memorisation” phenomenon. It is argued that although infringing acts might occur in the process of downloading in-copyright training material and training the GAI model in question, TDM and fair use exceptions (where available) may yet exonerate developers from copyright liability under the right conditions.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106163"},"PeriodicalIF":3.3,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deepfake detection in generative AI: A legal framework proposal to protect human rights
Pub Date: 2025-06-23 | DOI: 10.1016/j.clsr.2025.106162 | Computer Law & Security Review, Vol. 58, Article 106162
Felipe Romero-Moreno
Deepfakes, exploited for financial fraud, political misinformation, non-consensual imagery, and targeted harassment, represent a rapidly evolving threat to global information integrity, demanding immediate and coordinated intervention. This research undertakes technical and comparative legal analyses of deepfake detection methods. It examines key mitigation strategies—including AI-powered detection, provenance tracking, and watermarking—highlighting the pivotal role of the Coalition for Content Provenance and Authenticity (C2PA) in establishing media authentication standards. The study investigates deepfakes' complex intersections with the admissibility of legal evidence, non-discrimination, data protection, freedom of expression, and copyright, questioning whether existing legal frameworks adequately balance advances in detection technologies with the protection of individual rights. As national strategies become increasingly vital amid geopolitical realities and fragmented global governance, the research advocates for a unified international approach grounded in UN Resolution 78/265 on safe, secure, and trustworthy AI. It calls for a collaborative framework that prioritizes interoperable technical standards and harmonized regulations. The paper critiques legal frameworks in the EU, US, UK, and China—jurisdictions selected for their global digital influence and divergent regulatory philosophies—and recommends developing robust, accessible, adaptable, and internationally interoperable tools to address evidentiary reliability, privacy, freedom of expression, copyright, and algorithmic bias. Specifically, it proposes enhanced technical standards; regulatory frameworks that support the adoption of explainable AI (XAI) and C2PA; and strengthened cross-sector collaboration to foster a trustworthy deepfake ecosystem.
{"title":"Deepfake detection in generative AI: A legal framework proposal to protect human rights","authors":"Felipe Romero-Moreno","doi":"10.1016/j.clsr.2025.106162","DOIUrl":"10.1016/j.clsr.2025.106162","url":null,"abstract":"<div><div>Deepfakes, exploited for financial fraud, political misinformation, non-consensual imagery, and targeted harassment, represent a rapidly evolving threat to global information integrity, demanding immediate and coordinated intervention. This research undertakes technical and comparative legal analyses of deepfake detection methods. It examines key mitigation strategies—including AI-powered detection, provenance tracking, and watermarking—highlighting the pivotal role of the Coalition for Content Provenance and Authenticity (C2PA) in establishing media authentication standards. The study investigates deepfakes' complex intersections with the admissibility of legal evidence, non-discrimination, data protection, freedom of expression, and copyright, questioning whether existing legal frameworks adequately balance advances in detection technologies with the protection of individual rights. As national strategies become increasingly vital amid geopolitical realities and fragmented global governance, the research advocates for a unified international approach grounded in UN Resolution 78/265 on safe, secure, and trustworthy AI. It calls for a collaborative framework that prioritizes interoperable technical standards and harmonized regulations. The paper critiques legal frameworks in the EU, US, UK, and China—jurisdictions selected for their global digital influence and divergent regulatory philosophies—and recommends developing robust, accessible, adaptable, and internationally interoperable tools to address evidentiary reliability, privacy, freedom of expression, copyright, and algorithmic bias. Specifically, it proposes enhanced technical standards; regulatory frameworks that support the adoption of explainable AI (XAI) and C2PA; and strengthened cross-sector collaboration to foster a trustworthy deepfake ecosystem.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106162"},"PeriodicalIF":3.3,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personal data propertisation in China: A difficult road under the 20 Key Measures on Data
Pub Date: 2025-06-20 | DOI: 10.1016/j.clsr.2025.106153 | Computer Law & Security Review, Vol. 58, Article 106153
Qifan Yang
China's Opinions on Building Basic Systems for Data to Better Exploit the Value of Data Factors (the 20 Key Measures on Data) have significantly influenced the discourse around propertising personal data, leading to an approach to personal data protection that is distinct from those of the EU and the US. The ownership-usufruct system and the conditional personal data property system are put forward as two representative property systems in China. In the ownership-usufruct system, ownership of personal data belongs to the original data subject, and data processors (the data controllers in the GDPR) obtain their usufructuary right through “obtaining consent + consideration”. In the conditional personal data property system, data processors originally acquire the data property right on the basis of legitimate data collection behaviour. The data property right is limited by pre-existing rights, the proportionality principle, and the fair use principle. Rather than idealising the propertisation of personal data, this paper offers a nuanced critique of its limitations, including conceptual ambiguities, the failure of the consent mechanism, and unbalanced digital market structures. These challenges reveal that the propertisation of personal data is a socio-technical issue that requires both legal frameworks and technical infrastructures.
{"title":"Personal data propertisation in China: A difficult road under the 20 Key Measures on Data","authors":"Qifan Yang","doi":"10.1016/j.clsr.2025.106153","DOIUrl":"10.1016/j.clsr.2025.106153","url":null,"abstract":"<div><div>The Opinions on Building Basic Systems for Data to Better Exploit the Value of Data Factors (the 20 Key Measures on Data) in China has significantly influenced the discourse around propertising personal data, leading to a distinct approach to personal data protection from the EU and the US. The ownership-usufruct system and conditional personal data property system are raised as two representative property systems in China. In the ownership-usufruct system, the ownership of personal data belongs to the original subject, and the data processors (the data controllers in the GDPR) obtain their usufructuary right through “obtaining consent + consideration”. In the conditional personal data property system, the data processors originally acquired the data property right based on legitimate data collection behaviour. The data property right is limited by pre-existing rights, the proportionality principle, and the fair use principle. Rather than idealising the propertisation of personal data, this paper offers a nuanced critique of its limitations, including conceptual ambiguities, the failure of the consent mechanism, and unbalanced digital market structures. These challenges reveal that the propertisation of personal data is a socio-technical issue that requires legal frameworks and technical infrastructures.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106153"},"PeriodicalIF":3.3,"publicationDate":"2025-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144322509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint and several liability between Europol and a Member State for damages from unlawful disclosure of personal data (comment on European Court of Justice, 5 March 2024, C-755/21 P)
Pub Date: 2025-06-07 | DOI: 10.1016/j.clsr.2025.106161 | Computer Law & Security Review, Vol. 58, Article 106161
Andrea Parziale
This case note examines a judgment by the Court of Justice on Europol's civil liability for unlawful disclosure of personal data during cross-border cooperation with Member State authorities. The Court overturned the General Court's decision, establishing that joint and several liability between Europol and Member States can arise under Article 50 of Regulation 2016/794 (Europol Regulation), informed by Recital 57. While this ruling facilitates compensation for injured parties when the exact source of data disclosure cannot be identified, the Court awarded only €2,000 in damages to the appellant, a modest sum that may undermine Article 50's effectiveness as a data protection mechanism. The case note analyzes both the joint liability determination and the damages quantification, arguing that while the recognition of joint liability strengthens data subject protection in principle, the symbolic damages awarded significantly limit its practical impact as an accountability tool for ensuring responsible data handling in cross-border criminal investigations.
{"title":"Joint and several liability between Europol and a Member State for damages from unlawful disclosure of personal data (comment on European Court of Justice, 5 March 2024, C‑755/21 P)","authors":"Andrea Parziale","doi":"10.1016/j.clsr.2025.106161","DOIUrl":"10.1016/j.clsr.2025.106161","url":null,"abstract":"<div><div>This case note examines a judgment by the Court of Justice on Europol's civil liability for unlawful disclosure of personal data during cross-border cooperation with Member State authorities. The Court overturned the General Court's decision, establishing that joint and several liability between Europol and Member States can arise under Article 50 of Regulation 2016/794 (Europol Regulation), informed by Recital 57. While this ruling facilitates compensation for injured parties when the exact source of data disclosure cannot be identified, the Court awarded only €2000 in damages to the appellant, a modest sum that may undermine Article 50′s effectiveness as a data protection mechanism. The case note analyzes both the joint liability determination and the damages quantification, arguing that while the recognition of joint liability strengthens data subject protection in principle, the symbolic damages awarded significantly limit its practical impact as an accountability tool for ensuring responsible data handling in cross-border criminal investigations.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106161"},"PeriodicalIF":3.3,"publicationDate":"2025-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144240726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asia–Pacific developments
Pub Date: 2025-06-02 | DOI: 10.1016/j.clsr.2025.106151 | Computer Law & Security Review, Vol. 57, Article 106151
Gabriela Kennedy (Partner) , Joanna Wong (Associate) , Arun Babu (Partner) , Gayathri Poti (Associate) , Avindra Yuliansyah Taher (Partner) , Kiyoko Nakaoka (Attorney-at-Law) , Jillian Chia (Partner) , Beatrice Yew (Senior Associate) , Karen Ngan (Partner) , Lam Chung Nian (Partner) , Huey Lee (Associate) , Quang Minh Vu (Associate)
This column provides a country-by-country analysis of the latest legal developments, cases and issues relevant to the IT, media and telecommunications industries in key jurisdictions across the Asia–Pacific region. The articles appearing in this column are intended to serve as ‘alerts’ and are not submitted as detailed analyses of cases or legal developments.
{"title":"Asia–Pacific developments","authors":"Gabriela Kennedy (Partner) , Joanna Wong (Associate) , Arun Babu (Partner) , Gayathri Poti (Associate) , Avindra Yuliansyah Taher (Partner) , Kiyoko Nakaoka (Attorney-at-Law) , Jillian Chia (Partner) , Beatrice Yew (Senior Associate) , Karen Ngan (Partner) , Lam Chung Nian (Partner) , Huey Lee (Associate) , Quang Minh Vu (Associate)","doi":"10.1016/j.clsr.2025.106151","DOIUrl":"10.1016/j.clsr.2025.106151","url":null,"abstract":"<div><div>This column provides a country by country analysis of the latest legal developments, cases and issues relevant to the IT, media and telecommunications' industries in key jurisdictions across the Asia Pacific region. The articles appearing in this column are intended to serve as ‘alerts’ and are not submitted as detailed analyses of cases or legal developments.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"57 ","pages":"Article 106151"},"PeriodicalIF":3.3,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144189436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
European National News
Pub Date: 2025-06-02 | DOI: 10.1016/j.clsr.2025.106147 | Computer Law & Security Review, Vol. 57, Article 106147
Nick Pantlin
This article tracks developments at the national level in key European countries in the area of IT and communications and provides a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. Part of its purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening "on the ground" at a national level in implementing EU-level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.
{"title":"European national News","authors":"Nick Pantlin","doi":"10.1016/j.clsr.2025.106147","DOIUrl":"10.1016/j.clsr.2025.106147","url":null,"abstract":"<div><div>This article tracks developments at the national level in key European countries in the area of IT and communications and provides a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. This column provides a concise alerting service of important national developments in key European countries. Part of its purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening \"on the ground\" at a national level in implementing EU level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"57 ","pages":"Article 106147"},"PeriodicalIF":3.3,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144189435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet of Forgotten Things: European cybersecurity regulation and the cessation of Internet of Things manufacturers
Pub Date: 2025-05-30 | DOI: 10.1016/j.clsr.2025.106152 | Computer Law & Security Review, Vol. 57, Article 106152
Mattis van ‘t Schip
Many modern consumer devices rely on network connections and cloud services to perform their core functions. This dependency is especially present in Internet of Things (IoT) devices, which combine hardware and software with network connections (e.g., a ‘smart’ doorbell with a camera). This paper argues that current European product legislation, which aims to protect consumers of, inter alia, IoT devices, has a blind spot for an increasing problem in the competitive IoT market: manufacturer cessation. Without the manufacturer's cloud servers, many IoT devices cannot perform core functions such as data analysis. If an IoT manufacturer ceases its operations, consumers of its devices are thus often left with an obsolete device and, as the paper shows, hardly any legal remedies. This paper therefore investigates three properties that could support legislators in finding a solution for IoT manufacturer cessation: i) pre-emptive measures, aimed at ii) manufacturer-independent iii) collective control. The paper finally shows how these three properties already align with current legislative processes surrounding data portability, interoperability and open-source software development and analyses whether these processes can provide an adequate remedy for consumers.
{"title":"The Internet of Forgotten Things: European cybersecurity regulation and the cessation of Internet of Things manufacturers","authors":"Mattis van ‘t Schip","doi":"10.1016/j.clsr.2025.106152","DOIUrl":"10.1016/j.clsr.2025.106152","url":null,"abstract":"<div><div>Many modern consumer devices rely on network connections and cloud services to perform their core functions. This dependency is especially present in Internet of Things (IoT) devices, which combine hardware and software with network connections (e.g., a ‘smart’ doorbell with a camera). This paper argues that current European product legislation, which aims to protect consumers of, inter alia, IoT devices, has a blind spot for an increasing problem in the competitive IoT market: manufacturer cessation. Without the manufacturer’s cloud servers, many IoT devices cannot perform core functions such as data analysis. If an IoT manufacturer ceases their operations, consumers of the manufacturer’s devices are thus often left with an obsolete device and, as the paper shows, hardly any legal remedies. This paper therefore investigates three properties that could support legislators in finding a solution for IoT manufacturer cessation: i) pre-emptive measures, aimed at ii) manufacturer-independent iii) collective control. The paper finally shows how these three properties already align with current legislative processes surrounding data portability, interoperability and open-source software development and analyses whether these processes can provide an adequate remedy for consumers.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"57 ","pages":"Article 106152"},"PeriodicalIF":3.3,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144167157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy in the public: Analysing the EU framework to outline approaches for regulating AI personal data scraping
Pub Date: 2025-05-24 | DOI: 10.1016/j.clsr.2025.106150 | Computer Law & Security Review, Vol. 57, Article 106150
Akshita Rohatgi , Tae Jung Park
AI models developed using scraped personal data pose an inherent risk of en masse shadow profiling of data subjects, harming their privacy, autonomy, and dignity. This paper argues that the protection of public personal data is essential to mitigate AI-scraping risks, noting that the EU is among the few jurisdictions to confer such protection. The GDPR regulates public and non-public personal data similarly but contains exemptions from its notice provisions in the case of legitimate interest-based processing. This exemption contributes to the information asymmetry between the stakeholders who enforce anti-scraping covenants (i.e., data subjects and platforms) and scrapers. Limited supervisory powers and the lack of other mechanisms to address the problems of enforcing privacy laws over public data contribute to the GDPR's ineffectiveness in controlling AI harms. The AI Act strives to plug these GDPR loopholes via reporting obligations requiring general-purpose AI providers to disclose the sources of their training data. Other jurisdictions could consider the principles and mechanisms of the EU regime as a guide to regulating public data scraping.
{"title":"Privacy in the public: Analysing the EU framework to outline approaches for regulating AI personal data scraping","authors":"Akshita Rohatgi , Tae Jung Park","doi":"10.1016/j.clsr.2025.106150","DOIUrl":"10.1016/j.clsr.2025.106150","url":null,"abstract":"<div><div>AI models developed using scraped personal data pose an inherent risk of <em>en-masse</em> shadow profiling to the subjects, harming their privacy, autonomy, and dignity. This paper argues that the protection of public personal data is essential to mitigate AI-scraping risks, noting that the EU is among the few to confer such protection. The GDPR regulates both public and non-public personal data similarly but contains exemptions from notice provisions in the case of legitimate interest-based processing. This exemption contributes to the information asymmetry between stakeholders who enforce anti-scraping covenants i.e., data subjects and platforms, versus scrapers. Limited supervisory powers and the lack of other mechanisms to address the problems of enforcing privacy laws in public data contribute to the GDPR’s inefficiency in controlling AI harms. The AI Act strives to plug in GDPR loopholes via reporting obligations on general-purpose AI providers to disclose the sources of their training data. Other jurisdictions could consider the principles and mechanisms of the EU regime as a guide to regulate public data scraping.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"57 ","pages":"Article 106150"},"PeriodicalIF":3.3,"publicationDate":"2025-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144123562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI-driven alternative and online dispute resolution in the European Union: An analysis of the legal framework and a proposed categorization
Pub Date: 2025-05-23 | DOI: 10.1016/j.clsr.2025.106145 | Computer Law & Security Review, Vol. 57, Article 106145
Aura Esther Vilalta Nicuesa, Marian Gili Saldaña
This paper focuses on the impact of the new EU AI Act on alternative and online dispute resolution. After briefly analysing the state of the art regarding international regulations on artificial intelligence (AI) and the strategy followed in the European Union (EU) in the field of dispute resolution, the research provides a critical discursive overview of the existing international legal guidelines and frameworks for the use of AI in dispute resolution, aiming to identify the different levels of risk addressed by the EU AI Act in this context. The paper also offers forward-looking reflections intended to contribute to the improvement of the current legal framework on AI applied to dispute resolution in the EU. To this end, it identifies various AI tools applicable to the justice sector, highlighting their main advantages and limitations. It then outlines the most relevant hard law and soft law instruments at both international and European levels, with a particular focus on the strategy implemented by the EU leading to the adoption of the current EU AI Act. The study also reviews initiatives carried out by organisations to promote the ethical use of AI in judicial systems and examines the legislative approach adopted by the EU to regulate AI in the field of justice. Finally, the paper proposes a new categorisation of AI-assisted alternative and online dispute resolution mechanisms based on their degree of risk and autonomy.
{"title":"AI-driven alternative and online dispute resolution in the European Union: An analysis of the legal framework and a proposed categorization","authors":"Aura Esther Vilalta Nicuesa, Marian Gili Saldaña","doi":"10.1016/j.clsr.2025.106145","DOIUrl":"10.1016/j.clsr.2025.106145","url":null,"abstract":"<div><div>This paper focuses on the impact of the new EU AI Act in alternative and online dispute resolution. After briefly analysing the state of the art regarding international regulations on artificial intelligence (AI) and the strategy followed in the European Union (EU) in the field of dispute resolution, the research provides a critical discursive overview of the international existing legal guidelines and frameworks for the use of AI in dispute resolution, aiming to identify the different levels of risk addressed by the EU IA Act in this context. The paper also offers forward-looking reflections intended to contribute to the improvement of the current legal framework on AI applied to dispute resolution in the EU. To this end, it identifies various AI tools applicable to the justice sector, highlighting their main advantages and limitations. It then outlines the most relevant hard law and soft law instruments at both international and European levels, with a particular focus on the strategy implemented by the EU leading to the adoption of the current EU AI Act. The study also reviews initiatives carried out by organisations to promote the ethical use of AI in judicial systems and examines the legislative approach adopted by the EU to regulate AI in the field of justice. Finally, the paper proposes a new categorisation of AI-assisted alternative and online dispute resolution mechanisms based on their degree of risk and autonomy.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"57 ","pages":"Article 106145"},"PeriodicalIF":3.3,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144115771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The decentralisation defence
Pub Date: 2025-05-22 | DOI: 10.1016/j.clsr.2025.106148 | Computer Law & Security Review, Vol. 57, Article 106148
Ilya Kokorin
This article explores the phenomenon of the decentralisation defence, which refers to instances where ‘decentralisation’ is invoked either as a shield against liability or as insulation from the reach of the law. This defence is rooted in the technological features of distributed ledger technology and smart contracts built on the blockchain settlement layer, including pseudonymity, programmability, immutability and decentralisation. Together, these features enable transactions while reducing reliance on centralised intermediaries. Although major decentralised finance (DeFi) applications, such as decentralised crypto exchanges, are not harmful per se, their misuse by bad actors creates risks for market participants. The recent cases of Uniswap Labs and Tornado Cash illustrate that the decentralisation defence can result in unaddressed harms and produce other negative externalities. These outcomes have prompted efforts to identify regulatory hooks along the centralisation vectors. The search for a responsible party in blockchain-enabled decentralised arrangements resembles processes observed with two other key technological advancements in the digital space – the internet and artificial intelligence. Drawing inspiration from the modern EU regulation of these transformative technologies, this article focuses on the role of user interfaces as DeFi gatekeepers, and software developers engaged in the creation of smart contract code and blockchain protocols.
{"title":"The decentralisation defence","authors":"Ilya Kokorin","doi":"10.1016/j.clsr.2025.106148","DOIUrl":"10.1016/j.clsr.2025.106148","url":null,"abstract":"<div><div>This article explores the phenomenon of the decentralisation defence, which refers to instances where ‘decentralisation’ is invoked either as a shield against liability or as insulation from the reach of the law. This defence is rooted in the technological features of distributed ledger technology and smart contracts built on the blockchain settlement layer, including pseudonymity, programmability, immutability and decentralisation. Together, these features enable transactions while reducing reliance on centralised intermediaries. Although major decentralised finance (DeFi) applications, such as decentralised crypto exchanges, are not harmful per se, their misuse by bad actors creates risks for market participants. The recent cases of <em>Uniswap Labs</em> and <em>Tornado Cash</em> illustrate that the decentralisation defence can result in unaddressed harms and produce other negative externalities. These outcomes have prompted efforts to identify regulatory hooks along the centralisation vectors. The search for a responsible party in blockchain-enabled decentralised arrangements resembles processes observed with two other key technological advancements in the digital space – the internet and artificial intelligence. Drawing inspiration from the modern EU regulation of these transformative technologies, this article focuses on the role of user interfaces as DeFi gatekeepers, and software developers engaged in the creation of smart contract code and blockchain protocols.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"57 ","pages":"Article 106148"},"PeriodicalIF":3.3,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144105419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}