Balancing privacy and platform power in the mobile ecosystem: The case of Apple’s App Tracking Transparency
Pub Date: 2025-12-24 | DOI: 10.1016/j.clsr.2025.106255
Julia Krämer
In 2021, Apple shook up the AdTech industry by introducing the iOS 14.5 update, which not only changed the default access to an app's advertising identifier but also restructured the process of user consent within mobile apps through the App Tracking Transparency (ATT) framework. Given that Apple controls one of the main mobile operating systems (iOS) and one of the major mobile app stores (the Apple App Store) in the European Union (EU), the question arises as to what extent such a powerful private party is able to govern privacy standards at this scale. While the introduction of the ATT has already raised competition concerns, its impact on privacy and data protection within the EU legal order remains largely unexplored. Therefore, this article investigates how the ATT affects EU privacy and data protection compliance and explores the extent to which the General Data Protection Regulation (GDPR) restricts the privacy-regulator role of app stores and mobile operating systems. While the ATT mitigates certain privacy risks by limiting disclosures to third parties, Apple is redefining core privacy concepts such as tracking. This may lead to the emergence of “walled gardens”, closed ecosystems managed and curated by their owners, and may alter the structure of the mobile ecosystem as a whole. The paper contributes to the overall discussion about the impact of private sector-led initiatives and powerful private actors in setting privacy standards.
Exploring gender equality in the metaverse
Pub Date: 2025-12-24 | DOI: 10.1016/j.clsr.2025.106254
Christina Pasvanti Gkioka, Eduard Fosch-Villaronga
Gender-based discrimination in the Metaverse often takes the form of harassment or unwanted sexual behavior directed at avatars. Such harm is frequently underestimated because people assume a clear divide between users and their digital selves, overlooking how strongly individuals identify with their avatars. Mediated embodiment theory shows, nonetheless, that users experience their avatars as extensions of themselves, making virtual discrimination a real-world concern affecting dignity, mental health, and well-being. As digital spaces replicate and sometimes amplify existing gender inequalities, this study examines the extent to which gender equality is safeguarded in the Metaverse. It focuses on both legal and platform-based safeguards, assessing how the European Union’s Digital Services Act (DSA) can address gender-based risks in virtual environments. The analysis clarifies how the DSA’s obligations for hosting services and online platforms may apply to Metaverse providers, while acknowledging that most do not yet meet the threshold for designation as Very Large Online Platforms (VLOPs). The DSA provides a valuable starting point for promoting accountability and transparency but leaves important gaps in enforcement and coverage. At the platform level, policies, moderation tools, and safety features vary widely, underscoring the need for context-specific governance measures and legal recognition of avatar-mediated harm. Strengthening these safeguards is essential to ensure that the Metaverse evolves into a safer and more inclusive space, free from gender-based discrimination.
Approaching the AI Act... with AI: LLMs and knowledge graphs to extract and analyse obligations
Pub Date: 2025-12-16 | DOI: 10.1016/j.clsr.2025.106230
Federico Galli, Thiago Raulino Dal Pont, Galileo Sartor, Giuseppe Contissa
The EU Artificial Intelligence Act (AIA) exemplifies the growing complexity of digital regulation in the domain of computer technologies. Characterised by abstract terminology, multi-layered provisions, and intersecting regulatory requirements, the AIA poses significant challenges for the identification and interpretation of legal obligations, making compliance a demanding and potentially error-prone endeavour for legal professionals and organisations alike.
Recent advances in Artificial Intelligence (AI), particularly in the fields of Natural Language Processing (NLP) and Large Language Models (LLMs), offer promising support for addressing these challenges. By automating the extraction and structuring of legal rules, AI-based tools have the potential to assist regulatory compliance activities and provide more systematic insights into complex legislative frameworks.
This paper presents an experiment combining NLP techniques and LLMs to automate the extraction and structuring of legal obligations from the AIA.
The approach is based on a modular workflow comprising four main stages: identification of obligations, filtering of deontic statements, analysis of deontic content, and the construction of searchable knowledge graphs. The experiment employed the LLaMA 3.3 70B model, supported by more traditional NLP tools.
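The pipeline is described here only at this high level; as a rough illustration of how the four stages might be wired together, the following Python sketch uses a regular-expression filter for candidate deontic sentences, a trivial heuristic standing in for the LLaMA 3.3 70B analysis step, and networkx for the searchable knowledge graph. All function names, the modal-verb filter, and the toy heuristics are assumptions made for illustration, not the authors’ actual implementation.

```python
import re
import networkx as nx

# Coarse deontic filter: alternation order matters so that "shall not" is not cut to "shall".
MODAL = re.compile(r"\b(shall not|must not|shall|must)\b", re.IGNORECASE)

def identify_obligations(articles: dict) -> list:
    """Stage 1: split each article into candidate sentences."""
    candidates = []
    for art_id, text in articles.items():
        for sentence in re.split(r"(?<=[.;])\s+", text):
            if sentence.strip():
                candidates.append((art_id, sentence.strip()))
    return candidates

def filter_deontic(candidates: list) -> list:
    """Stage 2: keep only sentences containing deontic modals."""
    return [(a, s) for a, s in candidates if MODAL.search(s)]

def analyse_deontic(art_id: str, sentence: str) -> dict:
    """Stage 3: placeholder for the LLM call that would extract the obligation
    type, addressee and predicate; a toy heuristic stands in here."""
    modal = MODAL.search(sentence).group(0).lower()
    return {
        "article": art_id,
        "type": "prohibition" if "not" in modal else "prescription",
        "addressee": "provider" if "provider" in sentence.lower() else "unspecified",
        "predicate": sentence,
    }

def build_graph(records: list) -> nx.DiGraph:
    """Stage 4: searchable knowledge graph linking articles, obligations and addressees."""
    g = nx.DiGraph()
    for i, r in enumerate(records):
        ob = f"obligation_{i}"
        g.add_node(ob, predicate=r["predicate"], type=r["type"])
        g.add_edge(r["article"], ob, relation="contains")
        g.add_edge(ob, r["addressee"], relation="addressed_to")
    return g

if __name__ == "__main__":
    sample = {"Art. 16": "Providers of high-risk AI systems shall ensure compliance with this Section."}
    records = [analyse_deontic(a, s) for a, s in filter_deontic(identify_obligations(sample))]
    print(build_graph(records).nodes(data=True))
```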
Five experts (four Ph.D. students and one post-doc in legal informatics and philosophy) evaluated the system’s performance on a subset of cases. The results indicate a precision of 93% in the obligation-filtering phase and over 99% accuracy in classifying obligation types, addressees, and predicates. A quantitative analysis of the extracted and analysed obligations revealed a predominance of prescriptive obligations (603 out of 729 in total), among which 136 are imposed on the European Commission, while 88 consist of informative duties. These results are in line with current discussions around the AI Act’s regulatory approach.
These findings underscore the potential of LLM-based tools to enhance regulatory compliance and analysis. Future research will focus on extending the system to additional EU regulations and integrating formal ontologies to enable more advanced representations of legal obligations.
Assessing data protection impact assessments: Lessons from COVID-19 contact tracing apps
Pub Date: 2025-12-12 | DOI: 10.1016/j.clsr.2025.106233
Michael Spratt, TJ McIntyre
The Data Protection Impact Assessment (DPIA) is an innovation adopted in the 2016 General Data Protection Regulation (GDPR) as a core part of its move towards ex ante regulation of data processing. However, there is little empirical work examining how data controllers carry out DPIAs in practice. In this article we address that gap by providing the first systematic analysis of multiple DPIAs on the same topic: those adopted across European states for COVID-19 contact tracing apps using the Google/Apple Exposure Notification (GAEN) system. We identify significant discrepancies between these DPIAs (particularly in relation to risk identification and mitigation) even though they address identical fact patterns. We discuss the factors leading to these inconsistencies and make recommendations to promote uniformity, transparency, and feedback in the DPIA process.
Legal response to facial recognition technologies in China: still seeking the balance
Pub Date: 2025-12-08 | DOI: 10.1016/j.clsr.2025.106250
Yang Feng, Yuanyuan Cheng, Xingyu Yan
China leads globally in the large-scale deployment of facial recognition technologies (FRTs). As the country’s data protection legislation intensifies, the wide use of FRTs is raising increasing concerns about their legitimacy. To examine the legal response to FRTs in China, we analyse the legislative framework through a normative lens, evaluate the relevant administrative enforcement decisions with a mixed-method approach combining quantitative descriptive statistics and a qualitative case study, and examine the judicial stance on FRT regulation through a case study. We find that, despite some plausible legislative developments, the current legal framework provides inadequate protection of facial information, with an ineffective separate-consent rule, a conspicuous lack of control over FRT use in the public sector, and weak enforcement of existing facial information protection laws. Additionally, the courts appear reluctant to address the abuse of FRTs, likely out of concern about hindering the development of the FRT industry. We recommend a comprehensive approach to facial information protection, encompassing complementary legislative, administrative, and judicial measures.
Escaping the simplification trap: A playbook for the EU’s digital rulebook
Pub Date: 2025-12-04 | DOI: 10.1016/j.clsr.2025.106245
Kai Zenner
The Commission’s simplification agenda makes sense if it focuses on the effects of rules rather than diluting what they are trying to protect. With a series of limited legislative and operational adjustments, the EU could lift competitiveness without lowering standards. By cutting procedural complexity across the digital rulebook, EU companies would face less red tape, while EU institutions could pursue their policy goals more efficiently. However, the catch is execution: Brussels’ crisis-driven, highly politicised processes make it hard to assemble stable coalitions and to produce the high-quality outcomes such an endeavour requires.
Volunteering for the platforms – How social media terms of service may violate the fair remuneration principle of authors and performers
Pub Date: 2025-12-03 | DOI: 10.1016/j.clsr.2025.106246
Ludovico Bossi
Major social media terms of service (i.e., YouTube, TikTok, Facebook, Instagram, LinkedIn, X) impose on users a royalty-free license covering uploaded “content” protected by intellectual property rights (“IPRs”). Consequently, while social media service providers’ revenues are significant, users who are also authors and performers do not directly receive any remuneration in most cases. Most recently, the benefits of training artificial intelligence (“AI”) tools on what is published on social media have further intensified this imbalance.
This bargain has not gone completely unnoticed. However, legal scholarship has often questioned the workability of any legislative or judicial intervention aimed at restoring balance. This article argues that online social media service providers have an obligation under EU law to share with authors and performers the revenues derived from the exploitation of works and performances published on their platforms.
For this purpose, this work discusses the compatibility of free licenses with the fair remuneration principle of authors and performers. It interprets the so-called “Linux clause” of Recital 82 of Directive (EU) 2019/790 (“CDSMD”) and proposes a distinction between “free licenses for the benefit of any users” (“open licenses”) and those for the benefit of specific licensees (“gratuitous licenses”). Abuses by the general public cannot occur in the case of open licenses. By contrast, specific licensees in a stronger position could unfairly impose gratuitous licenses on authors and performers. This inquiry runs in parallel with recent litigation in Belgium on the matter (the “Streamz” case).
Mind the gap: Securing algorithmic explainability for credit decisions beyond the UK GDPR
Pub Date: 2025-12-03 | DOI: 10.1016/j.clsr.2025.106247
Holli Sargeant
The recent amendments to the United Kingdom’s GDPR under the Data (Use and Access) Act 2025 mark a significant divergence from the European Union’s approach to automated decision-making, substantively weakening the ‘right to explanation’ for automated decisions. This paper provides a critical legal analysis of the new regime, arguing that it dismantles crucial protections for individuals. The principal finding is that the legislation creates significant legal lacunae by introducing an ambiguous ‘no meaningful human involvement’ standard and restricting key safeguards to decisions involving ‘special category data’. These changes allow firms to shield opaque models from scrutiny, increasing the risk of algorithmic discrimination, particularly in high-stakes sectors such as consumer credit.
Drawing on a comparative review of the United States’ technology-neutral adverse action notice requirement, the paper concludes that data protection law is no longer a sufficient safeguard against algorithmic harm in the United Kingdom. It proposes the establishment of a new right to an explanation for any adverse credit decision. This right should be anchored not in data protection law, but in consumer protection law, and be enforced by a specialist regulator, the Financial Conduct Authority. Such a framework would close the new accountability gaps and create market incentives for developing transparent, explainable-by-design systems, better aligning technological innovation with consumer protection.
False choices: Competitiveness, deregulation, and the erosion of GDPR’s regulatory integrity
Pub Date: 2025-12-03 | DOI: 10.1016/j.clsr.2025.106237
Itxaso Domínguez de Olazábal
In the name of competitiveness, the European Union (EU) is witnessing a political push to dilute its cornerstone digital regulation: the General Data Protection Regulation (GDPR). This opinion piece critically examines the emerging narrative that regulatory effectiveness must be traded off against innovation, agility, and economic growth. It challenges the assumption that the GDPR poses an inherent barrier to a ‘competitive digital EU’ and scrutinises how ongoing deregulation efforts - framed as simplification - undermine both the normative foundations and the enforceability of the Regulation. Drawing on recent legislative initiatives (including the so-called IVth Omnibus Proposal, with a brief reference to the so-called ‘Digital Omnibus’), the article argues that competitiveness has become a rhetorical device for shifting the regulatory Overton window. It contends that the real barriers to effectiveness lie not in the GDPR’s design but in uneven enforcement, institutional under-resourcing, and the failure to challenge extractive business models. Rather than weakening existing rules, the EU’s digital competitiveness would be better served by safeguarding the GDPR’s rights-based approach and by treating regulatory integrity and fundamental rights as the preconditions for a sustainable and just digital future.
The regulation of fine-tuning: Federated compliance for modified general-purpose AI models
Pub Date: 2025-12-02 | DOI: 10.1016/j.clsr.2025.106234
Philipp Hacker, Matthias Holweg
This paper addresses the regulatory and liability implications of modifying general-purpose AI (GPAI) models under the EU AI Act and related legal frameworks. We make five principal contributions to this debate. First, the analysis maps the spectrum of technical modifications to GPAI models and proposes a detailed taxonomy of these interventions and their associated compliance burdens. Second, the discussion clarifies when exactly a modifying entity qualifies as a GPAI provider under the AI Act, which significantly alters the compliance mandate. Third, we develop a novel, hybrid legal test to distinguish substantial from insubstantial modifications, combining a compute-based threshold with consequence scanning to assess the introduction or amplification of risk. Fourth, the paper examines liability under the revised Product Liability Directive (PLD) and tort law, arguing that entities substantially modifying GPAI models become “manufacturers” under the PLD and may face liability for defects. The paper aligns the concept of “substantial modification” across both regimes for legal coherence and argues for a one-to-one mapping between “new provider” (AI Act) and “new manufacturer” (PLD). Fifth, the recommendations offer concrete governance strategies for policymakers and managers, proposing a federated compliance structure based on joint testing of base and modified models, implementation of Failure Mode and Effects Analysis and consequence scanning, a new database for GPAI models and modifications, robust documentation, and adherence to voluntary codes of practice. The framework also proposes simplified compliance options for SMEs while maintaining their liability obligations. Overall, the paper aims to map out a proportionate and risk-sensitive regulatory framework for modified GPAI models that integrates technical, legal, and wider societal considerations.
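The hybrid test is described only in outline here; the following Python sketch merely illustrates how its two prongs could be composed. The threshold fraction, the field names, and the choice to combine the prongs with a logical OR are assumptions made for illustration, not the paper’s actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Modification:
    base_training_flops: float     # compute used to train the base GPAI model
    modification_flops: float      # compute spent on the fine-tuning / modification
    introduces_new_risk: bool      # consequence scanning: a new risk is introduced
    amplifies_existing_risk: bool  # consequence scanning: a known risk is amplified

# Hypothetical figure: the fraction of base-model training compute above which a
# modification counts as compute-significant. The paper's actual number is not given here.
COMPUTE_FRACTION_THRESHOLD = 1 / 3

def is_substantial(mod: Modification) -> bool:
    """Sketch of the hybrid test: a modification is treated as 'substantial'
    (making the modifier a new provider under the AI Act and a new manufacturer
    under the PLD) if it is compute-significant or risk-relevant."""
    compute_significant = (
        mod.modification_flops >= COMPUTE_FRACTION_THRESHOLD * mod.base_training_flops
    )
    risk_relevant = mod.introduces_new_risk or mod.amplifies_existing_risk
    return compute_significant or risk_relevant

# Example: a small fine-tune that nonetheless amplifies a known risk would still
# trigger the test under this illustrative OR-combination.
print(is_substantial(Modification(1e25, 1e22, False, True)))  # True
```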