Pub Date: 2025-09-16 | DOI: 10.1016/j.clsr.2025.106204
Pratham Ajmera
The European cybersecurity regulation framework, not unlike European regulatory initiatives in general, has often been criticized as fragmented and divided among industry sectors. However, the past few years have seen legislative initiatives aimed at harmonizing cybersecurity across the EU, the most recent being the newly adopted Cyber-Resilience Act. The Act attempts to harmonize cybersecurity from the product side, establishing minimum requirements that must be met before digital products are placed on the Union market. It marks the initial foray of the EU's framework for product regulation (i.e., the New Legislative Framework or NLF) into the realm of cybersecurity regulation. Consistent with the NLF, the Cyber-Resilience Act provides high-level cybersecurity requirements for all digital products, with conformity demonstrable through multiple avenues, including international/industrial standards adopted by European Standardization Organizations. However, unlike conventional product regulation, the Cyber-Resilience Act attempts to fulfil its objectives as part of an overarching framework of multiple harmonization instruments geared towards enhancing cybersecurity in the European Union. This article examines the Cyber-Resilience Act and its interplay with other harmonizing legislation in the EU cybersecurity regulatory regime, and raises critical challenges and questions arising from the trends identified in that interplay.
Title: "Cybersecurity in the Internet of Things: Trends and challenges in a nascent field". Computer Law & Security Review, Vol. 59, Article 106204.
Generative AI has only gained public prominence in the past two years, yet instances of AI-generated CSAM videos have already been observed. It is foreseeable that in the next five years these videos and images will become more realistic and widespread. In the United States, the FBI is already handling its first cases involving AI-generated CSAM. This paper employs a comprehensive legal analysis of existing EU laws, including the AI Act, the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), the proposed Child Sexual Abuse Regulation (CSAR), and the Child Sexual Abuse Directive, to address the critical question of whether generative AI can be effectively policed to prevent the creation of deepfakes involving children. While EU legislation is promising, it remains limited, in particular regarding the regulation of training data used by generative AI technologies. To comprehensively address AI-generated CSAM, proactive, effective regulation and a holistic approach are required, ensuring that the protection of children against online CSAM is integrated into the guidelines, codes of conduct, and technical standards that bring these legal instruments to life.
Title: "The legal framework and legal gaps for AI-generated child sexual abuse material", by Desara Dushi, Nertil Berdufi and Anastasia Karagianni. Computer Law & Security Review, Vol. 59, Article 106205. Pub Date: 2025-09-12 | DOI: 10.1016/j.clsr.2025.106205
Pub Date: 2025-09-10 | DOI: 10.1016/j.clsr.2025.106194
Fahimeh Abedi, Abbas Rajabifard, Davood Shojaei
Land, as a fundamental resource, holds immense importance in meeting human needs and driving economic prosperity, but it often becomes a focal point for disputes. Resolving these disputes poses challenges stemming from inadequate laws, complexities in land administration systems, and limited judicial capacity. Recognising the importance of strong legal rights and efficient dispute resolution in fostering economic development, this paper explores the role of technology, specifically Online Dispute Resolution (ODR), in addressing land and property disputes and protecting land rights. ODR systems have revolutionised traditional approaches to conflict resolution, offering a novel and accessible method for resolving disputes, reducing costs, and eliminating the need for physical presence. The integration of Artificial Intelligence (AI) into ODR platforms further enhances these benefits by streamlining case management and improving decision-making processes. AI can analyse large volumes of data, predict outcomes, and offer insights that aid in dispute resolution. The widespread adoption of ODR platforms globally underscores their potential to enhance access to justice, while AI technologies promise to refine and expedite these systems. Through a comprehensive examination, this paper delves into the intricate landscape of land and property disputes, emphasising the significance of technology-driven solutions. The potential applications of AI-ODR in mitigating the complexities associated with land disputes offer promising avenues for progress in ensuring accountable land governance, sustainable development, and the protection of human rights. This research aims to contribute to the ongoing discourse on advancing legal empowerment and access to justice, particularly in the area of land and property rights and disputes.
Title: "Enhancing access to justice for land and property disputes through online dispute resolution and artificial intelligence". Computer Law & Security Review, Vol. 59, Article 106194.
Pub Date: 2025-09-09 | DOI: 10.1016/j.clsr.2025.106173
Nils Holzenberger, Winston Maxwell
This article examines two tests from the European General Data Protection Regulation (GDPR): (1) the test for anonymisation (the “anonymisation test”), and (2) the test for applying “appropriate technical and organisational measures” to protect personal data (the “ATOM test”). Both tests depend on vague legal standards and have given rise to legal disputes and differing interpretations among data protection authorities and courts, including in the context of machine learning. Under the anonymisation test, data are sufficiently anonymised when the risk of identification is “insignificant” taking into account “all means reasonably likely to be used” by an attacker. Under the ATOM test, measures to protect personal data must be “appropriate” with regard to the risks of data loss. Here, we use methods from law and economics to transform these two qualitative tests into quantitative approaches that can be visualized on a graph. For the anonymisation test, we chart different attack efforts and identification probabilities, and propose this as a methodology to help stakeholders discuss what attack efforts are “reasonably likely” to be deployed and their likelihood of success. For the ATOM test, we use the Learned Hand formula from law and economics to chart the incremental costs and benefits of privacy protection measures to identify the point where those measures maximize social welfare. The Hand formula permits the negative effects of privacy protection measures, such as the loss of data utility and negative impacts on model fairness, to be taken into account when defining what level of protection is “appropriate”. We apply our proposed framework to several scenarios, applying the anonymisation test to a Large Language Model, and the ATOM test to a database protected with differential privacy.
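The two tests described in this abstract lend themselves to a minimal worked sketch. The code below is purely illustrative: the threshold, probabilities, and cost figures are hypothetical assumptions chosen for demonstration, not values taken from the article.

```python
# Illustrative sketch of the two GDPR tests discussed above. All numbers and
# the "insignificance" threshold are hypothetical assumptions, not values
# from the article.

def anonymisation_test(identification_prob: float, threshold: float = 0.01) -> bool:
    """Treat data as anonymised when the re-identification risk, given attack
    efforts 'reasonably likely' to be deployed, falls below an assumed
    insignificance threshold."""
    return identification_prob < threshold

def hand_formula_warranted(burden: float, breach_prob: float, loss: float) -> bool:
    """Learned Hand formula: a protective measure is 'appropriate' when its
    burden B is less than the expected harm it prevents, P * L."""
    return burden < breach_prob * loss

# A measure costing 10,000 that averts a breach with probability 0.02 causing
# 1,000,000 in harm passes the Hand test (10,000 < 20,000).
print(hand_formula_warranted(10_000, 0.02, 1_000_000))  # True
# A residual re-identification probability of 0.001 falls below the assumed
# 1% insignificance threshold.
print(anonymisation_test(0.001))  # True
```

Charting `burden` against `breach_prob * loss` for incrementally stronger measures, as the authors propose, would locate the welfare-maximizing level of protection where the two curves cross.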
Title: "A quantitative approach to the GDPR's anonymisation and “appropriate technical and organisational measures” tests". Computer Law & Security Review, Vol. 59, Article 106173.
Pub Date: 2025-09-09 | DOI: 10.1016/j.clsr.2025.106195
Yu Liu
Jurisdictional conflicts in SEP litigation have intensified as both SEP holders and implementers increasingly resort to antisuit injunctions (ASIs) and retaliatory anti-antisuit injunctions (AASIs). This article contends that a stricter interpretation of two particular requirements for granting ASIs—the “dispositive” and “vexatious or oppressive” requirements—offers the most viable short-term strategy for de-escalating this global procedural arms race. First, courts should resist the assumption that resolution of a breach of FRAND obligation claim necessarily disposes of foreign SEP infringement actions brought by the SEP holder. Second, the assessment of whether a foreign parallel proceeding is vexatious or oppressive should be grounded in the doctrine of forum non conveniens.
Title: "Before the first shots are fired: A guide to granting antisuit injunctions in SEP litigation". Computer Law & Security Review, Vol. 59, Article 106195.
Pub Date: 2025-09-09 | DOI: 10.1016/j.clsr.2025.106186
Patrick Smieskol , Timo Jakobi , Max von Grafenstein
In an increasingly digitized world, personalization has emerged as a key mechanism for matching users with relevant content, advertisements, services, and other products. For personalization to work, users' online behavior is typically tracked to create unique profiles of their individual behavior and interests. This process creates trade-offs between data collection and users' privacy concerns. These conflicts are regulated, amongst other laws, by the General Data Protection Regulation (GDPR) and the ePrivacy Directive. While the ePrivacy Directive requires the data controller to obtain consent from data subjects for the setting of cookies through which data subjects can be tracked across different websites and even devices, the GDPR requires further user control and transparency with respect to the processing of such data, especially the profiling on which the personalization of content is based. However, ample research shows that, to date, users neither understand the effects of tracking technology on their online experience nor feel in control of the profiles created about them. As a consequence, users report helplessness and even fatalism rather than effective control over tracking for personalization, even where controls are provided. Building on the rich research on feedback design, we argue that in order to learn how to effectively control tracking and, consequently, personalization, users need effective feedback mechanisms through which they can learn about the outcomes of their settings and evaluate their performance. Among the key elements for the effectiveness of feedback in general are its situatedness and timeliness. In this paper we therefore address the question of how feedback mechanisms should be designed so that they enable users to make an effective decision for or against tracking and personalization.
To this end, in a first research phase we conducted 20 qualitative interviews to explore users' privacy expectations: what benefits of personalization they value, which risks they see, and, most importantly, what controls they think they should have. The results of this study suggested an immediate feedback mechanism. In a second phase, we therefore prototyped an on/off switch that users could use to enable or disable the personalisation of advertising and other content on a website and compare the results of the two settings. A preliminary evaluation confirms such a feedback mechanism as a promising approach to effective user control in accordance with the data protection by design requirement in Art. 25(1) GDPR. If this mechanism were further developed and evaluated into an effective solution available on the market, it would represent the state of the art, which all data controllers would have to consider in accordance with Art. 25(1) GDPR.
Title: "From consent to control by closing the feedback loop: Enabling data subjects to directly compare personalized and non-personalized content through an On/Off toggle". Computer Law & Security Review, Vol. 59, Article 106186.
Pub Date: 2025-09-08 | DOI: 10.1016/j.clsr.2025.106167
Julien Cabay , Thomas Vandamme , Olivier Debeir
For the past few years, Intellectual Property (IP) Offices have offered their users the ability to carry out searches in public Trade Mark (TM) registries through image-search tools powered by Artificial Intelligence (AI) technologies. Such tools allegedly alleviate the burden of identifying similar figurative trade marks, a crucial yet cumbersome task for TM proprietors, TM applicants, and IP Offices. Amongst others, the European Union Intellectual Property Office (EUIPO) and the Benelux Office for Intellectual Property (BOIP) provide access to such tools, developed in-house and by a private company respectively. Yet the inner workings of those systems are unknown and their performance is difficult to assess, which in turn raises many concerns, especially in light of the legal certainty rationale underlying the registration requirement of TM law. To address those concerns, we designed an experiment to benchmark and audit those tools. Using the case law of the EUIPO and the BOIP on opposition to TM registration, we evaluated the capacity of those tools to identify similarities between signs that may amount to a likelihood of confusion (LoC), the main trigger of TM law. Our findings show that the performance of those tools is poor, and that black-box auditing is highly contingent and possibly elusive for many AI technologies used in the legal field. This suggests that black-box auditing is not suitable for Legal AIs, which should instead be subject to enhanced transparency obligations, possibly pursuant to a broad interpretation of the AI Act.
Title: "Looking through the crack in the black box: A comparative case law benchmark for auditing AI-Powered Trade Mark search engines". Computer Law & Security Review, Vol. 59, Article 106167.
Pub Date: 2025-09-05 | DOI: 10.1016/j.clsr.2025.106192
Chuyi Wei , Jingchen Zhao , Li Sun
China’s advancement in End-to-End Autonomous Driving (E2E AD) presents profound legal and regulatory challenges due to its “black box” nature and data dependency, rendering traditional frameworks inadequate. This paper argues for a tiered liability system, shifting responsibility to manufacturers with increasing vehicle autonomy. Additionally, it proposes an adaptive, multi-tiered, risk-stratified data governance model. Underpinning these proposals, robust transparency and explainability (XAI) are crucial for ensuring accountability and achieving effective regulatory alignment. These proposed frameworks offer critical insights for China and provide a practical and theoretical basis for other nations navigating AI governance in autonomous mobility.
Title: "Achieving regulatory alignment for E2E autonomous driving in China: A framework for tort liability and data governance". Computer Law & Security Review, Vol. 59, Article 106192.
Pub Date : 2025-09-05DOI: 10.1016/j.clsr.2025.106181
Laura Aade
Social media commerce, defined as the direct selling of goods and services through social media, is emerging as a prominent business model in the platform economy. As social media platforms introduce e-commerce features, they are becoming what I call social marketplaces: a new category of online platforms found at the intersection of social networks and online marketplaces. This article examines how the Digital Services Act (DSA) protects consumers in relation to social media commerce, and what specific obligations it imposes on social marketplaces to increase transparency in online transactions. While the DSA does not explicitly address social media commerce, it indirectly applies through Section 4, which imposes obligations on ‘online platforms allowing consumers to conclude distance contracts with traders’. I argue that because social marketplaces fall within this category of online platforms, they are subject to the obligations laid down in Section 4 DSA, namely Article 30 DSA (traceability of traders), Article 31 DSA (compliance by design), and Article 32 DSA (right to information). This article critically analyses the application of these provisions to social marketplaces and examines their interaction with EU consumer laws. Based on the analysis, it identifies three shortcomings in the DSA’s approach to protecting consumers on social marketplaces: (i) regulatory complexity due to overlaps with the EU consumer acquis, (ii) interpretative ambiguity, as the DSA was not designed with social marketplaces in mind, and (iii) an enforcement gap specific to social media commerce. Rather than calling for new legislation, this article concludes that effective consumer protection on social marketplaces requires clarifying the interaction between legal instruments, interpreting existing provisions in light of evolving platform practices, and ensuring coordinated enforcement across relevant actors.
{"title":"The regulation of social media commerce under the DSA: A consumer protection perspective","authors":"Laura Aade","doi":"10.1016/j.clsr.2025.106181","DOIUrl":"10.1016/j.clsr.2025.106181","url":null,"abstract":"<div><div>Social media commerce, defined as the direct selling of goods and services through social media, is emerging as a prominent business model in the platform economy. As social media platforms introduce e-commerce features, they are becoming what I call <em>social marketplaces:</em> a new category of online platforms found at the intersection of social networks and online marketplaces. This article examines how the Digital Services Act (DSA) protects consumers in relation to social media commerce, and what specific obligations it imposes on social marketplaces to increase transparency in online transactions. While the DSA does not explicitly address social media commerce, it indirectly applies through Section 4 which imposes obligations on ‘online platforms allowing consumers to conclude distance contracts with traders'. I argue that because social marketplaces fall within this category of online platforms, they are subject to the obligations laid down in Section 4 DSA, namely Article 30 DSA (traceability of traders), Article 31 DSA (compliance by design), and Article 32 DSA (right to information). This article critically analyses the application of these provisions to social marketplaces and examines their interaction with EU consumer laws. Based on the analysis, it identifies three shortcomings in the DSA’s approach to protecting consumers on social marketplaces: (i) regulatory complexity due to overlaps with the EU consumer <em>acquis</em>, (ii) interpretative ambiguity, as the DSA was not designed with social marketplaces in mind, and (iii) an enforcement gap specific to social media commerce. 
Rather than calling for new legislation, this article concludes that effective consumer protection on social marketplaces requires clarifying the interaction between legal instruments, interpreting existing provisions in light of evolving platform practices, and ensuring coordinated enforcement across relevant actors.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106181"},"PeriodicalIF":3.2,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144997703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-04DOI: 10.1016/j.clsr.2025.106191
Ryan Yang Wang , Sydney Forde , Ahmed Al Rawi , Erika Solis , Krishna Jayakar
This study offers the first investigation of the global diffusion and convergence of domain name dispute resolution policies (NDRPs), analyzing 34 policies adopted by country code top-level domains (ccTLDs) between 1999 and 2023. While prior research has largely focused on ICANN’s Uniform Dispute Resolution Policy (UDRP), this paper offers a novel cross-national comparison of NDRPs to evaluate textual convergence and underlying policy drivers. Combining qualitative content analysis with network-based similarity modeling, the study constructs a matrix representing pairwise textual similarity between policy documents. To account for network dependencies among dyadic observations, we apply Multiple Regression Quadratic Assignment Procedures (MRQAP) and generalized linear mixed models with beta regression. The analysis identifies key predictors of policy similarity, showing that countries with similar levels of government effectiveness and differing export intensities are more likely to share convergent policy texts. This suggests that policy convergence occurs not merely through regional or legal affinity, but through a combination of institutional alignment and economic asymmetry. Despite the decentralized and uncoordinated adoption of NDRPs globally, a substantially unified dispute resolution framework for domain names appears to be emerging.
{"title":"Textual convergence in national domain name dispute resolution regimes: a mixed-methods analysis of ccTLD arbitration policies","authors":"Ryan Yang Wang , Sydney Forde , Ahmed Al Rawi , Erika Solis , Krishna Jayakar","doi":"10.1016/j.clsr.2025.106191","DOIUrl":"10.1016/j.clsr.2025.106191","url":null,"abstract":"<div><div>This study offers the very first investigation of the global diffusion and convergence of domain name dispute resolution policies (NDRPs) by analyzing 34 policies adopted by country code top-level domains (ccTLDs) between 1999 and 2023. While prior research has largely focused on ICANN’s Uniform Dispute Resolution Policy (UDRP), this paper offers a novel cross-national comparison of NDRPs to evaluate textual convergence and underlying policy drivers. Combining qualitative content analysis with network-based similarity modeling, the study constructs a matrix representing pairwise textual similarity between policy documents. To account for network dependencies, we apply Multiple Regression Quadratic Assignment Procedures and generalized linear mixed models with beta regression. The analysis identifies key predictors of policy similarity, showing that countries with similar levels of government effectiveness and differing export intensities are more likely to share convergent policy texts. This suggests that policy convergence occurs not merely through regional or legal affinity, but through a combination of institutional alignment and economic asymmetry. 
Despite the decentralized and uncoordinated adoption of NDRPs globally, a substantially unified dispute resolution framework for domain names appears to be emerging.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106191"},"PeriodicalIF":3.2,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144989981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
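The similarity-matrix and QAP logic behind the abstract above can be illustrated with a minimal sketch. Everything here is hypothetical: the three toy policy texts, the governance scores, and the token-set Jaccard measure are illustrative stand-ins, not the authors' data or feature pipeline, and the simple label-permutation test only gestures at the paper's MRQAP and beta-regression models.

```python
import itertools
import random

# Illustrative stand-ins for ccTLD dispute-resolution policy texts
# (hypothetical; the study analyzes 34 real ccTLD policies).
policies = {
    "ccTLD_A": "complaint must show bad faith registration and use of the domain",
    "ccTLD_B": "complainant must show bad faith registration or use of the domain name",
    "ccTLD_C": "disputes are resolved by the national courts under trademark law",
}
# Hypothetical country-level covariate, e.g. a governance-effectiveness score.
governance = {"ccTLD_A": 1.4, "ccTLD_B": 1.3, "ccTLD_C": -0.2}

names = sorted(policies)
pairs = list(itertools.combinations(names, 2))

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two policy texts."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

# Pairwise textual-similarity matrix (upper triangle, keyed by sorted pair).
sim = {p: jaccard(policies[p[0]], policies[p[1]]) for p in pairs}
# Dyadic predictor: absolute difference in the covariate for each pair.
diff = {p: abs(governance[p[0]] - governance[p[1]]) for p in pairs}

def corr(x, y):
    """Plain Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

observed = corr([sim[p] for p in pairs], [diff[p] for p in pairs])

# QAP idea: dyads sharing a country are not independent, so instead of
# permuting cells we permute country labels (whole rows/columns at once),
# re-correlate, and count permutations at least as extreme as observed.
random.seed(0)
n_perm = 999
extreme = 0
for _ in range(n_perm):
    relabel = dict(zip(names, random.sample(names, len(names))))
    permuted = [diff[tuple(sorted((relabel[i], relabel[j])))] for i, j in pairs]
    if abs(corr([sim[p] for p in pairs], permuted)) >= abs(observed):
        extreme += 1
pval = (extreme + 1) / (n_perm + 1)
print(f"observed r = {observed:.3f}, QAP p = {pval:.3f}")
```

With only three toy nodes the permutation distribution is tiny and the p-value is coarse; on the study's 34 ccTLDs the same label-permutation logic yields a meaningful null distribution against which the regression coefficients can be assessed.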