Does generative AI copy? Rethinking the right to copy under copyright law
Pub Date: 2024-12-17 | DOI: 10.1016/j.clsr.2024.106100
Weijie Huang, Xi Chen
Copyright-regulated reproduction should encompass both a technological and an economic element. The technological element, namely the generation of a reproduction, determines whether a given act constitutes reproduction at all. The economic element, namely the potential of the generated reproduction to be distributed to the public and to undermine copyright owners’ incentives, determines whether that act of reproduction falls within copyright law's jurisdiction. We not only delineate the boundary of copyright-regulated reproduction but also elucidate the rationale underlying the spectrum of reproduction regulated by presumption, reproduction regulated by exemption, and non-regulated reproduction. Accordingly, we analyze the regulatability of GenAI's reproduction of works across its stages. During the data acquisition and preprocessing stages, GenAI engages in reproduction that has the technological element but lacks the economic element. During the training and generation stages, GenAI normally does not generate output similar to its training data and thus lacks the technological element; in the exceptional cases where GenAI does generate output similar to prior works and the technological element is present, copyright-regulated reproduction occurs only in the generation stage, where the output has the potential for public distribution. Furthermore, we address the possible criticism that leaving GenAI's reproduction unregulated would create an inequitable scenario of free riding on preexisting works.
{"title":"Does generative AI copy? Rethinking the right to copy under copyright law","authors":"Weijie Huang , Xi Chen","doi":"10.1016/j.clsr.2024.106100","DOIUrl":"10.1016/j.clsr.2024.106100","url":null,"abstract":"<div><div>Copyright-regulated reproduction should encompass both technological and economic elements. The technological element, which means the generation of a reproduction, determines whether a certain act constitutes reproduction. The economic element, which means the potential for public distribution and undermining copyright owners’ incentives by the generated reproduction, ascertains whether such an act of reproduction falls within copyright law's jurisdiction. We not only delineate the boundary of copyright-regulated reproduction but also elucidate the underlying rationale for the spectrum of reproduction regulated by presumption, reproduction regulated by exemption, and non-regulated reproduction. Accordingly, we analyze the regulatability of GenAI's reproduction of works throughout its stages. During the data acquisition and preprocessing stages, GenAI engages in reproduction with the technological element but lacks the economic element. During the training and generation stages, GenAI normally does not generate output similar to training data, thus initially lacking the technological element; in exceptional cases where GenAI generates output similar to prior works and possesses the technological element, copyright-regulated reproduction only occurs in the generation stage where the output has the potential for public distribution. Furthermore, we address the possible criticism that GenAI's unregulated reproduction would lead to an inequitable scenario by free riding on preexisting works.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106100"},"PeriodicalIF":3.3,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A model of ‘rough justice’ for internet intermediaries from the perspective of EU copyright law
Pub Date: 2024-12-14 | DOI: 10.1016/j.clsr.2024.106094
Thomas Riis
Internet intermediaries’ content moderation raises two major problems. The first concerns the accuracy of moderation practices, that is, whether intermediaries over-enforce or under-enforce. The second concerns the inherent privatization of justice that results when the enforcement of rights is left to a private party. The purpose of the article is to develop a model of ‘rough justice’ for internet intermediaries’ content moderation practices, taking into account the obvious fact that such moderation cannot meet the standard of justice known from civil procedural law. There is no reason to believe that internet intermediaries strive to achieve the highest level of justice in their content moderation. As a consequence, the model of rough justice presupposes legislative intervention in three different groups of provisions: (1) procedural rules, (2) substantive rules, and (3) the competences of the persons involved in content moderation.
{"title":"A model of ‘rough justice’ for internet intermediaries from the perspective of EU copyright law","authors":"Thomas Riis","doi":"10.1016/j.clsr.2024.106094","DOIUrl":"10.1016/j.clsr.2024.106094","url":null,"abstract":"<div><div>Internet intermediaries’ content moderation raises two major problems. The first relates to the accuracy of the moderation practices, which is an issue on whether the intermediaries over-enforce or under-enforce. The second problem concerns the inherent privatization of justice that results when enforcement of rights is left to a private party. The purpose of the article is to develop a model of ‘rough justice’ for internet intermediaries’ content moderation practices taking into account the obvious fact that such content moderation cannot comply with the degree of justice known from civil procedural law. There is no reason to believe that internet intermediaries strive to achieve the highest level of justice in their content moderation. As a consequence, the model of rough justice presupposes legislative intervention related to 3 different groups of provisions: 1) Procedural rules, 2) substantive rules, and 3) competences of persons involved in content moderation.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106094"},"PeriodicalIF":3.3,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The political economy of the fintech regulation in China and its implications
Pub Date: 2024-12-04 | DOI: 10.1016/j.clsr.2024.106089
Meihui Zhang, Chi Zhang
Fintech has seen exponential growth in recent years, breaking into markets often underserved by traditional financial services. Along with fintech's benefits, a series of risks caused by fintech has drawn regulators’ attention globally. Fintech activities can generally be divided into two categories: investment-oriented fintech, such as peer-to-peer lending, equity crowdfunding, and initial coin offerings; and payment-oriented fintech, which includes digital payment and central bank digital currencies. China has been one of the pioneers in promoting fintech markets during the past decade. Given that the former generates distinct investment risks while the risks of the latter are far slighter, China's regulator treats the two kinds of fintech differently. This article examines China's differing regulatory approaches to its investment-oriented and payment-oriented fintech sectors and explores the market conditions that account for this difference. Beyond China, the article argues that a perfect result cannot be achieved by purely external regulation; instead, successful regulation of investment-oriented fintech depends significantly on the economic foundation of a given jurisdiction, in which the maturity of investors is a precondition for mitigating risks in the investment-oriented fintech industry.
{"title":"The political economy of the fintech regulation in China and its implications","authors":"Meihui Zhang , Chi Zhang","doi":"10.1016/j.clsr.2024.106089","DOIUrl":"10.1016/j.clsr.2024.106089","url":null,"abstract":"<div><div>Fintech has seen exponential growth in recent years, breaking into markets often underserved by traditional financial services. Along with fintech's benefits, a series of risks caused by fintech has drawn regulators’ attention globally. Fintech activities can be generally categorised into two parts, namely investment-oriented fintech activities such as peer-to-peer lending, equity crowdfunding, and initial coin offerings; the other is payment-oriented fintech, which includes digital payment and central bank digital currencies. China has been one of the pioneers in promoting fintech markets during the past decade. Given that the former type of fintech will generate distinct investment risks while the latter one's risk is much slighter, China's regulator treats the two kinds of fintech differently. This article examines China's differing regulatory approaches to its investment-oriented and payment-oriented fintech sectors, respectively, and explores market conditions to which the above difference attributes. Beyond China, this article argues that a perfect result cannot be reached by pure external regulation; instead, successful regulation over investment-oriented fintech is significantly subject to the economic foundation of a given jurisdiction, among which maturity of investors is a constraint condition for mitigating risks in investment-oriented fintech industry.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106089"},"PeriodicalIF":3.3,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolving cybersecurity of AI-featured digital products and services: Rise of standardisation and certification?
Pub Date: 2024-12-04 | DOI: 10.1016/j.clsr.2024.106093
Michal Rampášek, Matúš Mesarčík, Jozef Andraško
The field of cybersecurity has changed dramatically since the Cybersecurity Strategy for the Digital Decade was presented by the European Commission and the High Representative of the Union for Foreign Affairs and Security Policy in December 2020. The Cybersecurity Strategy highlights the potential of AI as a new technology, but also the need for cybersecurity of AI technology itself. Indeed, since the strategy was adopted, AI has demonstrated enormous potential for growth, but also the risks and vulnerabilities that this new technology brings. The paper analyses the shift and further development in the cybersecurity of digital products and services, of AI itself as a technology, and of products and services that contain an AI component. In our view, the way to ensure that not only AI technology itself but also AI-featured products and services are cyber-secure is to achieve a high level of standardisation of best practices, as there are many gaps in this area. The adoption of technical standards will pave the way for conformity assessment and certification not only of AI systems but also of AI-featured digital products and services. However, the current regulatory trend is to adopt comprehensive legal regulation of AI even before such technical standards are fully developed and adopted. We consider this risky. Despite the well-intentioned effort to define and regulate AI, the purpose set forth in the AI Act (AIA) may not be achieved, as requirements adopted in this way can very quickly become unnecessarily burdensome or even outdated in the face of accelerating technological development. Proof of this is the recent rise of large ML models, known as foundation models, which has significantly changed the previous understanding of how AI systems are created. It will be the technological development of AI, AI-specific standardisation, and the subsequent certification of digital products and services that govern future activities in building Europe's cyber resilience.
{"title":"Evolving cybersecurity of AI-featured digital products and services: Rise of standardisation and certification?","authors":"Michal Rampášek, Matúš Mesarčík, Jozef Andraško","doi":"10.1016/j.clsr.2024.106093","DOIUrl":"10.1016/j.clsr.2024.106093","url":null,"abstract":"<div><div>The field of cybersecurity has changed dramatically since the Cybersecurity Strategy for the Digital Decade was presented by the European Commission and the High Representative of the Union for Foreign Affairs and Security Policy in December 2020. The Cybersecurity Strategy highlights the potential of AI as a new technology, but also the need for cyber security of AI technology. Indeed, since the strategy was adopted, AI has shown that it has enormous potential for growth, but also several risks and vulnerabilities that this new technology brings. The paper analyses the shift and further development in the field of cybersecurity of digital products and services, AI itself as a technology, as well as products and services that will contain an AI component. In our opinion, the way to ensure that not only AI technology itself, but also products and services are cyber-secure, is to achieve a high level of standardisation of best practices, as there are many gaps in this area. The adoption of technical standards will fully form a path for conformity assessment and certification of not only AI systems but also AI-featured digital products and services. However, the current regulatory trend is to adopt a comprehensive legal regulation of AI even before such technical standards are fully developed and adopted. We consider this risky. Despite the well-intentioned effort to define and regulate AI, the purpose set forth in the AIA may not be achieved, as the requirements adopted in this way can very quickly become unnecessarily burdensome or even outdated due to increasing technological development. The proof of this is also the recent rise of large ML models, known as foundation models, which significantly changed the previous understanding of the creation of AI systems. It will be the technological development of AI, AI specific standardisation, and subsequent certification of digital products and services, which will govern future activities in building Europe's cyber resilience.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106093"},"PeriodicalIF":3.3,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Between progress and caution: LegalTech's promise in transforming personal credit risk management in China
Pub Date: 2024-11-30 | DOI: 10.1016/j.clsr.2024.106090
Duoqi Xu, Li Chen
The integration of LegalTech in China's financial and legal sectors offers useful insights for innovative legal practices, financial regulation and judicial reform. This article examines how LegalTech transforms personal credit risk management in China, analyzing its integration within banking compliance systems and judicial processes. It explores three key dimensions: the evolution of debt collection practices through technological innovation, the enhancement of public remedies through automated judicial systems, and the development of legal frameworks to legitimize LegalTech solutions. While highlighting LegalTech's potential to improve efficiency in credit risk resolution, the article addresses critical challenges including moral hazard in automated systems and the preservation of judicial discretion in technological implementation.
{"title":"Between progress and caution: LegalTech's promise in transforming personal credit risk management in China","authors":"Duoqi Xu, Li Chen","doi":"10.1016/j.clsr.2024.106090","DOIUrl":"10.1016/j.clsr.2024.106090","url":null,"abstract":"<div><div>The integration of LegalTech in China's financial and legal sectors offers useful insights for innovative legal practices, financial regulation and judicial reform. This article examines how LegalTech transforms personal credit risk management in China, analyzing its integration within banking compliance systems and judicial processes. It explores three key dimensions: the evolution of debt collection practices through technological innovation, the enhancement of public remedies through automated judicial systems, and the development of legal frameworks to legitimize LegalTech solutions. While highlighting LegalTech's potential to improve efficiency in credit risk resolution, the article addresses critical challenges including moral hazard in automated systems and the preservation of judicial discretion in technological implementation.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106090"},"PeriodicalIF":3.3,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142756883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial recognition technology in law enforcement: Regulating data analysis of another kind
Pub Date: 2024-11-29 | DOI: 10.1016/j.clsr.2024.106092
Monika Simmler, Giulia Canova
Facial recognition technology (FRT) has emerged as a powerful tool for law enforcement, enabling the automated identification of individuals based on their unique facial features. Authorities have increasingly used the technology to support criminal investigations through the analysis of images and video footage. In view of its growing use in Europe, this paper explores the legal implications of FRT in law enforcement under EU law and evaluates approaches to its regulation. FRT use constitutes biometric data processing and involves a particularly sensitive analysis of data. Its specific nature lies in the creation of data of a new (biometric) quality, which is subsequently compared for matches. Due to its impact on fundamental rights, this approach differs from conventional forensic analyses and must be appropriately regulated. Such regulation should consider the multiple data processing steps involved and reflect each step's impact on fundamental rights. From this procedural standpoint, the shortcomings of the EU Artificial Intelligence Act (AI Act) become evident. The AI Act contains specific rules for biometric AI systems but does not provide the legal bases necessary to justify FRT use by law enforcement. Without a comprehensive legal framework, such use is not permitted. The article provides concrete guidelines for such regulation.
{"title":"Facial recognition technology in law enforcement: Regulating data analysis of another kind","authors":"Monika Simmler, Giulia Canova","doi":"10.1016/j.clsr.2024.106092","DOIUrl":"10.1016/j.clsr.2024.106092","url":null,"abstract":"<div><div>Facial recognition technology (FRT) has emerged as a powerful tool for law enforcement, enabling the automated identification of individuals based on their unique facial features. Authorities have more and more made use of the technology to enhance criminal investigations through the analysis of images and video footage. In view of its increased use in Europe, this paper explores the legal implications of FRT in law enforcement under EU law and evaluates approaches to regulation. FRT use constitutes biometric data processing and comes with a particularly sensitive analysis of data. Its specific nature is grounded in the creation of a new (biometric) quality of data in order to subsequently compare for matches. Due to its impact on fundamental rights, this approach differs from conventional forensic analyses and must be appropriately regulated. Such regulation should consider the multiple data processing steps and reflect each step's impact on fundamental rights. From this procedural stance, the shortcomings of the EU Artificial Intelligence Act (AI Act) become evident. The AI Act contains specific rules for biometric AI systems but does not provide the necessary legal bases to justify FRT use by law enforcement. Without a comprehensive legal framework, such use is not permitted. This article provides concrete guidelines for addressing such regulation.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106092"},"PeriodicalIF":3.3,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142748477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open or closing doors? The influence of ‘digital sovereignty’ in the EU's Cybersecurity Strategy on cybersecurity of open-source software
Pub Date: 2024-11-25 | DOI: 10.1016/j.clsr.2024.106078
Jennifer Tridgell
‘Digital sovereignty’ is the geopolitical mantra of the moment. A key agent of that policy shift, the European Union (‘EU’) has increasingly embraced ‘digital sovereignty’ as both the ideological foundation and the impetus for building its digital future in accordance with ‘European values and principles,’ often driven by and intersecting with cybersecurity concerns as articulated in its 2020 Cybersecurity Strategy for the Digital Decade (‘Strategy’). Yet it is impossible to consider cybersecurity without open-source software (‘OSS’). Increasingly, the EU, the USA and other governments have recognised that fact in the wake of the Heartbleed and Log4j incidents. OSS’ decentralised governance and ubiquity, underpinning most software worldwide, may amplify vulnerabilities and the adverse effects of cyberattacks, whilst its typically collaborative model of development and innovation often fosters valuable, open cybersecurity solutions.
In navigating that policy tightrope of OSS as a double-edged sword for cybersecurity, the EU has adopted the ‘closed’ language of ‘digital sovereignty’ that is ostensibly contrary to the ‘open’ nature of OSS. That rhetorical duality is particularly pronounced since the EU described OSS as a tool for realising its ‘digital sovereignty,’ in addition to policy support for ‘a global, open, interoperable cyberspace’ alongside the pursuit of ‘digital sovereignty.’ While there is an epistemic gap in understanding the relationship between the EU's rhetoric of ‘digital sovereignty’ and reality, nascent studies indicate that it has a tangible effect on policy change in multiple digital spheres, generally furthering a degree of ‘control.’ However, that relationship within the OSS cybersecurity context remains underexplored and poorly understood, although the policy is a priority for the EU and may bear significant implications for OSS globally.
Analysing in particular the Cyber Resilience Act (‘CRA’), as a key means of implementing the EU's Strategy and as its first cybersecurity legislation that would comprehensively engage OSS if adopted by the Council, this article argues that the EU's desire to strengthen cybersecurity in OSS is generally welcome. Yet there is ostensibly a disjunct between the ‘digital sovereignty’ that underpins that legislation and OSS cybersecurity, with too much control of OSS potentially proving counterproductive for EU cybersecurity. This paper illustrates that (i) it is imperative for the EU to address OSS cybersecurity; (ii) yet the lens of digital sovereignty is ostensibly a rough fit for that approach, considering OSS’ philosophy and practice; and (iii) based on the CRA, the EU's practice of translating ‘digital sovereignty’ into policy change is mixed, leaving uncertain ramifications for OSS cybersecurity in the EU and beyond. On the one hand, it moves towards more ‘control’, at least in determining definitional parameters and power dynamics with novel ‘stewardship’ positions for certain OSS …
{"title":"Open or closing doors? The influence of ‘digital sovereignty’ in the EU's Cybersecurity Strategy on cybersecurity of open-source software","authors":"Jennifer Tridgell","doi":"10.1016/j.clsr.2024.106078","DOIUrl":"10.1016/j.clsr.2024.106078","url":null,"abstract":"<div><div>‘Digital sovereignty’ is the geopolitical mantra of the moment. A key agent of that policy shift, the European Union (‘EU’) has increasingly embraced ‘digital sovereignty’ as both the ideological foundation and impetus for building its digital future in accordance with ‘European values and principles,’ often driven by and intersecting with cybersecurity concerns as articulated in its 2020 <em>Cybersecurity Strategy for the Digital Decade</em> (‘Strategy’). Yet it is impossible to consider cybersecurity without open-source software (‘OSS’). Increasingly, the EU, USA and other Governments have recognised that fact in the wake of HeartBleed and Log4j incidents. OSS’ decentralised governance and ubiquity, underpinning most software worldwide, may amplify vulnerabilities and adverse effects of cyberattacks, whilst its typically collaborative model of development and innovation often fosters valuable, open cybersecurity solutions.</div><div>In navigating that policy tightrope of OSS as a double-edged sword for cybersecurity, the EU has adopted ‘closed’ language of ‘digital sovereignty’ that is ostensibly contrary to the ‘open’ nature of OSS. That rhetorical duality is particularly pronounced since the EU described OSS as a tool for realising its ‘digital sovereignty,’ in addition to policy support for ‘a global, open, interoperable cyberspace’ alongside the pursuit of ‘digital sovereignty.’ While there is a epistemic gap in understanding the relationship between the EU's rhetoric of ‘digital sovereignty’ and reality, nascent studies indicate that it has a tangible effect on policy change in multiple digital spheres, generally furthering a degree of ‘control.’ However, that relationship within the OSS cybersecurity context has underexplored and poorly understood, although that policy is a priority for the EU and may bear significant implications for OSS globally.</div><div>Particularly analyzing the Cyber Resilience Act (‘CRA’) as key means for implementing the EU's Strategy and its first cybersecurity legislation that would comprehensively engage OSS if adopted by the Council, this article argues that the EU's desire to strengthening cybersecurity in OSS is generally welcome. Yet there is an ostensibly a disjunct between ‘digital sovereignty’ that underpins that legislation and OSS cybersecurity, with too much control of OSS potentially proving counterproductive for EU cybersecurity. This paper illustrates that (i) it is imperative for the EU to address OSS cybersecurity; (ii) yet the lens of digital sovereignty is ostensibly a rough fit for that approach, considering OSS’ philosophy and practice; and (iii) based on the CRA, EU's practice of translating ‘digital sovereignty’ into policy change is mixed, leaving uncertain ramifications for OSS cybersecurity in the EU and beyond. 
On the one hand, it moves towards more ‘control’ at least in determining definitional parameters and power dynamics with novel ‘stewardship’ positions for certain OSS ","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106078"},"PeriodicalIF":3.3,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142701168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Botnet defense under EU data protection law
Pub Date: 2024-11-21 | DOI: 10.1016/j.clsr.2024.106080
Piotr Rataj
We analyse the legal framework spanned by EU data protection law with respect to the defence against botnet-related threats. In particular, we examine what legal constraints the General Data Protection Regulation (GDPR) (and others) impose on the processing of personal data when that processing aims at detecting botnet-related traffic. We thereby put data protection rules into perspective with current trends in European IT security regulation, specifically Directive 2022/2555/EU (NIS 2 Directive).
We find that the resulting legal landscape is complex and has not yet been sufficiently explored. Our analysis provides an initial evaluation of a wide range of emerging legal issues. In particular, we consider four typical processing scenarios, such as DNS sinkholing by a public authority or sharing of cybersecurity-related personal data, and discuss some of their legal problems, linking them as thoroughly as possible to potentially relevant case law of the European Court of Justice.
{"title":"Botnet defense under EU data protection law","authors":"Piotr Rataj","doi":"10.1016/j.clsr.2024.106080","DOIUrl":"10.1016/j.clsr.2024.106080","url":null,"abstract":"<div><div>We analyse the legal framework spanned by EU data protection law with respect to the defence against botnet-related threats. In particular, we examine what legal constraints the General Data Protection Regulation (GDPR) (and others) impose on the processing of personal data when that processing aims at detecting botnet-related traffic. We thereby put data protection rules into perspective with current trends in European IT security regulation, specifically Directive 2022/2555/EU (NIS 2 Directive).</div><div>We find that the resulting legal landscape is complex and has not yet been sufficiently explored. Our analysis provides an initial evaluation of a wide range of emerging legal issues. In particular, we consider four typical processing scenarios, such as DNS sinkholing by a public authority or sharing of cybersecurity-related personal data, and discuss some of their legal problems, linking them as thoroughly as possible to potentially relevant case law of the European Court of Justice.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106080"},"PeriodicalIF":3.3,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142701167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated vehicles, the ‘driver dilemma’, stopping powers, and paradigms of regulating road traffic
Pub Date: 2024-11-15 | DOI: 10.1016/j.clsr.2024.106076
Mark Brady, Kieran Tranter, Belinda Bennett
This article examines the driver dilemma as it applies to the increasing automation of road traffic, with a focus on roadside enforcement stopping powers. The driver dilemma exists where road traffic laws are expressed as directed toward human drivers. As automation increases, it becomes ever more problematic who the driver is, in fact and in law, for the purposes of international and national road traffic laws. An obvious solution to the driver dilemma is to enact reforms that deem automated driving systems ‘drivers’ under road traffic laws. This can be seen in recent amendments to the Vienna Convention on Road Traffic. However, the deeming solution has limitations. Through a case study of specific Australian provisions that authorise roadside enforcement officers to stop vehicles, two paradigms informing the regulation of road traffic are revealed. The legacy paradigm, founded on the unity of driver and vehicle, conceives of road transport as involving individuals with an expectation of freedom of movement. The deeming solution attempts to preserve this paradigm. The case study also reveals an alternative paradigm of road traffic as a system that should be regulated to ensure overarching public policy goals. This alternative paradigm is evident in specific passenger transport laws, where stopping powers are expressed as vehicle-centric. There is no driver proxy and no need for a further wrong for the powers to be enlivened. The article concludes that automated transport futures need this alternative paradigm of road traffic regulation, and that vehicle-centric rules should be a template for more adaptable road traffic laws.
{"title":"Automated vehicles, the ‘driver dilemma’, stopping powers, and paradigms of regulating road traffic","authors":"Mark Brady , Kieran Tranter , Belinda Bennett","doi":"10.1016/j.clsr.2024.106076","DOIUrl":"10.1016/j.clsr.2024.106076","url":null,"abstract":"<div><div>This article examines the driver dilemma as it applies to the increasing automation of road traffic with a focus on roadside enforcement stopping powers. The driver dilemma exists where road traffic laws are expressed as directed toward human drivers. As automation increases, it becomes more problematic who is the driver, in fact and in law, for the purposes of international and national road traffic laws. An obvious solution to the driver dilemma is to enact reforms that deem automated driving systems ‘drivers’ under road traffic laws. This can be seen in recent amendments to the <em>Vienna Convention on Road Traffic</em>. However, the deeming solution has limitations. Through a case study on specific Australian provisions that authorise roadside enforcement officers to stop vehicles, two paradigms informing regulation of road traffic are revealed. The legacy paradigm, founded on the unity of driver and vehicle, conceives road transport involving individuals with an expectation of freedom of movement. The deeming solution attempts to preserve this paradigm. The case study also revealed an alternative paradigm of road traffic as a system that should be regulated to ensure overarching public policy goals. This alternative paradigm is evident in the specific passenger transport laws, where stopping powers are expressed as vehicle-centric. There is no driver proxy and no need for a further wrong for the powers to be enlivened. The article concludes that automated transport futures need this alternative paradigm of road traffic regulation and vehicle-centric rules should be a template for more adaptable road traffic laws.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106076"},"PeriodicalIF":3.3,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The dilemma and resolution of data circulation in China: Is data as consideration the solution?
Pub Date: 2024-11-13 | DOI: 10.1016/j.clsr.2024.106074
Xueting Fu
The circulation of data presents a significant challenge to the development of China's digital economy. On data exchanges, trading activity has declined. Off-exchange, stringent barriers between data-sharing consortia have resulted in data silos, producing crises of trust and legitimacy. Treating personal data as consideration, by incentivising individuals to share data through both financial gain and the protection of their personal rights, can establish a robust and comprehensive legal basis for extensive commercial data processing. This, in turn, connects the primary and secondary data element markets, facilitates data circulation, and strengthens the real economy. In the legal framework of personal data as consideration, the agreement between users and enterprises constitutes a bilateral contract in which individuals are obliged to "provide personal data and/or authorise processing" as counter-performance. Through this exchange, enterprises, predicated on user authorisation, can secure one or more rights to hold, use or operate the data, thereby achieving a separation of data property rights. The data property rights that enterprises acquire are governed by the principle of registration confrontation. The data subject's inheritors, prior or subsequent parties to transactions, and infringers are all third parties that can be confronted absolutely, while a subsequent licensee's ability to confront a prior licensee hinges on whether the pre-existing data property rights have been registered. Even when data property rights derive from a non-exclusive licence, the enterprise can still confront the bankruptcy administrator and proceed with data processing.
{"title":"The dilemma and resolution of data circulation in China: Is data as consideration the solution?","authors":"Xueting Fu","doi":"10.1016/j.clsr.2024.106074","DOIUrl":"10.1016/j.clsr.2024.106074","url":null,"abstract":"<div><div>The circulation of data presents a significant challenge to the development of China's digital economy. On data exchanges, trading activity has declined. Off-exchange, stringent barriers between data-sharing consortia have resulted in data silos, producing crises of trust and legitimacy. Treating personal data as consideration, by incentivising individuals' motivation to share data through both financial gain and the protection of their personal rights, can establish a robust and comprehensive legal basis for extensive commercial data processing. Accordingly, this connects primary and secondary data element markets, facilitates data circulation, and strengthens the real economy. In the legal framework of personal data as consideration, the agreement between users and enterprises constitutes a bilateral contract, wherein individuals are obliged to \"provide personal data and/ or authorise processing\" as counter-performance. Through this exchange, enterprises, predicated on user authorisation, can secure one or more rights to hold, use or operate the data, thereby achieving a separation of data property rights. The data property rights enterprises acquire are governed by the principle of registration confrontation. The data subject's inheritors, prior or subsequent parties in transactions, and infringers are all third parties that could be confronted absolutely, while a subsequent licensee's ability to confront a prior licensee hinges on whether the pre-existing data property rights have been registered. Even when data property rights derive from a non-exclusive licence, the enterprise can still confront the bankruptcy administrator and proceed with data processing.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106074"},"PeriodicalIF":3.3,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}