Pub Date: 2025-09-01 | DOI: 10.1016/j.clsr.2025.106190
Stina Teilmann-Lock, Andrej Savin
The advent of generative AI raises profound questions about the ownership not only of data but also of data sets. European law has, in the main, sought to address these questions through the lens of copyright law, in an attempt to address what the creative sector sees as a blatant theft of its work. While this approach has its merits, this paper suggests that key issues might be better dealt with using the AI Act of 2024. The Act has created the outline of a conceptual approach which we tentatively call “dataset law”. This is a more effective tool than copyright for dealing with violations at scale, as it accents the inherent (economic and non-economic) value of data sets rather than individual damage. In unfolding our argument, we also reflect on the fact that while this ex ante approach may appear novel in magnitude, it follows a pattern of innovative EU legal solutions in copyright and other areas.
Title: “Beyond the AI-copyright wars: towards European dataset law?” (Computer Law & Security Review, vol. 58, Article 106190)
Pub Date: 2025-09-01 | DOI: 10.1016/j.clsr.2025.106193
Nick Pantlin
This column tracks developments at the national level in key European countries in the area of IT and communications, providing a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills Kramer LLP and contributed to by firms across Europe. Part of its purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening “on the ground” at a national level in implementing EU-level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.
Pub Date: 2025-08-30 | DOI: 10.1016/j.clsr.2025.106189
Gustavo Gil Gasiola
The risk-based approach of the AI Act (AIA) results in a complex normative structure, in which the applicable subset of rules for a specific AI system is determined by the general scope of application and the classification of the system into particular risk levels. A pyramid of risks, a widely accepted explanation of the risk-based approach proposed by the European Commission, fails to provide a comprehensive classification process and does not accurately reflect the risk levels (either directly or indirectly) recognized in the AIA or the relation between classification criteria. This paper proposes a corrective solution to rebuild the pyramid of risks. Given that each AI system must be classified into one risk level and the AIA assigns a specific subset of rules to each risk level, an adaptation of the Commission’s risk levels was necessary. Two types of exceptions are included in the list of prohibited AI practices, which significantly impact the classification process. The exception stricto sensu (in a strict sense) is the result of a balancing of interests, whereas the exception lato sensu (in a broader sense) is due to the absence of excessive regulatory risks. The transparency requirements, identified by the pyramid as a “limited-risk level,” operate in parallel with the risk-based approach and do not constitute an independent risk level. Furthermore, as the AIA assigns a specific subset of rules to AI systems used in critical areas that do not pose significant risks, it is necessary to recognize a separate risk level (non-high risk). By analyzing the pyramid of risks, this study suggests representing the classification process as a binary decision diagram. This ensures that the risk-based approach is clearly defined and can help regulators and regulatees classify AI systems in accordance with the AIA.
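The classification process the abstract describes can be illustrated as a small decision procedure. The sketch below is a simplified, hypothetical rendering of a binary decision diagram over the corrected risk levels (out of scope, prohibited, high risk, non-high risk, minimal risk); the predicate names are illustrative assumptions, not terms defined by the AI Act or the paper.

```python
# Hedged sketch: a simplified binary decision diagram for classifying an AI
# system under the AI Act's risk-based approach. The field names below are
# illustrative assumptions, not statutory terms.
from dataclasses import dataclass

@dataclass
class AISystem:
    in_scope: bool              # within the AIA's general scope of application
    prohibited_practice: bool   # listed among the prohibited AI practices
    exception_applies: bool     # a stricto/lato sensu exception to a prohibition
    critical_area: bool         # used in a critical (Annex III-type) area
    significant_risk: bool      # poses a significant risk in that area

def classify(system: AISystem) -> str:
    """Walk the decision diagram node by node; each node is a binary test."""
    if not system.in_scope:
        return "out of scope"
    if system.prohibited_practice and not system.exception_applies:
        return "unacceptable risk (prohibited)"
    if system.critical_area:
        return "high risk" if system.significant_risk else "non-high risk"
    return "minimal risk"

# A system in a critical area that does not pose a significant risk falls into
# the separate "non-high risk" level the paper argues for.
print(classify(AISystem(True, False, False, True, False)))  # non-high risk
```

Note that the transparency requirements are deliberately absent from the diagram: as the abstract argues, they operate in parallel with the risk-based approach rather than as a node in the classification.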
Title: “Rebuilding the pyramid: The AI Act’s risk-based approach using a binary decision diagram” (Computer Law & Security Review, vol. 58, Article 106189)
Pub Date: 2025-08-28 | DOI: 10.1016/j.clsr.2025.106188
Nynke Elske Vellinga, Ekaterina Hailevich
Data is becoming ever more important in the mobility sector as the European Mobility Data Space takes shape. The legislative framework for the European Mobility Data Space is, however, complex. In this paper, we examine the legislation applicable to the European Mobility Data Space and the main obligations of stakeholders derived from the different legal instruments, and we map the relevant legal instruments. This highlights the fragmentation of the legal framework, as well as the strong emphasis on the protection of personal data throughout this fragmented legal landscape.
Title: “The legal framework for sharing mobility data: on the road to an EU mobility data space” (Computer Law & Security Review, vol. 58, Article 106188)
Pub Date: 2025-08-25 | DOI: 10.1016/j.clsr.2025.106183
Baiyang Xiao
Generative AI has been a buzzword across sectors, advancing at an exponential rate and profoundly transforming the way we communicate and innovate. However, its potential benefits come with compelling ethical and legal risks, necessitating proper guardrails to steer AI in beneficial directions. Amid the global race to regulate AI, China has exhibited a strong and open ambition to shape the emerging global AI order. In response to the challenges posed by AI, China has not only implemented agile administrative intervention in policy support, central coordination, and investment, but has also adopted an AI governance framework characterized by ‘complexity,’ ‘agility,’ ‘stability,’ and ‘flexibility.’ Nevertheless, while an agile, sector-specific approach to AI governance may yield short-term benefits, it raises long-term concerns, including opaque decision-making, weak enforcement, fragmented oversight, and inadequate protection of fundamental rights. In particular, governance fragmentation marked by overlapping regulatory bodies and layered rulemaking risks producing piecemeal and outdated regulations that struggle to keep pace with rapid technological change. Current interventions often prioritize systemic stability over ethical clarity and robust supervision, while strategic ambiguity further complicates the implementation of AI ethics and hinders effective oversight, whether internal or administrative.
Instead of calling for an omnibus AI law that applies a uniform package of rules, Chinese regulators chose to adapt horizontal elements into vertical regulations through a set of bureaucratic know-how and iterative regulatory tools. Through a comparative legal analysis, this paper finds that comprehending the intricacies of China’s AI regulatory approach is vital not only for projecting its future technological progression but also for understanding its impact on international tech competition. Differences among diverse AI governance regimes may offer valuable insights, while commonalities in AI governance values and principles hold promise for global cooperation in responsible AI governance. Moreover, it is plausible to expect that China will reconcile existing vertical regulatory tools in a horizontal legislative package: the EU AI Act provides valuable practical implications for the transition from vertical AI-related regulations to a horizontal Chinese AI Law, and the decentralized regulatory approach adopted by the U.S. serves as a useful reference for multi-stakeholder and multi-level cooperation. In addition, the EU’s rights-driven framework and the U.S.’s market-driven model may serve as critical benchmarks, influencing China’s state-driven approach to harmonizing its legislative strategies for responsive AI regulation.
Title: “Agile and iterative governance: China’s regulatory response to AI” (Computer Law & Security Review, vol. 58, Article 106183)
Pub Date: 2025-08-22 | DOI: 10.1016/j.clsr.2025.106185
Sarah Eskens
The EU takes many different actions against disinformation. With the adoption of the Digital Services Act (‘DSA’), the EU took legal measures against disinformation for the first time. The DSA has been supplemented by new legislation which has received considerably less attention for its anti-disinformation goals: the Regulation on the transparency and targeting of political advertising (‘TTPA Regulation’) and European Media Freedom Act (‘EMFA’). The goal of this paper is to bring the TTPA Regulation and EMFA into the debate about the EU’s actions against disinformation. This paper shows how the TTPA Regulation and EMFA are meant to help with curbing disinformation and how they complement the DSA and Code of Conduct on Disinformation. The paper also shows how the TTPA Regulation and EMFA should be understood against three developments in the line of actions that the EU took against disinformation over the past decade, leading to new research questions about the EU’s legal measures against disinformation.
Title: “The role of the Regulation on the transparency and targeting of political advertising and European Media Freedom Act in the EU’s anti-disinformation strategy” (Computer Law & Security Review, vol. 58, Article 106185)
Pub Date: 2025-08-21 | DOI: 10.1016/j.clsr.2025.106179
Gabriela Kennedy
This column provides a country-by-country analysis of the latest legal developments, cases and issues relevant to the IT, media and telecommunications industries in key jurisdictions across the Asia-Pacific region. The articles appearing in this column are intended to serve as ‘alerts’ and are not submitted as detailed analysis of cases or legal developments.
Title: “Asia–Pacific developments” (Computer Law & Security Review, vol. 58, Article 106179)

Pub Date: 2025-08-20 | DOI: 10.1016/j.clsr.2025.106174
Catherine Sai, Lukas Rossi, Anastasiya Damaratskaya, Karolin Winter, Stefanie Rinderle-Ma
The complexity and constantly rising volume of regulatory documents lead to tedious and error-prone manual analysis tasks. At the same time, Artificial Intelligence (AI) techniques offer new opportunities for handling legal information, e.g., by supporting legal stakeholders through automated knowledge acquisition. An example is the extraction of legal terms accompanied by their explanations in order to build a legal vocabulary or an ontology. A challenge aggravating this task is that legal knowledge is often stated implicitly. In particular, implicit actors, arising from the use of passive constructs in regulatory documents, occur frequently. Consider the phrase “the provider keeps the data up to date” vs. “the data is kept up to date”. In the former, the actor (the provider) is explicit, while the latter requires additional context to determine who is keeping the data up to date. Hence, we provide an approach grounded in Natural Language Processing (NLP) to support the identification and clarification of explicit legal definitions and their relations. We then use this information to also identify implicit actors and make them explicit through insertion into the sentence. In addition, we provide a set of visual representations, including annotated documents, knowledge graphs, and statistics on how many legal definitions and implicit actors are present in an article. The evaluation is based on European regulations and demonstrates that explicit legal information can be used to clarify implicit information, enhancing the transparency and interpretability of complex legal documents.
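The implicit-actor problem from the abstract's own example can be illustrated with a toy heuristic. The authors' NLP pipeline is considerably more sophisticated; the sketch below only shows the general idea, using a deliberately naive regular expression for agentless passives and a caller-supplied default actor (both are illustrative assumptions, not the paper's method).

```python
import re

# Hedged sketch: flag a passive construct with no "by <actor>" agent phrase
# and make a known actor explicit by inserting it after the participle.
# The pattern is a toy; real passive detection needs proper parsing.
PASSIVE = re.compile(r"\b(is|are|was|were|been|being|be)\s+(\w+(?:ed|en|t))\b",
                     re.IGNORECASE)

def make_actor_explicit(sentence: str, default_actor: str) -> str:
    match = PASSIVE.search(sentence)
    if match and " by " not in sentence:  # agentless passive detected
        return PASSIVE.sub(rf"\1 \2 by {default_actor}", sentence, count=1)
    return sentence  # active voice, or agent already explicit

# The abstract's example: the implicit actor is inserted...
print(make_actor_explicit("the data is kept up to date", "the provider"))
# ...while the explicit-actor variant is left untouched.
print(make_actor_explicit("the provider keeps the data up to date",
                          "the provider"))
```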
Title: “Identification and visual representation of explicit legal definitions, their relations and implicit actors in regulatory documents” (Computer Law & Security Review, vol. 58, Article 106174)
Pub Date: 2025-08-18 | DOI: 10.1016/j.clsr.2025.106177
WooJung Jon, Kyoung-sun Min
This comparative study analyzes the legal categorization and regulatory approaches adopted by the United States and South Korea in response to the collapse of Terra and Luna, two major digital assets issued by Terraform Labs (TFL) that lost nearly all their value in May 2022. The case highlights key challenges in regulating digital assets. This study examines the critical issue of defining digital assets within existing securities law frameworks, and the implications for policy formulation and investor protection. The Korean Capital Markets Act categorizes securities into six types, with Terra and Luna’s classification hinging on their potential categorization as “investment contract securities.” In contrast, the U.S. Securities Act of 1933 (hereinafter, the Securities Act) provides a broader definition, also including “investment contracts.” This study examines key enforcement challenges in prosecuting Do Hyeong Kwon, TFL’s CEO, under Korean criminal law if Terra and Luna are not classified as “investment contract securities,” requiring demonstration of intent to deceive under the Korean Criminal Code, which proves more challenging in secondary markets than primary markets. The U.S. wire fraud statute has a considerably broader scope. South Korea's Prosecutors' Office has criminally indicted Kwon, and Korean victims have filed civil actions against Kwon, whereas the U.S. has pursued both criminal indictment and civil enforcement actions by the Securities and Exchange Commission (SEC). If the SEC is successful, the funds recovered are distributed through the Fair Fund, which is unavailable in South Korea. This study contributes to the ongoing discourse on legal classification and regulation of digital assets, emphasizing the need for comprehensive frameworks that balance innovation and investor protection.
Title: “The race to punish Terra-Luna of the United States and South Korea: Lessons toward avoiding another digital asset catastrophe” (Computer Law & Security Review, vol. 58, Article 106177)
Pub Date: 2025-08-18 | DOI: 10.1016/j.clsr.2025.106176
Chien-Yi Chang, Xin He
This paper explores the legal implications of violating robots.txt, a technical standard widely used by webmasters to communicate restrictions on automated access to website content. Although historically regarded as a voluntary guideline, the rise of generative AI and large-scale web scraping has amplified the consequences of disregarding robots.txt directives. While previous legal discourse has largely focused on criminal or copyright-based remedies, we argue that civil doctrines, particularly in contract and tort law, offer a more balanced and sustainable framework for regulating web robot behavior in common law jurisdictions. Under certain conditions, robots.txt can give rise to a unilateral contract or serve as a form of notice sufficient to establish tortious liability, including trespass to chattels and negligence. Ultimately, we argue that clarifying liability for robots.txt violations is essential to addressing the growing fragmentation of the internet. By restoring balance and accountability in the digital ecosystem, our proposed framework helps preserve the internet’s open and cooperative foundations. Through this lens, robots.txt can remain an equitable and effective tool for digital governance in the age of AI.
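The technical standard at issue is mechanically simple, which is part of the paper's point: a crawler can check its obligations with a few lines of code. The sketch below uses Python's standard-library parser against a made-up robots.txt (the directives and bot names are illustrative, not drawn from any real site).

```python
from urllib import robotparser

# Hedged sketch: checking what a robots.txt file permits, using Python's
# standard-library parser. The rules and user-agent names are invented
# for illustration.
rules = """\
User-agent: *
Disallow: /private/

User-agent: ExampleAIBot
Disallow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(rules)

# The named AI bot is barred from the whole site...
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False
# ...other bots fall under the wildcard group and may fetch articles...
print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True
# ...but not the disallowed path.
print(parser.can_fetch("SomeOtherBot", "https://example.com/private/x"))   # False
```

Compliance with the file is entirely voluntary on the crawler's side, which is exactly why the paper asks what civil liability should attach when such a check is skipped or its result ignored.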
Title: “The liabilities of robots.txt” (Computer Law & Security Review, vol. 58, Article 106176)