
Computer Law & Security Review: Latest Publications

The digital prior restraint: Applying human rights safeguards to upload filters in the EU
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-10-08 | DOI: 10.1016/j.clsr.2025.106219
Emmanuel Vargas Penagos
This article examines the human rights standards relevant to the use of upload filters for content moderation within EU secondary legislation. Upload filters, which automatically screen user-generated content before publication, are a type of prior restraint, which raises critical concerns about freedom of expression. EU secondary legislation establishes rules for both mandatory and voluntary use of these technologies, which must be read in light of human rights protections. This article analyses the characteristics of both mandatory and voluntary upload filters as prior restraints, the relevant EU legal provisions governing their use, and the safeguards required to prevent disproportionate restrictions on speech. Additionally, it explores the procedural and institutional safeguards under EU law, viewed through the lens of the CJEU and ECtHR case law on prior restraints and the rights to a fair trial and to an effective remedy.
Citations: 0
Antitrust in artificial intelligence infrastructure – between regulation and innovation in the EU, the US, and China
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-10-07 | DOI: 10.1016/j.clsr.2025.106211
Kena Zheng
Enormous amounts of data and substantial computational resources are crucial inputs to artificial intelligence (AI) infrastructure, enabling the development and training of AI models. Incumbent firms in adjacent technology markets hold significant advantages in AI development due to their established large user bases and substantial financial resources. These advantages facilitate the accumulation of enormous amounts of data and the establishment of the computational infrastructure necessary for sufficient data processing and high-performance computing. By controlling data and computational resources, incumbents raise entry barriers, leverage advantages to favour their own AI services, and drive significant vertical integration across the AI supply chain, thereby entrenching their market dominance and shielding themselves from competition. This article examines regulatory responses to these antitrust risks in the European Union (EU), the United States (US), and China, given their leadership in digital regulation and AI development. It demonstrates that the EU’s Digital Markets Act and China’s Interim Measures for the Management of Generative Artificial Intelligence Services introduce broadly framed yet applicable rules to address challenges related to data and computational resources in AI markets. Conversely, the US lacks both AI regulations and digital-specific competition laws, instead adopting innovation-centric policies aimed at ensuring its AI dominance globally. Given the strategic importance of AI development, all three jurisdictions have adopted a cautious approach to investigating potential abusive practices.
Citations: 0
The ‘DPIA+’: Aligning data protection with UK equality law
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-10-06 | DOI: 10.1016/j.clsr.2025.106212
Miranda Mourby
In recent years, data protection scholarship has moved beyond the assumption that the General Data Protection Regulation (‘GDPR’) is solely concerned with individual rights. Tools such as the Human Rights Impact Assessment (‘HRIA’) and the Fundamental Rights Impact Assessment ('FRIA') have been promoted to apply the GDPR more expansively, capturing broader societal harms that may flow from personal data processing. These tools can widen the scope of the GDPR’s Data Protection Impact Assessment (‘DPIA’) through aligned consideration with human rights law. They have been outlined at an international level, but require adaptation to national contexts in practice.
This article advances the discussion in three ways. First, it develops a jurisdiction-anchored expansion of the DPIA (‘DPIA+’) by integrating the UK Public Sector Equality Duty in s.149 Equality Act 2010. Second, it highlights equality law as both overlapping with, and distinct from, human rights law. In the UK, equality law imports a proactive duty to investigate risks of discrimination, while also providing an evaluative template in the form of an Equality Impact Assessment. Finally, it considers the distinctive value of an equality-inflected DPIA+ in life-and-death contexts, such as the Covid-19 pandemic.
The open-ended term ‘DPIA+’ acknowledges that various legal frameworks may supplement a DPIA in each national context. The central argument, however, is that equality and human rights law should be considered together when augmenting a DPIA, as both can help identify and address risks of discrimination in personal data processing.
Citations: 0
The future of the movie industry in the wake of generative AI: A perspective under EU and UK copyright law
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-09-27 | DOI: 10.1016/j.clsr.2025.106207
Eleonora Rosati
Like all sectors, the movie industry has been both affected by and exploring potential uses of generative Artificial Intelligence ('AI'). On the one hand, movie studios have detected and begun to add warnings against unlicensed third-party uses of their content, including for AI training, and have taken enforcement initiatives through court action. On the other hand, the use of AI within and by the industry itself has been growing. Regarding the latter, some have emphasised the opportunities presented by the implementation of AI, including by advancing claims that AI tools can offer a 'purer' form of expression. Others have instead warned against the potential displacement of industry workers, including workers employed in technical roles and younger and emerging actors.
Against the background illustrated above, this study maps and critically evaluates relevant issues facing the development, deployment, and use of AI models from a movie industry perspective. The legal analysis is conducted having regard to EU and UK copyright law and is divided into three parts:
• Input/AI training: By considering relevant legal restrictions applicable to the training of AI models on protected audiovisual content, the border between lawful unlicensed uses and restricted uses is drawn;
• Protectability of AI-generated outputs: Turning to the output generation phase, the protectability of such outputs is considered next, by focusing in particular on the requirements of authorship and originality under EU and UK copyright law;
• Legal risks and potential liability stemming from the use of third-party AI models for output generation: Still having regard to the output generation phase, relevant legal issues that might arise having regard to the use of AI models that 'regurgitate' third-party training data at output generation are considered, alongside the question of style protection under copyright.
The main conclusions are as follows:
• Input/AI training: Insofar as model training on third-party protected content is concerned, there are no exceptions under EU/UK law that fully cover the entirety of these processes. As a result, lacking legislative reform, the establishment of a licensing framework appears unavoidable for such activities to be deemed lawful;
• Protectability of AI-generated outputs: The deployment of AI across various phases of the creative process does not render the resulting content unprotectable, provided that human involvement and control remain significant throughout, with the result that AI is relied upon as a tool that aids – rather than replaces – the creativity of industry workers;
• Legal risks and potential liability stemming from the use of third-party AI models for output generation: The use of AI models that produce infringing outputs, for example by regurgitating input data or merely imitating style, may trigger the application of exclusive rights under copyright and related rights. The resulting liability may attach to the users of such models as well as to model developers/providers. The latter aspect means that terms purporting to exclude any such liability may ultimately be found unenforceable against users and ineffective vis-à-vis rightholders.
Citations: 0
A semantic approach to understanding GDPR fines: From text to compliance insights
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-09-26 | DOI: 10.1016/j.clsr.2025.106187
Albina Orlando, Mario Santoro
This study introduces an explainable Artificial Intelligence (XAI) framework that couples legal-domain NLP with Structural Topic Modeling (STM) and WordNet semantic graphs to rigorously analyze over 1,900 GDPR enforcement decision summaries from a public dataset. Our methodology focuses on demonstrating the pipeline’s validity with respect to manual analyses by inspecting the results of four well-known research questions: (1) cross-country fine distribution disparities (automated metadata extraction); (2) the violation severity–fine amount relationship (keyness and semantic analysis); (3) structural text patterns (network analysis and STM); and (4) prevalent enforcement triggers (topic prevalence modeling). The pipeline’s validity is underscored by its ability to replicate key findings from previous manual analyses while enabling a more nuanced exploration of GDPR enforcement trends. Our results confirm significant disparities in enforcement across EU member states and reveal that monetary penalties do not consistently correlate with violation severity. Specifically, serious infringements, particularly those involving video surveillance, frequently result in low-value fines, especially when committed by individuals or smaller entities. This highlights that a substantial proportion of severe violations are attributed to smaller actors. Methodologically, the framework’s ability to quickly replicate such well-known patterns, alongside its transparency and reproducibility, establishes its potential as a scalable tool for transparent and explainable GDPR enforcement analytics.
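The pipeline described in this abstract (metadata extraction, a severity–fine check, and topic modelling of decision summaries) can be pictured with a short, purely illustrative Python sketch. It is not the authors' code: the input file `gdpr_fines.csv`, its columns (`country`, `fine_eur`, `severity`, `summary`), and the modelling choices (scikit-learn TF-IDF plus NMF standing in for the Structural Topic Model, and a Spearman rank correlation for the severity–fine relationship) are assumptions made for the example.

```python
# Purely illustrative sketch: approximates the kind of analysis described in the
# abstract (topic modelling of GDPR decision summaries plus a severity-fine check).
# The file name, column names, and modelling choices are assumptions, not the
# authors' actual pipeline.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical dataset: one row per enforcement decision summary.
df = pd.read_csv("gdpr_fines.csv")  # assumed columns: country, fine_eur, severity, summary

# (1) Cross-country fine distribution disparities: descriptive statistics per country.
print(df.groupby("country")["fine_eur"].describe()[["count", "mean", "50%", "max"]])

# (2) Violation severity vs. fine amount: a rank correlation as a rough stand-in
# for the paper's keyness and semantic analysis.
rho, p = spearmanr(df["severity"], df["fine_eur"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# (3)-(4) Structural text patterns and enforcement triggers: TF-IDF + NMF topics
# as a simple substitute for the Structural Topic Model used by the authors.
tfidf = TfidfVectorizer(max_features=5000, stop_words="english")
X = tfidf.fit_transform(df["summary"].fillna(""))
nmf = NMF(n_components=10, random_state=0)
doc_topics = nmf.fit_transform(X)

terms = tfidf.get_feature_names_out()
for k, weights in enumerate(nmf.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:8]]
    print(f"Topic {k}: {', '.join(top_terms)}")

# Topic prevalence: mean topic weight across decisions, i.e. which enforcement
# themes dominate the corpus.
print("Topic prevalence:", doc_topics.mean(axis=0).round(3))
```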
Citations: 0
Bridging the Great Wall: China’s Evolving Cross-Border Data Flow Policies and Implications for Global Data Governance
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-09-25 | DOI: 10.1016/j.clsr.2025.106208
Sheng Zhang, Henry Gao
Despite the rapid expansion of the digital economy, the global regulatory framework for data flows remains fragmented, with countries adopting divergent approaches shaped by their own regulatory priorities. As a key player in the Internet economy, China’s approach to cross-border data flows (CBDF) not only defines its domestic digital landscape but also influences emerging global norms. This paper takes a comprehensive view of the evolution of China’s CBDF regime, examining its development through both domestic and international lenses. Domestically, China’s regulation of CBDF has evolved from a security-first approach to one that seeks to balance security with economic development. This paper examines the economic, political, and international drivers behind this shift. This paper also compares the approaches of China and the United States to CBDF, in light of the recent tightening of US restrictions, from both technical and geopolitical perspectives. At the technical level, recent policy trends in both countries reveal notable similarities. At the geopolitical level, however, the divergence between the two frameworks is not only significant but continues to widen. The paper concludes by examining the broader implications for global data governance and offering recommendations to bridge digital divides and promote a more inclusive international framework.
Citations: 0
The EU Cyber Resilience Act: Hybrid governance, compliance, and cybersecurity regulation in the digital ecosystem
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-09-23 | DOI: 10.1016/j.clsr.2025.106209
Fabian Teichmann, Bruno S. Sergi
This article advances a governance-theoretical account of the EU Cyber Resilience Act (CRA) as a form of hybrid regulation that combines command-and-control duties with risk-based calibration, co-regulation through European harmonized standards, and enforced self-regulation by firms. The central research question is: how does the CRA’s hybrid design reallocate regulatory functions between public authorities and private actors along the digital-product lifecycle, and with what compliance and enforcement consequences? Methodologically, the paper doctrinally analyses the CRA’s core provisions and situates them in the New Legislative Framework (NLF) for product regulation, the legal regime for standards under Regulation (EU) No 1025/2012 and Court of Justice of the European Union (CJEU) case law, and adjacent EU instruments (NIS2; Cybersecurity Act). It further offers a concise comparative sidebar on the United States and the United Kingdom to contrast policy trajectories. The contribution is threefold: (i) it clarifies the legal status and governance role of harmonized standards within CRA conformity assessment; (ii) it analytically distinguishes external obligations from firm-internal “meta-regulation”; and (iii) it maps institutional interfaces with NIS2 and the Cybersecurity Act, highlighting pathways for dynamic escalation (including mandatory certification). The analysis yields implications for corporate compliance design, market surveillance, and future rule updates via delegated acts.
Citations: 0
Augmented accountability: Data access in the metaverse
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-09-19 | DOI: 10.1016/j.clsr.2025.106196
Giancarlo Frosio, Faith Obafemi
This article examines regulated data access (RDA) in the metaverse—an interconnected and immersive digital ecosystem comprising virtual, augmented, and hyper-physical realities. We organise the argument across taxonomy (Section 2), Digital Services Act (DSA)-anchored doctrine (Section 3), implementation challenges (Section 4), platform practices (Section 5), and a global blueprint (Section 6). Building on the European Union’s DSA, particularly Article 40, the analysis evaluates whether metaverse platforms qualify as Very Large Online Platforms or Very Large Online Search Engines and thus fall within the DSA’s data access rules. Drawing comparative insights from the UK’s Online Safety Act and the United States’ proposed Platform Accountability and Transparency Act, the article highlights differing global approaches to data sharing and the significant governance gaps that persist.
This article categorizes metaverse-native data, including spatial, biometric, and eye-tracking data, into personal and non-personal types, stressing the heightened complexity of governing immersive, multidimensional information flows. While existing legal frameworks offer a starting point, the metaverse’s novel data practices demand targeted adaptations to address challenges like decentralised governance, user consent in real-time environments, and the integration of privacy-enhancing technologies. Through an examination of data access regimes across selected metaverse platforms, the article identifies a lack of uniform, transparent processes for external researchers.
In this context, the article highlights RDA's broader public-interest function, facilitating external scrutiny of platform activities and ensuring service providers are held accountable. The absence of consistent RDA frameworks obstructs systemic risk research, undermining both risk assessment and mitigation efforts while leaving user rights vulnerable to opaque platform governance. To address these gaps, the article advances a set of policy recommendations aimed at strengthening RDA in the metaverse—adapting regulatory strategies to its evolving, decentralised architecture. By tailoring regulatory strategies to the metaverse’s dynamic nature, policymakers can foster accountability, innovation, and trust—both domestically (in jurisdictions like the UK, where data access provisions remain underdeveloped) and internationally. The analysis extends beyond mere applications to metaverse platforms, providing insights that can be applied to the online platform ecosystem in its entirety. Ultimately, this article charts a path toward harmonized, future-ready data governance frameworks—one that integrates RDA as a core regulatory mechanism for ‘augmented accountability’, essential for safeguarding user rights and enabling independent risk assessment in the metaverse.
Citations: 0
Anticipating compliance. An exploration of foresight initiatives in data protection
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-09-17 | DOI: 10.1016/j.clsr.2025.106182
Alessandro Ortalda, Stefano Leucci, Gabriele Rizzo
The pace of technological progress has been increasing in recent years. As novel technologies arise or existing ones further develop, it becomes increasingly challenging to balance leveraging these advancements and safeguarding personal data. By relying on firsthand accounts of professionals in the field, the paper identifies how these challenges, which appear to be applicable to data controllers and Data Protection Authorities, are substantially connected with ensuring a sound interpretation of the law through time.
The paper examines the leading foresight and anticipation techniques and explores their possible data protection applications by reviewing existing initiatives that attempt to implement foresight in the context of data protection.
Section 2 delves into the evolving regulatory landscape, emphasising the need for a foresight-based approach to tackle the complexities arising from data-intensive technologies and the changing European regulatory framework. Section 3 introduces foresight as a discipline, its history and evolution, and leading techniques. Section 4 presents practical examples of foresight in data protection, detailing initiatives by the authors and other actors in the data protection space.
In conclusion, the paper underscores the initial consensus on the benefits of anticipatory approaches in addressing current data protection challenges. Anticipation techniques, as a flexible concept, can be tailored to meet the needs of various stakeholders, fostering a collaborative and practical approach to data protection. However, a gap in consolidated methodologies persists, necessitating further research to design and implement practical foresight approaches.
Citations: 0
State, society, and market: Interpreting the norms and dynamics of China's AI governance
IF 3.2 | Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2025-09-17 | DOI: 10.1016/j.clsr.2025.106206
Xuechen Chen, Lu Xu
This study challenges the prevailing perception of China's AI governance as a monolithic, state-driven model and instead presents a nuanced analysis of its complex governance landscape. Utilizing governance theories, we develop an analytical framework examining key governing nodes, tools, actors, and norms. Through case studies on minor protection and content regulation, this study demonstrates that Chinese AI governance involves a diverse array of stakeholders—including the state, private sector, and society—who co-produce norms and regulatory mechanisms. Contrary to conventional narratives, China's governance approach adapts existing regulatory tools to meet new challenges, balancing political, social, and economic interests. This study highlights how China has rapidly formalized AI regulations, in areas such as minor protection and content regulation, setting a precedent in global AI governance. The findings contribute to a broader understanding of AI regulation beyond ideological binaries and offer insights relevant to international AI policy discussions.
Citations: 0