
Latest Articles from Computer Law & Security Review

Balancing privacy and platform power in the mobile ecosystem: The case of Apple’s App Tracking Transparency
IF 3.2 | CAS Tier 3 (Sociology) | Q1 LAW | Pub Date: 2025-12-24 | DOI: 10.1016/j.clsr.2025.106255
Julia Krämer
In 2021, Apple shook up the AdTech industry with the iOS 14.5 update, which not only changed the default access to an app's advertising identifier but also restructured the process of user consent within mobile apps through the App Tracking Transparency (ATT) framework. Given that Apple dominates one of the main mobile operating systems (iOS) and one of the major mobile app stores (the Apple App Store) in the European Union (EU), the question arises as to what extent such a powerful private party is able to govern privacy standards at this scale. While the introduction of the ATT has already raised competition concerns, its impact on privacy and data protection within the EU legal order remains largely unexplored. This article therefore investigates how the ATT affects EU privacy and data protection compliance and explores the extent to which the General Data Protection Regulation (GDPR) restricts the privacy-regulator role of app stores and mobile operating systems. While the ATT limits certain privacy risks by restricting disclosures to third parties, Apple is redefining core privacy concepts such as tracking. This may lead to the emergence of “walled gardens”: closed ecosystems managed and curated by their owners, which may alter the structure of the mobile ecosystem in general. The paper contributes to the broader discussion about the impact of private sector-led initiatives and powerful private actors in setting privacy standards.
Citations: 0
Exploring gender equality in the metaverse
IF 3.2 | CAS Tier 3 (Sociology) | Q1 LAW | Pub Date: 2025-12-24 | DOI: 10.1016/j.clsr.2025.106254
Christina Pasvanti Gkioka, Eduard Fosch-Villaronga
Gender-based discrimination in the Metaverse often takes the form of harassment or unwanted sexual behavior directed at avatars. Such harm is frequently underestimated because people assume a clear divide between users and their digital selves, overlooking how strongly individuals identify with their avatars. Nonetheless, mediated embodiment theory shows that users experience their avatars as extensions of themselves, making virtual discrimination a real-world concern affecting dignity, mental health, and well-being. As digital spaces replicate and sometimes amplify existing gender inequalities, this study examines the extent to which gender equality is safeguarded in the Metaverse. It focuses on both legal and platform-based safeguards, assessing how the European Union’s Digital Services Act (DSA) can address gender-based risks in virtual environments. The analysis clarifies how the DSA’s obligations for hosting services and online platforms may apply to Metaverse providers, while acknowledging that most do not yet meet the threshold for designation as Very Large Online Platforms (VLOPs). The DSA provides a valuable starting point for promoting accountability and transparency but leaves important gaps in enforcement and coverage. At the platform level, policies, moderation tools, and safety features vary widely, underscoring the need for context-specific governance measures and legal recognition of avatar-mediated harm. Strengthening these safeguards is essential to ensure that the Metaverse evolves into a safer and more inclusive space, free from gender-based discrimination.
Citations: 0
Approaching the AI Act... with AI: LLMs and knowledge graphs to extract and analyse obligations
IF 3.2 | CAS Tier 3 (Sociology) | Q1 LAW | Pub Date: 2025-12-16 | DOI: 10.1016/j.clsr.2025.106230
Federico Galli, Thiago Raulino Dal Pont, Galileo Sartor, Giuseppe Contissa
The EU Artificial Intelligence Act (AIA) exemplifies the growing complexity of digital regulation in the domain of computer technologies. Characterised by abstract terminology, multi-layered provisions, and intersecting regulatory requirements, the AIA poses significant challenges for the identification and interpretation of legal obligations, making compliance a demanding and potentially error-prone endeavour for legal professionals and organisations alike.
Recent advances in Artificial Intelligence (AI), particularly in the fields of Natural Language Processing (NLP) and Large Language Models (LLMs), offer promising support for addressing these challenges. By automating the extraction and structuring of legal rules, AI-based tools have the potential to assist regulatory compliance activities and provide more systematic insights into complex legislative frameworks.
This paper presents an experiment combining NLP techniques and LLMs to automate the extraction and structuring of legal obligations from the AIA.
The approach is based on a modular workflow comprising four main stages: identification of obligations, filtering of deontic statements, analysis of deontic content, and the construction of searchable knowledge graphs. The experiment employed the LLaMA 3.3 70B model, supported by more traditional NLP tools.
Five experts (4 Ph.D. students and 1 post-doc in legal informatics and philosophy) evaluated the system’s performance on a subset of cases. The results indicate a precision of 93% in the obligation filtering phase and over 99% accuracy in classifying obligation types, addressees, and predicates. A quantitative analysis of the extracted and analysed obligations revealed a predominance of prescriptive obligations (603 out of 729 total), among which 136 are imposed on the European Commission, while 88 consist of informative duties. The results are in line with current discussions around the AI Act regulatory approach.
These findings underscore the potential of LLM-based tools to enhance regulatory compliance and analysis. Future research will focus on extending the system to additional EU regulations and integrating formal ontologies to enable more advanced representations of legal obligations.
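The four-stage workflow described above can be sketched in miniature. This is a hypothetical, rule-based stand-in for illustration only: the paper's actual pipeline uses the LLaMA 3.3 70B model, whereas here the deontic filter is a keyword regex, and all function names, fields, and sample sentences are invented assumptions.

```python
# Toy sketch of the four stages: identification of obligations (the input
# sentences), filtering of deontic statements, analysis of deontic content,
# and construction of a searchable knowledge graph.
import re
from dataclasses import dataclass

# Order matters: negated forms must precede their positive prefixes.
DEONTIC_MARKERS = re.compile(r"\b(shall not|must not|shall|must|is required to)\b")

@dataclass
class Obligation:
    article: str      # provision the obligation comes from
    addressee: str    # who bears the duty
    predicate: str    # what they are obliged to do
    prohibitive: bool # True for "shall not" / "must not"

def filter_deontic(sentences):
    """Stage 2: keep only sentences carrying deontic force."""
    return [s for s in sentences if DEONTIC_MARKERS.search(s[1])]

def analyse(article_id, text):
    """Stage 3: naively split a deontic sentence into addressee and predicate."""
    m = DEONTIC_MARKERS.search(text)
    addressee = text[:m.start()].strip().rstrip(",")
    predicate = text[m.end():].strip().rstrip(".")
    return Obligation(article_id, addressee, predicate, "not" in m.group(0))

def build_graph(obligations):
    """Stage 4: searchable graph as addressee -> [(article, predicate), ...]."""
    graph = {}
    for o in obligations:
        graph.setdefault(o.addressee, []).append((o.article, o.predicate))
    return graph

# Invented sample provisions, loosely styled after the AIA.
sentences = [
    ("Art. 16", "Providers of high-risk AI systems shall ensure conformity."),
    ("Recital 1", "AI can bring many benefits to society."),
    ("Art. 52", "The European Commission must publish implementing guidance."),
]
obligations = [analyse(a, t) for a, t in filter_deontic(sentences)]
graph = build_graph(obligations)
```

The real system replaces each rule-based stage with an LLM prompt, which is what makes the reported 93% filtering precision and 99% classification accuracy plausible on open-textured legal text.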
Citations: 0
Assessing data protection impact assessments: Lessons from COVID-19 contact tracing apps
IF 3.2 | CAS Tier 3 (Sociology) | Q1 LAW | Pub Date: 2025-12-12 | DOI: 10.1016/j.clsr.2025.106233
Michael Spratt, TJ McIntyre
The Data Protection Impact Assessment (DPIA) is an innovation adopted in the 2016 General Data Protection Regulation (GDPR) as a core part of its move towards ex ante regulation of data processing. However, there is little empirical work examining how data controllers carry out DPIAs in practice. In this article we address that gap by providing the first systematic analysis of multiple DPIAs on the same topic: those adopted across European states for COVID-19 contact tracing apps using the Google/Apple Exposure Notification (GAEN) system. We identify significant discrepancies between these DPIAs (particularly in relation to risk identification and mitigation) even though they address identical fact patterns. We discuss factors leading to these inconsistencies, and make recommendations to promote uniformity, transparency, and feedback in the DPIA process.
Citations: 0
Legal response to facial recognition technologies in China: still seeking the balance
IF 3.2 | CAS Tier 3 (Sociology) | Q1 LAW | Pub Date: 2025-12-08 | DOI: 10.1016/j.clsr.2025.106250
Yang Feng, Yuanyuan Cheng, Xingyu Yan
China leads globally in the large-scale deployment of facial recognition technologies (FRTs). As the country’s data protection legislation intensifies, the wide use of FRTs is raising increasing concerns about their legitimacy. To examine the legal response to FRTs in China, we analyse the legislative framework through a normative lens, evaluate the relevant administrative enforcement decisions with a mixed-method approach combining quantitative descriptive statistics and qualitative case study, and examine the judicial stance on FRT regulation through a case study. We find that despite some plausible legislative developments, the current legal framework provides inadequate facial information protection with an ineffective separate consent rule, a conspicuous lack of control over FRT use in the public sector, and weak enforcement of existing facial information protection laws. Additionally, the courts appear reluctant to address the abuse of FRTs, likely due to concerns about hindering the development of the FRT industry. We recommend a comprehensive approach to facial information protection, encompassing complementary legislative, administrative, and judicial measures.
Citations: 0
Escaping the simplification trap: A playbook for the EU’s digital rulebook
IF 3.2 | CAS Tier 3 (Sociology) | Q1 LAW | Pub Date: 2025-12-04 | DOI: 10.1016/j.clsr.2025.106245
Kai Zenner
The Commission’s simplification agenda makes sense if it focuses on the effects of rules rather than diluting what they are trying to protect. With a series of limited legislative and operational adjustments, the EU could lift competitiveness without lowering standards. By cutting procedural complexity across the digital rulebook, EU companies would face less red tape, while EU institutions could pursue their policy goals more efficiently. However, the catch is execution: Brussels’ crisis-driven, highly politicised processes make it hard to assemble stable coalitions and to produce the high-quality outcomes such an endeavour requires.
Citations: 0
Volunteering for the platforms – How social media terms of service may violate the fair remuneration principle of authors and performers
IF 3.2 | CAS Tier 3 (Sociology) | Q1 LAW | Pub Date: 2025-12-03 | DOI: 10.1016/j.clsr.2025.106246
Ludovico Bossi
Major social media terms of service (i.e., those of YouTube, TikTok, Facebook, Instagram, LinkedIn, and X) impose on users a royalty-free license covering uploaded “content” protected by intellectual property rights (“IPRs”). Consequently, while social media service providers’ revenues are significant, users who are also authors and performers in most cases receive no direct remuneration. Most recently, the benefits of training artificial intelligence (“AI”) tools on what is published on social media further intensify this imbalance.
This bargain has not gone completely unnoticed. However, legal scholarship has often questioned the workability of any legislative or judicial intervention aimed at restoring balance. This article argues that online social media service providers have an obligation under EU law to share the revenues derived from the exploitation of works and performances published on their platforms with authors and performers.
For this purpose, this work discusses the compatibility of free licenses with the fair remuneration principle of authors and performers. It interprets the so-called “Linux clause” of Recital 82 of Directive (EU) 2019/790 (“CDSMD”) and proposes a distinction between “free licences for the benefit of any users” (“open licenses”) and those for the benefit of specific licensees (“gratuitous licenses”). Abuses by the general public cannot occur in the case of open licenses. On the contrary, specific licensees in a stronger bargaining position could unfairly impose gratuitous licenses on authors and performers. This inquiry runs in parallel with recent litigation in Belgium on the matter (the “Streamz” case).
Citations: 0
Mind the gap: Securing algorithmic explainability for credit decisions beyond the UK GDPR
IF 3.2 | CAS Tier 3 (Sociology) | Q1 LAW | Pub Date: 2025-12-03 | DOI: 10.1016/j.clsr.2025.106247
Holli Sargeant
The recent amendments to the United Kingdom’s GDPR under the Data (Use and Access) Act 2025 mark a significant divergence from the European Union’s approach to automated decision-making, substantively weakening the ‘right to explanation’ for automated decisions. This paper provides a critical legal analysis of the new regime, arguing that it dismantles crucial protections for individuals. The principal finding is that the legislation creates significant legal lacunae by introducing an ambiguous ‘no meaningful human involvement’ standard and restricting key safeguards to decisions involving ‘special category data’. These changes allow firms to shield opaque models from scrutiny, increasing the risk of algorithmic discrimination, particularly in high-stakes sectors like consumer credit.
Drawing on a comparative review of the United States’ technology-neutral adverse action notice requirement, the paper concludes that data protection law is no longer a sufficient safeguard against algorithmic harm in the United Kingdom. It proposes the establishment of a new right to an explanation for any adverse credit decision. This right should be anchored not in data protection law, but in consumer protection law, and be enforced by a specialist regulator, the Financial Conduct Authority. Such a framework would close the new accountability gaps and create market incentives for developing transparent, explainable-by-design systems, better aligning technological innovation with consumer protection.
最近根据《2025年数据(使用和访问)法》对英国GDPR进行的修订标志着与欧盟自动决策方法的重大分歧,大大削弱了自动决策的“解释权”。本文对新制度进行了批判性的法律分析,认为它取消了对个人的关键保护。主要发现是,该立法引入了一个模棱两可的“没有有意义的人类参与”标准,并将关键保障措施限制在涉及“特殊类别数据”的决定上,从而造成了重大的法律空白。这些变化允许公司保护不透明的模型免受审查,增加了算法歧视的风险,特别是在消费信贷等高风险行业。通过对美国技术中立的不利行动通知要求的比较审查,本文得出结论,在英国,数据保护法不再是抵御算法损害的充分保障。它建议设立一项新的权利,要求对任何不利的信贷决定作出解释。这项权利不应局限于数据保护法,而应局限于消费者保护法,并由专业监管机构英国金融市场行为监管局(Financial Conduct Authority)执行。这样一个框架将弥补新的问责差距,并为开发透明、设计可解释的系统创造市场激励,使技术创新与消费者保护更好地结合起来。
Computer Law & Security Review, Volume 60, Article 106247 (2025). DOI: 10.1016/j.clsr.2025.106247.
Citations: 0
False choices: Competitiveness, deregulation, and the erosion of GDPR’s regulatory integrity
IF 3.2 Tier 3 Sociology Q1 LAW Pub Date : 2025-12-03 DOI: 10.1016/j.clsr.2025.106237
Itxaso Domínguez de Olazábal
In the name of competitiveness, the European Union (EU) is witnessing a political push to dilute its cornerstone digital regulation: the General Data Protection Regulation (GDPR). This opinion piece critically examines the emerging narrative that regulatory effectiveness must be traded off against innovation, agility, and economic growth. It challenges the assumption that the GDPR poses an inherent barrier to a ‘competitive digital EU’ and scrutinises how ongoing deregulation efforts - framed as simplification - undermine both the normative foundations and the enforceability of the Regulation. Drawing on recent legislative initiatives (including the so-called IVth Omnibus Proposal, with a brief reference to the so-called ‘Digital Omnibus’), the article argues that competitiveness has become a rhetorical device for shifting the regulatory Overton window. It contends that the real barriers to effectiveness lie not in the GDPR’s design but in uneven enforcement, institutional under-resourcing, and the failure to challenge extractive business models. Rather than weakening existing rules, the EU’s digital competitiveness would be better served by safeguarding the GDPR’s rights-based approach and by treating regulatory integrity and fundamental rights as the preconditions for a sustainable and just digital future.
Computer Law & Security Review, Volume 60, Article 106237 (2025). DOI: 10.1016/j.clsr.2025.106237.
Citations: 0
The regulation of fine-tuning: Federated compliance for modified general-purpose AI models
IF 3.2 Tier 3 Sociology Q1 LAW Pub Date : 2025-12-02 DOI: 10.1016/j.clsr.2025.106234
Philipp Hacker, Matthias Holweg
This paper addresses the regulatory and liability implications of modifying general-purpose AI (GPAI) models under the EU AI Act and related legal frameworks. We make five principal contributions to this debate. First, the analysis maps the spectrum of technical modifications to GPAI models and proposes a detailed taxonomy of these interventions and their associated compliance burdens. Second, the discussion clarifies when exactly a modifying entity qualifies as a GPAI provider under the AI Act, which significantly alters the compliance mandate. Third, we develop a novel, hybrid legal test to distinguish substantial from insubstantial modifications that combines a compute-based threshold with consequence scanning to assess the introduction or amplification of risk. Fourth, the paper examines liability under the revised Product Liability Directive (PLD) and tort law, arguing that entities substantially modifying GPAI models become “manufacturers” under the PLD and may face liability for defects. The paper aligns the concept of “substantial modification” across both regimes for legal coherence and argues for a one-to-one mapping between “new provider” (AI Act) and “new manufacturer” (PLD). Fifth, the recommendations offer concrete governance strategies for policymakers and managers that propose a federated compliance structure, based on joint testing of base and modified models, implementation of Failure Mode and Effects Analysis and consequence scanning, a new database for GPAI models and modifications, robust documentation, and adherence to voluntary codes of practice. The framework also proposes simplified compliance options for SMEs while maintaining their liability obligations. Overall, the paper aims to map out a proportionate and risk-sensitive regulatory framework for modified GPAI models that integrates technical, legal, and wider societal considerations.
Computer Law & Security Review, Volume 60, Article 106234 (2025). DOI: 10.1016/j.clsr.2025.106234.
Citations: 0