
Latest publications in Computer Law & Security Review

How might the GDPR evolve? A question of politics, pace and punishment
IF 3.3, CAS Tier 3 (Sociology), Q1 (Law), Pub Date: 2024-08-17, DOI: 10.1016/j.clsr.2024.106033
Gerard Buckley , Tristan Caulfield , Ingolf Becker

The digital age has made personal data more valuable and less private. This paper explores the future of the European Union’s General Data Protection Regulation (GDPR) by imagining a range of challenging scenarios and how it might handle them. We analyse United States’, Chinese and European approaches (self-regulation, state control, arm’s-length regulators) and identify four key drivers shaping the future regulatory landscape: econopolitics, enforcement capacity, societal trust, and speed of technological development. These scenarios lead us to envision six resultant versions of GDPR, ranging from laxer protection than now to models empowering individuals and regulators. While our analysis suggests a minor update to the status quo GDPR is the most likely outcome, we argue a more robust implementation is necessary. This would entail meaningful penalties for non-compliance, harmonised enforcement, a positive case to counter the regulation-stifles-innovation narrative, defence of cross-border data rights, and proactive guidelines to address emerging technologies. Strengthening the GDPR’s effectiveness is crucial to ensure the digital age empowers individuals, not just information technology corporations and governments.

Citations: 0
Harmonizing innovation and regulation: The EU Artificial Intelligence Act in the international trade context
IF 3.3, CAS Tier 3 (Sociology), Q1 (Law), Pub Date: 2024-08-15, DOI: 10.1016/j.clsr.2024.106028
Qiang REN , Jing DU

The European Union's Artificial Intelligence Act focuses on establishing harmonized rules across EU Member States so that AI systems are safe, transparent, and respectful of existing laws and fundamental rights. It introduces a risk-based regulatory approach, classifying AI applications by risk level and imposing stringent compliance requirements on high-risk applications. The paper critically examines the Act's provisions, including its prohibitions on certain AI practices, requirements for high-risk AI systems, and mandates for transparency and human oversight. It then examines the implications of the Act for international trade and technological regulation, particularly in the context of the World Trade Organization's Technical Barriers to Trade (TBT) Agreement. It addresses the Act's potential impact on developing countries, highlighting concerns that the Act's uniform standards could exacerbate the digital divide and create barriers to global AI innovation and trade. The paper suggests incorporating flexibility and differential standards into the Act, enhancing technical assistance for developing countries, and advocating the EU's active participation in global standard-setting.
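The risk-based tiering the abstract describes can be sketched in a few lines. The four tiers mirror the Act's overall structure (prohibited practices, high-risk, limited/transparency risk, minimal risk), but the example use cases, the mapping, and the one-line obligation summaries below are illustrative assumptions, not the legal text.

```python
# Illustrative sketch of the AI Act's risk-based classification described above.
# The tier names follow the Act's structure; the use-case mapping and obligation
# summaries are simplified assumptions for demonstration only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk: conformity assessment, transparency, human oversight"
    LIMITED = "limited risk: transparency duties only"
    MINIMAL = "minimal risk: no additional obligations"

# Hypothetical mapping of use-case labels to tiers, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "cv_screening_for_recruitment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the (simplified) compliance consequence for a use case.

    Unknown use cases default to the minimal tier in this sketch; under the
    Act itself, classification requires legal analysis, not a lookup table.
    """
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return tier.value
```

The point of the sketch is the asymmetry the paper discusses: obligations concentrate almost entirely on the high-risk tier, which is why uniform classification standards carry trade consequences.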

Citations: 0
The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template
IF 3.3, CAS Tier 3 (Sociology), Q1 (Law), Pub Date: 2024-08-14, DOI: 10.1016/j.clsr.2024.106020
Alessandro Mantelero

What is the context which gave rise to the obligation to carry out a Fundamental Rights Impact Assessment (FRIA) in the AI Act? How has assessment of the impact on fundamental rights been framed by the EU legislator in the AI Act? What methodological criteria should be followed in developing the FRIA? These are the three main research questions that this article aims to address, through both legal analysis of the relevant provisions of the AI Act and discussion of various possible models for assessment of the impact of AI on fundamental rights.

The overall objective of this article is to fill existing gaps in the theoretical and methodological elaboration of the FRIA, as outlined in the AI Act. In order to facilitate the future work of EU and national bodies and AI operators in placing this key tool for human-centric and trustworthy AI at the heart of the EU approach to AI design and development, this article outlines the main building blocks of a model template for the FRIA. While this proposal is consistent with the rationale and scope of the AI Act, it is also applicable beyond the cases listed in Article 27 and can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.

Citations: 0
Open government data in the Brazilian digital government: Enabling an SDG acceleration agenda
IF 3.3, CAS Tier 3 (Sociology), Q1 (Law), Pub Date: 2024-08-07, DOI: 10.1016/j.clsr.2024.106029
Larissa Galdino de Magalhães Santos

Open Government Data (OGD) has evolved from the mere generation of public data to its active management, but the strategic evolution still needs to be explored. This article explores the intersection of government's digital transformation, the Sustainable Development Goals (SDGs), and the role of government open data initiatives. The study focuses on the Brazilian trajectory, employing the "data as a public good" approach to evaluate data governance and capabilities as facilitators of sustainable digital transformation. The GDB method aligns with the SDG Digital Acceleration agenda, providing insights into integrating data in society and digital transformation. The study concludes by indicating the need for more dialogue and synergy between data management and government strategies. It emphasizes integrating data management, privacy protection, transparency, and ethical considerations for sustainable impact.

Citations: 0
The protection of vulnerable algorithmic groups through collective data protection in the onlife world: A Brazilian perspective
IF 3.3, CAS Tier 3 (Sociology), Q1 (Law), Pub Date: 2024-08-02, DOI: 10.1016/j.clsr.2024.106027
Diego Machado
The aim of this doctrinal legal study is to analyze the interplay between the vulnerability of groups in algorithmic systems and the protection of collective interests in data protection law in Brazil's legal system. Two research questions are raised: (i) Is personal data protection regulation applicable to data processing activities related to algorithmic groups? and (ii) can algorithmic groups be regarded as groups with vulnerability under the LGPD legal regime? This article is divided into three parts apart from the introduction, and combines three strands of research, namely group rights theory, vulnerability studies, and a law and technology perspective. This combination is key to outlining, in Sections 2 and 3, a theoretical framework that elucidates the concepts of collective data protection and group vulnerability, mapping both onto the notion of algorithmic groups. Section 2 argues for the collective dimension of the right to the protection of personal data as the foundation of collective data protection. Section 3, in turn, explores the conceptualization of group vulnerability and how this discourse resonates with algorithmic groups in the onlife world. I draw on vulnerability studies, and on Mireille Hildebrandt's law and technology perspective, to delineate what I mean by group vulnerability and how I articulate this notion theoretically with algorithmic groups and the affordances of algorithmic systems. Section 4 examines the relation between collective data protection and the vulnerability of algorithmic groups under the data protection legal framework in Brazil.

To answer the research questions, the analysis concentrates on three aspects of Brazilian data protection law: (i) the "collectivization of data protection"; (ii) the integration of group vulnerability into the data protection legal framework; (iii) data protection impact assessments in the context of the LGPD's risk-based approach. The collective dimension of the right to personal data protection is increasingly recognized in Brazilian law through class-action litigation, particularly in the context of addressing vulnerabilities caused by new data-driven technologies. This collective dimension should guide courts and the Brazilian DPA in interpreting and applying the LGPD, especially Art. 12, § 2, regarding group data processing by algorithmic profiling systems. Data protection law in Brazil acknowledges that groups of data subjects may face vulnerability, requiring special protection and safeguards to mitigate risks and violations. Group vulnerability signals contexts deserving special attention and serves as a source of obligations and rights. Within the LGPD's risk-based approach, mandatory DPIAs in ML-based algorithmic profiling systems help identify vulnerable groups and implement appropriate safeguards to mitigate risks of harm or rights violations. Non-compliance with safeguard implementation obligations should be considered a breach of Brazilian data protection law.
Citations: 0
The many features which make the eIDAS 2 Digital Wallet either risky or the ideal vehicle for the transition to post-quantum encryption
IF 3.3, CAS Tier 3 (Sociology), Q1 (Law), Pub Date: 2024-08-01, DOI: 10.1016/j.clsr.2024.106022
Giovanni Comandè , Margaret Varilek

The amended Digital Identity Framework Regulation ("eIDAS 2") is expected to be implemented by 2026, including its new solution of a Digital Identity Wallet issued by each Member State for its residents, citizens, and businesses. Widely used public-key cryptosystems, including those in the current EUDI Wallet prototypes, underpin electronic signatures and authentication that will need to be replaced by post-quantum cryptography (PQC). In April 2024, the EU recommended general action by the Member States to prepare for quantum capability. We suggest that the European Digital Identity Wallet could be the starting point for an impactful debut of hybrid "quantum-resistant" cryptography tools to align the Member States in the transition. We look at the awareness campaigns of ENISA and the national cybersecurity authorities of the USA, Spain, the UK and Germany on the transition to PQC using a hybrid approach. There seems to be some early consensus that NIST's PQC algorithms are likely to set the international standard. eIDAS 2's flexible, technologically neutral language allows the timely implementation of new secure encryption methods. The Wallet could be an exemplary model for large businesses, app developers, and SMEs that must also transition to PQC to secure asymmetrically encrypted, quantum-vulnerable digital assets. A very large and relatively fast uptake of the EUDI Wallet system is expected, and if it keeps its promises of functionality, user friendliness, and security across a changing technological world, the EUDI Wallet's approach could become a benchmark for the transition to post-quantum capacity.
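The hybrid approach discussed above pairs a classical key-establishment scheme with a post-quantum one, so the derived key stays secure as long as either component remains unbroken. A minimal stdlib-only sketch of the combiner step is below; the salt, the context label, and the randomly generated stand-in secrets are assumptions for illustration, with a real deployment obtaining the two inputs from, e.g., an X25519 exchange and a NIST-standardised PQC KEM such as ML-KEM.

```python
# Minimal sketch of a hybrid key combiner: derive one key from a classical
# shared secret and a post-quantum shared secret via an HKDF-style
# extract-then-expand. Inputs are simulated with random bytes here; in a
# real system they would come from actual KEM/key-exchange outputs.
import hashlib
import hmac
import os

def hybrid_key(classical_ss: bytes, pq_ss: bytes,
               context: bytes = b"hybrid-demo") -> bytes:
    """Combine both shared secrets so the result is strong if either is."""
    # Extract: compress the concatenated secrets under a fixed salt
    # (the salt value is an arbitrary choice for this sketch).
    prk = hmac.new(b"hybrid-salt", classical_ss + pq_ss, hashlib.sha256).digest()
    # Expand: bind the output key to an application context label.
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# Placeholder secrets standing in for real KEM outputs.
classical = os.urandom(32)
post_quantum = os.urandom(32)
key = hybrid_key(classical, post_quantum)
assert len(key) == 32
```

The design point is that an attacker must break both components to recover the derived key, which is why ENISA and national authorities recommend hybrids during the transition rather than an immediate wholesale switch to PQC.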

Citations: 0
When non-consensual intimate deepfakes go viral: The insufficiency of the UK Online Safety Act
IF 3.3, CAS Tier 3 (Sociology), Q1 (Law), Pub Date: 2024-07-27, DOI: 10.1016/j.clsr.2024.106024
Beatriz Kira

Advancements in artificial intelligence (AI) have drastically simplified the creation of synthetic media. While concerns often focus on potential misinformation harms, ‘non-consensual intimate deepfakes’ (NCID) – a form of image-based sexual abuse – pose a current, severe, and growing threat, disproportionately impacting women and girls. This article examines the measures implemented with the recently adopted Online Safety Act 2023 (OSA) and argues that the new criminal offences and the ‘systems and processes’ approach the law adopts are insufficient to counter NCID in the UK. This is because the OSA relies on platform policies that often lack consistency regarding synthetic media and on platforms’ content removal mechanisms which offer limited redress to victim-survivors after the harm has already occurred. The article argues that stronger prevention mechanisms are necessary and proposes that the law should mandate all AI-powered deepfake creation tools to ban the generation of intimate synthetic content and require the implementation of comprehensive and enforceable content moderation systems.

Citations: 0
The blocking of Booking/Etraveli – When the first victim of EU's anti-US tech stand was a European
IF 3.3 Tier 3 Sociology Q1 LAW Pub Date : 2024-07-26 DOI: 10.1016/j.clsr.2024.106025
Dr. Christian Bergqvist

It came as something of a surprise when Dutch Booking's acquisition of Swedish Etraveli was blocked in the EU, as the parties operated in two separate segments of the online economy, hotel accommodation and flight booking, which would make the merger unproblematic under normal circumstances. However, in the digital economy nothing is normal: enforcement has tightened, mostly vis-à-vis US tech giants but apparently also vis-à-vis European undertakings. Interestingly, customers' unwillingness to shop around for offers, a factor otherwise accepted by, e.g., the UK authority, played a role in the outcome. The decision has been challenged before the EU's General Court, making it a case to watch.

Citations: 0
Better alone than in bad company: Addressing the risks of companion chatbots through data protection by design
IF 3.3 Tier 3 Sociology Q1 LAW Pub Date : 2024-07-25 DOI: 10.1016/j.clsr.2024.106019
Pierre Dewitte

Recent years have seen a surge in the development and use of companion chatbots, conversational agents specifically designed to act as virtual friends, romantic partners, life coaches or even therapists. Yet, these tools raise many concerns, especially when their target audience is comprised of vulnerable individuals. While the recently adopted AI Act is expected to address some of these concerns, both compliance and enforcement are bound to take time. Since the development of companion chatbots involves the processing of personal data at nearly every step of the process, from training to fine-tuning to deployment, this paper argues that the General Data Protection Regulation (“GDPR”), and data protection by design more specifically, already provides a solid ground for regulators and courts to force controllers to mitigate these risks. In doing so, it sheds light on the broad material scope of Articles 24(1) and 25(1) GDPR, highlights the role of these provisions as proxies to Fundamental Rights Impact Assessments (“FRIAs”), and peels off the many layers of personal data processing involved in the companion chatbots supply chain. That reasoning served as the basis for a complaint lodged with the Belgian data protection authority, the full text and supporting evidence of which are provided as supplementary materials.

Citations: 0
Critical points for the processing of personal data by the government: An empirical study in Brazil
IF 3.3 Tier 3 Sociology Q1 LAW Pub Date : 2024-07-24 DOI: 10.1016/j.clsr.2024.106023
Núbia Augusto de Sousa Rocha , Alexandre Nascimento de Almeida , André Nunes , Humberto Angelo

The General Law for the Protection of Personal Data (LGPD), issued in Brazil in August 2018, establishes the execution of public policies by the State as one of the legal bases for the processing of personal data. A systematic review of the literature identified six critical points that represent challenges for public managers in the elaboration and implementation of policies requiring the processing of personal data. The objective of this research is to establish the levels of criticality of the factors identified in the literature review, as well as to verify the existence of other critical points on which the literature has not yet advanced. To this end, a group of 11 specialists was selected to participate in the research, which used the Delphi Method, a technique that consists of applying a set of questionnaires sequentially and individually in order to establish a dialogue between the participants and build a collective response. The results indicate coherence between what was verified in the theory and the perception of the specialists. The participants mentioned another 10 critical points for the processing of personal data by the government. In general, the main elements of tension identified concerned the lack of training of public officials and the sharing of personal data.

Citations: 0