
Computer Law & Security Review: Latest Publications

Data sovereignty and data transfers as fundamental elements of digital transformation: Lessons from the BRICS countries
IF 3.3 | Tier 3 (Sociology) | Q1 LAW | Pub Date: 2024-07-23 | DOI: 10.1016/j.clsr.2024.106017
Luca Belli, Water B. Gaspar, Shilpa Singh Jaswant

When talking about digital transformation, data sovereignty considerations and data transfers cannot be excluded from the discussion, given the considerable likelihood that digital technologies deployed along the process collect, process and transfer (personal) data in multiple jurisdictions. An increasing number of nations, especially those within the BRICS grouping (Brazil, Russia, India, China, and South Africa), are developing their data governance and digital transformation approaches based on data sovereignty considerations, deeming specific types of data key strategic and economic resources that deserve particular protection and must be leveraged for national development. From this perspective, this paper sheds light on how data sovereignty and data transfers interplay in the context of digital transformations. In particular, we consider the various dimensions that compose the concept of data sovereignty and utilise a range of examples from the BRICS grouping to back some of the key considerations with empirical evidence. We define data sovereignty as the capacity to understand how and why (personal) data are processed and by whom, to develop data processing capabilities, and to effectively regulate data processing, thus retaining self-determination and control. We have chosen the BRICS grouping for three reasons. First, research on the grouping's data policies and digital transformation is still minimal despite their leading role. Second, BRICS account for over 40% of the global population, or 3.2 billion people (who can be seen as 3.2 billion "data subjects" or data producers, depending on perspective), thus making them key players in data governance and digital transformation. Third, the BRICS members have realised that digital transformation is essential for the future of their economies and societies and have shaped specific data governance visions which must be considered by other countries, especially from the global majority, to understand why data governance is instrumental in fostering thriving digital environments.

Citations: 0
Open Banking goes to Washington: Lessons from the EU on regulatory-driven data sharing regimes
IF 3.3 | Tier 3 (Sociology) | Q1 LAW | Pub Date: 2024-07-17 | DOI: 10.1016/j.clsr.2024.106018
Giuseppe Colangelo

Having long been the main country embracing a market-led approach to Open Banking, the U.S. is on the verge of switching to a regulatory-driven regime by mandating the sharing of financial data. Relying on Section 1033 of the Dodd-Frank Act, the Consumer Financial Protection Bureau (CFPB) has recently proposed a rulemaking on "Personal Financial Data Rights." As the U.S. is therefore apparently following the EU, which has been at the forefront of the government-led Open Banking movement, this paper analyses the CFPB's proposal by taking stock of the EU experience. The review of the EU regulatory framework and its UK implementation provides useful insights into the functioning and challenging trade-offs of Open Banking, ultimately enabling us to assess whether the CFPB's proposal would provide significant added value for innovation and competition or would instead represent an unnecessary regulatory burden.

Citations: 0
Algorithmic proxy discrimination and its regulations
IF 3.3 | Tier 3 (Sociology) | Q1 LAW | Pub Date: 2024-07-14 | DOI: 10.1016/j.clsr.2024.106021
Xi Chen

As a specific type of algorithmic discrimination, algorithmic proxy discrimination (APD) exerts disparate impacts on legally protected groups because machine learning algorithms adopt facially neutral proxies that refer to legally protected features through their operational logic. Based on the relationship between sensitive feature data and the outcome of interest, APD can be classified as directly or indirectly conductive. In the context of big data, the abundance and complexity of algorithmic proxy relations render APD inescapable and difficult to discern, while opaque algorithmic proxy relations impede the imputation of APD. Traditional antidiscrimination law strategies, such as blocking relevant data or disparate impact liability, are modeled on human decision-making and therefore cannot effectively regulate APD. The paper proposes a regulatory framework targeting APD based on data and algorithmic aspects.

Citations: 0
Ontological models for representing image-based sexual abuses
IF 3.3 | Tier 3 (Sociology) | Q1 LAW | Pub Date: 2024-07-06 | DOI: 10.1016/j.clsr.2024.105999
Mattia Falduti, Cristine Griffo

In recent years, there has been extensive discourse on the moderation of abusive content online. Image-based Sexual Abuses (IBSAs) are a type of abusive content involving sexual images or videos. Platforms must moderate user-generated online content to tackle this issue effectively. One way to achieve this is by allowing users to report content, which can then be flagged as abusive. In such instances, platforms may enforce their terms of service and prohibit certain types of content or users. Alongside these efforts, numerous countries have made progress in defining and regulating this subject by implementing dedicated regulations. However, national solutions alone are insufficient to address a constantly growing global emergency. Consequently, digital platforms create their own definitions of abusive conduct to overcome obstacles arising from conflicting national laws. In this paper, we use an ontological approach to model two types of abusive behavior. To do this, we applied the UFO-L patterns to build ontological models and grounded them in a top-level ontology, the Unified Foundational Ontology (UFO). The outcome is a set of ontological models that digital platforms can use to monitor and manage user compliance with the service provider's code of conduct.

Citations: 0
Cyber Resilience Act 2022: A silver bullet for cybersecurity of IoT devices or a shot in the dark?
IF 3.3 | Tier 3 (Sociology) | Q1 LAW | Pub Date: 2024-07-05 | DOI: 10.1016/j.clsr.2024.106009
Mohammed Raiz Shaffique
Internet of Things (IoT) is an ecosystem of interconnected devices (IoT devices) capable of intelligent decision making. IoT devices can include everyday objects such as televisions, cars and shoes. The interconnectedness brought forth by IoT has extended the need for cybersecurity beyond the information security realm into the physical security sphere. However, ensuring the cybersecurity of IoT devices is far from straightforward, because several cybersecurity challenges are associated with them. The pertinent cybersecurity challenges of IoT devices in this regard relate to: (i) Security During Manufacturing, (ii) Identification and Authentication, (iii) Lack of Encryption, (iv) Large Attack Surface, (v) Security During Updates, (vi) Lack of User Awareness and (vii) Diverging Standards and Regulations.

Against this background, the Cyber Resilience Act (CRA) has been proposed to complement the existing EU cybersecurity framework, which consists of legislation such as the Cybersecurity Act and the NIS2 Directive. However, does the CRA provide a framework for effectively combating the cybersecurity challenges of IoT devices in the EU? The central crux of the CRA is to lay down and enforce the rules required to ensure the cybersecurity of 'products with digital elements', which include IoT devices. To this end, several obligations are imposed on manufacturers, importers and distributors of IoT devices. Manufacturers are mandated to ensure that the essential cybersecurity requirements prescribed by the CRA are met before placing IoT devices on the market. While the cybersecurity requirements mandated by the CRA are commendable, the CRA suffers from several ambiguities which can hamper its potential impact. For instance, the CRA could provide guidance to manufacturers on how to conduct cybersecurity risk assessments and could clarify the meanings of terms such as "limit attack surfaces" and "without any known exploitable vulnerabilities".

When the fundamental themes of the CRA are analysed through the prism of the cybersecurity challenges of IoT devices, it becomes clear that the CRA does provide a foundation for effectively addressing those challenges. However, the expansive wording in various parts of the CRA, including in the Annex I Requirements, leaves scope for interpretation on several fronts. Consequently, the effectiveness of the CRA in tackling the Security During Manufacturing Challenge, Identification and Authentication Challenge, Large Attack Surface Challenge and Diverging Standards and Regulations Challenge will be largely contingent on how harmonised standards develop and how the industry adopts them. The CRA seems more effective, albeit not fully so, in addressing the Lack of Encryption Challenge, Security During Updates Challenge and Lack of User Awareness Challenge of IoT devices. However, the manner in which the CRA addresses all these cybersecurity challenges would be improved if bodies such as ENISA were given a legal mandate to develop detailed standards for the cybersecurity requirements under the CRA.
Citations: 0
Meta-Regulation: An ideal alternative to the primary responsibility as the regulatory model of generative AI in China
IF 3.3 | Tier 3 (Sociology) | Q1 LAW | Pub Date: 2024-07-04 | DOI: 10.1016/j.clsr.2024.106016
Huijuan Dong, Junkai Chen

Generative AI, with stronger responsiveness and emergent abilities, has triggered a global boom and faces challenges such as data compliance risks during the pretraining process and the risk of generating fake information, which has raised concerns among global regulatory authorities. The European Union, United States, United Kingdom, and other countries and regions are gradually establishing risk-based, scenario-based, and outcome-based governance models for generative AI. China recently introduced new regulations for the management of generative AI, which adopt a governance model focusing on generative AI service providers. This suggests that China is continuing the principle of primary responsibility in Internet governance, which encompasses legal responsibility, contractual obligations, and ethical responsibility. However, the governance model based on primary responsibility emphasizes the accountability of generative AI model service providers, with relatively limited regulation of other important entities such as users and large-scale dissemination platforms, which may not be conducive to achieving China's regulatory goals for the AI industry. In comparison, the Meta-Regulation model could be an ideal alternative for China. As a classic theory explaining the public-private relationship, Meta-Regulation aligns with the governance requirements of generative AI. Based on Meta-Regulation theory, the governance of generative AI in China should move towards emphasizing safety, transparency, collaborative governance, and accountability. In line with this, it is necessary to include users and large-scale dissemination platforms within the regulatory scope and to establish overarching governance objectives that ensure a responsible distribution of duties among stakeholders, with regulatory authorities assuming ultimate oversight responsibility and technical coordination. At the level of specific improvement measures, the three stages of generative AI (model development, usage, and content dissemination) can be integrated. During the model development stage, generative AI providers have specific transparency obligations. In the usage stage, a self-regulatory system centered on platform autonomy should be constructed. In the content dissemination stage, the proactive notification obligations of dissemination platforms should be clearly defined. Additionally, technical interoperability requirements should be enforced, thereby promoting the orderly development of generative AI applications.

Citations: 0
Will the GDPR Restrain Health Data Access Bodies Under the European Health Data Space (EHDS)?
IF 3.3 | Tier 3 (Sociology) | Q1 LAW | Pub Date: 2024-07-02 | DOI: 10.1016/j.clsr.2024.105993
Paul Quinn, Erika Ellyne, Cong Yao

The plans for a European Health Data Space (EHDS) envisage an ambitious and radical platform that will, inter alia, make the sharing of secondary health data easier. It will encourage the systematic sharing of health data and provide a legal framework for it to be shared by Health Data Access Bodies (HDABs) based in each of the Member States. Whilst this promises to bring about major benefits for research and innovation, it also raises serious questions given the intrinsic sensitivity of health data. Fears concerning privacy harms at the individual level and detrimental effects at the societal level have been raised. This article discusses two of the main protective pillars designed to allay such concerns. The first is that the proposal clearly outlines several contexts for which a Health Data Access Permit (HDAP) should and should not be granted. The second is that a request for an HDAP must also be compliant with the GDPR (inter alia requiring a valid legal basis and respecting data processing principles such as 'minimization' and 'storage limitation'). As this article discusses, in some instances the need for a valid legal basis under the GDPR may make it difficult to obtain a data access permit, in particular for some of the commercially oriented grounds outlined within the EHDS proposal. A further important issue concerns the ability of HDABs to analyse the compatibility of permit requests with the GDPR and relevant national law at both speed and scale.

ETIAS system and new proposals to advance the use of AI in public services
IF 3.3 | CAS Zone 3 (Sociology) | Q1 LAW | Pub Date: 2024-07-02 | DOI: 10.1016/j.clsr.2024.106015
Clara Isabel Velasco Rico , Migle Laukyte

Eu-LISA is launching the European Travel Information and Authorization System (ETIAS), which appears to exemplify a different, human-rights-oriented approach to AI within law enforcement. The reality, however, is quite different: the usual problems of AI use (lack of transparency, bias, and opacity, to name a few) remain. This paper critically assesses the promises of ETIAS and argues that it has serious issues that have not been properly dealt with. To make the case for addressing these issues, the paper situates ETIAS within the wider context of human rights and solidarity-based data governance. In this respect, ETIAS is seen as a tool that uses data for high-value purposes, such as EU safety and security, yet it also calls for serious risk-mitigation measures. Indeed, the risks related to law enforcement at the borders and in migration management are extremely serious, given the vulnerability of people who flee poverty, wars, repressive regimes, and other disasters. In the third part of this article, we articulate three proposals for such risk-mitigation measures. We argue in favour of strengthening critical general safeguards in ETIAS, then elaborate a principle that should guide AI-based public service development (the P4P principle), and end with a few IPR-related requirements for private-sector involvement in such services. Adopting these measures could help reduce the risk of building EU AI expertise upon data coming from the most vulnerable social groups on our planet.

AI liability in Europe: How does it complement risk regulation and deal with the problem of human oversight?
IF 3.3 | CAS Zone 3 (Sociology) | Q1 LAW | Pub Date: 2024-06-29 | DOI: 10.1016/j.clsr.2024.106012
Beatriz Botero Arcila

Who should compensate you if you get hit by a car in “autopilot” mode: the safety driver or the car manufacturer? What if you find out you were unfairly discriminated against by an AI decision-making tool that was being supervised by an HR professional? Should you be compensated by the developer, by the company that procured the software, or by the (employer of the) HR professional who was “supervising” the system's output?

These questions do not have easy answers. In the European Union and elsewhere around the world, AI governance is turning towards risk regulation. Risk regulation alone is, however, rarely optimal. The situations above all involve liability for harms that are caused by or with an AI system. While risk regulations like the AI Act regulate some aspects of these human and machine interactions, they do not offer those impacted by AI systems any rights and few avenues to seek redress. From a corrective justice perspective, risk regulation must also be complemented by liability law because when harms do occur, harmed individuals should be compensated. From a risk-prevention perspective, risk regulation may still fall short of creating optimal incentives for all parties to take precautions.

Because risk regulation is not enough, scholars and regulators around the world have highlighted that AI regulations should be complemented by liability rules to address AI harms when they occur. Using a law and economics framework, this article examines how the recently proposed AI liability regime in the EU – a revision of the Product Liability Directive and a new AI Liability Directive – complements the AI Act and how it addresses the particularities of AI-human interactions.

Stuxnet vs WannaCry and Albania: Cyber-attribution on trial
IF 3.3 | CAS Zone 3 (Sociology) | Q1 LAW | Pub Date: 2024-06-28 | DOI: 10.1016/j.clsr.2024.106008
Jakub Vostoupal

The cyber-attribution problem poses a significant challenge to the effective application of international law in cyberspace. Rooted in unclear standards of proof, evidence disclosure requirements, and deficiencies within the legal framework of the attribution procedure, this issue reflects the limitations of some traditional legal concepts in addressing the unique nature of cyberspace. Notably, the effective control test, introduced by the ICJ in 1986 and reaffirmed in 2007 to attribute the actions of non-state actors, does not adequately account for the distinctive dynamics of cyberspace, allowing states to use proxies to evade responsibility.

The legal impracticality and insufficiency of the attribution procedure not only give rise to the cyber-attribution problem but also compel states to develop new attribution tactics. This article explores the evolution of these cyber-attribution techniques to assess whether contemporary state practices align with the customary rules of attribution identified by the ICJ and codified by the ILC within ARSIWA, or whether new, cyber-specific rules might emerge. By analyzing two datasets on cyber incidents and three distinct cases – Stuxnet, WannaCry, and the 2022 cyberattacks against Albania – this article concludes that the effective control test cannot be conclusively identified as part of customary rules within cyberspace due to insufficient support in state practice. Furthermore, it is apparent that the rules of attribution in the cyber-specific context are in disarray, lacking the consistent, widespread, and representative practice needed to support a general custom. However, emerging state practice shows some degree of unification and development, suggesting the potential for the future establishment of cyber-specific rules of attribution.
