
Computer Law & Security Review: Latest Publications

Unpacking AI-enabled border management technologies in Greece: To what extent their development and deployment are transparent and respect data protection rules?
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-04-13 | DOI: 10.1016/j.clsr.2024.105967
Eleftherios Chelioudakis

This article embarks on a comprehensive examination of two research questions. The first is "What are the AI-enabled applications developed and deployed in Greece in the border management field?". The goal is to provide the reader with a thorough listing of these technologies, together with information on the companies that develop them and the EU funding schemes that support them. In investigating this question, the paper assesses whether transparent information exists on the procurement, development, and deployment phases of such AI tools in Greece, with a keen focus on the accessibility of related documents and data to civil society actors. The second question is "To what extent are the development and deployment of these AI-enabled border management applications in compliance with the applicable data protection provisions?". Here the goal is to record the breaches of data protection provisions that arise when such AI tools are developed and deployed in practice, taking into account the findings of civil society actors that have challenged the lawful use of such applications under the national legal framework enforcing Regulation 2016/679 (GDPR) and transposing Directive 2016/680.

Citations: 0
The right to self-determination in the digital platform economy
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-04-12 | DOI: 10.1016/j.clsr.2024.105964
Giacomo Pisani

The power wielded by platforms under "algorithmic governmentality" threatens people's ability to freely self-determine. European privacy legislation, and in particular the GDPR, provides the data subject with certain rights to exercise informational self-determination. However, because these rights are framed in individual terms, they remain very limited. I will outline a co-regulation proposal that allows subjects to participate actively in defining the rules of the platform economy. This would enable subjects to determine their own course, while giving adequate representation to the collective interests implied in algorithmic relationships.

Citations: 0
Frontex as a hub for surveillance and data sharing: Challenges for data protection and privacy rights
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-04-12 | DOI: 10.1016/j.clsr.2024.105963
Shrutika Gandhi

The European Border and Coast Guard Agency, more commonly known as Frontex, was established in 2004 with “a view to improving the integrated management of the external borders of the Member States of the European Union.” It was tasked with the responsibility of providing technical support and expertise to Member States in the management of borders. Over the years its mandate has increased considerably through amendments to its legislative framework. This expansion has taken place against a background of serious allegations concerning Frontex's role in violating the fundamental rights of asylum seekers through its involvement in pushback operations – the practice of stopping asylum-seekers and migrants in need of protection at or before they reach the European Union's external border. While Frontex's complicity in pushbacks has been widely examined by academics, its transformation into a major surveillance and data processing hub and its compliance (or lack thereof) with the fundamental rights to privacy and protection of personal data have received limited academic attention.

This paper traces the evolution of Frontex over the years and the fundamental rights implications of the transformation of its role.

Citations: 0
Fairness, AI & recruitment
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-04-08 | DOI: 10.1016/j.clsr.2024.105966
Carlotta Rigotti, Eduard Fosch-Villaronga

The ever-increasing adoption of AI technologies in the hiring landscape to enhance human resources efficiency raises questions about algorithmic decision-making's implications in employment, especially for job applicants, including those at higher risk of social discrimination. Among other concepts, such as transparency and accountability, fairness has become crucial in AI recruitment debates due to the potential reproduction of bias and discrimination that can disproportionately affect certain vulnerable groups. However, the ideals and ambitions of fairness may signify different meanings to various stakeholders. Conceptualizing fairness is critical because it may provide a clear benchmark for evaluating and mitigating biases, ensuring that AI systems do not perpetuate existing imbalances and promote, in this case, equitable opportunities for all candidates in the job market. To this end, in this article, we conduct a scoping literature review on fairness in AI applications for recruitment and selection purposes, with special emphasis on its definition, categorization, and practical implementation. We start by explaining how AI applications have been increasingly used in the hiring process, especially to increase the efficiency of the HR team. We then move to the limitations of this technological innovation, which is known to be at high risk of privacy violations and social discrimination. Against this backdrop, we focus on defining and operationalizing fairness in AI applications for recruitment and selection purposes through cross-disciplinary lenses. Although the applicable legal frameworks and some research currently address the issue piecemeal, we observe and welcome the emergence of some cross-disciplinary efforts aimed at tackling this multifaceted challenge. We conclude the article with some brief recommendations to guide and shape future research and action on the fairness of AI applications in the hiring process for the better.
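One way to make the abstract's point about fairness serving as "a clear benchmark for evaluating and mitigating biases" concrete is the demographic-parity check commonly discussed in the fairness literature. The sketch below is purely illustrative and not drawn from the article; the group labels, the hypothetical screening outcomes, and the disparate-impact reading are assumptions added here.

```python
# Minimal, illustrative demographic-parity check for hiring outcomes.
# All names and data below are hypothetical and not from the article.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, shortlisted: bool) pairs."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, shortlisted in decisions:
        totals[group] += 1
        if shortlisted:
            picked[group] += 1
    return {g: picked[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (protected-group label, shortlisted?)
outcomes = [("A", True), ("A", False), ("A", True),
            ("B", False), ("B", True), ("B", False)]
print(selection_rates(outcomes))         # {'A': 0.666..., 'B': 0.333...}
print(disparate_impact_ratio(outcomes))  # 0.5, which would flag a disparity for review
```

A single-number check of this kind is exactly the sort of benchmark whose adequacy the article interrogates: different stakeholders may reasonably prefer other operationalisations of fairness (such as equalised odds or calibration) that this metric does not capture.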

Citations: 0
Do not go gentle into that good night: The European Union's and China's different approaches to the extraterritorial application of artificial intelligence laws and regulations
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-04-04 | DOI: 10.1016/j.clsr.2024.105965
Wang Yan

The extraterritorial application of artificial intelligence (AI) laws and regulations is a form of global AI governance. The EU and China serve as two different examples of how to achieve the extraterritorial applicability of AI laws and regulations. The former shows an explicit territorial extension with more trigger factors, whereas the latter shows vertical regulation with a narrower territorial scope. The two jurisdictions' legislative motivations differ but also share some commonalities. One of the primary goals of extraterritorial application of domestic laws is to protect citizens within their territory. The digital economy's characteristics make it necessary for AI laws to have extraterritorial effects. Without international conventions or treaties, there is a legal vacuum in AI regulation. Additionally, the extraterritorial application of AI laws and regulations helps a state become a global standard-setter and gain an international sphere of influence. However, the extraterritorial application of AI laws and regulations sometimes functions as a form of legal imperialism. This exacerbates the injustice between great powers and weak countries in AI competition. To justify the legitimacy of the extraterritorial application of AI laws and regulations, it is beneficial to adopt the ‘inner morality of extraterritoriality’, a theoretical framework proposed by Professor Dan Svantesson. In fact, extraterritorial applicability depends on the market size and attractiveness. For other countries, whether their AI laws and regulations are endowed with extraterritorial effects is their prerogative. However, they should consider their soft power before implementing legislation.

Citations: 0
European National News
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-04-01 | DOI: 10.1016/j.clsr.2024.105954
Nick Pantlin

This column tracks developments at the national level in key European countries in the area of IT and communications, providing a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. Part of its purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening “on the ground” at a national level in implementing EU-level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.

Citations: 0
Asia–Pacific developments
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-04-01 | DOI: 10.1016/j.clsr.2024.105953
Gabriela Kennedy

This column provides a country-by-country analysis of the latest legal developments, cases and issues relevant to the IT, media and telecommunications industries in key jurisdictions across the Asia-Pacific region. The articles appearing in this column are intended to serve as ‘alerts’ and are not submitted as detailed analyses of cases or legal developments.

Citations: 0
Three pathways for standardisation and ethical disclosure by default under the European Union Artificial Intelligence Act
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-03-29 | DOI: 10.1016/j.clsr.2024.105957
Johann Laux , Sandra Wachter , Brent Mittelstadt

Under its proposed Artificial Intelligence Act (‘AIA’), the European Union seeks to develop harmonised standards involving abstract normative concepts such as transparency, fairness, and accountability. Applying such concepts inevitably requires answering hard normative questions. Considering this challenge, we argue that there are three possible pathways for future standardisation under the AIA. First, European standard-setting organisations (‘SSOs’) could answer hard normative questions themselves. This approach would raise concerns about its democratic legitimacy. Standardisation is a technical discourse and tends to exclude non-expert stakeholders and the public at large. Second, instead of passing their own normative judgments, SSOs could track the normative consensus they find available. By analysing the standard-setting history of one major SSO, we show that such consensus tracking has historically been its pathway of choice. If standardisation under the AIA took the same route, we demonstrate how this would lead to a false sense of safety as the process is not infallible. Consensus tracking would furthermore push the need to solve unavoidable normative problems down the line. Instead of regulators, AI developers and/or users could define what, for example, fairness requires. By the institutional design of its AIA, the European Commission would have essentially kicked the ‘AI Ethics’ can down the road. We thus suggest a third pathway which aims to avoid the pitfalls of the previous two: SSOs should create standards which require “ethical disclosure by default.” These standards will specify minimum technical testing, documentation, and public reporting requirements to shift ethical decision-making to local stakeholders and limit provider discretion in answering hard normative questions in the development of AI products and services. Our proposed pathway is about putting the right information in the hands of the people with the legitimacy to make complex normative decisions at a local, context-sensitive level.
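To illustrate what "ethical disclosure by default" could look like in practice, here is a minimal sketch of a machine-readable disclosure record covering the three elements the abstract names: technical testing, documentation, and public reporting. The field names, example values, and URL are hypothetical assumptions, not a specification proposed by the authors or by any standard-setting organisation.

```python
# Illustrative sketch only: one possible shape for an "ethical disclosure"
# record. Field names and example values are hypothetical, not the authors'.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class EthicalDisclosure:
    system_name: str
    provider: str
    intended_purpose: str
    tests_performed: list = field(default_factory=list)    # minimum technical testing
    known_limitations: list = field(default_factory=list)  # documentation
    public_report_url: str = ""                            # public reporting

disclosure = EthicalDisclosure(
    system_name="cv-screening-model-v1",    # hypothetical system
    provider="ExampleCorp",                 # hypothetical provider
    intended_purpose="Rank job applications for human review",
    tests_performed=["demographic-parity check", "robustness to missing fields"],
    known_limitations=["trained only on applications written in English"],
    public_report_url="https://example.org/reports/cv-screening-v1",
)
print(json.dumps(asdict(disclosure), indent=2))
```

On the authors' argument, the point of such a record would not be to settle the hard normative questions itself, but to surface them to the local stakeholders who have the legitimacy to answer them.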

Citations: 0
Rule of law or not? A critical evaluation of legal responses to cyberterrorism in the UK
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-03-29 | DOI: 10.1016/j.clsr.2024.105951
Xingxing Wei

Currently the UK does not have a specific anti-cyberterrorism law, instead relying on existing anti-terrorism laws to deal with cyberterrorism. This approach raises a number of problems insofar as it can lead to legislative uncertainty and unpredictability, as well as carrying risks of over-criminalisation, a lack of counterbalance, violation of the principle of proportionality, and arbitrariness. In light of these problems, this article aims to offer a critical evaluation of the UK’s existing legal responses to cyberterrorism with reference to the rule of law and basic human rights principles, mainly focusing on the vague and overly broad definition of terrorism, a tendency towards criminalising a wide range of terrorism precursor offences online, pre-emptive strategies and aggravated punishment of cyberterrorism. Based on this analysis, the article argues that extending existing anti-terrorism laws to combat low-risk cyberterrorism activities runs the risk of exacerbating harms to the values of the rule of law.

Citations: 0
Towards a right to cybersecurity in EU law? The challenges ahead
IF 2.9 | CAS Tier 3 (Sociology) | Q1 (LAW) | Pub Date: 2024-03-25 | DOI: 10.1016/j.clsr.2024.105961
Pier Giorgio Chiara

This article aims to engage with the scholarly debate on the introduction of a new fundamental right to cybersecurity in EU law. In particular, the legal analysis focuses on three legal challenges brought about by a theoretical framework for the development of a new right to cybersecurity. They concern: i) the need for a new right to cybersecurity against the background of the existing fundamental right to security (Art. 6 EU Charter of Fundamental Rights, CFR); ii) the actual content of this new right; and iii) how such a new right could be implemented. The article concludes by advocating the need to acknowledge a new right to cybersecurity in EU law.

Citations: 0