
Latest publications in the International Journal of Law and Information Technology

Digital identity: an approach to its nature, concept, and functionalities
IF 1 | Q1 LAW | Pub Date: 2024-09-18 | DOI: 10.1093/ijlit/eaae019
Margarita Robles-Carrillo
Digital identity is a basic component of the knowledge economy and society. It is the key for accessing the digital world and for carrying out commercial, economic, or any kind of transactions and communications. Far from being a merely digital version of the physical identity, digital identity is a singular and complex construct which poses three main dilemmas that provide the framework for its analysis. The first arises from the context in which it is located, the digital ecosystem, that changes its scope and nature. The second, conceptual, is a consequence of the lack of agreement about its definition but also of the different legal framework derived from it. A third dilemma, functional, is due to the fact that digital identity can fulfil different, even contradictory, functionalities. An analysis of these dilemmas can contribute to a better understanding of this category leading to a proposal for its definition and legal framework.
Citations: 0
Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive
IF 1 | Q1 LAW | Pub Date: 2024-09-15 | DOI: 10.1093/ijlit/eaae021
Guido Noto La Diega, Leonardo C T Bezerra
In Europe, the governance discourse surrounding artificial intelligence (AI) has been predominantly centred on the AI Act, with a proliferation of books, certification courses, and discussions emerging even before its adoption. This narrow focus has overshadowed other crucial regulatory interventions that promise to fundamentally shape AI. This article highlights the proposed EU AI liability directive (AILD), the first attempt to harmonize general tort law in response to AI-related threats, addressing critical issues such as evidence discovery and causal links. As AI risks proliferate, this article argues for the necessity of a responsive system to adequately address AI harms as they arise. AI safety and responsible AI, central themes in current regulatory discussions, must be prioritized, with ex-post liability in tort playing a crucial role in achieving these objectives. This is particularly pertinent as AI systems become more autonomous and unpredictable, rendering the ex-ante risk assessments mandated by the AI Act insufficient. The AILD’s focus on fault and its limited scope is also inadequate. The proposed easing of the burden of proof for victims of AI, through enhanced discovery rules and presumptions of causal links, is insufficient in a context where Large Language Models exhibit unpredictable behaviours and humans increasingly rely on autonomous agents for complex tasks. Moreover, the AILD’s reliance on the concept of risk, inherited from the AI Act, is misplaced, as tort liability intervenes after the risk has materialized. However, the inherent risks in AI systems could justify EU harmonization of AI torts in the direction of strict liability. Bridging the liability gap will enhance AI safety and responsibility, better protect individuals from AI harms, and ensure that tort law remains a vital regulatory tool.
Citations: 0
Quantum-safe global encryption policy
IF 1 | Q1 LAW | Pub Date: 2024-09-07 | DOI: 10.1093/ijlit/eaae020
Alessia Zornetta
Every day, individuals use the Internet to communicate, gather information, and engage in commercial transactions. Encryption renders such activities secure and possible in the first place. While interest in encryption policy has fluctuated among policymakers for the past three decades, this paper argues for the need to promote strong encryption at a global level. The paper sheds light on the risks posed by quantum computing to national security, wherein breached encryption could compromise classified information, military intelligence, sensitive devices, and critical infrastructures. The argument for a worldwide encryption policy is further substantiated by the looming spectre of profound global power asymmetries. As the evolution of quantum technology remains concentrated within a select cohort of nations, those in possession of functional quantum computers could gain unprecedented advantages and exploit such technological supremacy. To buttress this assertion, this paper employs the logic of the ‘least trusted country problem’ to underscore the fragility of global security in the face of such imbalances. In response, this paper introduces a three-fold strategy designed to pave the path towards a quantum-secure future. The strategy encompasses the pivotal elements of post-quantum cryptography, quantum key distribution, and quantum random number generators. While acknowledging the challenges inherent in implementing these measures, including the projected decade-long timeline for establishing standardized solutions, the paper underscores the urgency of confronting the imminent quantum computing menace in a proactive manner. By adhering to these strategic imperatives, the global community stands poised to reinforce encryption practices against the potent capabilities that quantum computing wields. In so doing, the security and integrity of information exchange can be preserved in an ever more interconnected world.
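As a purely illustrative aside on the post-quantum cryptography element named in this abstract, the short Python sketch below shows one widely discussed migration pattern: deriving a hybrid session key from both a classical and a post-quantum shared secret, so that the key remains safe as long as either input does. The input secrets here are random placeholders (an assumption of this sketch) standing in for real ECDH and PQC KEM outputs; the sketch is not drawn from the paper itself.

```python
import hashlib
import hmac
import secrets

def hkdf_sha256(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (extract-then-expand, RFC 5869) over SHA-256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                             # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder secrets standing in for real key-exchange outputs (assumption:
# in a real deployment these would come from ECDH and a PQC KEM such as ML-KEM).
classical_secret = secrets.token_bytes(32)
post_quantum_secret = secrets.token_bytes(32)

# Hybrid derivation: an attacker must recover BOTH input secrets to obtain the
# session key, which is the rationale for hybrid schemes during the transition
# to post-quantum cryptography.
session_key = hkdf_sha256(
    salt=b"hybrid-pqc-demo",
    ikm=classical_secret + post_quantum_secret,
    info=b"session key v1",
)
print(session_key.hex())
```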
Citations: 0
Video-sharing-platforms and Brussels Ia regulation: navigating contractual jurisdictional challenges
IF 1 | Q1 LAW | Pub Date: 2024-09-07 | DOI: 10.1093/ijlit/eaae017
Josep Suquet
This article examines international jurisdiction in the resolution of contractual disputes involving Video-Sharing Platforms (VSP). It situates VSPs within the broader context defined by the Audio-visual Media Services Directive and identifies areas of litigation. The article discusses specific cases where national courts have resolved conflicts related to VSPs and sheds light onto the application of contractual and consumer forums under Regulation 1215/2012.
Citations: 0
Artificial intelligence co-regulation? The role of standards in the EU AI Act
IF 1 | Q1 LAW | Pub Date: 2024-07-08 | DOI: 10.1093/ijlit/eaae011
Marta Cantero Gamito, Christopher T Marsden
This article examines artificial intelligence (AI) co-regulation in the EU AI Act and the critical role of standards under this regulatory strategy. It engages with the foundation of democratic legitimacy in EU standardization, emphasizing the need for reform to keep pace with the rapid evolution of AI capabilities, as recently suggested by the European Parliament. The article highlights the challenges posed by interdisciplinarity and the lack of civil society expertise in standard-setting. It critiques the inadequate representation of societal stakeholders in the development of AI standards, posing pressing questions about the potential risks this entails to the protection of fundamental rights, given the lack of democratic oversight and the global composition of standard-developing organizations. The article scrutinizes how under the AI Act technical standards will define AI risks and mitigation measures and questions whether technical experts are adequately equipped to standardize thresholds of acceptable residual risks in different high-risk contexts. More specifically, the article examines the complexities of regulating AI, drawing attention to the multi-dimensional nature of identifying risks in AI systems and the value-laden nature of the task. It questions the potential creation of a typology of AI risks and highlights the need for a nuanced, inclusive, and context-specific approach to risk identification and mitigation. Consequently, in the article we underscore the imperative for continuous stakeholder involvement in developing, monitoring, and refining the technical rules and standards for high-risk AI applications. We also emphasize the need for rigorous training, certification, and surveillance measures to ensure the enforcement of fundamental rights in the face of AI developments. Finally, we recommend greater transparency and inclusivity in risk identification methodologies, urging for approaches that involve stakeholders and require a diverse skill set for risk assessment. At the same time, we also draw attention to the diversity within the European Union and the consequent need for localized risk assessments that consider national contexts, languages, institutions, and culture. In conclusion, the article argues that co-regulation under the AI Act necessitates a thorough re-examination and reform of standard-setting processes, to ensure a democratically legitimate, interdisciplinary, stakeholder-inclusive, and responsive approach to AI regulation, which can safeguard fundamental rights and anticipate, identify, and mitigate a broad spectrum of AI risks.
Citations: 0
Will the real data sovereign please stand up? An EU policy response to sovereignty in data spaces
IF 1 | Q1 LAW | Pub Date: 2024-07-06 | DOI: 10.1093/ijlit/eaae006
Mark Ryan, Paula Gürtler, Artur Bogucki
This paper aims to evaluate the concept of data sovereignty as applied to data spaces, particularly the Common European Data Space (CEDS). The CEDS aims to develop a single European data market through nine domain-specific data spaces: health, industrial and manufacturing, agriculture, finance, mobility, Green Deal, energy, public administration, and skills. It aims to do this by providing a secure and trustworthy technical architecture, a robust data-sharing business model realized through effective governance, and ensuring data sovereignty. Ensuring data sovereignty, however, is challenging when different agents all claim authority over their data within a data space. This paper focuses on three data sovereign agents in the CEDS—individual, organization, and state—to examine how data sovereignty can be implemented in data spaces based on current European Union regulations and whether shortcomings still need to be addressed.
Citations: 0
Rethinking Exclusivity – A Review of Artificial Intelligence & Intellectual Property by Jyh-An Lee, Reto M Hilty and Kung-Chung Liu
IF 1 | Q1 LAW | Pub Date: 2024-06-29 | DOI: 10.1093/ijlit/eaae007
Lisa van Dongen
This review evaluates the edited work ‘Artificial Intelligence & Intellectual Property’. The book’s aim and audience are defined and its contents summarized both generally and chapter by chapter. The review also considers how the book has fared with its challenging scope, the difficult subject matter it covers, and the delivery of a coherent story and conclusions. It is concluded in this review that it speaks for the quality of both the cooperation among contributors and the editors’ vision that the book was quite successful on all accounts despite the difficulty of its project. The review highlights a few of the book’s arguments in the patent and copyright context put forth in support of two of the main discernible conclusions, briefly commenting on their persuasiveness, strengths, and limits. Concluding with some general words of reflection, the book is recommended as an enlightening read.
Citations: 0
Risks, innovation, and adaptability in the UK’s incrementalism versus the European Union’s comprehensive artificial intelligence regulation
IF 1 | Q1 LAW | Pub Date: 2024-06-29 | DOI: 10.1093/ijlit/eaae013
Asress Adimi Gikay
The regulation of artificial intelligence (AI) should strike a balance between addressing the risks of the technology and its benefits through enabling useful innovation whilst remaining adaptable to evolving risks. The European Union’s (EU) overarching risk-based regulation subjects AI systems across industries to a set of regulatory standards depending on where they fall in the risk bucket, whilst the UK’s sectoral approach advocates for an incremental regulation. By demonstrating the EU AI Act’s inability to adapt to evolving risks and regulate the technology proportionately, this article argues that the UK should avoid the EU AI Act’s compartmentalized high-risk classification system. The UK should refine its incremental regulation by adopting a generic principle for risk classification that allows for contextual risk assessment whilst adapting to evolving risks. The article contends that if refined appropriately, the UK’s incremental approach that relies on coordinate sectionalism encourages innovation without undermining the UK technology sector’s competitiveness in the global market of compliant AI, while also mitigating the potential risks presented by the technology.
Citations: 0
Confronting the metadata dilemma in India: a turn to context and proportionality
IF 1 | Q1 LAW | Pub Date: 2024-06-20 | DOI: 10.1093/ijlit/eaae012
Rudraksh Lakra, Abhijeet Shrivastava
This paper problematizes the increasing trend of metadata collection by law enforcement, in light of the ‘going dark’ debate, which was spurred by the widespread adoption of secure encryption standards. Focusing on Indian privacy law, which remains nascent as of writing, we examine and propose potential constitutional limitations on metadata collection, and provide substantive guidance on their application. These limitations are bifurcated into two stages: first, whether metadata collection infringes upon the right to privacy, and second, whether the infringement is justified. In determining whether the collection of metadata in a specific case infringes privacy, we conceive of a ‘contextual approach’, challenging the usual ontological subordination of ‘metadata’ in relation to ‘content data’. At the second stage, we centre the standard of proportionality. We offer substantive guidance for Indian courts at each step of the test, including the development of a ‘risk profile’ of metadata collection practices. Such guidance is crucial, given the technically intricate nature of cases involving metadata processing.
Citations: 0
The transatlantic divide: intermediary liability, free expression, and the limits of trade harmonization
IF 1 | Q1 LAW | Pub Date: 2024-03-11 | DOI: 10.1093/ijlit/eaae004
Han-Wei Liu
Amid escalating apprehensions surrounding content regulation, the USA has discreetly integrated provisions reminiscent of its Communications Decency Act Section 230 (CDA 230) into trade agreements, offering broad immunity. This scholarly analysis critically assesses this manoeuvre by juxtaposing such CDA 230-like provisions against the UK’s established legal framework governing online content and freedom of expression. Utilizing a comparative legal methodology, the paper underscores the pronounced differences between the USA and UK stances on intermediary liability for third-party content, moulded by their unique constitutional foundations and jurisprudential interpretations of free speech rights. The insertion of CDA 230-aligned clauses into trade agreements poses a potential threat to the UK’s nuanced equilibrium between safeguarding free speech and upholding other paramount interests, such as privacy and reputation. A scrutiny of UK defamation statutes and content regulation protocols reveals inherent challenges in transplanting CDA 230 provisions into trade contexts. In summation, the paper ardently supports a diversified approach to online content governance and cautions against standardizing intermediary liability laws via trade agreements, especially between nations with divergent foundational beliefs. It fervently endorses a cross-disciplinary discourse involving both trade and legal specialists to ensure the preservation of free expression while concurrently recognizing the intricacies of crafting universally applicable standards for online platforms and content regulation.
Citations: 0