
Computer Law & Security Review: Latest Publications

Introduction for computer law and security review: special issue “knowledge management for law”
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-02-13. DOI: 10.1016/j.clsr.2024.105949
Emilio Sulis, Luigi Di Caro, Rohan Nanda
{"title":"Introduction for computer law and security review: special issue “knowledge management for law”","authors":"Emilio Sulis , Luigi Di Caro , Rohan Nanda","doi":"10.1016/j.clsr.2024.105949","DOIUrl":"https://doi.org/10.1016/j.clsr.2024.105949","url":null,"abstract":"","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139727065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Consumer neuro devices within EU product safety law: Are we prepared for big tech ante portas?
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-02-10. DOI: 10.1016/j.clsr.2024.105945
Elisabeth Steindl

Previously confined to the distinct medical market, neurotechnologies are expanding rapidly into the consumer market, driven by technological advancements and substantial investments. While these technologies offer promising benefits, concerns have emerged regarding the suitability of existing legal frameworks to adequately address the risks they present. Against the background of an ongoing global debate on new policies or new ‘neurorights’ regulating neurotechnology, this paper delves into the regulation of consumer Brain-Computer Interfaces (BCIs) in the European Union (EU), focusing on the pertinent product safety legislation.

The analysis will primarily examine the sector-specific product safety law for medical devices, the Medical Devices Regulation (MDR). It will meticulously delineate which consumer BCIs fall within its scope and are obliged to comply with the requirements outlined. The tech-based approach of Annex XVI MDR, coupled with recent amendments, shows that the EU has adopted a forward-thinking rationale towards regulating health-related risks associated with consumer BCIs within existing EU medical devices legislation, while abstaining from over-regulating aspects therein that are beyond its core objectives.

In addition, the paper will discuss developments in EU horizontal product safety law, regulating all consumer BCIs that are not subject to sector-specific product safety legislation. In its recently adopted General Product Safety Regulation (GPSR), the EU has introduced several provisions addressing digital products. Inter alia, these changes will enhance the horizontal regulation of consumer BCIs.

Overall, within the context of product safety law, the recent adaptations affirm notable efforts by the EU to refine the legal framework that governs consumer BCIs, striking a delicate balance between effective technology regulation and not impeding innovation.

Citations: 0
Substantive fairness in the GDPR: Fairness Elements for Article 5.1a GDPR
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-02-10. DOI: 10.1016/j.clsr.2024.105942
Andreas Häuselmann, Bart Custers

According to the fairness principle in Article 5.1a of the EU General Data Protection Regulation (GDPR), data controllers must process personal data fairly. However, the GDPR fails to explain what fairness is and how it should be achieved. In fact, the GDPR focuses mostly on procedural fairness: if personal data are processed in compliance with the GDPR, for instance, by ensuring lawfulness and transparency, such processing is assumed to be fair. Because some forms of data processing can still be unfair, even if all the GDPR's procedural rules are complied with, we argue that substantive fairness is also an essential part of the GDPR's fairness principle and necessary to achieve the GDPR's goal of offering effective protection to data subjects. Substantive fairness is not mentioned in the GDPR and no guidance on substantive fairness is provided. In this paper, we provide elements of substantive fairness derived from EU consumer law, competition law, non-discrimination law, and data protection law that can help interpret the substantive part of the GDPR's fairness principle. Three elements derived from consumer protection law are good faith, no detrimental effects, and autonomy (e.g., no misleading or aggressive practices). We derive the element of abuse of dominant position (and power inequalities) from competition law. From other areas of law, we derive non-discrimination, vulnerabilities, and accuracy as elements relevant to interpreting substantive fairness. Although this may not be a complete list, cumulatively these elements may help interpret Article 5.1a GDPR and help achieve fairness in data protection law.

Citations: 0
Affection as a service: Ghostbots and the changing nature of mourning
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-02-09. DOI: 10.1016/j.clsr.2024.105943
Mauricio Figueroa-Torres

This article elucidates the rise of ghostbots, artificial conversational agents that emulate the deceased, as marketable commodities. The study explains the role of ghostbots in changing how mourning is experienced. It highlights how ghostbots alter the relationship between the bereaved and the departed, transforming it into a customer-object relationship within legal discourse. By critically examining the nexus between commodification and the law, this study underscores how ghostbots signify a different and intriguing form of commodification in the interaction between the living and the deceased, within the dynamics of the Digital Afterlife Industry. By furnishing this scrutiny, the article contributes to comprehending the commodification inherent in ghostbots and concludes by delineating specific foundational or seminal points for subsequent academic discussion to aid a more holistic deliberation on the use, commercialisation, or regulation of these systems, and other affection-as-a-service products.

Citations: 0
How to design data access for researchers: A legal and software development perspective
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-02-06. DOI: 10.1016/j.clsr.2024.105946
M.Z. van Drunen, A. Noroozian

Public scrutiny of platforms has been limited by a lack of transparency. In response, EU law increasingly requires platforms to provide data to researchers. The Digital Services Act and the proposed Regulation on the Transparency and Targeting of Political Advertising in particular require platforms to provide access to data through ad libraries and in response to data access requests. However, these obligations leave platforms considerable discretion to determine how access to data is provided. As the history of platforms’ self-regulated data access projects shows, the technical choices involved in designing data access significantly affect how researchers can use the provided data to scrutinise platforms. Ignoring the way data access is designed therefore creates a danger that platforms’ ability to limit research into their services simply shifts from controlling what data is available to researchers, to how data access is provided. This article explores how the Digital Services Act and proposed Political Advertising Regulation should be used to control the operationalisation of data access obligations that enable researchers to scrutinise platforms. It argues the operationalisation of data access regimes should not only be seen as a legal problem, but also as a software design problem. To that end it explores how software development principles may inform the operationalisation of data access obligations. The article closes by exploring the legal mechanisms available in the Digital Services Act and proposed Political Advertising Regulation to exercise control over the design of data access regimes, and makes five recommendations for ways in which these mechanisms should be used to enable research into platforms.
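To make the article's framing of data access as a software design problem concrete, here is a minimal, hypothetical sketch (it is not the authors' proposal and not any platform's actual API; all names and parameters are invented): a toy gateway in which design choices such as researcher vetting, query scope and rate limits become code rather than legal text.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AccessRequest:
    researcher_id: str   # assumes an upstream vetting process issued this ID
    dataset: str         # e.g. "ad-library"
    fields: tuple        # columns the researcher is allowed to receive
    date_from: date
    date_to: date

class DataAccessGateway:
    """Toy gateway illustrating where design choices (vetting, scoping,
    rate limiting) are operationalised in software."""

    def __init__(self, vetted_researchers, daily_quota=5):
        self.vetted = set(vetted_researchers)
        self.daily_quota = daily_quota
        self.requests_today = {}

    def handle(self, req: AccessRequest):
        if req.researcher_id not in self.vetted:
            raise PermissionError("researcher not vetted")
        used = self.requests_today.get(req.researcher_id, 0)
        if used >= self.daily_quota:
            raise RuntimeError("daily request quota exceeded")
        self.requests_today[req.researcher_id] = used + 1
        # A real system would dispatch a scoped query to the data store;
        # here the accepted scope is simply echoed back.
        return {"dataset": req.dataset,
                "fields": req.fields,
                "period": (req.date_from.isoformat(), req.date_to.isoformat())}

# Illustrative usage with made-up identifiers
gateway = DataAccessGateway(vetted_researchers={"res-001"})
print(gateway.handle(AccessRequest("res-001", "ad-library",
                                   ("advertiser", "spend", "targeting_criteria"),
                                   date(2024, 1, 1), date(2024, 1, 31))))
```

Each of these settings (who counts as vetted, how large the quota is, which fields are exposed) is the kind of technical choice the article argues should not be left to platform discretion alone.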

Citations: 0
Is the regulation of connected and automated vehicles (CAVs) a wicked problem and why does it matter?
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-02-03. DOI: 10.1016/j.clsr.2024.105944
Amy Dunphy

The anticipated public deployment of highly connected and automated vehicles (‘CAVs’) has the potential to introduce a range of complex regulatory challenges because of the novel and expansive way that data is generated, used, collected and shared by CAVs. Regulators within Australia and internationally are facing the complex task of developing rules and regulations to meet these challenges against the backdrop of continuing uncertainty about the ultimate form of CAVs and the timeframe for their introduction. This paper undertakes a novel examination of whether the regulation of high-level CAVs and their associated data will constitute a ‘wicked problem’. The wicked problem framework provides a valuable lens through which to examine difficult issues that are faced by regulators and, in turn, to aid in developing regulatory responses and to navigate such issues. A new four-quadrant framework is developed and applied. It draws on and expands the seminal work on wicked problems by Rittel and Webber, and Alford and Head. The framework is used to critically reflect on whether CAVs are a ‘wicked problem’, and, if so, what might be the potential consequences for policy and regulatory development involving the data environment. This paper considers whether evaluating the ‘wickedness’ of a problem is a useful exercise for regulators, and the potential impact on developing novel approaches to regulatory responses.

Citations: 0
Transborder flow of personal data (TDF) in Africa: Stocktaking the ills and gains of a divergently regulated business mechanism
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-01-31. DOI: 10.1016/j.clsr.2024.105940
Olumide Babalola

Technology-based transactions are inseparable from the routine exchange of data. These exchanges may not pose privacy problems until the movement takes extra-territorial turns, thereby facing multiple levels of cross-border regulations. In the 1980s, the frequency of transfer of personal data beyond geographical boundaries in Europe precipitated the regulation of transborder data flows (TDF), beginning with the enactment of the Organisation for Economic Co-operation and Development (OECD) Guidelines. In Africa, the concept of TDF is more complex than usually viewed by the stakeholders, and this is partly because neither the African Union nor other regional bodies have introduced legislation on TDF. Like many concepts in data protection, TDF is bereft of a generally accepted meaning. Regardless of the uncertainty, this paper approaches TDF as the transmission of personal data from one country to another country or international entity for the purpose of processing. The paper discusses some definitions of TDF as understood under African regional and national data protection legislation. In a comparative and normative approach, the paper analyses the barriers to TDF in Africa vis-à-vis the European experience and then concludes with recommendations for workable TDF within and outside the continent from an African perspective, beginning with the harmonization of existing regional frameworks.

Citations: 0
Fraud by generative AI chatbots: On the thin line between deception and negligence
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-01-29. DOI: 10.1016/j.clsr.2024.105941
Maarten Herbosch

The use of generative AI systems is on the rise. As a result, we are increasingly often conversing with AI chatbots rather than with fellow humans. This increasing use of AI systems leads to legal challenges as well, particularly when the chatbot provides incorrect information. In this article, we study whether someone who decides to contract on the basis of incorrect information provided by a generative AI chatbot might invoke the fraud regime to annul the resulting contract in various legal systems. During this analysis, it becomes clear that some of the requirements that are currently being put forward from a public law perspective, such as in the European AI Act, may also naturally arise from existing private law figures. In the same vein, this analysis highlights the interesting intradisciplinary feedback between instruments of public law and other legal domains.

Citations: 0
Privacy icons as a component of effective transparency and controls under the GDPR: effective data protection by design based on art. 25 GDPR
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-01-28. DOI: 10.1016/j.clsr.2023.105924
Max von Grafenstein, Isabel Kiefaber, Julie Heumüller, Valentin Rupp, Paul Graßl, Otto Kolless, Zsófia Puzst

Understandable privacy information builds trust with users and therefore provides an important competitive advantage for the provider. However, designing privacy information that is both truthful and easy for users to understand is challenging. There are many complex balancing decisions to be made, not only with respect to legal but also visual and user experience design issues. This is why designing understandable privacy information requires combining at least three disciplines that have had little to do with each other in current practice: law, visual design, and user experience design research. The challenges of combining all three disciplines actually culminate in the design and use of Privacy Icons, which are expected to make lengthy legal texts clear and easy to understand (see Art. 12 sect. 7 of the EU General Data Protection Regulation). However, that is much easier said than done. In this paper, we summarise our key learnings from a five-year research process on how to design Privacy Icons as a component of effective transparency and user controls. We will provide 1) examples of information and control architectures for privacy policies, forms of consent (especially in the form of cookie banners), privacy dashboards and consent agents in which Privacy Icons may be embedded, 2) a non-exhaustive set of more than 150 Privacy Icons, and above all 3) a concept and process model that can be used to implement the requirements of the GDPR in terms of transparency and user controls in an effective way, according to the data protection by design approach in Art. 25 sect. 1 GDPR. The paper will show that it is a rocky road to the stars and we still haven't arrived, but at least we know how to go.
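As a purely illustrative sketch of the kind of layered information and control architecture the abstract mentions, the structure below pairs each processing purpose with a privacy icon identifier, a one-line first-layer summary and the full second-layer clause. The icon names, purposes and classes are hypothetical assumptions, not the paper's actual icon set or tooling.

```python
from dataclasses import dataclass, field

@dataclass
class PurposeNotice:
    """One processing purpose in a layered notice: the icon and summary form
    the first layer, the full legal text the second layer."""
    purpose_id: str
    icon: str            # identifier of a privacy icon, e.g. "icon-profiling"
    summary: str         # short plain-language explanation
    legal_text: str      # full clause from the privacy policy
    user_consented: bool = False

@dataclass
class ConsentBanner:
    controller: str
    purposes: list = field(default_factory=list)

    def render_first_layer(self) -> str:
        """First layer shown in the banner: icons plus one-line summaries."""
        return "\n".join(f"[{p.icon}] {p.summary}" for p in self.purposes)

    def record_choice(self, purpose_id: str, consented: bool) -> None:
        """Store the user's per-purpose choice; a consent agent or privacy
        dashboard could later read and update the same structure."""
        for p in self.purposes:
            if p.purpose_id == purpose_id:
                p.user_consented = consented

# Illustrative usage with invented purposes and icon identifiers
banner = ConsentBanner(
    controller="example.com",
    purposes=[
        PurposeNotice("analytics", "icon-analytics",
                      "We measure how the site is used.",
                      "Full analytics clause ..."),
        PurposeNotice("ads", "icon-profiling",
                      "We build an interest profile for advertising.",
                      "Full advertising clause ..."),
    ],
)
print(banner.render_first_layer())
banner.record_choice("ads", consented=False)
```

Whether such a structure actually improves understanding is, as the paper emphasises, an empirical user-experience question rather than something the data model itself can guarantee.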

Citations: 0
Discrimination for the sake of fairness by design and its legal framework
IF 2.9, Tier 3 (Sociology), Q1 Social Sciences. Pub Date: 2024-01-27. DOI: 10.1016/j.clsr.2023.105916
Holly Hoch, Corinna Hertweck, Michele Loi, Aurelia Tamò-Larrieux

The more algorithms are enlisted to make critical determinations about human actors, the more frequently we see these algorithms appear in sensational headlines crying foul on discrimination. There is broad consensus among computer scientists working on this issue that such discrimination can be reduced by intentionally collecting and consciously using sensitive information about demographic features like sex, gender, race, religion etc. Companies implementing such algorithms might, however, be wary of allowing algorithms access to such data as they fear legal repercussions, as the promoted standard has been to omit protected attributes, otherwise dubbed “fairness through unawareness”. This paper asks whether such wariness is justified in light of EU data protection and anti-discrimination laws. In order to answer this question, we introduce a specific case and analyze how EU law might apply when an algorithm accesses sensitive information to make fairer predictions. We review whether such measures constitute discrimination, and for whom, arriving at different conclusions based on how we define the harm of discrimination and the groups we compare. Finding that several legal claims could arise regarding the use of sensitive information, we ultimately conclude that the proffered fairness measures would be considered a positive (or affirmative) action under EU law. As such, the appropriate use of sensitive information in order to increase the fairness of an algorithm is a positive action, and not per se prohibited by EU law.
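The contrast the abstract draws between “fairness through unawareness” and the deliberate use of sensitive attributes can be illustrated with a short sketch. Assuming a scoring model has already been trained without the protected attribute, the attribute is consulted only to set group-specific decision thresholds that equalise selection rates; the data, group labels and target rate below are invented for illustration.

```python
import numpy as np

def group_aware_thresholds(scores, groups, target_rate=0.3):
    """Pick a per-group score threshold so each group is selected at roughly
    the same rate: the sensitive attribute calibrates the decision rule but
    is not used as a predictive feature."""
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # The (1 - target_rate) quantile admits about target_rate of the group.
        thresholds[g] = np.quantile(group_scores, 1 - target_rate)
    return thresholds

def decide(scores, groups, thresholds):
    """Apply the group-specific thresholds to produce binary decisions."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

# Invented example: scores from some upstream model, two demographic groups.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
groups = rng.choice(np.array(["A", "B"]), size=1000)
thresholds = group_aware_thresholds(scores, groups, target_rate=0.3)
decisions = decide(scores, groups, thresholds)
for g in ("A", "B"):
    print(g, round(decisions[groups == g].mean(), 2))  # both close to 0.30
```

It is precisely this kind of explicit, group-conditional use of a protected attribute whose permissibility under EU data protection and anti-discrimination law the article then examines.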

Citations: 0