Pub Date: 2023-10-17 | DOI: 10.1080/13600869.2023.2269498
Efstratios Koulierakis
Data protection by design is an obligation for data controllers under article 25(1) of the General Data Protection Regulation (GDPR). The present paper explores the concept of data protection by design and proposes that data protection certificates can offer guidance to data controllers about compliance with this GDPR obligation. An exploration of officially approved certification schemes shows that certification requirements may lay down concrete use cases which can guide data controllers towards compliance with the obligation of data protection by design. Even though these policies are not a comprehensive guide for data protection by design, they lay down valuable solutions with respect to effective compliance. Moreover, the data protection measures of compliance in certification criteria have been approved by the competent Data Protection Authority and possibly the European Data Protection Board. As the present paper argues, the official approval by the competent authorities creates legitimate expectations under European Union law. Specifically, data controllers can legitimately expect that abidance by approved safeguards meets the expectations of the authorities that are entrusted with monitoring their compliance. For these reasons, certification, though an ex post mechanism, can offer valuable ex ante guidance.
Title: Certification as guidance for data protection by design
Journal: International Review of Law, Computers and Technology
Pub Date: 2023-09-14 | DOI: 10.1080/13600869.2023.2242671
Jon Truby, Rafael Dean Brown, Imad Antoine Ibrahim
ABSTRACT This article addresses the regulation of telematics in vehicles. The objective is to navigate the need to protect data privacy and data security while enhancing road safety through telematics. Vehicle telematics devices utilizing analytical and predictive technology can help identify and reduce the risk of dangerous driving. Such devices are a growing tool in the insurance industry and amongst vehicle manufacturers, allowing safe driving to be rewarded whilst dangerous driving can be penalized. Data generated through telematics can also be of use to traffic authorities and governments to help with traffic management and planning. As such, the EU is planning to mandate the use of such devices. The growing use of telematics has, however, faced major data privacy and data security concerns. The article evaluates regulatory responses from the US and the EU, highlighting specific European countries. The purpose is to find, through comparative analysis, an effective balance between driver safety, data privacy and data security.
KEYWORDS: vehicle telematics devices; predictive technology; driver data
Disclosure statement: No potential conflict of interest was reported by the author(s).
Funding: This study was made possible by NPRP grant NPRP12S-0129-190017 from the Qatar National Research Fund (a member of the Qatar Foundation). The findings of this study are solely the responsibility of the authors.
Title: Regulatory options for vehicle telematics devices: balancing driver safety, data privacy and data security
Journal: International Review of Law, Computers and Technology
Pub Date: 2023-06-18 | DOI: 10.1080/13600869.2023.2221820
O. Svitlychnyy, I. Matselyukh, Natalia Yaselska, Svitlana L. Glugovska, Olha I. Dyshleva
ABSTRACT The article analyzes the measures implemented by countries in the field of access and administration of justice, focusing on the use of electronic justice as a comprehensive remote mechanism during the COVID-19 pandemic. The study examines the experiences of the European Union and Ukraine to understand the effectiveness of electronic justice in ensuring the right of access to justice. Various scientific methods such as legal-statistical, systematic, formal-legal, and cybernetic methods were employed in the study. The analysis reveals that the measures taken by countries to prevent restrictions on human rights in the judicial system are not perfect and require further development. The study identifies key issues in the practical implementation of electronic justice and provides specific recommendations for improvement. The research fills a gap in comprehensive scientific studies on continuous and effective consideration of court cases during the pandemic. The practical and scientific value of the article lies in its relevance to practitioners and scholars worldwide, who are interested in the realization of the right of access to justice and the functioning of electronic justice. The national experiences and recommendations presented in the article can also be applied by European countries to enhance the effectiveness of their electronic justice systems.
Title: Electronic justice as a mechanism for ensuring the right of access to justice in a pandemic: the experience of Ukraine and the EU
Journal: International Review of Law, Computers and Technology, pages 325-340
Pub Date: 2023-06-15 | DOI: 10.1080/13600869.2023.2221823
Monica Horten
At the heart of this paper is an examination of the colloquial concept of a ‘shadow ban’. It reveals ways in which algorithms on the Facebook platform have the effect of suppressing content distribution without specifically targeting it for removal, and examines the consequential stifling of users’ speech. It reveals how the Facebook shadow ban is implemented by blocking dissemination of content in News Feed. The decision-making criteria are based on ‘behaviour’, a term that relates to activity of the page that is identifiable through patterns in the data. It’s a technique that is rooted in computer security, and raises questions about the balance between security and freedom of expression. The paper is situated in the field of responsibility of online platforms for content moderation. It studies the experience of the shadow ban on 20 UK-based Facebook Pages over the period from November 2019 to January 2021. The potential harm was evaluated using human rights standards and a comparative metric produced from Facebook Insights data. The empirical research is connected to recent legislative developments: the EU’s Digital Services Act and the UK’s Online Safety Bill. Its most salient contribution may be around ‘behaviour’ monitoring and its interpretation by legislators.
Title: Algorithms patrolling content: where’s the harm?
Journal: International Review of Law, Computers and Technology
Pub Date: 2023-05-04 | DOI: 10.1080/13600869.2023.2192565
Rory O'Boyle, James Griffin
As guest editors of the special issue, we are pleased to introduce four papers that were presented at the annual BILETA conference, our first hybrid conference since the pandemic. Held at the University of Exeter, the conference was based around the theme of creativity in legal regulation and featured a large number of presentations on the topic. In her insightful article Coordinating Digital Regulation in the UK: Is the Digital Regulation Cooperation Forum (DRCF) up to the task?, Dr Aysem Diker Vanberg explores the coordination of digital regulation in the UK and the effectiveness or otherwise of the DRCF in achieving such coordination. Aysem argues persuasively that in its current form the DRCF may not achieve the objectives of promoting more coherence and collaboration, and concludes that to respond effectively to the challenges posed by digital technologies, coordination between the various regulatory authorities must be extended and formalised. Liesa Keunen has written about tax audits and fishing expeditions. On this very current topic, Liesa outlines how technology's ability to collect, process and extract new knowledge has changed the way information can be gleaned for tax administration. Liesa looks at the issue of fishing expeditions, questioning whether tax authorities might be engaging in these. Liesa comes to a number of conclusions: a) that fishing expeditions are prohibited, b) that they are an intentional investigation with a purpose, and c) that speculation and excessiveness are distinctive conceptual characteristics of a prohibited fishing expedition. In her engaging article The European approach to damage caused by artificial intelligence enabled by global navigation satellite systems, Ioana Bratu provides a description of the legislative proposals issued by the European Commission in 2021 and 2022 in the context of AI systems enabled by GNSS. The article describes the legal bases of liability for damage caused by AI enabled by GNSS and critically evaluates the proposed EU solutions. Lastly, Dr Mehmet Unver assesses healthcare as a socio-technical system, focusing on fiduciary relationships and a proposed governance framework. The article draws a conceptual framework for trust, and considers its relationship with AI and how it is governed under fiduciary law. It takes a socio-technical system perspective and examines how to govern trust in such an AI-driven system. Mehmet argues that a holistic viewpoint can provide a generalisable framework that can enable trust in AI-driven socio-technical systems.
Title: Editorial for special issue. BILETA Conference 2022
Journal: International Review of Law, Computers and Technology, page 127
Pub Date: 2023-03-29 | DOI: 10.1080/13600869.2023.2192568
Liesa Keunen
ABSTRACT The technological ability to collect, process and extract new and predictive knowledge from big data has changed our society. Based on large amounts of information about e.g. location, payments and communication, patterns can be detected and profiles about citizens can be generated and applied. Knowledge acquired from big data is valuable to tax administrations because it makes the global fight against tax fraud more efficient. Big data usage by tax administrations does raise significant legal questions, however, one being the extent to which such use could qualify as a ‘fishing expedition’. It has been argued that tax administrations are not allowed to search (‘fish’) for information the existence of which is uncertain. A closer look at the concept of ‘fishing expeditions’ unveils that there is no generally accepted definition, although authors and judges often refer to it. This article provides an overview of the main characteristics of this concept using a selection of case law of the ECtHR, CJEU and the EGC, literature and policy documents. The identification of these characteristics enabled us to draw conclusions on what fishing expeditions possibly are, and which consequences this may have for the legitimacy of big data gathering and use by tax administrations.
Title: Big data tax audits: the conceptualisation of fishing expeditions
Journal: International Review of Law, Computers and Technology, pages 166-197
Pub Date: 2023-03-29 | DOI: 10.1080/13600869.2023.2192569
M. Unver
ABSTRACT Fiduciary law aims to mitigate the inherent risk of ‘trust’, which helps restore interpersonal trust. It remains to be answered how trust should be governed in an AI-driven socio-technical system where technical and social factors are involved, including interpersonal relationships and AI-human interactions. Taking interpersonal trust as the backdrop of analysis, this article seeks answers to this question with a focus on healthcare. It first draws a conceptual framework regarding ‘trust’ and investigates its interplay with AI, as well as examining how it is governed under fiduciary law. Subsequently, it upholds a socio-technical system perspective, examining how to enable and sustain trust in an AI-driven socio-technical system. A governance model is then developed to elicit ‘intrinsic’, ‘dynamic’ and ‘ethical’ values of trust attributed to various elements under a tri-partite framework. It is recognised that findings of the literature as to trust, its trajectory and implications can be implemented within the proposed framework. Furthermore, the model brings novelty by re-conceptualising the elements of ‘trust’ and associated values, marking a distinction from its interpersonal roots and fiduciary relationships. It is considered that this governance model, by upholding a holistic viewpoint, provides a generalisable framework that can construct, maintain and restore trust in AI-driven socio-technical systems.
Title: Governing fiduciary relationships or building up a governance model for trust in AI? Review of healthcare as a socio-technical system
Journal: International Review of Law, Computers and Technology, pages 198-226
Pub Date: 2023-03-27 | DOI: 10.1080/13600869.2023.2192566
A. Vanberg
ABSTRACT The shift to online commerce and communication in the global pandemic, the Cambridge Analytica scandal and the cancel culture exacerbated by social media platforms have demonstrated our increasing reliance on digital platforms. Digital regulation is receiving increasing scrutiny globally and in the UK, as exemplified by the European Union's recent Digital Markets Act and Digital Services Act and the establishment of the Digital Markets Unit within the Competition and Markets Authority in the UK. In July 2020, the Competition and Markets Authority, the Information Commissioner’s Office and the Office of Communications formed the Digital Regulation Cooperation Forum (DRCF) to coordinate digital regulation between various regulators. In April 2021, the Financial Conduct Authority also joined the DRCF as a full member. Against this backdrop, the paper explores the coordination of digital regulation in the UK and analyses how effective the DRCF is in contributing to this objective. It is argued that to respond effectively to the challenges posed by digital technologies, coordination between the various regulatory authorities must be extended and formalised to avoid fragmented enforcement. Whilst the DRCF is a step in the right direction, it needs to engage more closely with other relevant stakeholders.
Title: Coordinating digital regulation in the UK: is the digital regulation cooperation forum (DRCF) up to the task?
Journal: International Review of Law, Computers and Technology, pages 128-146
Pub Date: 2023-03-27 | DOI: 10.1080/13600869.2023.2192567
I. Bratu
ABSTRACT Global Navigation Satellite Systems (GNSS), such as GPS or Galileo, have become indispensable in various sectors, including road traffic, aviation, and emergency response services. With recent technological advancements, GNSS have been incorporated as a fundamental constituent of artificial intelligence (AI) systems. Self-driving vehicles, autonomous aircraft, and drones rely increasingly on GNSS, as these technologies are currently the sole source of globally consistent, precise positioning and timing. However, GNSS are not entirely risk-free: satellite signals can be susceptible to interference, and other technical malfunctions may disrupt the proper functioning of AI systems. In this context, this article explores the legal foundations for ascribing liability when accidents are caused by AI systems due to a GNSS malfunction, in the light of recent European regulatory initiatives, namely the AI Act, the AI Liability Directive and the revised Product Liability Directive.
{"title":"A first critical analysis of the European approach to damage caused by artificial intelligence enabled by global navigation satellite systems. A bridge to nowhere or a cloud with a silver lining?","authors":"I. Bratu","doi":"10.1080/13600869.2023.2192567","DOIUrl":"https://doi.org/10.1080/13600869.2023.2192567","url":null,"abstract":"ABSTRACT Global Navigation Satellite Systems (GNSS), such as GPS or Galileo, have become indispensable in various sectors, including road traffic, aviation, and emergency response services. With recent technological advancements, GNSS have been incorporated as a fundamental constituent of artificial intelligence (AI) systems. Self-driving vehicles, autonomous aircraft, and drones rely increasingly on GNSS, as these technologies are currently the sole source of globally consistent, precise positioning and timing. However, GNSS are not entirely risk-free: satellite signals can be susceptible to interference, and other technical malfunctions may disrupt the proper functioning of AI systems. In this context, this article explores the legal foundations for ascribing liability when accidents are caused by AI systems due to a GNSS malfunction, in the light of recent European regulatory initiatives, namely the AI Act, the AI Liability Directive and the revised Product Liability Directive.","PeriodicalId":53660,"journal":{"name":"International Review of Law, Computers and Technology","volume":"47 1","pages":"147 - 165"},"PeriodicalIF":0.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80603136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-10 | DOI: 10.1080/13600869.2022.2164462
Noriyuki Katagiri
ABSTRACT I explore reasons why existing defenses have failed to prevent cyber attacks on critical infrastructure. I study one of the least studied notions of cyberspace behavior, known as target distinction. Drawn from customary international law, the principle posits that states should distinguish between combatants and noncombatants among their wartime targets and use force only against military objects. States should not target critical infrastructure, such as gas pipelines, because doing so harms the civilian populations who use it. I investigate four issues that keep the principle from preventing attacks on critical infrastructure. The first is its inability to capture the networked nature of critical infrastructure beyond simple dual-use (military and cyber) purposes. The second defect is the interpretive confusion that the principle generates over the rules of engagement. The third problem is the omission from its coverage of actors other than nation states. By design, the principle condones cyber attacks on infrastructure by nonstate actors, or by those whose linkage to state sponsors cannot be legally established. Finally, the principle is prone to fail when hackers lack a proper understanding of what it does and does not allow.
{"title":"Hackers of critical infrastructure: expectations and limits of the principle of target distinction","authors":"Noriyuki Katagiri","doi":"10.1080/13600869.2022.2164462","DOIUrl":"https://doi.org/10.1080/13600869.2022.2164462","url":null,"abstract":"ABSTRACT I explore reasons why existing defenses have failed to prevent cyber attacks on critical infrastructure. I study one of the least studied notions of cyberspace behavior, known as target distinction. Drawn from customary international law, the principle posits that states should distinguish between combatants and noncombatants among their wartime targets and use force only against military objects. States should not target critical infrastructure, such as gas pipelines, because doing so harms the civilian populations who use it. I investigate four issues that keep the principle from preventing attacks on critical infrastructure. The first is its inability to capture the networked nature of critical infrastructure beyond simple dual-use (military and cyber) purposes. The second defect is the interpretive confusion that the principle generates over the rules of engagement. The third problem is the omission from its coverage of actors other than nation states. By design, the principle condones cyber attacks on infrastructure by nonstate actors, or by those whose linkage to state sponsors cannot be legally established. Finally, the principle is prone to fail when hackers lack a proper understanding of what it does and does not allow.","PeriodicalId":53660,"journal":{"name":"International Review of Law, Computers and Technology","volume":"180 1","pages":"274 - 293"},"PeriodicalIF":0.0,"publicationDate":"2023-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88081693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}