Pub Date: 2021-07-03 | DOI: 10.1080/17579961.2021.1977219
P. Hacker
ABSTRACT In response to recent regulatory initiatives at the EU level, this article shows that training data for AI not only play a key role in the development of AI applications, but are also currently captured only inadequately by EU law. In doing so, I focus on three central risks of AI training data: risks concerning data quality, discrimination and innovation. Existing EU law, with the new copyright exception for text and data mining, adequately addresses only part of this risk profile. The article therefore develops the foundations for a discrimination-sensitive quality regime for data sets and AI training, one that sidesteps the controversial question of whether data protection law applies to AI training data at all. Furthermore, it spells out concrete guidelines for the re-use of personal data for AI training purposes under the GDPR. Ultimately, the legislative and interpretive task lies in striking an appropriate balance between individual protection and the promotion of innovation. The article concludes with an assessment of the proposal for an Artificial Intelligence Act in this respect.
{"title":"A legal framework for AI training data—from first principles to the Artificial Intelligence Act","authors":"P. Hacker","doi":"10.1080/17579961.2021.1977219","DOIUrl":"https://doi.org/10.1080/17579961.2021.1977219","url":null,"abstract":"ABSTRACT In response to recent regulatory initiatives at the EU level, this article shows that training data for AI do not only play a key role in the development of AI applications, but are currently only inadequately captured by EU law. In this, I focus on three central risks of AI training data: risks of data quality, discrimination and innovation. Existing EU law, with the new copyright exception for text and data mining, only addresses a part of this risk profile adequately. Therefore, the article develops the foundations for a discrimination-sensitive quality regime for data sets and AI training, which emancipates itself from the controversial question of the applicability of data protection law to AI training data. Furthermore, it spells out concrete guidelines for the re-use of personal data for AI training purposes under the GDPR. Ultimately, the legislative and interpretive task rests in striking an appropriate balance between individual protection and the promotion of innovation. The article finishes with an assessment of the proposal for an Artificial Intelligence Act in this respect.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"13 1","pages":"257 - 301"},"PeriodicalIF":0.0,"publicationDate":"2021-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48337026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-20 | DOI: 10.1080/17579961.2022.2113665
W. Johnson
ABSTRACT Emerging technologies, including artificial intelligence (AI), enable novel products to have dynamic and even self-modifying designs, challenging approval-based products regulation. This article draws on a framework proposed by the US Food and Drug Administration (FDA) to explore how flexible regulatory tools, specifically principles-based regulation, could be used to manage ‘dynamic’ products. It examines the appropriateness of principles-based approaches for managing the complexity and fragmentation found in the setting of dynamic products in terms of regulatory capacity and accountability, balancing flexibility and predictability, and the role of third parties. The article concludes that successfully deploying principles-based regulation for dynamic products will require taking serious lessons from the global financial crisis on managing complexity and fragmentation, while placing equity at the centre of the framework.
{"title":"Flexible regulation for dynamic products? The case of applying principles-based regulation to medical products using artificial intelligence","authors":"W. Johnson","doi":"10.1080/17579961.2022.2113665","DOIUrl":"https://doi.org/10.1080/17579961.2022.2113665","url":null,"abstract":"ABSTRACT Emerging technologies including artificial intelligence (AI) enable novel products to have dynamic and even self-modifying designs, challenging approval-based products regulation. This article uses a proposed framework by the US Food and Drug Administration (FDA) to explore how flexible regulatory tools, specifically principles-based regulation, could be used to manage ‘dynamic’ products. It examines the appropriateness of principles-based approaches for managing the complexity and fragmentation found in the setting of dynamic products in terms of regulatory capacity and accountability, balancing flexibility and predictability, and the role of third parties. The article concludes that successfully deploying principles-based regulation for dynamic products will require taking serious lessons from the global financial crisis on managing complexity and fragmentation while placing equity at the centre of the framework.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"14 1","pages":"205 - 236"},"PeriodicalIF":0.0,"publicationDate":"2021-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41328070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-18 | DOI: 10.1080/17579961.2021.1898314
Catriona McMillan, Edward Dove, Graeme Laurie, Emily Postan, Nayha Sethi, Annie Sorbie
ABSTRACT In this article, we argue that the relationship between 'subject' and 'object' is poorly understood in health research regulation (HRR), and that it is a fallacy to suppose that they can operate in separate, fixed silos. By seeking to perpetuate this fallacy, HRR risks, among other things, objectifying persons by paying insufficient attention to human subjectivity, and the experiences and interests related to being involved in research. We deploy the anthropological concept of liminality – concerned with processes of transformation and change over time – to emphasise the enduring connectedness between subject and object in these contexts. By these means, we posit that regulatory frameworks based on processual regulation can better recognise and encompass the fluidity and significance of these relationships, and so ground more securely the moral legitimacy and social licence for human health research.
Title: Beyond categorisation: refining the relationship between subjects and objects in health research regulation. Law, Innovation and Technology 13(1), 194–222.
Pub Date: 2021-01-02 | DOI: 10.1080/17579961.2021.1898300
Nathalie A. Smuha
ABSTRACT Against a background of global competition to seize the opportunities promised by Artificial Intelligence (AI), many countries and regions are explicitly taking part in a ‘race to AI’. Yet the increased visibility of the technology’s risks has led to ever-louder calls for regulators to look beyond the benefits, and also to secure appropriate regulation to ensure AI that is ‘trustworthy’ – i.e. legal, ethical and robust. Besides minimising risks, such regulation could facilitate AI’s uptake, boost legal certainty, and hence also contribute to advancing countries’ position in the race. Consequently, this paper argues that the ‘race to AI’ also brings forth a ‘race to AI regulation’. After discussing the regulatory toolbox for AI and some of the challenges that regulators face when making use thereof, this paper assesses the extent to which regulatory competition for AI – or its counterpart, regulatory convergence – is a possibility, a reality and a desirable outcome.
{"title":"From a ‘race to AI’ to a ‘race to AI regulation’: regulatory competition for artificial intelligence","authors":"Nathalie A. Smuha","doi":"10.1080/17579961.2021.1898300","DOIUrl":"https://doi.org/10.1080/17579961.2021.1898300","url":null,"abstract":"ABSTRACT Against a background of global competition to seize the opportunities promised by Artificial Intelligence (AI), many countries and regions are explicitly taking part in a ‘race to AI’. Yet the increased visibility of the technology’s risks has led to ever-louder calls for regulators to look beyond the benefits, and also secure appropriate regulation to ensure AI that is ‘trustworthy’ – i.e. legal, ethical and robust. Besides minimising risks, such regulation could facilitate AI’s uptake, boost legal certainty, and hence also contribute to advancing countries’ position in the race. Consequently, this paper argues that the ‘race to AI’ also brings forth a ‘race to AI regulation’. After discussing the regulatory toolbox for AI and some of the challenges that regulators face when making use thereof, this paper assesses to which extent regulatory competition for AI – or its counterpart, regulatory convergence – is a possibility, a reality and a desirability.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"13 1","pages":"57 - 84"},"PeriodicalIF":0.0,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/17579961.2021.1898300","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41759916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-02 | DOI: 10.1080/17579961.2021.1898310
H. Etienne
ABSTRACT This paper reveals the dangers of the Moral Machine experiment, warning against both its use for normative ends and the whole approach on which it is built to address ethical issues. It explores additional methodological limits of the experiment, beyond those already identified by its authors, and provides reasons why it is inadequate for supporting the ethical and juridical discussions needed to determine the moral settings of autonomous vehicles. Demonstrating the inner fallacy behind computational social choice methods when applied to ethical decision-making, it also warns against the dangers of computational moral systems, such as the ‘voting-based system’ recently developed from the Moral Machine’s data. Finally, it discusses the Moral Machine’s ambiguous impact on public opinion: on the one hand, laudable for having successfully raised global awareness of ethical concerns about autonomous vehicles; on the other hand, pernicious, as it has led to a significant narrowing of the spectrum of autonomous vehicle ethics, de facto imposing a strong unidirectional approach while brushing aside other major moral issues.
{"title":"The dark side of the ‘Moral Machine’ and the fallacy of computational ethical decision-making for autonomous vehicles","authors":"H. Etienne","doi":"10.1080/17579961.2021.1898310","DOIUrl":"https://doi.org/10.1080/17579961.2021.1898310","url":null,"abstract":"ABSTRACT This paper reveals the dangers of the Moral Machine experiment, alerting against both its uses for normative ends, and the whole approach it is built upon to address ethical issues. It explores additional methodological limits of the experiment on top of those already identified by its authors and provides reasons why it is inadequate in supporting ethical and juridical discussions to determine the moral settings for autonomous vehicles. Demonstrating the inner fallacy behind computational social choice methods when applied to ethical decision-making, it also warns against the dangers of computational moral systems, such as the ‘voting-based system’ recently developed out of the Moral Machine’s data. Finally, it discusses the Moral Machine’s ambiguous impact on public opinion; on the one hand, laudable for having successfully raised global awareness with regard to ethical concerns about autonomous vehicles, and on the other hand pernicious, as it has led to a significant narrowing of the spectrum of autonomous vehicle ethics, de facto imposing a strong unidirectional approach, while brushing aside other major moral issues.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"13 1","pages":"85 - 107"},"PeriodicalIF":0.0,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/17579961.2021.1898310","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46722537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-02 | DOI: 10.1080/17579961.2021.1898313
M. Abdussalam, Kayne Menezes
ABSTRACT As legal and business interest in deploying blockchains across different use cases continues to grow, concerns have followed as to how legal liability would attach to participants within blockchain networks. This paper aims to contribute to the discussion concerning the attribution of joint legal liability to nodes within blockchain-based networks. It argues that there should be no broad-brush legal rule governing the issue, and that it is the modus operandi of each network that should determine the applicable standard of legal liability in every situation. This paper adopts the use case of innovation and creativity networks to advance the argument that individual liability, as a standard, may in some cases be superior to, and more socially beneficial than, the use of joint legal liability.
{"title":"On the question of collective liability in innovation and creativity networks deploying blockchains–A discussion concerning joint liability for the unauthorised application of proprietary information","authors":"M. Abdussalam, Kayne Menezes","doi":"10.1080/17579961.2021.1898313","DOIUrl":"https://doi.org/10.1080/17579961.2021.1898313","url":null,"abstract":"ABSTRACT As legal and business interests continue to arise concerning the deployment of blockchains to different use cases, concerns have equally followed as to how legal liability would attach to participants within blockchain networks. This paper aims to contribute to the discussion concerning the attribution of joint legal liability to nodes within blockchain-based networks. It argues that there should be no broad-brush legal rule governing the issue and that it is the modus operandi of each network that should be determinative of the applicable standard of legal liability in every situation. This paper adopts the use case of innovation and creativity networks to advance the argument that individual liability, as a standard, may in some cases be superior to and more socially beneficial than the use of joint legal liability.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"13 1","pages":"167 - 193"},"PeriodicalIF":0.0,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/17579961.2021.1898313","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45228570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-02 | DOI: 10.1080/17579961.2021.1898312
S. Al-Sharieh
ABSTRACT Copyright law in the United Arab Emirates (UAE) has the capacity to address the challenges associated with artificial intelligence (AI)-generated literary, artistic and scientific works. Under UAE copyright law, AI-generated works may qualify as copyright subject matter despite the non-human nature of both the expressions and the innovative character they embody. Also, users of the AI systems generating works may qualify as the authors of these works and, in most cases, bear the responsibility for their copyright infringing activities. These conclusions rely on: (1) the premise that the notions of ‘work’ and ‘author’ for the purpose of copyright law are legal constructs, impacted by both socio-economic and technological factors; (2) the wording of the UAE Copyright Act, reflecting an underlying reconciliation between the economic and moral dimensions of copyright; (3) the potential utility of the UAE Copyright Act’s notion of ‘collective works’, which echoes the work-for-hire doctrine in other national copyright laws; and (4) the overarching knowledge economy-oriented policy in the UAE.
{"title":"The intellectual property road to the knowledge economy: remarks on the readiness of the UAE Copyright Act to drive AI innovation","authors":"S. Al-Sharieh","doi":"10.1080/17579961.2021.1898312","DOIUrl":"https://doi.org/10.1080/17579961.2021.1898312","url":null,"abstract":"ABSTRACT Copyright law in the United Arab Emirates (UAE) has the capacity to address the challenges associated with artificial intelligence (AI)-generated literary, artistic and scientific works. Under UAE copyright law, AI-generated works may qualify as copyright subject matter despite the non-human nature of both the expressions and the innovative character they embody. Also, users of the AI systems generating works may qualify as the authors of these works and, in most cases, bear the responsibility for their copyright infringing activities. These conclusions rely on: (1) the premise that the notions of ‘work’ and ‘author’ for the purpose of copyright law are legal constructs, impacted by both socio-economic and technological factors; (2) the wording of the UAE Copyright Act, reflecting an underlying reconciliation between the economic and moral dimensions of copyright; (3) the potential utility of the UAE Copyright Act’s notion of ‘collective works’, which echoes the work-for-hire doctrine in other national copyright laws; and (4) the overarching knowledge economy-oriented policy in the UAE.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"13 1","pages":"141 - 166"},"PeriodicalIF":0.0,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/17579961.2021.1898312","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46015676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-02 | DOI: 10.1080/17579961.2021.1898298
R. Brownsword, H. Somsen
ABSTRACT This article, introducing a new extended form of the journal, offers some reflections on the changing context in which we now research law, innovation, and technology. Three major changes are highlighted: the evolving landscape of Law 3.0, potentially de-centring both rules and humans from the legal enterprise; the new ‘normal’ of life with pandemics, underlining the vulnerability of humans; and, threading through all of this, the Anthropocene, destabilising a host of baseline distinctions, and a constant warning about the fragility of the global commons and the human condition. In this changing context, the question is whether technology can provide the solutions to our global challenges without involving an irreversible erosion of human agency. With this, we open the floor to our contributors.
{"title":"Law, innovation and technology: fast forward to 2021","authors":"R. Brownsword, H. Somsen","doi":"10.1080/17579961.2021.1898298","DOIUrl":"https://doi.org/10.1080/17579961.2021.1898298","url":null,"abstract":"ABSTRACT This article, introducing a new extended form of the journal, offers some reflections on the changing context in which we now research law, innovation, and technology. Three major changes are highlighted: the evolving landscape of Law 3.0, potentially de-centring both rules and humans from the legal enterprise; the new ‘normal’ of life with pandemics, underlining the vulnerability of humans; and, threading through all of this, the Anthropocene, destabilising a host of baseline distinctions, and a constant warning about the fragility of the global commons and the human condition. In this changing context, the question is whether technology can provide the solutions to our global challenges without involving an irreversible erosion of human agency. With this, we open the floor to our contributors.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"13 1","pages":"1 - 28"},"PeriodicalIF":0.0,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/17579961.2021.1898298","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47227747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-02 | DOI: 10.1080/17579961.2021.1898311
Hope Johnson
ABSTRACT Proponents and developers of cell-cultured animal material (i.e. lab-grown meat) vest the innovation with the capacity to transform food systems by replacing, or significantly reducing, the intensive use of animals in food systems. This article argues that cell-cultured animal material reflects lock-in to the incumbent, productivist regime for agricultural innovation. First, the article positions cell-cultured animal material as broadly consistent with the paradigm underlying intensive animal agriculture and related innovations. Secondly, it explores how intellectual property has become the regulation-by-default of cell-cultured animal material, reinforcing well-known limits on the capacity of intellectual property rights to transform agricultural systems. Thirdly, it considers the broader regulatory context for cell-cultured animal material. The article shows how the claims by proponents and the responses from incumbent industry actors have narrowed regulatory responses, preventing regulators from engaging with deeper questions about, and pathways for, food systems transformation; it concludes by briefly considering an alternative regulatory approach to cell-cultured animal material.
{"title":"Regulating cell-cultured animal material for food systems transformation: current approaches and future directions","authors":"Hope Johnson","doi":"10.1080/17579961.2021.1898311","DOIUrl":"https://doi.org/10.1080/17579961.2021.1898311","url":null,"abstract":"ABSTRACT Proponents and developers of cell-cultured animal material (i.e. lab-grown meat) vest the innovation with the capacity to transform food systems by replacing, or significantly reducing, intensive use of animals in food systems. This article argues that cell-cultured animal material reflects lock-in to the incumbent, productivist regime for agricultural innovation. First, the article positions cell-cultured animal material as broadly consistent with the paradigm underlying intensive animal agriculture and related innovations. Secondly, it 2 explores how intellectual property has become the regulation-by-default of cell-cultured animal material reinforcing well-known limits of intellectual property rights to transform agricultural systems. Thirdly, it considers the broader regulatory context for cell-cultured animal material. The article shows how the claims by proponents and the responses from incumbent industry actors have narrowed regulatory responses preventing regulators from engaging with deeper questions about, and pathways for, food systems transformation; and, it concludes by briefly considering an alternate regulatory approach to cell-cultured animal material.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"13 1","pages":"108 - 140"},"PeriodicalIF":0.0,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/17579961.2021.1898311","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47355312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-02 | DOI: 10.1080/17579961.2021.1898315
B. Sloot
ABSTRACT Privacy is predominantly understood as the right to be let alone by others. It protects an individual against intrusions upon the private sphere by governments, companies and fellow citizens and focusses on the right to withhold from them access to one's data, body or home. In the data-driven environment, the fact that others may have access to personal information will only be one concern; equally importantly, a person will be confronted with unwanted information about herself. Being frequently confronted with information about one's past, present and future fundamentally challenges an individual's capacity to form and maintain an identity, which depends on her ability to select and prioritise information about herself. This article suggests that the current privacy paradigm could be ameliorated by treating privacy not only as the right to be let alone by others, but in addition, as the right to be let alone by oneself. But before such a right could be introduced, a number of difficult questions need to be answered, such as the scope of the right, its legal-philosophical underpinnings and its relationship vis-à-vis countervailing interests.
{"title":"The right to be let alone by oneself: narrative and identity in a data-driven environment","authors":"B. Sloot","doi":"10.1080/17579961.2021.1898315","DOIUrl":"https://doi.org/10.1080/17579961.2021.1898315","url":null,"abstract":"ABSTRACT Privacy is predominantly understood as the right to be let alone by others. It protects an individual against intrusions upon the private sphere by governments, companies and fellow citizens and focusses on the right to withhold from them access to one's data, body or home. In the data-driven environment, the fact that others may have access to personal information will only be one concern; equally importantly, a person will be confronted with unwanted information about herself. Being frequently confronted with information about one's past, present and future fundamentally challenges an individual's capacity to form and maintain an identity, which depends on her ability to select and prioritise information about herself. This article suggests that the current privacy paradigm could be ameliorated by treating privacy not only as the right to be let alone by others, but in addition, as the right to be let alone by oneself. But before such a right could be introduced, a number of difficult questions need to be answered, such as the scope of the right, its legal-philosophical underpinnings and its relationship vis-à-vis countervailing interests.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"13 1","pages":"223 - 255"},"PeriodicalIF":0.0,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/17579961.2021.1898315","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47427997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}