Current literature on central bank digital currencies (CBDCs) generally focuses on regulatory issues in the domestic context. This paper discusses the challenges that arise when a CBDC circulates across national borders. It addresses three cross-border spillover effects of a CBDC: the crowding-out effect on local currency; challenges to regulators' capital controls; and infringement of user privacy. The paper takes the Digital Yuan, as it circulates beyond China's borders, as the case on which these spillover effects can be assessed. The major fund recipients of the Belt and Road Initiative (BRI) and China's neighbors are estimated to be the countries most likely to be affected by the Digital Yuan. These countries will benefit from convenient, efficient, and secure transactions as the Digital Yuan circulates, but they may face problems when it becomes widely used in local markets: they will find it difficult to control or monitor its flow, and will have to take measures to protect the privacy of their domestic users. The authors therefore propose unilateral, bilateral, and multilateral strategies to cope with the corresponding spillover effects. The paper's analysis suggests that the adverse effects of the cross-border use of CBDCs can be addressed and mitigated by adequate institutional design and by multilateral coordination efforts.
Cheng-Yun Tsang & Ping-Kuei Chen, "Policy Responses to Cross-border Central Bank Digital Currencies – Assessing the Transborder Effects of Digital Yuan", Information Privacy Law eJournal (8 August 2021), https://doi.org/10.2139/ssrn.3891208.
The emerging power of Artificial Intelligence (AI), driven by the exponential growth in computer processing and the digitization of things, has the capacity to bring unfathomable benefits to society. In particular, AI promises to reinvent modern healthcare through devices that can predict, comprehend, learn, and act in astonishing and novel ways. While AI has an enormous potential to produce societal benefits, it will not be a sustainable technology without developing solutions to safeguard privacy while processing ever-growing sets of sensitive data.
This paper considers the tension that exists between privacy and AI and examines how AI and privacy can coexist, enjoying the advantages that each can bring. Rejecting the idea that AI means the end of privacy, and taking a technoprogressive stance, the paper seeks to explore how AI can be actively used to protect individual privacy. It contributes to the literature by reconfiguring AI not as a source of threats and challenges, but rather as a phenomenon that has the potential to empower individuals to protect their privacy.
The first part of the paper sets forth a brief taxonomy of AI and clarifies its role in the Internet of Health Things (IoHT). It then addresses the privacy concerns that arise in this context. Next, the paper shifts to a discussion of Data Protection by Design, exploring how AI can be utilized to meet this standard and, in turn, preserve individual privacy and data protection rights in the IoHT. Finally, the paper presents a case study of how some actors are already actively using AI to preserve privacy in the IoHT.
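The abstract does not say which techniques the case study relies on, but one widely discussed way of meeting a Data Protection by Design standard in the IoHT is to perturb health readings on the device before they are transmitted (local differential privacy). The sketch below is purely illustrative and is not drawn from the paper; the function names, the heart-rate example, and the parameter choices (the clipping range and epsilon) are assumptions.

```python
import numpy as np

def privatize_reading(value, lower, upper, epsilon, rng):
    """Clip a sensor reading to a known range and add Laplace noise calibrated
    to that range (sensitivity = upper - lower), so the value reported off the
    device satisfies epsilon-local differential privacy."""
    clipped = float(np.clip(value, lower, upper))
    scale = (upper - lower) / epsilon
    return clipped + rng.laplace(0.0, scale)

# Example: a wearable perturbs heart-rate readings locally before upload.
rng = np.random.default_rng(0)
heart_rates = [62, 75, 88, 110, 95]
noisy = [privatize_reading(r, lower=40, upper=180, epsilon=1.0, rng=rng) for r in heart_rates]
print(noisy)
```

Under this kind of design only noisy values ever leave the device, so the raw personal reading is never centrally stored, while population-level statistics remain estimable from large aggregates.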
Liane Colonna, "Artificial Intelligence in the Internet of Health Things: Is the Solution to AI Privacy More AI?", Information Privacy Law eJournal (1 May 2021), https://doi.org/10.2139/ssrn.3838571.
The European Data Protection Board (EDPB) issued its first Binding Decision on 9 November 2020 in a case in which the Irish Data Protection Commission (DPA) was lead enforcement authority. In the judgment of the Irish DPA, a fine of up to EUR 275,000 was appropriate, taking into account all relevant circumstances, including aggravating and mitigating factors. Several other national DPAs raised objections, including the German DPA, which thought that a fine of up to EUR 22 million was appropriate, on the basis that it should be 'dissuasive' and therefore 'must be high enough to make data processing uneconomic and objectively inefficient'. Under the GDPR, the EDPB considered all objections and rejected a surprising number as not satisfying the 'relevant and reasoned' standard. The EDPB issued a binding decision that a sanction must be 'deterrent' and required the Irish DPA to revise its fine. The Irish DPA then issued a fine of EUR 450,000.

This paper highlights the major rift, in theory and practice, between different approaches to the effects, if any, of financial sanctions. The case raises fundamental issues about the consistency and coherence of EU enforcement policy, and the level of confidence that may be placed in it. It identifies a conflict between traditional concepts of deterrence (effective, proportionate and dissuasive sanctions) and outcome-focused achievement of compliance. It also raises an underlying conflict between pure economic theory on the effectiveness of penalties and the findings of behavioral science on how to affect future behavior.
C. Hodges, "Comments on GDPR Enforcement EDPB Decision 01/020", Information Privacy Law eJournal (10 January 2021), https://doi.org/10.2139/ssrn.3765602.
The paper investigates how the two key features of the GDPR (the EU's data protection regulation)—privacy rights and data security—impact personal-data-driven markets. First, the GDPR recognizes that individuals own and control their data in perpetuity, leading to three critical privacy rights: (i) the right to explicit consent (data opt-in), (ii) the right to be forgotten (data erasure), and (iii) the right to portability (switching data to a competitor). Second, the GDPR's data security mandates protect against privacy breaches through unauthorized access. The right to explicit opt-in allows goods exchange without data exchange. Erasure and portability rights discipline firms to provide ongoing value and reduce the extent to which consumers can be held up with their own data. Overall, privacy rights restrict legal collection and use, while data security protects against illegal access and use. We develop a two-period model of forward-looking firms and consumers in which consumers exercise data privacy rights by balancing the costs (privacy breach, price discrimination) and benefits (product personalization, price subsidies) of sharing data with firms. We find that by reducing expected privacy breach costs, data security mandates increase opt-in, consumer surplus, and firm profit. Privacy rights reduce opt-in and mostly increase consumer surplus at the expense of firm profits; interestingly, they hurt firms more in competitive than in monopolistic markets. While privacy rights can reduce surplus for both firms and consumers, these conditions are unlikely to be realized when breach risk is endogenized. Further, by unbundling data exchange from goods exchange, privacy rights facilitate trade in goods that might otherwise fail to occur due to privacy breach risk.
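The comparative static stated in the abstract (data security mandates lower the expected cost of a breach and thereby increase opt-in) can be illustrated with a stylized numerical example. This is not the authors' two-period model; the uniform distributions, the parameter values, and the names benefit, breach_harm and opt_in_rate are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stylized illustration: consumers share data when expected benefits exceed expected costs.
N = 100_000
benefit = rng.uniform(0.0, 10.0, N)        # value of personalization plus price subsidies
breach_harm = rng.uniform(0.0, 30.0, N)    # harm to the consumer if the data is breached
discrimination_cost = 1.0                  # expected loss from price discrimination

def opt_in_rate(breach_prob: float) -> float:
    """Share of consumers for whom sharing data has positive expected value."""
    expected_cost = breach_prob * breach_harm + discrimination_cost
    return float(np.mean(benefit > expected_cost))

print(opt_in_rate(0.30))   # weak data security: higher expected breach cost
print(opt_in_rate(0.05))   # a security mandate lowers breach risk -> more opt-in
```

Under these assumed numbers, lowering the breach probability raises the share of consumers whose expected benefit from sharing exceeds the expected cost, which is the direction of the effect the paper reports for data security mandates.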
T. Ke & K. Sudhir, "Privacy Rights and Data Security: GDPR and Personal Data Driven Markets", Information Privacy Law eJournal (5 July 2020), https://doi.org/10.2139/ssrn.3643979.
Workplace surveillance has become a practical necessity, prompted by the development of information and communication technologies that offer employers extensive opportunities to monitor their employees at work and even outside of work, and this raises serious concerns for employee privacy. The present thesis examines such concerns arising out of workplace surveillance. The legal protection of privacy within certain systems, particularly the European Convention on Human Rights, is considered. After examining the substantive aspects of the right to respect for private life under the Convention, four cases of the European Court of Human Rights concerning employee privacy at work are studied thoroughly, and an analysis of each case is provided. Through this examination, the scope of protection of employees' right to privacy in the context of workplace surveillance is expounded. Furthermore, certain specific problems regarding the protection of privacy are highlighted and, where relevant, possible solutions are presented.
Fidan Abdurrahimli, "Big Boss is Watching You! The Right to Privacy of Employees in the Context of Workplace Surveillance", Information Privacy Law eJournal (21 May 2020), https://doi.org/10.2139/ssrn.3740078.
On 1 January 2020, the data protection law of the US state of California will change fundamentally. At that time, the California Consumer Privacy Act of 2018 (CCPA) will enter into force, bringing far-reaching obligations for companies that handle personal information. This article aims to give an overview of the new regulation. In addition, the new legal situation in California will be outlined and compared with the model of the European Union's General Data Protection Regulation (GDPR), before concrete guidelines for global corporations and their data protection policies are developed.
T. Hoeren & Stefan Pinelli, "The New Californian Data Protection Law – In the Light of the EU General Data Protection Regulation", Information Privacy Law eJournal (20 March 2020), https://doi.org/10.2139/ssrn.3557964.
Superpowers, states and companies around the world are all pushing hard to win the AI race. Artificial intelligence (AI) is of strategic importance for the EU, with the European Commission recently stating that ‘artificial intelligence with a purpose can make Europe a world leader’. For this to happen, though, the EU needs to put in place the right ethical and legal framework. This Foresight Brief argues that such a framework must be solidly founded on regulation – which can be achieved by updating existing legislation – and that it must pay specific attention to the protection of workers. Workers are in a subordinate position in relation to their employers, and in the EU’s eagerness to win the AI race, their rights may be overlooked. This is why a protective and enforceable legal framework must be developed, with the participation of social partners.
Aida Ponce, "Labour in the Age of AI: Why Regulation is Needed to Protect Workers", Information Privacy Law eJournal (19 February 2020), https://doi.org/10.2139/ssrn.3541002.
In this article, Professor Daniel Solove deconstructs and critiques the privacy paradox and the arguments made about it. The “privacy paradox” is the phenomenon where people say that they value privacy highly, yet in their behavior relinquish their personal data for very little in exchange or fail to use measures to protect their privacy.

Commentators typically make one of two types of arguments about the privacy paradox. On one side, the “behavior valuation argument” contends behavior is the best metric to evaluate how people actually value privacy. Behavior reveals that people ascribe a low value to privacy or readily trade it away for goods or services. The argument often goes on to contend that privacy regulation should be reduced.

On the other side, the “behavior distortion argument” argues that people’s behavior isn’t an accurate metric of preferences because behavior is distorted by biases and heuristics, manipulation and skewing, and other factors.

In contrast to both of these camps, Professor Solove argues that the privacy paradox is a myth created by faulty logic. The behavior involved in privacy paradox studies involves people making decisions about risk in very specific contexts. In contrast, people’s attitudes about their privacy concerns or how much they value privacy are much more general in nature. It is a leap in logic to generalize from people’s risk decisions involving specific personal data in specific contexts to reach broader conclusions about how people value their own privacy.

The behavior in the privacy paradox studies doesn’t lead to a conclusion for less regulation. On the other hand, minimizing behavioral distortion will not cure people’s failure to protect their own privacy. It is perfectly rational for people — even without any undue influences on behavior — to fail to make good assessments of privacy risks and to fail to manage their privacy effectively. Managing one’s privacy is a vast, complex, and never-ending project that does not scale; it becomes virtually impossible to do comprehensively. Privacy regulation often seeks to give people more privacy self-management, such as the recent California Consumer Privacy Act. Professor Solove argues that giving individuals more tasks for managing their privacy will not provide effective privacy protection. Instead, regulation should employ a different strategy — focus on regulating the architecture that structures the way information is used, maintained, and transferred.
Daniel J. Solove, "The Myth of the Privacy Paradox", Information Privacy Law eJournal (11 February 2020), https://doi.org/10.2139/ssrn.3536265.
Safeguarding user rights and maximising consumer welfare in the digital economy, particularly in the context of personal data, requires an integrated approach that cuts across the fields of competition, consumer protection, and data protection. While the legal interventions in each of these fields are geared towards securing better outcomes for individuals, often in their capacity as consumers, there are significant differences in the available tools and remedies. Current and proposed regulatory frameworks in India, however, continue with a silo-based approach offering limited scope for cross-sectional analysis of consumer welfare issues in digital markets. We argue for the need to create appropriate legal and institutional mechanisms to facilitate interactions across the fields of competition, consumer protection, and data protection policies as well as sectoral policies.
Smriti Parsheera & Sarang Moharir, "Personal Data and Consumer Welfare in the Digital Economy", Information Privacy Law eJournal (5 February 2020), https://doi.org/10.2139/ssrn.3545497.
Artificial Intelligence (AI) is moving so rapidly that policy makers, regulators, governments and the legal profession are struggling to keep up. Yet AI is not new; it has been in use for more than two decades. The legal challenges posed by AI, personal data and cyber security law under current legal frameworks are nothing short of immense. These areas of law are, in part, at odds with each other and are doing very different things. This paper explores some of the challenges emerging in Australia, Europe and Singapore. The challenge of the interrelationship between personal data and AI arguably begins with who has manufactured the AI and, secondly, who owns it. A further challenge is defining AI. Most people understand broadly what AI is and how it is beginning to impact the economy and our daily lives, but there is no clear legal definition of AI, because AI is so nebulous. This burgeoning area of law will challenge society, privacy experts, regulators and innovators of technology as the two areas continue to collide. Furthermore, the collection of personal data by AI challenges the notion of where responsibility lies: AI may collect, use and disclose personal data at different points along the technology chain. The paper highlights how current data protection laws, rather than promoting AI projects, largely inhibit their development. It identifies some of the tensions between data protection law and AI, and argues that there is a need for an urgent and detailed understanding of the opportunities and the legal and ethical issues associated with data protection and AI. Doing so will help ensure an ongoing balance between the economic, social and human rights issues attached to the two areas of the law.
Robert Walters & Matthew Coghlan, "Data Protection and Artificial Intelligence Law: Europe Australia Singapore - An Actual or Perceived Dichotomy?", Information Privacy Law eJournal (13 December 2019), https://doi.org/10.2139/ssrn.3503392.