In early 2021, the US Census Bureau will begin releasing statistical tables based on the decennial census conducted in 2020. Because of significant changes in the data landscape, the Census Bureau is changing its approach to disclosure avoidance. The confidentiality of individuals represented “anonymously” in these statistical tables will be protected by a “formal privacy” technique that allows the Bureau to mathematically assess the risk of revealing information about individuals in the released tables. The Bureau’s approach is an implementation of “differential privacy,” and it provides a rigorously demonstrated, guaranteed level of privacy protection that traditional methods of disclosure avoidance do not. Given the importance of the Census Bureau’s statistical tables to democracy, resource allocation, justice, and research, confusion about what differential privacy is and how it might alter or eliminate data products has rippled through the community of its data users, namely demographers, statisticians, and census advocates. The purpose of this primer is to provide context for the Census Bureau’s decision to use a technique based on differential privacy and to help data users and other census advocates who are struggling to understand what this mathematical tool is, why it matters, and how it will affect the Bureau’s data products.
{"title":"Differential Privacy in the 2020 Decennial Census and the Implications for Available Data Products","authors":"D. Boyd","doi":"10.2139/ssrn.3416572","DOIUrl":"https://doi.org/10.2139/ssrn.3416572","url":null,"abstract":"In early 2021, the US Census Bureau will begin releasing statistical tables based on the decennial census conducted in 2020. Because of significant changes in the data landscape, the Census Bureau is changing its approach to disclosure avoidance. The confidentiality of individuals represented “anonymously” in these statistical tables will be protected by a “formal privacy” technique that allows the Bureau to mathematically assess the risk of revealing information about individuals in the released statistical tables. The Bureau’s approach is an implementation of “differential privacy,” and it gives a rigorously demonstrated guaranteed level of privacy protection that traditional methods of disclosure avoidance do not. Given the importance of the Census Bureau’s statistical tables to democracy, resource allocation, justice, and research, confusion about what differential privacy is and how it might alter or eliminate data products has rippled through the community of its data users, namely: demographers, statisticians, and census advocates. \u0000 \u0000The purpose of this primer is to provide context to the Census Bureau’s decision to use a technique based on differential privacy and to help data users and other census advocates who are struggling to understand what this mathematical tool is, why it matters, and how it will affect the Bureau’s data products.","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125393638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the last decade, a dominant storyline in the realm of technology and the law has been the rise of Big Data and the various state responses, or lack thereof, to the concerns it raises. At first, technology companies pursued methods of monetizing accumulated data almost by default; massive stores of data were a byproduct of other business ventures. Like the natural gas struck by early wildcat oil drillers, these stores of data were initially seen more as a hindrance than anything else. Over time, oil companies found a use for natural gas, and Silicon Valley found a use for our stores of data. The next trove of data is going to be found in our tax information. Let us ensure that this time, from the outset, our privacy is kept front of mind.
{"title":"Tax, Technology and Privacy: The Coming Collision","authors":"A. Leahey","doi":"10.2139/SSRN.3431476","DOIUrl":"https://doi.org/10.2139/SSRN.3431476","url":null,"abstract":"In the last decade a dominant storyline in the realm of technology and the law has been the rise of Big Data and the various state responses, or lack thereof, to concerns stemming from the same. At first, technology companies pursued methods of monetizing accumulated data almost by default — massive stores of data were a byproduct of other business ventures. Like early wildcat oil drillers that struck natural gas, the stores of data was seen more as a hindrance than anything else. Over time, oil companies found a use for natural gas and Silicon Valley found a use for our stores of data. The next trove of data is going to be found in our tax information. Let us insure that this time, from the outset, our privacy is kept front of mind.","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126206162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moritz Büchi, Eduard Fosch Villaronga, C. Lutz, Aurelia Tamó-Larrieux, Shruthi Velidi, S. Viljoen
In this article, we provide an in-depth overview of the literature on chilling effects and corporate profiling and connect the two topics. We start by explaining how profiling, within an increasingly data-rich environment, creates substantial power asymmetries between users and platforms (and corporations more broadly). We stress the notion of inferences and the increasingly automatic nature of decision-making based on user data as essential aspects of profiling. We then discuss chilling effects in depth, connecting them to corporate profiling. In doing so, we first stress the relationship and similarities between profiling and surveillance. Second, we illustrate chilling effects as a result of state and peer surveillance; we then show the interrelatedness of corporate and state profiling; and we finally spotlight the customization of behavior and behavioral manipulation as particularly pressing issues in this discourse. We also explore the legal foundations of profiling through an in-depth analysis of European and US data protection law. We find that, while Europe has a clear regulatory framework in place for profiling, the US primarily relies on a patchwork of sector-specific and state laws. In addition, although anti-discrimination statutes attempt to regulate the differential impacts of profiling, few policies focus on combating its generalized, concrete harms, such as chilling effects. At the end of the article, we bring together the diverse strands of literature in concise propositions to guide future research on the connection between corporate profiling and chilling effects.
{"title":"Chilling Effects of Profiling Activities: Mapping the Issues","authors":"Moritz Büchi, Eduard Fosch Villaronga, C. Lutz, Aurelia Tamó-Larrieux, Shruthi Velidi, S. Viljoen","doi":"10.2139/ssrn.3379275","DOIUrl":"https://doi.org/10.2139/ssrn.3379275","url":null,"abstract":"In this article, we provide an in-depth overview of the literature on chilling effects and corporate profiling and connect the two topics. We start by explaining how profiling, within an increasingly data-rich environment, creates substantial power asymmetries between users and platforms (and corporations more broadly). We stress the notion of inferences and the increasingly automatic nature of decision-making, based on user data, as essential aspects of profiling. We then discuss chilling effects in depth, connecting them to corporate profiling. In the article, we first stress the relationship and similarities between profiling and surveillance. Second, we illustrate chilling effects as a result of state and peer surveillance; we then show the interrelatedness of corporate and state profiling, and we finally spotlight the customization of behavior and behavioral manipulation as particularly outstanding issues in this discourse. We also explore the legal foundations of profiling through an in-depth analysis of European and US data protection law. We found that, while Europe has a clear regulatory framework in place for profiling, the US primarily relies on a patchwork of sector-specific or state laws. Besides, there is an attempt to regulate differential impacts of profiling, via anti-discrimination statutes, yet few policies focus on combating generalized, concrete harms of profiling, such as chilling effects. At the end of the article, we bring together the diverse strands of literature in concise propositions to guide future research on the connection between corporate profiling and chilling effects.","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129439885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emerging privacy-preserving technologies and approaches hold considerable promise for improving data privacy and confidentiality in the 21st century. At the same time, more information is becoming accessible to support evidence-based policymaking.
In 2017, the U.S. Commission on Evidence-Based Policymaking unanimously recommended that further attention be given to the deployment of privacy-preserving data-sharing applications. If these types of applications can be tested and scaled in the near term, they could vastly improve insights about important policy problems by using disparate datasets. At the same time, the approaches could promote substantial gains in privacy for the American public.
There are numerous ways to engage in privacy-preserving data sharing. This paper primarily focuses on secure computation, which allows information to be accessed securely, guarantees privacy, and permits analysis without making private information available. Three key issues motivated the launch of a domestic secure computation demonstration project using real government-collected data:
--Using new privacy-preserving approaches addresses pressing needs in society. Currently accepted approaches to managing privacy risks, such as preventing the identification of individuals or organizations in public datasets, will become less effective over time. While many practices are currently in use to keep government-collected data confidential, they often fail to incorporate modern developments in computer science, mathematics, and statistics in a timely way. New approaches can enable researchers to combine datasets to improve the capability for insights without being impeded by traditional concerns about bringing large, identifiable datasets together. In fact, if these approaches succeed, traditional methods of combining identifiable data for analysis may become far less necessary.
--There are emerging technical applications to deploy certain privacy-preserving approaches in targeted settings. These emerging procedures are increasingly enabling larger-scale testing of privacy-preserving approaches across a variety of policy domains, governmental jurisdictions, and agency settings to demonstrate the privacy guarantees that accompany data access and use.
--Widespread adoption and use by public administrators will only follow meaningful and successful demonstration projects. For example, secure computation approaches are complex and can be difficult for those unfamiliar with their potential to understand. Implementing new privacy-preserving approaches will require thoughtful attention to public policy implications, public opinion, legal restrictions, and other administrative limitations that vary by agency and governmental entity.

This project used real-world government data to illustrate the applicability of secure computation compared with the classic data infrastructure available to some local governments. The project took place in a domestic, non-intelli
{"title":"Privacy-Preserved Data Sharing for Evidence-Based Policy Decisions: A Demonstration Project Using Human Services Administrative Records for Evidence-Building Activities","authors":"N. Hart, David Archer, Erin Dalton","doi":"10.2139/ssrn.3808054","DOIUrl":"https://doi.org/10.2139/ssrn.3808054","url":null,"abstract":"Emerging privacy-preserving technologies and approaches hold considerable promise for improving data privacy and confidentiality in the 21st century. At the same time, more information is becoming accessible to support evidence-based policymaking.<br><br>In 2017, the U.S. Commission on Evidence-Based Policymaking unanimously recommended that further attention be given to the deployment of privacy-preserving data-sharing applications. If these types of applications can be tested and scaled in the near-term, they could vastly improve insights about important policy problems by using disparate datasets. At the same time, the approaches could promote substantial gains in privacy for the American public.<br><br>There are numerous ways to engage in privacy-preserving data sharing. This paper primarily focuses on secure computation, which allows information to be accessed securely, guarantees privacy, and permits analysis without making private information available. Three key issues motivated the launch of a domestic secure computation demonstration project using real government-collected data:<br><br>--Using new privacy-preserving approaches addresses pressing needs in society. Current widely accepted approaches to managing privacy risks—like preventing the identification of individuals or organizations in public datasets—will become less effective over time. While there are many practices currently in use to keep government-collected data confidential, they do not often incorporate modern developments in computer science, mathematics, and statistics in a timely way. New approaches can enable researchers to combine datasets to improve the capability for insights, without being impeded by traditional concerns about bringing large, identifiable datasets together. In fact, if successful, traditional approaches to combining data for analysis may not be as necessary.<br><br>--There are emerging technical applications to deploy certain privacy-preserving approaches in targeted settings. These emerging procedures are increasingly enabling larger-scale testing of privacy-preserving approaches across a variety of policy domains, governmental jurisdictions, and agency settings to demonstrate the privacy guarantees that accompany data access and use.<br><br>--Widespread adoption and use by public administrators will only follow meaningful and successful demonstration projects. For example, secure computation approaches are complex and can be difficult to understand for those unfamiliar with their potential. Implementing new privacy-preserving approaches will require thoughtful attention to public policy implications, public opinions, legal restrictions, and other administrative limitations that vary by agency and governmental entity.<br>This project used real-world government data to illustrate the applicability of secure computation compared to the classic data infrastructure available to some local governments. 
The project took place in a domestic, non-intelli","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122244385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
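The paper above describes secure computation only in prose ("permits analysis without making private information available"). The sketch below is a minimal, hypothetical illustration of one common building block, additive secret sharing, in which two holders of administrative counts learn a joint total without revealing either input; the agency counts, modulus choice, and function names are assumptions for the example, and production deployments use full multi-party computation frameworks rather than this toy.

```python
import secrets

PRIME = 2**61 - 1  # modulus for the additive sharing scheme

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split `value` into n additive shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    """Recombine shares; only the full set reveals the underlying value."""
    return sum(shares) % PRIME

# Hypothetical example: two agencies hold caseload counts they cannot pool
# in the clear, but a joint total can still be computed share-wise.
agency_a_count, agency_b_count = 420, 735  # invented numbers
shares_a, shares_b = share(agency_a_count), share(agency_b_count)

# Each compute party adds the shares it holds, never seeing either raw input.
joint_shares = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print(reconstruct(joint_shares))  # 1155, without either raw count being disclosed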
Decentralization is heralded as the most important technological design aspect of distributed ledger technologies (DLTs). In this chapter we analyze the concept of decentralization, with the goal of understanding the social, legal, and economic forces that produce more or less decentralized techno-social systems. We first give an overview of decentralization as a political ideology and as an ideal and natural endpoint in the development of digital technologies. We then move beyond this discourse and treat decentralization, its extent, its mode, and the systems to which it can refer as the products of particular economic, political, and social dynamics around and within these techno-social systems. We then point to the concrete forces that shape the actual degree of (de)centralization. Through this, we show that the extent to which a techno-social system is (de)centralized at any given moment should not be measured by its distance from an ideological ideal of total decentralization but should be seen as the sum of all the social, economic, political, and legal forces that impact a techno-social system.
{"title":"The Logics of Technology Decentralization - The Case of Distributed Ledger Technologies","authors":"Balázs Bodó, A. Giannopoulou","doi":"10.4324/9780429029530-8","DOIUrl":"https://doi.org/10.4324/9780429029530-8","url":null,"abstract":"Decentralization is heralded as the most important technological design aspect of distributed ledger technologies (DLTs). In this chapter we’ll analyze the concept of decentralization, with the goal to understand the social, legal, and economic forces that produce more or less decentralized techno-social systems. We first give an overview of decentralization as a political ideology and as an ideal and natural endpoint in the development of digital technologies. We then move beyond this discourse and treat decentralization, its extent, its mode, and the systems which it can refer to as the products of particular economic, political, and social dynamics around and within these techno-social systems. We then point at the concrete forces that shape the actual degree of (de)centralization. Through this, we show that the extent to which a techno-social system is (de)centralized at any given moment should not be measured by its distance from an ideological ideal of total decentralization but should be seen as the sum of all the social, economic, political, and legal forces that impact a techno-social system.","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123562943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What comes after the control paradigm? For decades, privacy law has sought to provide individuals with notice and choice and so give them control over their personal data. But what happens when this regulatory paradigm breaks down?
Predictive analytics forces us to confront this challenge. Individuals cannot understand how predictive analytics uses their surface data to infer latent, far more sensitive data about them. This prevents individuals from making meaningful choices about whether to share their surface data in the first place. It also creates threats (such as harmful bias, manipulation and procedural unfairness) that go well beyond the privacy interests that the control paradigm seeks to safeguard. In order to protect people in the algorithmic economy, privacy law must shift from a liberalist legal paradigm that focuses on individual control, to one in which public authorities set substantive standards that defend people against algorithmic threats.
Leading scholars such as Jack Balkin (information fiduciaries), Helen Nissenbaum (contextual integrity), Danielle Citron (technological due process), Craig Mundie (use-based regulation) and others recognize the need for such a shift and propose ways to achieve it. This article ties these proposals together, views them as attempts to define a new regulatory paradigm for the age of predictive analytics, and evaluates whether each achieves this aim.
It then argues that the solution may be hiding in plain sight in the form of the FTC’s Section 5 unfairness authority. It explores whether the FTC could use its unfairness authority to draw substantive lines between data analytics practices that are socially appropriate and fair, and those that are inappropriate and unfair, and examines how the Commission would make such determinations. It argues that this existing authority, which requires no new legislation, provides a comprehensive and politically legitimate way to create much needed societal boundaries around corporate use of predictive analytics. It concludes that the Commission could use its unfairness authority to protect people from the threats that the algorithmic economy creates.
{"title":"From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics","authors":"D. Hirsch","doi":"10.2139/ssrn.3449112","DOIUrl":"https://doi.org/10.2139/ssrn.3449112","url":null,"abstract":"What comes after the control paradigm? For decades, privacy law has sought to provide individuals with notice and choice and so give them control over their personal data. But what happens when this regulatory paradigm breaks down? <br><br>Predictive analytics forces us to confront this challenge. Individuals cannot understand how predictive analytics uses their surface data to infer latent, far more sensitive data about them. This prevents individuals from making meaningful choices about whether to share their surface data in the first place. It also creates threats (such as harmful bias, manipulation and procedural unfairness) that go well beyond the privacy interests that the control paradigm seeks to safeguard. In order to protect people in the algorithmic economy, privacy law must shift from a liberalist legal paradigm that focuses on individual control, to one in which public authorities set substantive standards that defend people against algorithmic threats. <br><br>Leading scholars such as Jack Balkin (information fiduciaries), Helen Nissenbaum (contextual integrity), Danielle Citron (technological due process), Craig Mundie (use-based regulation) and others recognize the need for such a shift and propose ways to achieve it. This article ties these proposals together, views them as attempts to define a new regulatory paradigm for the age of predictive analytics, and evaluates whether each achieves this aim.<br><br>It then argues that the solution may be hiding in plain sight in the form of the FTC’s Section 5 unfairness authority. It explores whether the FTC could use its unfairness authority to draw substantive lines between data analytics practices that are socially appropriate and fair, and those that are inappropriate and unfair, and examines how the Commission would make such determinations. It argues that this existing authority, which requires no new legislation, provides a comprehensive and politically legitimate way to create much needed societal boundaries around corporate use of predictive analytics. It concludes that the Commission could use its unfairness authority to protect people from the threats that the algorithmic economy creates.","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116684480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper evaluates the quality of the privacy policies of five popular online services in India from the perspective of access and readability. The paper asks: do the policies have specific, unambiguous and clear provisions that lend themselves to easy comprehension? The paper also reports a survey of college students, conducted to evaluate how much users typically understand of what they are signing up for. The paper finds that the policies studied are poorly drafted, and often seem to serve as check-the-box compliance with expected privacy disclosures. Survey respondents do not score very highly on the privacy policy quiz. The respondents fared the worst on policies that had the most unspecified terms, and on policies that were long. Respondents were also unable to understand terms such as “third party”, “affiliate” and “business-partner”. The results suggest that for consent to work, the information offered to individuals has to be better drafted and designed.
{"title":"Disclosures in Privacy Policies: Does 'Notice and Consent' Work?","authors":"R. Bailey, Smriti Parsheera, F. Rahman, R. Sane","doi":"10.2139/SSRN.3328289","DOIUrl":"https://doi.org/10.2139/SSRN.3328289","url":null,"abstract":"This paper evaluates the quality of privacy policies of five popular online services in India from the perspective of access and readability. The paper ask – do the policies have specific, unambiguous and clear provisions that lend themselves to easy comprehension? The paper has also conducted a survey among college students to evaluate how much do users typically understand of what they are signing up for. The paper also finds that the policies studied are poorly drafted, and often seem to serve as check-the-box compliance of expected privacy disclosures. Survey respondents do not score very highly on the privacy policy quiz. The respondents fared the worst on policies that had the most unspecified terms, and on policies that were long. Respondents were also unable to understand terms such as “third party†, “affiliate†and “business-partner†. The results suggest that for consent to work, the information offered to individuals has to be better drafted and designed.","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"49 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133239228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy and family law are both dynamic, subjects of passionate debate, and constantly changing with developments in society, policy and technology. This paper develops a normative understanding of the meaning and value of privacy in the context of proceedings under the Family Law Act 1975 (Cth) (Family Law Act) that embraces children’s decision-making autonomy. The focus is privacy’s decisional dimension, which has received scant scholarly attention in the Australian family law context. Recognising and respecting children’s (as distinct from their parents’) decision-making autonomy, and children’s right to make decisions that might conflict with their parents’ (and the state’s) wishes, remain significant, and unresolved, challenges for the Australian family courts. This paper explores these issues using court authorisation of special medical procedures for children diagnosed with gender dysphoria as a case study. This paper argues that the construction of children as vulnerable to harm and the hierarchical nature of the parent-child relationship under the Family Law Act, coupled with judicial approaches to determining the ‘best interests of the child’ as the paramount consideration, have inhibited the Family Court of Australia from embracing children’s decisional privacy. This paper addresses concerns about the perceived conflictual consequences of doing so. It emphasises the relationality of children’s rights, the significance of the family unit, and the public interest in promoting children as active participants in proceedings as a policy goal of family law.
{"title":"Embracing Children’s Right to Decisional Privacy in Proceedings under the Family Law Act 1975 (Cth): In Children’s Best Interests or a Source of Conflict?","authors":"Georgina Dimopoulos","doi":"10.2139/ssrn.3303395","DOIUrl":"https://doi.org/10.2139/ssrn.3303395","url":null,"abstract":"Privacy and family law are both dynamic, subjects of passionate debate, and constantly changing with developments in society, policy and technology. This paper develops a normative understanding of the meaning and value of privacy in the context of proceedings under the Family Law Act 1975 (Cth) (Family Law Act) that embraces children’s decision-making autonomy. The focus is privacy’s decisional dimension, which has received scant scholarly attention in the Australian family law context. Recognising and respecting children’s (as distinct from their parents’) decision-making autonomy, and children’s right to make decisions that might conflict with their parents’ (and the state’s) wishes, remain significant, and unresolved, challenges for the Australian family courts. This paper explores these issues using court authorisation of special medical procedures for children diagnosed with gender dysphoria as a case study. This paper argues that the construction of children as vulnerable to harm and the hierarchical nature of the parent-child relationship under the Family Law Act, coupled with judicial approaches to determining the ‘best interests of the child’ as the paramount consideration, have inhibited the Family Court of Australia from embracing children’s decisional privacy. This paper addresses concerns about the perceived conflictual consequences of doing so. It emphasises the relationality of children’s rights, the significance of the family unit, and the public interest in promoting children as active participants in proceedings as a policy goal of family law.","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122565750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today, enormous data storage capacities and computational power in the big data era have created unforeseen opportunities for data-hoarding corporations to reap hidden benefits from individuals’ information sharing, which occurs bit by bit, in small tranches, over time. This paper presents the underlying dignity and utility considerations when individual decision makers face the privacy versus information sharing predicament. The article thereby unravels the legal foundations of dignity in privacy as well as the behavioral economics of utility in communication and information sharing. For Human Resources managers, the question arises whether to uphold human dignity in privacy or to derive benefit from the utility of information sharing. From legal and governance perspectives, the outlined ideas may stimulate the e-privacy discourse in the age of digitalization while also serving the greater goals of democratising information and upholding human dignity in the realm of e-ethics in the big data era.
{"title":"Dignity and Utility of Privacy and Information Sharing in the Digital Big Data Age","authors":"Julia M. Puaschunder","doi":"10.2139/ssrn.3286650","DOIUrl":"https://doi.org/10.2139/ssrn.3286650","url":null,"abstract":"Today enormous data storage capacities and computational power in the e-big data era have created unforeseen opportunities for big data hoarding corporations to reap hidden benefits from individual’s information sharing, which occurs bit by bit in small tranches over time. This paper presents underlying dignity and utility considerations when individual decision makers face the privacy versus information sharing predicament. Thereby the article unravels the legal foundations of dignity in privacy but also the behavioral economics of utility in communication and information sharing. For Human Resources managers the question arises whether to uphold human dignity in privacy or derive benefit from utility of information sharing. From legal and governance perspectives, the outlined ideas may stimulate the e-privacy discourse in the age of digitalization but also serving the greater goals of democratisation of information and upheld humane dignity in the realm of e-ethics in the big data era.","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128180164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tad Lipsky, Joshua D. Wright, D. Ginsburg, John M. Yun
This comment is submitted in response to the United States Federal Trade Commission (“FTC”) hearing on Concentration and Competitiveness in the U.S. Economy as part of the Hearings on Competition and Consumer Protection in the 21st Century. We submit this comment based upon our extensive experience and expertise in antitrust law and economics. As an organization committed to promoting sound economic analysis as the foundation of antitrust enforcement and competition policy, the Global Antitrust Institute commends the FTC for holding these hearings and for inviting discussion concerning a range of important topics. Businesses today have greater access to data than ever before. Firms now have access to data at high volume, high velocity, and high variety—a phenomenon known as “big data.” The increasing prevalence of big data creates new questions for antitrust enforcers. This comment will discuss how big data should be considered in the context of antitrust analyses.
{"title":"The Federal Trade Commission Hearings on Competition and Consumer Protection in the 21st Century, Privacy, Big Data, and Competition, Comment of the Global Antitrust Institute, Antonin Scalia Law School, George Mason University","authors":"Tad Lipsky, Joshua D. Wright, D. Ginsburg, John M. Yun","doi":"10.2139/SSRN.3279818","DOIUrl":"https://doi.org/10.2139/SSRN.3279818","url":null,"abstract":"This comment is submitted in response to the United States Federal Trade Commission (“FTC”) hearing on Concentration and Competitiveness in the U.S. Economy as part of the Hearings on Competition and Consumer Protection in the 21st Century. We submit this comment based upon our extensive experience and expertise in antitrust law and economics. As an organization committed to promoting sound economic analysis as the foundation of antitrust enforcement and competition policy, the Global Antitrust Institute commends the FTC for holding these hearings and for inviting discussion concerning a range of important topics. Businesses today have greater access to data than ever before. Firms now have access to data at high volume, high velocity, and high variety—a phenomenon known as “big data.” The increasing prevalence of big data creates new questions for antitrust enforcers. This comment will discuss how big data should be considered in the context of antitrust analyses.","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126274166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}