
Latest publications from the Information Privacy Law eJournal

Differential Privacy in the 2020 Decennial Census and the Implications for Available Data Products
Pub Date : 2019-07-08 DOI: 10.2139/ssrn.3416572
D. Boyd
In early 2021, the US Census Bureau will begin releasing statistical tables based on the decennial census conducted in 2020. Because of significant changes in the data landscape, the Census Bureau is changing its approach to disclosure avoidance. The confidentiality of individuals represented “anonymously” in these statistical tables will be protected by a “formal privacy” technique that allows the Bureau to mathematically assess the risk of revealing information about individuals in the released statistical tables. The Bureau’s approach is an implementation of “differential privacy,” and it gives a rigorously demonstrated guaranteed level of privacy protection that traditional methods of disclosure avoidance do not. Given the importance of the Census Bureau’s statistical tables to democracy, resource allocation, justice, and research, confusion about what differential privacy is and how it might alter or eliminate data products has rippled through the community of its data users, namely: demographers, statisticians, and census advocates. The purpose of this primer is to provide context to the Census Bureau’s decision to use a technique based on differential privacy and to help data users and other census advocates who are struggling to understand what this mathematical tool is, why it matters, and how it will affect the Bureau’s data products.
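
As a rough, hedged illustration of the kind of formal-privacy guarantee described above, the sketch below applies the Laplace mechanism to a single count query. The function, the epsilon value, and the example count are illustrative assumptions only; the Bureau's production disclosure avoidance system is far more elaborate.

```python
import numpy as np

def noisy_count(true_count, epsilon, rng=None):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one person changes the
    count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical example: a block-level count of 137 released with epsilon = 0.5.
print(noisy_count(137, epsilon=0.5))
```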
Citations: 4
Tax, Technology and Privacy: The Coming Collision
Pub Date : 2019-07-02 DOI: 10.2139/SSRN.3431476
A. Leahey
In the last decade a dominant storyline in the realm of technology and the law has been the rise of Big Data and the various state responses, or lack thereof, to concerns stemming from the same. At first, technology companies pursued methods of monetizing accumulated data almost by default — massive stores of data were a byproduct of other business ventures. Like the natural gas struck by early wildcat oil drillers, these stores of data were seen more as a hindrance than anything else. Over time, oil companies found a use for natural gas and Silicon Valley found a use for our stores of data. The next trove of data is going to be found in our tax information. Let us ensure that this time, from the outset, our privacy is kept front of mind.
Citations: 0
Chilling Effects of Profiling Activities: Mapping the Issues
Pub Date : 2019-04-28 DOI: 10.2139/ssrn.3379275
Moritz Büchi, Eduard Fosch Villaronga, C. Lutz, Aurelia Tamó-Larrieux, Shruthi Velidi, S. Viljoen
In this article, we provide an in-depth overview of the literature on chilling effects and corporate profiling and connect the two topics. We start by explaining how profiling, within an increasingly data-rich environment, creates substantial power asymmetries between users and platforms (and corporations more broadly). We stress the notion of inferences and the increasingly automatic nature of decision-making, based on user data, as essential aspects of profiling. We then discuss chilling effects in depth, connecting them to corporate profiling. In the article, we first stress the relationship and similarities between profiling and surveillance. Second, we illustrate chilling effects as a result of state and peer surveillance; we then show the interrelatedness of corporate and state profiling, and we finally spotlight the customization of behavior and behavioral manipulation as particularly outstanding issues in this discourse. We also explore the legal foundations of profiling through an in-depth analysis of European and US data protection law. We found that, while Europe has a clear regulatory framework in place for profiling, the US primarily relies on a patchwork of sector-specific or state laws. Besides, there is an attempt to regulate differential impacts of profiling, via anti-discrimination statutes, yet few policies focus on combating generalized, concrete harms of profiling, such as chilling effects. At the end of the article, we bring together the diverse strands of literature in concise propositions to guide future research on the connection between corporate profiling and chilling effects.
Citations: 7
Privacy-Preserved Data Sharing for Evidence-Based Policy Decisions: A Demonstration Project Using Human Services Administrative Records for Evidence-Building Activities
Pub Date : 2019-03-28 DOI: 10.2139/ssrn.3808054
N. Hart, David Archer, Erin Dalton
Emerging privacy-preserving technologies and approaches hold considerable promise for improving data privacy and confidentiality in the 21st century. At the same time, more information is becoming accessible to support evidence-based policymaking.

In 2017, the U.S. Commission on Evidence-Based Policymaking unanimously recommended that further attention be given to the deployment of privacy-preserving data-sharing applications. If these types of applications can be tested and scaled in the near-term, they could vastly improve insights about important policy problems by using disparate datasets. At the same time, the approaches could promote substantial gains in privacy for the American public.

There are numerous ways to engage in privacy-preserving data sharing. This paper primarily focuses on secure computation, which allows information to be accessed securely, guarantees privacy, and permits analysis without making private information available. Three key issues motivated the launch of a domestic secure computation demonstration project using real government-collected data:

--Using new privacy-preserving approaches addresses pressing needs in society. Current widely accepted approaches to managing privacy risks—like preventing the identification of individuals or organizations in public datasets—will become less effective over time. While there are many practices currently in use to keep government-collected data confidential, they do not often incorporate modern developments in computer science, mathematics, and statistics in a timely way. New approaches can enable researchers to combine datasets to improve the capability for insights, without being impeded by traditional concerns about bringing large, identifiable datasets together. In fact, if successful, traditional approaches to combining data for analysis may not be as necessary.

--There are emerging technical applications to deploy certain privacy-preserving approaches in targeted settings. These emerging procedures are increasingly enabling larger-scale testing of privacy-preserving approaches across a variety of policy domains, governmental jurisdictions, and agency settings to demonstrate the privacy guarantees that accompany data access and use.

--Widespread adoption and use by public administrators will only follow meaningful and successful demonstration projects. For example, secure computation approaches are complex and can be difficult to understand for those unfamiliar with their potential. Implementing new privacy-preserving approaches will require thoughtful attention to public policy implications, public opinions, legal restrictions, and other administrative limitations that vary by agency and governmental entity.
This project used real-world government data to illustrate the applicability of secure computation compared to the classic data infrastructure available to some local governments. The project took place in a domestic, non-intelligence setting to increase the salience of potential lessons learned for public agencies. Data obtained under confidentiality agreements from the Allegheny County, Pennsylvania, Department of Human Services were analyzed on privacy-preserving platforms to generate basic insights. The analysis required merging more than two million records from five datasets held by multiple Allegheny County government agencies. Specifically, the demonstration relied on individual-level records of homeless services, mental health services, causes and incidence of death, family interventions, and incarceration to analyze four key questions about the proportions of: (1) people serving time in jail who received publicly funded mental health services; (2) parents in child welfare cases who received publicly funded mental health services; (3) recipients of homeless services who were incarcerated; and (4) suicide victims who had previously received publicly funded mental health services. To BPC's knowledge, this is the first time such a demonstration has been completed in the human services field.

To demonstrate and characterize the applicability of privacy-preserving computation for these analyses, the project team executed them on two different privacy-preserving platforms. The first platform, called Jana, was developed as part of the Brandeis program for the U.S. Defense Advanced Research Projects Agency and implements secure computation entirely in software. Jana uses a combination of encryption techniques to protect data at rest and in transit, and secure multiparty computation to protect data during computation. Specifically, Jana uses multiple servers to perform computations on encrypted secret shares of the data while ensuring that those servers never see the data in decrypted form. The second platform, called FIDES, is part of the U.S. Department of Homeland Security's IMPACT program and achieves secure computation through hardware-backed encrypted enclaves. Specifically, FIDES uses Intel processors and Intel Software Guard Extensions to perform computations in a region of the processor that other code running on the machine, including the computer's own operating system, is restricted from accessing. No part of the processor or software outside the hardware-secured enclave sees the data in decrypted form. The two privacy-preserving computation platforms offer similar approaches: data arrive at the computing platform already encrypted, analyses are performed in a way that strictly reveals no data, and results are delivered securely to users. The purpose of these experiments was to compare both approaches with a classic data analysis setting.

Successful completion of the demonstration using human services data produced the following insights:

--The experiments produced valid, reliable results. Both platforms produced valid results consistent with traditional data analysis methods. This suggests that queries using these privacy-preserving approaches do not compromise the validity or reliability of statistical conclusions. The multiparty computation model therefore met the demonstration's core criteria of enabling data use while protecting privacy.

--The efficiency of the experiments presents a trade-off for policymakers. Different modes of implementing privacy-preserving technology trade off the timeliness of answers. An analysis of nearly 200,000 records using the software-based approach took nearly three hours to complete, while the same query in the hardware-enabled environment returned results in a tenth of a second. These times have significant implications for applications in government operations with rapid decision-making architectures.

These findings suggest that the approaches hold considerable promise for public policy, enabling improved data analysis and tangible privacy protection at the same time. However, further work is still needed to develop privacy-preserving technologies so that their deployment is less time-consuming before they see widespread use across government agencies. Depending on the trade-offs expected of a privacy-preserving approach, deployment at scope and scale could carry substantial cost implications or serious delays in computational response times. Beyond refining the technical precision of the privacy guarantees, further development must also include learning how to deploy the protections within complex organizational or governmental infrastructures and legal frameworks that may not explicitly encourage such activities. This demonstration project offers a compelling example of how these technologies can be deployed, and it should encourage consideration of such approaches by domestic, non-intelligence agencies at all levels of government.
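
To make the secure multiparty computation idea more concrete, here is a minimal sketch of additive secret sharing under assumed toy conditions (three servers, a single sum query, two hypothetical agency counts). It is not Jana's or FIDES's actual protocol, only an illustration of how servers can add numbers they never see in the clear.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split a value into n additive shares; any n-1 shares reveal nothing about it."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the shared value."""
    return sum(shares) % PRIME

# Two hypothetical agencies each secret-share a count across three servers.
# Each server adds the shares it holds; only the combined total is reconstructed,
# and no server ever sees either agency's input in the clear.
shares_a = share(1250, 3)
shares_b = share(873, 3)
summed = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print(reconstruct(summed))  # 2123
```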
{"title":"Privacy-Preserved Data Sharing for Evidence-Based Policy Decisions: A Demonstration Project Using Human Services Administrative Records for Evidence-Building Activities","authors":"N. Hart, David Archer, Erin Dalton","doi":"10.2139/ssrn.3808054","DOIUrl":"https://doi.org/10.2139/ssrn.3808054","url":null,"abstract":"Emerging privacy-preserving technologies and approaches hold considerable promise for improving data privacy and confidentiality in the 21st century. At the same time, more information is becoming accessible to support evidence-based policymaking.<br><br>In 2017, the U.S. Commission on Evidence-Based Policymaking unanimously recommended that further attention be given to the deployment of privacy-preserving data-sharing applications. If these types of applications can be tested and scaled in the near-term, they could vastly improve insights about important policy problems by using disparate datasets. At the same time, the approaches could promote substantial gains in privacy for the American public.<br><br>There are numerous ways to engage in privacy-preserving data sharing. This paper primarily focuses on secure computation, which allows information to be accessed securely, guarantees privacy, and permits analysis without making private information available. Three key issues motivated the launch of a domestic secure computation demonstration project using real government-collected data:<br><br>--Using new privacy-preserving approaches addresses pressing needs in society. Current widely accepted approaches to managing privacy risks—like preventing the identification of individuals or organizations in public datasets—will become less effective over time. While there are many practices currently in use to keep government-collected data confidential, they do not often incorporate modern developments in computer science, mathematics, and statistics in a timely way. New approaches can enable researchers to combine datasets to improve the capability for insights, without being impeded by traditional concerns about bringing large, identifiable datasets together. In fact, if successful, traditional approaches to combining data for analysis may not be as necessary.<br><br>--There are emerging technical applications to deploy certain privacy-preserving approaches in targeted settings. These emerging procedures are increasingly enabling larger-scale testing of privacy-preserving approaches across a variety of policy domains, governmental jurisdictions, and agency settings to demonstrate the privacy guarantees that accompany data access and use.<br><br>--Widespread adoption and use by public administrators will only follow meaningful and successful demonstration projects. For example, secure computation approaches are complex and can be difficult to understand for those unfamiliar with their potential. Implementing new privacy-preserving approaches will require thoughtful attention to public policy implications, public opinions, legal restrictions, and other administrative limitations that vary by agency and governmental entity.<br>This project used real-world government data to illustrate the applicability of secure computation compared to the classic data infrastructure available to some local governments. 
The project took place in a domestic, non-intelli","PeriodicalId":179517,"journal":{"name":"Information Privacy Law eJournal","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122244385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
The Logics of Technology Decentralization - The Case of Distributed Ledger Technologies
Pub Date : 2019-02-07 DOI: 10.4324/9780429029530-8
Balázs Bodó, A. Giannopoulou
Decentralization is heralded as the most important technological design aspect of distributed ledger technologies (DLTs). In this chapter we’ll analyze the concept of decentralization, with the goal to understand the social, legal, and economic forces that produce more or less decentralized techno-social systems. We first give an overview of decentralization as a political ideology and as an ideal and natural endpoint in the development of digital technologies. We then move beyond this discourse and treat decentralization, its extent, its mode, and the systems which it can refer to as the products of particular economic, political, and social dynamics around and within these techno-social systems. We then point at the concrete forces that shape the actual degree of (de)centralization. Through this, we show that the extent to which a techno-social system is (de)centralized at any given moment should not be measured by its distance from an ideological ideal of total decentralization but should be seen as the sum of all the social, economic, political, and legal forces that impact a techno-social system.
Citations: 24
From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics
Pub Date : 2019-02-01 DOI: 10.2139/ssrn.3449112
D. Hirsch
What comes after the control paradigm? For decades, privacy law has sought to provide individuals with notice and choice and so give them control over their personal data. But what happens when this regulatory paradigm breaks down?

Predictive analytics forces us to confront this challenge. Individuals cannot understand how predictive analytics uses their surface data to infer latent, far more sensitive data about them. This prevents individuals from making meaningful choices about whether to share their surface data in the first place. It also creates threats (such as harmful bias, manipulation and procedural unfairness) that go well beyond the privacy interests that the control paradigm seeks to safeguard. In order to protect people in the algorithmic economy, privacy law must shift from a liberalist legal paradigm that focuses on individual control, to one in which public authorities set substantive standards that defend people against algorithmic threats.
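
As a hedged illustration of how "surface" data can yield "latent" inferences, the sketch below fits a simple classifier on synthetic behavioral features to predict a sensitive attribute that was never disclosed. The data, feature meanings, and model choice are hypothetical and are not drawn from any real profiling system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "surface" data: observable, seemingly innocuous behavioral features.
surface = rng.normal(size=(1000, 3))

# Synthetic "latent" attribute the user never disclosed, correlated with the surface data.
latent = (surface @ np.array([1.5, -2.0, 0.7]) + rng.normal(size=1000) > 0).astype(int)

# A profiler only needs the correlation: fit on surface data, infer the sensitive trait.
model = LogisticRegression().fit(surface, latent)
print(model.predict_proba(surface[:5])[:, 1])  # inferred probability of the sensitive trait
```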

Leading scholars such as Jack Balkin (information fiduciaries), Helen Nissenbaum (contextual integrity), Danielle Citron (technological due process), Craig Mundie (use-based regulation) and others recognize the need for such a shift and propose ways to achieve it. This article ties these proposals together, views them as attempts to define a new regulatory paradigm for the age of predictive analytics, and evaluates whether each achieves this aim.

It then argues that the solution may be hiding in plain sight in the form of the FTC’s Section 5 unfairness authority. It explores whether the FTC could use its unfairness authority to draw substantive lines between data analytics practices that are socially appropriate and fair, and those that are inappropriate and unfair, and examines how the Commission would make such determinations. It argues that this existing authority, which requires no new legislation, provides a comprehensive and politically legitimate way to create much needed societal boundaries around corporate use of predictive analytics. It concludes that the Commission could use its unfairness authority to protect people from the threats that the algorithmic economy creates.
Citations: 6
Disclosures in Privacy Policies: Does 'Notice and Consent' Work?
Pub Date : 2018-12-11 DOI: 10.2139/SSRN.3328289
R. Bailey, Smriti Parsheera, F. Rahman, R. Sane
This paper evaluates the quality of the privacy policies of five popular online services in India from the perspective of access and readability. The paper asks: do the policies have specific, unambiguous and clear provisions that lend themselves to easy comprehension? The authors also conducted a survey among college students to evaluate how much users typically understand of what they are signing up for. The paper finds that the policies studied are poorly drafted, and often seem to serve as check-the-box compliance with expected privacy disclosures. Survey respondents do not score very highly on the privacy policy quiz. Respondents fared the worst on policies that had the most unspecified terms, and on policies that were long. Respondents were also unable to understand terms such as “third party”, “affiliate” and “business-partner”. The results suggest that for consent to work, the information offered to individuals has to be better drafted and designed.
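
One common way to quantify the readability concerns the paper raises — not necessarily the measure the authors used — is the Flesch reading-ease score, sketched below with a crude vowel-group syllable heuristic and a hypothetical policy clause.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels (at least one per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z'-]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

# Hypothetical policy clause; lower scores indicate harder-to-read text.
clause = ("We may share your personal information with our affiliates, "
          "business partners and other third parties for the purposes described herein.")
print(round(flesch_reading_ease(clause), 1))
```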
Citations: 6
Embracing Children’s Right to Decisional Privacy in Proceedings under the Family Law Act 1975 (Cth): In Children’s Best Interests or a Source of Conflict?
Pub Date : 2018-11-20 DOI: 10.2139/ssrn.3303395
Georgina Dimopoulos
Privacy and family law are both dynamic, subjects of passionate debate, and constantly changing with developments in society, policy and technology. This paper develops a normative understanding of the meaning and value of privacy in the context of proceedings under the Family Law Act 1975 (Cth) (Family Law Act) that embraces children’s decision-making autonomy. The focus is privacy’s decisional dimension, which has received scant scholarly attention in the Australian family law context. Recognising and respecting children’s (as distinct from their parents’) decision-making autonomy, and children’s right to make decisions that might conflict with their parents’ (and the state’s) wishes, remain significant, and unresolved, challenges for the Australian family courts. This paper explores these issues using court authorisation of special medical procedures for children diagnosed with gender dysphoria as a case study. This paper argues that the construction of children as vulnerable to harm and the hierarchical nature of the parent-child relationship under the Family Law Act, coupled with judicial approaches to determining the ‘best interests of the child’ as the paramount consideration, have inhibited the Family Court of Australia from embracing children’s decisional privacy. This paper addresses concerns about the perceived conflictual consequences of doing so. It emphasises the relationality of children’s rights, the significance of the family unit, and the public interest in promoting children as active participants in proceedings as a policy goal of family law.
Citations: 0
Dignity and Utility of Privacy and Information Sharing in the Digital Big Data Age
Pub Date : 2018-11-18 DOI: 10.2139/ssrn.3286650
Julia M. Puaschunder
Today enormous data storage capacities and computational power in the e-big data era have created unforeseen opportunities for big data hoarding corporations to reap hidden benefits from individuals’ information sharing, which occurs bit by bit in small tranches over time. This paper presents underlying dignity and utility considerations when individual decision makers face the privacy versus information sharing predicament. The article thereby unravels the legal foundations of dignity in privacy as well as the behavioral economics of utility in communication and information sharing. For Human Resources managers the question arises whether to uphold human dignity in privacy or derive benefit from the utility of information sharing. From legal and governance perspectives, the outlined ideas may stimulate the e-privacy discourse in the age of digitalization while also serving the greater goals of democratisation of information and upholding human dignity in the realm of e-ethics in the big data era.
Citations: 19
The Federal Trade Commission Hearings on Competition and Consumer Protection in the 21st Century, Privacy, Big Data, and Competition, Comment of the Global Antitrust Institute, Antonin Scalia Law School, George Mason University
Pub Date : 2018-11-05 DOI: 10.2139/SSRN.3279818
Tad Lipsky, Joshua D. Wright, D. Ginsburg, John M. Yun
This comment is submitted in response to the United States Federal Trade Commission (“FTC”) hearing on Concentration and Competitiveness in the U.S. Economy as part of the Hearings on Competition and Consumer Protection in the 21st Century. We submit this comment based upon our extensive experience and expertise in antitrust law and economics. As an organization committed to promoting sound economic analysis as the foundation of antitrust enforcement and competition policy, the Global Antitrust Institute commends the FTC for holding these hearings and for inviting discussion concerning a range of important topics. Businesses today have greater access to data than ever before. Firms now have access to data at high volume, high velocity, and high variety—a phenomenon known as “big data.” The increasing prevalence of big data creates new questions for antitrust enforcers. This comment will discuss how big data should be considered in the context of antitrust analyses.
Citations: 0