Pub Date: 2022-03-07; DOI: 10.52214/stlr.v23i1.9390
Laura Karas
In response to the recent increase in FDA-approved specialty drugs and escalating specialty drug prices, drug companies now offer patient support programs (“PSPs”) for eligible patients prescribed a particular pharmaceutical drug. Such programs encompass both financial assistance for the purchase of a specialty drug and behavioral services, including nursing support and injection training, intended to improve drug adherence. Although ostensibly gratuitous, these programs have a steep and underappreciated cost: disclosure of protected health information. In effect, patient support programs compel patients to trade protected health information for drug access. This Article provides the first in-depth examination of the legal and ethical concerns associated with patient support programs. Enrollment in a drug company’s patient support program furnishes the company with linked patient- and prescriber-identifying information for each enrollee, data that may enable drug companies to target marketing to patients and healthcare providers with an otherwise unattainable degree of precision. Moreover, once a drug company acquires an enrollee’s protected health information pursuant to a valid Health Insurance Portability and Accountability Act (HIPAA) authorization, it faces few limits on downstream uses of those data. This Article illuminates a possible role for patient support program-mediated data collection in two unlawful drug company practices: (1) kickback schemes in coordination with foundations that cover pharmaceutical drug copays, and (2) “product hopping” to a new brand-name drug formulation after patent expiration of an older formulation. The current regime for health data privacy in the United States lacks adequate safeguards to prevent drug companies from exploiting patient support program-derived data to the detriment of patients.
The Article ends by proposing practical modifications to the HIPAA Privacy Rule to modernize HIPAA’s protections vis-à-vis health data transferred from covered entities to noncovered entities such as drug companies.
Title: Privacy as the Price of Drug Access
Journal: The Columbia Science and Technology Law Review
Pub Date: 2021-09-01; DOI: 10.52214/STLR.V22I2.8664
Shin-Ru Cheng
Facebook, the world’s largest online networking platform, is the subject of multiple antitrust investigations by various state and federal regulators. Yet scholars and practitioners remain divided on how to measure Facebook’s market power. Some argue that conventional approaches for identifying market power are suitable for the online networking market. This Article argues such conventional approaches are inadequate for assessing market power in online networking markets. This Article begins by introducing the traditional approaches that courts have employed to assess market power: the direct effects approach, the Lerner Index approach, and the market share approach. It next describes Facebook’s business model and shows that, because Facebook is a two-sided market, these traditional approaches should not be applied to Facebook. Instead, the Article proposes that the information gaps, switching costs, and entry barriers approaches are better suited for assessing the market power of online networking platforms. The Article thus concludes by proposing a legal framework for assessing market power in online networking platforms that employs such non-traditional approaches. While this Article uses Facebook as the main case study, its findings are equally applicable to similar online networking platforms.
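The Lerner Index approach the abstract mentions infers market power from the markup of price over marginal cost, L = (P − MC) / P. A minimal sketch with invented numbers (not real platform data):

```python
def lerner_index(price: float, marginal_cost: float) -> float:
    """Lerner Index L = (P - MC) / P.

    0 indicates perfect competition (price equals marginal cost);
    values approaching 1 indicate monopoly pricing power.
    """
    if price <= 0:
        raise ValueError("price must be positive")
    return (price - marginal_cost) / price

# Hypothetical firm charging $5.00 per unit at $1.00 marginal cost:
print(lerner_index(5.0, 1.0))  # 0.8
```

One intuition behind the Article's claim that such traditional approaches fail for two-sided platforms: on the zero-price consumer side, P = 0, so the index cannot even be computed.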
Title: Approaches to Assess Market Power in the Online Networking Market
Pub Date: 2021-09-01; DOI: 10.52214/stlr.v22i2.8669
Aviel Menter
In Rucho v. Common Cause, the Supreme Court held that challenges to partisan gerrymanders presented a nonjusticiable political question. This decision threatened to discard decades of work by political scientists and other experts, who had developed a myriad of techniques designed to help the courts objectively and unambiguously identify excessively partisan district maps. Simulated redistricting promised to be one of the most effective of these techniques. Simulated redistricting algorithms are computer programs capable of generating thousands of election-district maps, each of which conforms to a set of permissible criteria determined by the relevant state legislature. By measuring the partisan lean of both the automatically generated maps and the map put forth by the state legislature, a court could determine how much of this partisan bias was attributable to the deliberate actions of the legislature, rather than the natural distribution of the state’s population. Rucho ended partisan gerrymandering challenges brought under the U.S. Constitution—but it need not close the book on simulated redistricting. Although originally developed to combat partisan gerrymanders, simulated redistricting algorithms can be repurposed to help courts identify intentional racial gerrymanders. Instead of measuring the partisan bias of automatically generated maps, these programs can gauge improper racial considerations evident in the legislature’s plan and demonstrate the discriminatory intent that produced such an outcome. As long as the redistricting process remains in the hands of state legislatures, there is a threat that constitutionally impermissible considerations will be employed when drawing district plans. Simulated redistricting provides a powerful tool with which courts can detect a hidden unconstitutional motive in the redistricting process.
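The ensemble comparison described above can be sketched in miniature. Production algorithms (typically Markov-chain samplers) enforce contiguity, compactness, and population equality; this toy version only randomizes precinct assignments, and all vote counts are invented, but it shows the core move: locate the enacted plan within a distribution of neutrally drawn alternatives.

```python
import random

def partisan_lean(plan, precinct_votes):
    """Fraction of districts in which party A outpolls party B."""
    totals = {}
    for precinct, district in plan.items():
        a, b = precinct_votes[precinct]
        ta, tb = totals.get(district, (0, 0))
        totals[district] = (ta + a, tb + b)
    wins = sum(1 for a, b in totals.values() if a > b)
    return wins / len(totals)

def random_plan(precincts, n_districts, rng):
    # Toy sampler: uniform random assignment. Real simulated-redistricting
    # algorithms sample only from plans meeting the state's legal criteria.
    return {p: rng.randrange(n_districts) for p in precincts}

rng = random.Random(0)
precincts = [f"p{i}" for i in range(40)]
precinct_votes = {p: (rng.randint(50, 150), rng.randint(50, 150)) for p in precincts}

# Ensemble of 1,000 neutrally generated plans with four districts each.
ensemble = [partisan_lean(random_plan(precincts, 4, rng), precinct_votes)
            for _ in range(1000)]

# Stand-in for the legislature's enacted map.
enacted = {p: i % 4 for i, p in enumerate(precincts)}
lean = partisan_lean(enacted, precinct_votes)

# How extreme is the enacted plan relative to the neutral ensemble?
share_as_extreme = sum(1 for x in ensemble if x >= lean) / len(ensemble)
print(f"enacted lean={lean:.2f}, ensemble share at least as extreme={share_as_extreme:.3f}")
```

If the enacted plan's lean sits far in the tail of the ensemble, the outlier status is evidence that the bias reflects deliberate choices rather than political geography, which is the inference the abstract describes courts drawing.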
Title: Calculated Discrimination: Exposing Racial Gerrymandering Using Computational Methods
Pub Date: 2021-09-01; DOI: 10.52214/STLR.V22I2.8665
David Kappos, Åsa Kling
Humankind has always sought to solve problems. This impetus has transformed hunters and gatherers into a society beginning to enjoy the fruits of the fourth industrial revolution. As part of the fourth industrial revolution, and the increased computing power accompanying it, the long-theorized concept of artificial intelligence (“AI”) is finally becoming a reality. This raises new issues in myriad fields—from the moral and ethical implications of replacing human activity with machines to who will own inventions created by AI. While these questions are worth exploring, they have already received a fair amount of coverage in popular and theoretical writing. This paper will take a different direction, focusing on the current and near-future issues arising on the ground at the intersection of AI and intellectual property (“IP”). After providing a brief overview of AI, we will analyze legal issues unique to AI, including access to data, patent requirements, open source licenses and trade secrecy. We will then suggest best practices for obtaining and preserving IP protection for AI-related innovations through the United States and European Union IP systems. By addressing these issues, the intellectual property system will be better positioned to do its part in unlocking AI’s immense potential.
Title: Ground-Level Pressing Issues at the Intersection of AI and IP
Pub Date: 2021-09-01; DOI: 10.52214/STLR.V22I2.8668
Wayne Unger
Disinformation campaigns reduce trust in democracy, harm democratic institutions, and endanger public health and safety. While disinformation and misinformation are not new, their rapid and widespread dissemination has only recently been made possible by technological developments that enable never-before-seen levels of mass communication and persuasion. Today, a mix of social media, algorithms, personal profiling, and psychology enables a new dimension of political messaging—a dimension that disinformers exploit for their political gain. These enablers share a root cause: the poor data privacy and security regime in the U.S. At its core, democracy requires independent thought, personal autonomy, and trust in democratic institutions. A public that thinks critically and acts independently can check the government’s power and authority. However, when the public is misinformed, it lacks the autonomy to freely elect and check its representatives, and the fundamental basis for democracy erodes. This Article addresses a root cause of misinformation dissemination—the absence of strong data privacy protections in the U.S.—and its effects on democracy. This Article explains, from a technological perspective, how personal information is used for personal profiling, and how personal profiling contributes to the mass interpersonal persuasion that disinformation campaigns exploit to advance their political goals.
Title: How the Poor Data Privacy Regime Contributes to Misinformation Spread and Democratic Erosion
Pub Date: 2021-09-01; DOI: 10.52214/STLR.V22I2.8666
Monika Zalnieriute
Live automated facial recognition technology, rolled out in public spaces and cities across the world, is transforming the nature of modern policing. R (on the application of Bridges) v Chief Constable of South Wales Police, decided in August 2020, is the first successful legal challenge to automated facial recognition technology in the world. In Bridges, the United Kingdom’s Court of Appeal held that the South Wales Police force’s use of automated facial recognition technology was unlawful. This landmark ruling could influence future policy on facial recognition in many countries. The Bridges decision imposes some limits on the police’s previously unconstrained discretion to decide whom to target and where to deploy the technology. Yet, while the decision requires that the police adopt a clearer legal framework to limit this discretion, it does not, in principle, prevent the use of facial recognition technology for mass surveillance in public places, nor for monitoring political protests. On the contrary, the Court held that the use of automated facial recognition in public spaces – even to identify and track the movement of very large numbers of people – was an acceptable means for achieving law enforcement goals. Thus, the Court dismissed the wider impact and significant risks posed by using facial recognition technology in public spaces. It underplayed the heavy burden this technology can place on democratic participation and freedoms of expression and association, which require collective action in public spaces. The Court neither demanded transparency about the technologies used by the police force, which is often shielded behind the “trade secrets” of the corporations that produce them, nor did it act to prevent inconsistency between local police forces’ rules and regulations on automated facial recognition technology. Thus, while the Bridges decision is reassuring and demands short-term change in the discretionary approaches of U.K. police, it is unlikely to burn the “bridges” between the expanding public space surveillance infrastructure and the modern state in the long term. In fact, the decision legitimizes such an expansion.
Title: Burning Bridges: The Automated Facial Recognition Technology and Public Space Surveillance in the Modern State
Pub Date: 2021-03-22; DOI: 10.52214/STLR.V22I1.8054
S. Wood
This Article acknowledges the necessity of including social determinants of health (SDH) data in healthcare planning and treatment but highlights the lack of regulation around the collection of SDH data and the potential for violating consumers’ basic rights to be treated equally, to be protected from discrimination, and to have their privacy respected. The Article analyzes different approaches from the U.S. and EU and proffers the global application of the GDPR, supplemented by data human rights provisions, as the most sustainable option in a world where technology is ever-changing.
Title: Big Data’s Exploitation of Social Determinants of Health: Human Rights Implications
Pub Date: 2021-03-22; DOI: 10.52214/STLR.V22I1.8056
T. E. Hutchins
A recent spate of governmental shutdowns of the civilian internet in a broad range of violent contexts, from uprisings in Hong Kong and Iraq to armed conflicts in Ethiopia, Kashmir, Myanmar, and Yemen, suggests civilian internet blackouts are the ‘new normal.’ Given the vital and expanding role of internet connectivity in modern society, and the emergence of artificial intelligence, internet shutdowns raise important questions regarding their legality under international law. This article considers whether existing international humanitarian law provides adequate protection for civilian internet connectivity and infrastructure during armed conflicts. Concluding that current safeguards are insufficient, this article proposes a new legal paradigm with special protections for physical internet infrastructure and the right of civilian access, while advocating the adoption of emblems (such as the Red Cross or Blue Shield) in the digital world to protect vital humanitarian communications.
Title: Safeguarding Civilian Internet Access During Armed Conflict: Protecting Humanity’s Most Important Resource in War
Consumer genetics has exploded, driven by the second-most popular hobby in the United States: genealogy. This hobby has been co-opted by law enforcement to solve cold cases, by linking crime-scene DNA with the DNA of a suspect's relative, which is contained in a direct-to-consumer (DTC) genetic database. The relative’s genetic data acts as a silent witness, or genetic informant, wordlessly guiding law enforcement to a handful of potential suspects. At least thirty murderers and rapists have been arrested in this way, a process which I describe in careful detail in this article. Legal scholars have sounded many alarms and have called for immediate bans on this methodology, which is referred to as long-range familial searching (or "LRFS"). The opponents’ concerns are many, but generally boil down to fears that LRFS will invade the privacy and autonomy of presumptively innocent individuals. These concerns, I argue, are considerably overblown. Indeed, many aspects of the methodology implicate nothing new, legally or ethically, and might even better protect privacy while exonerating the innocent. Law enforcement’s use of LRFS to solve cold cases is a bogeyman. The real threat to genetic privacy comes from shoddy consumer consent procedures, poor data security standards, and user agreements that permit rampant secondary uses of data. So why do so many legal scholars fear a world where law enforcement uses this methodology? I surmise that our fear of so-called genetic informants stems from the sticky and long-standing traps of genetic essentialism and genetic determinism, where we incorrectly attribute intentional action to our genes and fear a world where humans are controlled by our biology. Rather than banning the use of genetic genealogy to catch serial killers and rapists, I call for improved direct-to-consumer consent processes, and more transparent privacy and security measures.
This will better protect genetic privacy in line with consumer expectations, while still permitting the use of LRFS to deliver justice to victims and punish those who commit society's most heinous acts.
{"title":"WHY WE FEAR GENETIC INFORMANTS: USING GENETIC GENEALOGY TO CATCH SERIAL KILLERS.","authors":"Teneille R. Brown","doi":"10.7916/STLR.V21I1.5765","DOIUrl":"https://doi.org/10.7916/STLR.V21I1.5765","url":null,"abstract":"Consumer genetics has exploded, driven by the second-most popular hobby in the United States: genealogy. This hobby has been co-opted by law enforcement to solve cold cases, by linking crime-scene DNA with the DNA of a suspect's relative, which is contained in a direct-to-consumer (DTC) genetic database. The relative’s genetic data acts as a silent witness, or genetic informant, wordlessly guiding law enforcement to a handful of potential suspects. At least thirty murderers and rapists have been arrested in this way, a process which I describe in careful detail in this article. Legal scholars have sounded many alarms, and have called for immediate bans on this methodology, which is referred to as long-range familial searching (or \"LRFS\"). The opponents’ concerns are many, but generally boil down to fears that LRFS will invade the privacy and autonomy of presumptively innocent individuals. These concerns, I argue, are considerably overblown. Indeed, many aspects of the methodology implicate nothing new, legally or ethically, and might even better protect privacy while exonerating the innocent. Law enforcement’s use of LRFS to solve cold cases is a bogeyman. The real threat to genetic privacy comes from shoddy consumer consent procedures, poor data security standards, and user agreements that permit rampant secondary uses of data. So why do so many legal scholars fear a world where law enforcement uses this methodology? I surmise that our fear of so-called genetic informants stems from the sticky and long-standing traps of genetic essentialism and genetic determinism, where we incorrectly attribute intentional action to our genes and fear a world where humans are controlled by our biology. Rather than banning the use of genetic genealogy to catch serial killers and rapists, I call for improved direct-to-consumer consent processes, and more transparent privacy and security measures. This will better protect genetic privacy in line with consumer expectations, while still permitting the use of LRFS to deliver justice to victims and punish those who commit society's most heinous acts.","PeriodicalId":87208,"journal":{"name":"The Columbia science and technology law review","volume":"68 1","pages":"114-181"},"PeriodicalIF":0.0,"publicationDate":"2020-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79968014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WHY WE FEAR GENETIC INFORMANTS: USING GENETIC GENEALOGY TO CATCH SERIAL KILLERS.","authors":"Teneille R Brown","doi":"","DOIUrl":"","url":null,"abstract":"","PeriodicalId":87208,"journal":{"name":"The Columbia science and technology law review","volume":"21 1","pages":"114-181"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7946161/pdf/nihms-1655090.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25480275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}