A Face Tells More than a Thousand Posts: Developing Face Recognition Privacy in Social Networks

Yana Welinder
{"title":"A Face Tells More than a Thousand Posts: Developing Face Recognition Privacy in Social Networks","authors":"Yana Welinder","doi":"10.2139/SSRN.2109108","DOIUrl":null,"url":null,"abstract":"What is so special about a face? It is the one personally identifiable feature that we all show in public. Faces are particularly good for identification purposes because, unlike getting a new coat or haircut, significantly altering a face to make it unrecognizable is difficult. But since most people have only a limited set of acquaintances, they can often remain anonymous when doing something personal by themselves — even in public. The use of face recognition technology in social networks shifts this paradigm. It can connect an otherwise anonymous face not only to a name — of which there can be several — but to all the information in a social network profile, including one’s friends, work and education history, status updates, and so forth.In this Article, I present two central ideas. First, applying the theory of contextual integrity, I argue that the current use face recognition technology in social networks violates users’ privacy by changing the information that they share (from a simple photo to automatically identifying biometric data) and providing this information to new recipients beyond the users’ control. Second, I identify the deficiencies in the current law and argue that law alone cannot solve this problem. A blanket prohibition on automatic face recognition in social networks would stifle the development of these technologies, which are useful in their own right. But at the same time, our traditional privacy framework of notice and consent cannot protect users who do not understand the automatic face recognition process and recklessly continue sharing their personal information due to strong network effects. Instead, I propose a multifaceted solution aimed at lowering switching costs between social networks and providing users with better information about how their data is used. My argument is that once users are truly free to leave, they will be able to exercise their choice in a meaningful way to demand that social networks respect their privacy expectations.Though this Article specifically addresses the use of face recognition technology in social networks, the proposed solution can be applied to other privacy problems arising in online platforms that accumulate personal information. More broadly, the undertaking to open up social networks and make them more transparent and interoperable could address the concern that these networks threaten to fragment the Web and lock in our personal information.","PeriodicalId":81374,"journal":{"name":"Harvard journal of law & technology","volume":"59 1","pages":"165"},"PeriodicalIF":0.0000,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Harvard journal of law & technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/SSRN.2109108","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 21

Abstract

What is so special about a face? It is the one personally identifiable feature that we all show in public. Faces are particularly good for identification purposes because, unlike getting a new coat or haircut, significantly altering a face to make it unrecognizable is difficult. But since most people have only a limited set of acquaintances, they can often remain anonymous when doing something personal by themselves — even in public. The use of face recognition technology in social networks shifts this paradigm. It can connect an otherwise anonymous face not only to a name — of which there can be several — but to all the information in a social network profile, including one’s friends, work and education history, status updates, and so forth.

In this Article, I present two central ideas. First, applying the theory of contextual integrity, I argue that the current use of face recognition technology in social networks violates users’ privacy by changing the information that they share (from a simple photo to automatically identifying biometric data) and providing this information to new recipients beyond the users’ control. Second, I identify the deficiencies in the current law and argue that law alone cannot solve this problem. A blanket prohibition on automatic face recognition in social networks would stifle the development of these technologies, which are useful in their own right. But at the same time, our traditional privacy framework of notice and consent cannot protect users who do not understand the automatic face recognition process and recklessly continue sharing their personal information due to strong network effects. Instead, I propose a multifaceted solution aimed at lowering switching costs between social networks and providing users with better information about how their data is used. My argument is that once users are truly free to leave, they will be able to exercise their choice in a meaningful way to demand that social networks respect their privacy expectations.

Though this Article specifically addresses the use of face recognition technology in social networks, the proposed solution can be applied to other privacy problems arising in online platforms that accumulate personal information. More broadly, the undertaking to open up social networks and make them more transparent and interoperable could address the concern that these networks threaten to fragment the Web and lock in our personal information.