Privacy Vulnerabilities of Wearable Activity Monitors: Threat and Potential Defence

Mohammad Al-Saad, Madeleine Lucas, Lakshmish Ramaswamy
{"title":"可穿戴活动监视器的隐私漏洞:威胁和潜在防御","authors":"Mohammad Al-Saad, Madeleine Lucas, Lakshmish Ramaswamy","doi":"10.1109/CIC52973.2021.00022","DOIUrl":null,"url":null,"abstract":"Nowadays, large companies including Fitbit, Garmin, and Apple provide consumers with highly accurate and real-time activity trackers. An individual can simply wear a watch or handheld IoT device to automatically detect and track any movement throughout their day. Using sensor data obtained from Arizona State's Kinesiology department, this study presents the privacy concerns that activity-tracker devices pose due to the extensive amount of user data they obtain. We input unidentified user sensor data from six different recorded activities to an LSTM to show how accurately the model can match the data to the individual who completed it. We show that for three out of the six activities, the model can accurately match 88-92% of the timestep samples to the correct subject that performed them and 60-70% for the remaining three activities studied. Additionally, we present a voting based mechanism that improves the accuracy of sensor data classification to an average of 93%. Replacing the data of the participants with fake data can potentially enhance the privacy and anonymize the identities of those participants. One promising way to generate fake data with high quality data is to use generative adversarial networks (GANs). GANs have gained attention in the research community due to its ability to learn rich data distribution from samples and its outstanding experimental performance as a generative model. However, applying GANs by itself on sensitive data could raise a privacy concern since the density of the learned generative distribution could concentrate on the training data points. This means that GANs can easily remember training samples due to the high model complexity of deep networks. To mitigate the privacy risks, we combine ideas from the literature to implement a differentially private GAN model (HDP-GAN) that is capable of generating private synthetic streaming data before residing at its final destination in the tracker's company cloud. Two experiments were conducted to show that HDP-GAN can have promising results in protecting the individuals who performed the activities.","PeriodicalId":170121,"journal":{"name":"2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Privacy Vulnerabilities of Wearable Activity Monitors: Threat and Potential Defence\",\"authors\":\"Mohammad Al-Saad, Madeleine Lucas, Lakshmish Ramaswamy\",\"doi\":\"10.1109/CIC52973.2021.00022\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Nowadays, large companies including Fitbit, Garmin, and Apple provide consumers with highly accurate and real-time activity trackers. An individual can simply wear a watch or handheld IoT device to automatically detect and track any movement throughout their day. Using sensor data obtained from Arizona State's Kinesiology department, this study presents the privacy concerns that activity-tracker devices pose due to the extensive amount of user data they obtain. We input unidentified user sensor data from six different recorded activities to an LSTM to show how accurately the model can match the data to the individual who completed it. 
We show that for three out of the six activities, the model can accurately match 88-92% of the timestep samples to the correct subject that performed them and 60-70% for the remaining three activities studied. Additionally, we present a voting based mechanism that improves the accuracy of sensor data classification to an average of 93%. Replacing the data of the participants with fake data can potentially enhance the privacy and anonymize the identities of those participants. One promising way to generate fake data with high quality data is to use generative adversarial networks (GANs). GANs have gained attention in the research community due to its ability to learn rich data distribution from samples and its outstanding experimental performance as a generative model. However, applying GANs by itself on sensitive data could raise a privacy concern since the density of the learned generative distribution could concentrate on the training data points. This means that GANs can easily remember training samples due to the high model complexity of deep networks. To mitigate the privacy risks, we combine ideas from the literature to implement a differentially private GAN model (HDP-GAN) that is capable of generating private synthetic streaming data before residing at its final destination in the tracker's company cloud. Two experiments were conducted to show that HDP-GAN can have promising results in protecting the individuals who performed the activities.\",\"PeriodicalId\":170121,\"journal\":{\"name\":\"2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC)\",\"volume\":\"85 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIC52973.2021.00022\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIC52973.2021.00022","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Nowadays, large companies including Fitbit, Garmin, and Apple provide consumers with highly accurate, real-time activity trackers. An individual can simply wear a watch or handheld IoT device to automatically detect and track any movement throughout their day. Using sensor data obtained from Arizona State's Kinesiology department, this study presents the privacy concerns that activity-tracker devices pose due to the extensive amount of user data they collect. We input unidentified user sensor data from six different recorded activities into an LSTM to show how accurately the model can match the data to the individual who completed it. We show that for three of the six activities, the model can correctly match 88-92% of the timestep samples to the subject who performed them, and 60-70% for the remaining three activities. Additionally, we present a voting-based mechanism that improves the accuracy of sensor-data classification to an average of 93%. Replacing participants' data with fake data can potentially enhance privacy and anonymize the participants' identities. One promising way to generate high-quality fake data is to use generative adversarial networks (GANs). GANs have gained attention in the research community due to their ability to learn rich data distributions from samples and their outstanding experimental performance as generative models. However, applying GANs by themselves to sensitive data can raise a privacy concern, since the density of the learned generative distribution may concentrate on the training data points; because of the high model complexity of deep networks, GANs can easily memorize training samples. To mitigate these privacy risks, we combine ideas from the literature to implement a differentially private GAN model (HDP-GAN) that can generate private synthetic streaming data before the data reaches its final destination in the tracker company's cloud. Two experiments were conducted, showing that HDP-GAN achieves promising results in protecting the individuals who performed the activities.
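
The paper itself does not include code; the following is a minimal, hypothetical PyTorch sketch of the re-identification threat the abstract describes: an LSTM that maps fixed-length windows of wearable sensor readings to subject identities, followed by a simple plurality vote over per-window predictions, in the spirit of the voting-based mechanism mentioned above. The class names, window sizes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the identification attack: LSTM per-window
# subject classification plus majority voting over a recording.
import torch
import torch.nn as nn

class SubjectIdentifier(nn.Module):
    def __init__(self, n_channels: int, n_subjects: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_subjects)

    def forward(self, x):                        # x: (batch, timesteps, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # subject logits from the last step

def majority_vote(logits: torch.Tensor) -> int:
    """Collapse per-window predictions for one recording into a single
    subject label by plurality voting."""
    preds = logits.argmax(dim=1)                 # (n_windows,)
    return torch.mode(preds).values.item()

# Illustrative usage with random numbers standing in for accelerometer windows.
model = SubjectIdentifier(n_channels=6, n_subjects=10)
windows = torch.randn(32, 50, 6)                 # 32 windows, 50 timesteps, 6 channels
with torch.no_grad():
    print("voted subject id:", majority_vote(model(windows)))
```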
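
Likewise, the abstract does not specify how HDP-GAN enforces differential privacy. The sketch below shows the generic DP-SGD-style discriminator update used by many differentially private GANs, not the authors' exact model: per-example gradients are clipped to a fixed L2 norm and Gaussian noise is added before the optimizer step, so only the discriminator (the component that touches real sensor data) sees raw samples. The clip norm, noise multiplier, and network shapes are illustrative assumptions; a real deployment would calibrate them to a target (epsilon, delta) with a privacy accountant.

```python
# Hypothetical sketch of one differentially private discriminator update
# (DP-SGD style): clip each per-example gradient, sum, add Gaussian noise.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim: int = 16, out_dim: int = 6):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, in_dim: int = 6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def dp_discriminator_step(disc, gen, opt, real_batch,
                          clip_norm: float = 1.0, noise_mult: float = 1.1):
    criterion = nn.BCEWithLogitsLoss()
    params = [p for p in disc.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]

    for x_real in real_batch:                    # microbatches of size 1
        x_real = x_real.unsqueeze(0)
        x_fake = gen(torch.randn(1, gen.latent_dim)).detach()
        loss = (criterion(disc(x_real), torch.ones(1, 1)) +
                criterion(disc(x_fake), torch.zeros(1, 1)))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (norm.item() + 1e-6))   # clip to clip_norm
        for acc, g in zip(grad_sum, grads):
            acc.add_(g, alpha=scale)

    opt.zero_grad()
    n = real_batch.shape[0]
    for p, acc in zip(params, grad_sum):
        noise = torch.randn_like(acc) * noise_mult * clip_norm
        p.grad = (acc + noise) / n               # noisy averaged gradient
    opt.step()

# Illustrative usage with random vectors standing in for sensor features.
gen, disc = Generator(), Discriminator()
opt = torch.optim.SGD(disc.parameters(), lr=0.05)
dp_discriminator_step(disc, gen, opt, torch.randn(8, 6))
```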