Mathematics, risk, and messy survey data

IASSIST Quarterly | Pub Date: 2020-12-18 | DOI: 10.29173/iq979
Kristi Thompson, C. Sullivan
{"title":"数学、风险和混乱的调查数据","authors":"Kristi Thompson, C. Sullivan","doi":"10.29173/iq979","DOIUrl":null,"url":null,"abstract":"Research funder mandates, such as those from the U.S. National Science Foundation (2011), the Canadian Tri-Agency (draft, 2018), and the UK Economic and Social Research Council (2018) now often include requirements for data curation, including where possible data sharing in an approved archive. Data curators need to be prepared for the potential that researchers who have not previously shared data will need assistance with cleaning and depositing datasets so that they can meet these requirements and maintain funding. Data de-identification or anonymization is a major ethical concern in cases where survey data is to be shared, and one which data professionals may find themselves ill-equipped to deal with. This article is intended to provide an accessible and practical introduction to the theory and concepts behind data anonymization and risk assessment, will describe a couple of case studies that demonstrate how these methods were carried out on actual datasets requiring anonymization, and discuss some of the difficulties encountered. Much of the literature dealing with statistical risk assessment of anonymized data is abstract and aimed at computer scientists and mathematicians, while material aimed at practitioners often does not consider more recent developments in the theory of data anonymization. We hope that this article will help bridge this gap.","PeriodicalId":84870,"journal":{"name":"IASSIST quarterly","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mathematics, risk, and messy survey data\",\"authors\":\"Kristi Thompson, C. Sullivan\",\"doi\":\"10.29173/iq979\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Research funder mandates, such as those from the U.S. National Science Foundation (2011), the Canadian Tri-Agency (draft, 2018), and the UK Economic and Social Research Council (2018) now often include requirements for data curation, including where possible data sharing in an approved archive. Data curators need to be prepared for the potential that researchers who have not previously shared data will need assistance with cleaning and depositing datasets so that they can meet these requirements and maintain funding. Data de-identification or anonymization is a major ethical concern in cases where survey data is to be shared, and one which data professionals may find themselves ill-equipped to deal with. This article is intended to provide an accessible and practical introduction to the theory and concepts behind data anonymization and risk assessment, will describe a couple of case studies that demonstrate how these methods were carried out on actual datasets requiring anonymization, and discuss some of the difficulties encountered. Much of the literature dealing with statistical risk assessment of anonymized data is abstract and aimed at computer scientists and mathematicians, while material aimed at practitioners often does not consider more recent developments in the theory of data anonymization. 
We hope that this article will help bridge this gap.\",\"PeriodicalId\":84870,\"journal\":{\"name\":\"IASSIST quarterly\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IASSIST quarterly\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.29173/iq979\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IASSIST quarterly","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.29173/iq979","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Research funder mandates, such as those from the U.S. National Science Foundation (2011), the Canadian Tri-Agency (draft, 2018), and the UK Economic and Social Research Council (2018), now often include requirements for data curation, including where possible data sharing in an approved archive. Data curators need to be prepared for the potential that researchers who have not previously shared data will need assistance with cleaning and depositing datasets so that they can meet these requirements and maintain funding. Data de-identification or anonymization is a major ethical concern in cases where survey data is to be shared, and one which data professionals may find themselves ill-equipped to deal with. This article provides an accessible and practical introduction to the theory and concepts behind data anonymization and risk assessment, describes a couple of case studies that demonstrate how these methods were carried out on actual datasets requiring anonymization, and discusses some of the difficulties encountered. Much of the literature dealing with statistical risk assessment of anonymized data is abstract and aimed at computer scientists and mathematicians, while material aimed at practitioners often does not consider more recent developments in the theory of data anonymization. We hope that this article will help bridge this gap.
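The abstract gives no implementation detail, but as a rough, hypothetical sketch of the kind of statistical disclosure-risk check it alludes to, the Python snippet below flags survey records that are rare on a set of quasi-identifiers (a simple k-anonymity-style count). The dataset, column names, and threshold k are invented for illustration and are not taken from the article.

    import pandas as pd

    def flag_risky_records(df, quasi_identifiers, k=5):
        """Return rows whose combination of quasi-identifier values occurs fewer than k times."""
        # Size of each equivalence class, aligned back to the original rows.
        class_size = df.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
        return df[class_size < k]

    # Toy survey data -- entirely invented for this sketch.
    survey = pd.DataFrame({
        "age_group": ["18-24", "18-24", "65+", "65+", "65+"],
        "region":    ["North", "North", "South", "South", "North"],
        "income":    ["low", "low", "high", "high", "low"],
    })

    # With k=2, the single respondent who is 65+ in the North is flagged as a
    # potential re-identification risk on the chosen quasi-identifiers.
    print(flag_risky_records(survey, ["age_group", "region"], k=2))

A real risk assessment would go well beyond such simple counts (for example, weighing sample versus population uniqueness), which is part of why the topic is harder than practitioner guides sometimes suggest.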