Machine learning models in trusted research environments -- understanding operational risks

International Journal of Population Data Science · IF 1.6 · Q3 (Health Care Sciences & Services)
Publication date: 2023-12-14 · DOI: 10.23889/ijpds.v8i1.2165
F. Ritchie, Amy Tilbrook, Christian Cole, Emily Jefferson, Susan Krueger, Esma Mansouri-Bensassi, Simon Rogers, Jim Q. Smith
{"title":"可信研究环境中的机器学习模型 -- 了解操作风险","authors":"F. Ritchie, Amy Tilbrook, Christian Cole, Emily Jefferson, Susan Krueger, Esma Mansouri-Bensassi, Simon Rogers, Jim Q. Smith","doi":"10.23889/ijpds.v8i1.2165","DOIUrl":null,"url":null,"abstract":"IntroductionTrusted research environments (TREs) provide secure access to very sensitive data for research. All TREs operate manual checks on outputs to ensure there is no residual disclosure risk. Machine learning (ML) models require very large amount of data; if this data is personal, the TRE is a well-established data management solution. However, ML models present novel disclosure risks, in both type and scale.\nObjectivesAs part of a series on ML disclosure risk in TREs, this article is intended to introduce TRE managers to the conceptual problems and work being done to address them.\nMethodsWe demonstrate how ML models present a qualitatively different type of disclosure risk, compared to traditional statistical outputs. These arise from both the nature and the scale of ML modelling.\nResultsWe show that there are a large number of unresolved issues, although there is progress in many areas. We show where areas of uncertainty remain, as well as remedial responses available to TREs.\nConclusionsAt this stage, disclosure checking of ML models is very much a specialist activity. However, TRE managers need a basic awareness of the potential risk in ML models to enable them to make sensible decisions on using TREs for ML model development.","PeriodicalId":36483,"journal":{"name":"International Journal of Population Data Science","volume":"261 1","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Machine learning models in trusted research environments -- understanding operational risks\",\"authors\":\"F. Ritchie, Amy Tilbrook, Christian Cole, Emily Jefferson, Susan Krueger, Esma Mansouri-Bensassi, Simon Rogers, Jim Q. Smith\",\"doi\":\"10.23889/ijpds.v8i1.2165\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"IntroductionTrusted research environments (TREs) provide secure access to very sensitive data for research. All TREs operate manual checks on outputs to ensure there is no residual disclosure risk. Machine learning (ML) models require very large amount of data; if this data is personal, the TRE is a well-established data management solution. However, ML models present novel disclosure risks, in both type and scale.\\nObjectivesAs part of a series on ML disclosure risk in TREs, this article is intended to introduce TRE managers to the conceptual problems and work being done to address them.\\nMethodsWe demonstrate how ML models present a qualitatively different type of disclosure risk, compared to traditional statistical outputs. These arise from both the nature and the scale of ML modelling.\\nResultsWe show that there are a large number of unresolved issues, although there is progress in many areas. We show where areas of uncertainty remain, as well as remedial responses available to TREs.\\nConclusionsAt this stage, disclosure checking of ML models is very much a specialist activity. 
However, TRE managers need a basic awareness of the potential risk in ML models to enable them to make sensible decisions on using TREs for ML model development.\",\"PeriodicalId\":36483,\"journal\":{\"name\":\"International Journal of Population Data Science\",\"volume\":\"261 1\",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2023-12-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Population Data Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23889/ijpds.v8i1.2165\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Population Data Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23889/ijpds.v8i1.2165","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Introduction: Trusted research environments (TREs) provide secure access to very sensitive data for research. All TREs operate manual checks on outputs to ensure there is no residual disclosure risk. Machine learning (ML) models require very large amounts of data; if these data are personal, the TRE is a well-established data management solution. However, ML models present novel disclosure risks, in both type and scale.

Objectives: As part of a series on ML disclosure risk in TREs, this article introduces TRE managers to the conceptual problems and the work being done to address them.

Methods: We demonstrate how ML models present a qualitatively different type of disclosure risk compared to traditional statistical outputs. These risks arise from both the nature and the scale of ML modelling.

Results: We show that a large number of issues remain unresolved, although there is progress in many areas. We identify where areas of uncertainty remain, as well as the remedial responses available to TREs.

Conclusions: At this stage, disclosure checking of ML models is very much a specialist activity. However, TRE managers need a basic awareness of the potential risks in ML models so that they can make sensible decisions about using TREs for ML model development.
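The abstract's claim that a trained ML model is a qualitatively different kind of output is easiest to see through membership inference, a canonical attack on trained models: the fitted model itself can reveal whether a specific individual's record was in the training data. The sketch below is a minimal, hypothetical illustration of that idea, not a method from the paper; the synthetic data, the over-fitted random forest, and the simple confidence-threshold attack are all assumptions chosen for demonstration.

```python
# Hypothetical sketch of a loss/confidence-based membership inference test.
# It illustrates, on synthetic data, why releasing a trained model from a
# TRE can leak information about individual training records -- a risk that
# traditional tabular outputs do not carry in the same way.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensitive TRE data (purely illustrative).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# An over-fitted model, as a worst case an output checker might be handed.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_member, y_member)

def confidence_on_true_label(model, X, y):
    """Model's predicted probability for each record's true class."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_in = confidence_on_true_label(model, X_member, y_member)
conf_out = confidence_on_true_label(model, X_nonmember, y_nonmember)

# A gap between the two distributions means an attacker holding a candidate
# record can guess whether it was used in training: here, a naive threshold
# attack on the confidence score.
threshold = 0.5 * (conf_in.mean() + conf_out.mean())
attack_acc = 0.5 * ((conf_in > threshold).mean()
                    + (conf_out <= threshold).mean())
print(f"mean confidence on members:     {conf_in.mean():.3f}")
print(f"mean confidence on non-members: {conf_out.mean():.3f}")
print(f"threshold-attack accuracy:      {attack_acc:.3f}  (0.5 = no leakage)")
```

The point for a TRE output checker is that the disclosive artefact here is the model file itself, not any printed statistic: the leakage only appears when the released model is queried, which is why manual checks designed for tables and regression output do not transfer directly to ML models.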
Source journal: International Journal of Population Data Science
CiteScore: 2.50 · Self-citation rate: 0.00% · Publication volume: 386 · Review time: 20 weeks