The Use of AI in Medicine: Health Data, Privacy Risks and More

Boris Edidin, Alexey Bunkov, Ksenia Kochetkova
Journal: Legal Issues in the Digital Age
DOI: 10.17323/2713-2749.2024.2.57.79
Published: 2024-07-20 (Journal Article)
Citations: 0

Abstract

In the era of advancements in artificial intelligence (AI) and machine learning, the healthcare industry has become one of the major areas where such technologies are being actively adopted and utilized. The global healthcare sector generated more than 2.3 zettabytes of data worldwide in 2020, and analysts estimate that the global market for AI in medicine will grow to $13 billion by 2025, with a significant increase in newly established companies. AI in medicine is used to predict, detect and diagnose various diseases and pathologies; the data can come from a variety of medical examinations (EEG, X-ray images, laboratory tests such as tissue analyses, etc.). At the same time, there are understandable concerns that AI will undermine the patient-provider relationship, contribute to the deskilling of providers, reduce transparency, misdiagnose or treat inappropriately because of hard-to-detect errors in AI decision-making, exacerbate existing racial or societal biases, or introduce algorithmic bias of its own that is likewise hard to detect. The study relies on traditional research methods, both general and special, with an emphasis on the comparative legal method. For AI to work, it must be trained, and it learns from whatever information it is given. Most of the information on which medical AI is trained is health data, which constitutes sensitive personal data. That personal data is classified as sensitive indicates the significance of the information it contains and the high risks if it is leaked, and hence the need for stricter control and regulation. The article offers a detailed exploration of the legal implications of AI in medicine, highlighting existing challenges and the current state of regulation, and proposes future perspectives and recommendations for legislation adapted to the era of medical AI.
Given the above, the study is divided into three parts: the international framework, focusing primarily on applicable WHO documents; risks and possible ways to minimize them, where the authors consider various issues related to the use of AI in medicine and options to address them; and a relevant case study.