
Latest Publications in Asian Bioethics Review

Governance of Medical AI.
IF 1.3 Q3 ETHICS Pub Date: 2024-07-03 eCollection Date: 2024-07-01 DOI: 10.1007/s41649-024-00306-4
Calvin W L Ho, Karel Caals
Cited by: 0
Leonardo D. de Castro, 1952–2024
IF 1.3 Q3 ETHICS Pub Date: 2024-07-02 DOI: 10.1007/s41649-024-00308-2
Alastair V. Campbell
Cited by: 0
How the EU AI Act Seeks to Establish an Epistemic Environment of Trust.
IF 1.3 Q3 ETHICS Pub Date: 2024-06-24 eCollection Date: 2024-07-01 DOI: 10.1007/s41649-024-00304-6
Calvin Wai-Loon Ho, Karel Caals

With focus on the development and use of artificial intelligence (AI) systems in the digital health context, we consider the following questions: How does the European Union (EU) seek to facilitate the development and uptake of trustworthy AI systems through the AI Act? What do trustworthiness and trust mean in the AI Act, and how are they linked to some of the ongoing discussions of these terms in bioethics, law, and philosophy? What are the normative components of trustworthiness? And how do the requirements of the AI Act relate to these components? We first explain how the EU seeks to create an epistemic environment of trust through the AI Act to facilitate the development and uptake of trustworthy AI systems. The legislation establishes a governance regime that operates as a socio-epistemological infrastructure of trust which enables a performative framing of trust and trustworthiness. The degree of success that performative acts of trust and trustworthiness have achieved in realising the legislative goals may then be assessed in terms of statutorily defined proxies of trustworthiness. We show that to be trustworthy, these performative acts should be consistent with the ethical principles endorsed by the legislation; these principles are also manifested in at least four key features of the governance regime. However, specified proxies of trustworthiness are not expected to be adequate for applications of AI systems within a regulatory sandbox or in real-world testing. We explain why different proxies of trustworthiness for these applications may be regarded as 'special' trust domains and why the nature of trust should be understood as participatory.

Cited by: 0
Existing and Emerging Capabilities in the Governance of Medical AI.
IF 1.3 Q3 ETHICS Pub Date: 2024-06-24 eCollection Date: 2024-07-01 DOI: 10.1007/s41649-024-00307-3
Gilberto K K Leung, Yuechan Song, Calvin W L Ho
Cited by: 0
The Environmental Costs of Artificial Intelligence for Healthcare.
IF 1.3 Q3 ETHICS Pub Date: 2024-06-21 eCollection Date: 2024-07-01 DOI: 10.1007/s41649-024-00295-4
Amelia Katirai

Healthcare has emerged as a key setting where expectations are rising for the potential benefits of artificial intelligence (AI), encompassing a range of technologies of varying utility and benefit. This paper argues that, even as the development of AI for healthcare has been pushed forward by a range of public and private actors, insufficient attention has been paid to a key contradiction at the center of AI for healthcare: that its pursuit to improve health is necessarily accompanied by environmental costs which pose risks to human and environmental health, costs which are not necessarily directly borne by those benefiting from the technologies. This perspective paper begins by examining the purported promise of AI in healthcare, contrasting this with the environmental costs which arise across the AI lifecycle, to highlight this contradiction inherent in the pursuit of AI. The advancement of AI, including in healthcare, is often described through deterministic language that presents it as inevitable. Yet, this paper argues that there is a need for recognition of the environmental harm which this pursuit can lead to. Given recent initiatives to incorporate stakeholder involvement into decision-making around AI, the paper closes with a call for an expanded conception of stakeholders in AI for healthcare, to include consideration of those who may be indirectly affected by its development and deployment.

Cited by: 0
Moving beyond Technical Issues to Stakeholder Involvement: Key Areas for Consideration in the Development of Human-Centred and Trusted AI in Healthcare.
IF 1.3 Q3 ETHICS Pub Date: 2024-06-21 eCollection Date: 2024-07-01 DOI: 10.1007/s41649-024-00300-w
Jane Kaye, Nisha Shah, Atsushi Kogetsu, Sarah Coy, Amelia Katirai, Machie Kuroda, Yan Li, Kazuto Kato, Beverley Anne Yamamoto

Discussion around the increasing use of AI in healthcare tends to focus on the technical aspects of the technology rather than the socio-technical issues associated with implementation. In this paper, we argue for the development of a sustained societal dialogue between stakeholders around the use of AI in healthcare. We contend that a more human-centred approach to AI implementation in healthcare is needed which is inclusive of the views of a range of stakeholders. We identify four key areas to support stakeholder involvement that would enhance the development, implementation, and evaluation of AI in healthcare leading to greater levels of trust. These are as follows: (1) aligning AI development practices with social values, (2) appropriate and proportionate involvement of stakeholders, (3) understanding the importance of building trust in AI, (4) embedding stakeholder-driven governance to support these activities.

Cited by: 0
Regulating AI-Based Medical Devices in Saudi Arabia: New Legal Paradigms in an Evolving Global Legal Order.
IF 1.3 Q3 ETHICS Pub Date: 2024-06-21 eCollection Date: 2024-07-01 DOI: 10.1007/s41649-024-00285-6
Barry Solaiman

This paper examines the Saudi Food and Drug Authority's (SFDA) Guidance on Artificial Intelligence (AI) and Machine Learning (ML) technologies based Medical Devices (the MDS-G010). The SFDA has pioneered binding requirements designed for manufacturers to obtain Medical Device Marketing Authorization. The regulation of AI in health is at an early stage worldwide. Therefore, it is critical to examine the scope and nature of the MDS-G010, its influences, and its future directions. It is argued that the guidance is a patchwork of existing international best practices concerning AI regulation, incorporates adapted forms of non-AI-based guidelines, and builds on existing legal requirements in the SFDA's existing regulatory architecture. There is particular congruence with the approaches of the US Food and Drug Administration (FDA) and the International Medical Device Regulators Forum (IMDRF), but the SFDA goes beyond those approaches to incorporate other best practices into its guidance. Additionally, the binding nature of the MDS-G010 is complex. There are binding 'components' within the guidance, but the incorporation of non-binding international best practices which are subordinate to national law results in a lack of clarity about how penalties for non-compliance will operate.

Cited by: 0
Mapping the Apps: Ethical and Legal Issues with Crowdsourced Smartphone Data using mHealth Applications.
IF 1.3 Q3 ETHICS Pub Date: 2024-06-18 eCollection Date: 2024-07-01 DOI: 10.1007/s41649-024-00296-3
Nada Farag, Alycia Noë, Dimitri Patrinos, Ma'n H Zawati

More than 5 billion people in the world own a smartphone. More than half of these have been used to collect and process health-related data. As such, the existing volume of potentially exploitable health data is unprecedentedly large and growing rapidly. Mobile health applications (apps) on smartphones are some of the worst offenders and are increasingly being used for gathering and exchanging significant amounts of personal health data from the public. This data is often utilized for health research purposes and for algorithm training. While there are advantages to utilizing this data for expanding health knowledge, there are associated risks for the users of these apps, such as privacy concerns and the protection of their data. Consequently, gaining a deeper comprehension of how apps collect and crowdsource data is crucial. To explore how apps are crowdsourcing data and to identify potential ethical, legal, and social issues (ELSI), we conducted an examination of the Apple App Store and the Google Play Store in North America and Europe to identify apps that could potentially gather health data through crowdsourcing. Subsequently, we analyzed their privacy policies, terms of use, and other related documentation to gain insights into the utilization of users' data and the possibility of repurposing it for research or algorithm training purposes. More specifically, we reviewed privacy policies to identify clauses pertaining to the following key categories: research, data sharing, privacy/confidentiality, commercialization, and return of findings. Based on the results of this app search, we developed an App Atlas that presents apps which crowdsource data for research or algorithm training. We identified 46 apps available in the European and Canadian markets that either openly crowdsource health data for research or algorithm training or retain the legal or technical capability to do so. This app search showed an overall lack of consistency and transparency in privacy policies that poses challenges to user comprehensibility, trust, and informed consent. A significant proportion of applications presented contradictions or exhibited considerable ambiguity. For instance, the vast majority of privacy policies in the App Atlas contain ambiguous or contradictory language regarding the sharing of users' data with third parties. This raises a number of ethico-legal concerns which will require further academic and policy attention to ensure a balance between protecting individual interests and maximizing the scientific utility of crowdsourced data. This article represents a key first step in better understanding these concerns and bringing attention to this important issue.

Supplementary information: The online version contains supplementary material available at 10.1007/s41649-024-00296-3.
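The clause-review step described in the abstract — screening each privacy policy for language falling into the five key categories — can be illustrated with a toy keyword screen. This is a hypothetical sketch for illustration only, not the authors' actual coding instrument; the category names come from the abstract, while the keyword lists and the `flag_categories` helper are assumptions.

```python
# Toy sketch (not the study's instrument): flag which of the five clause
# categories from the abstract appear in a privacy-policy text, using a
# naive case-insensitive keyword match. Keyword lists are illustrative.
CATEGORIES = {
    "research": ["research", "study", "studies"],
    "data sharing": ["share", "sharing", "third party", "third parties"],
    "privacy/confidentiality": ["privacy", "confidential"],
    "commercialization": ["commercial", "sell", "sale", "advertis"],
    "return of findings": ["return of findings", "results will be returned"],
}

def flag_categories(policy_text: str) -> dict:
    """Return {category: bool} indicating a keyword hit per category."""
    text = policy_text.lower()
    return {cat: any(kw in text for kw in kws) for cat, kws in CATEGORIES.items()}

policy = "We may share your data with third parties for research and advertising."
print(flag_categories(policy))
```

A real review of this kind would need human coding on top of any keyword screen, since (as the abstract notes) many policies are ambiguous or contradictory rather than simply silent on a category.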

Cited by: 0
Artificial Intelligence Needs Data: Challenges Accessing Italian Databases to Train AI
IF 2.9 Q1 Arts and Humanities Pub Date: 2024-06-13 DOI: 10.1007/s41649-024-00282-9
C. Staunton, Roberta Biasiotto, Katharina Tschigg, Deborah Mascalzoni
Cited by: 0
Using Artificial Intelligence in Patient Care—Some Considerations for Doctors and Medical Regulators
IF 2.9 Q1 Arts and Humanities Pub Date: 2024-06-13 DOI: 10.1007/s41649-024-00291-8
Kanny Ooi
Cited by: 0