以更广泛的方法应对数字心理健康领域的伦理挑战

IF 73.3 1区 医学 Q1 Medicine World Psychiatry Pub Date : 2024-09-16 DOI:10.1002/wps.21237
Nicole Martinez-Martin
{"title":"以更广泛的方法应对数字心理健康领域的伦理挑战","authors":"Nicole Martinez-Martin","doi":"10.1002/wps.21237","DOIUrl":null,"url":null,"abstract":"<p>Galderisi et al<span><sup>1</sup></span> provide an insightful overview of current ethical challenges in psychiatry, including those presented by digital psychiatry, as well as recommendations for addressing these challenges. As they discuss, “digital psychiatry” encompasses an array of different digital tools, including mental health apps, chatbots, telehealth platforms, and artificial intelligence (AI). These tools hold promise for improving diagnosis and care, and could facilitate access to mental health services by marginalized populations. In particular, digital mental health tools can assist in expanding mental health support in lower-to-middle income countries.</p>\n<p>Many of the ethical challenges identified by the authors in the use of digital tools reflect inequities and challenges within broader society. For example, in the US, lack of mental health insurance and insufficient representation of racialized minorities in medical research contribute to the difficulties with access and fairness in digital psychiatry. In many ways, the ethical challenges presented by digital psychiatry reflect long-standing concerns about who benefits, and who does not, from psychiatry. The array of forward-looking recommendations advanced by Galderisi et al show that these ethical challenges can also be seen as opportunities for moving towards greater equity and inclusion in psychiatry.</p>\n<p>Discussions of the ethics of digital health benefit from broadening the scope of issues to include social context. Galderisi et al refer to inequities in how mental health care is researched, developed and accessed, and to historical power imbalances in psychiatry due to which patient voices are undervalued and overlooked. A broader approach to ethical challenges related to digital health technologies recognizes that issues affecting these technologies often emerge due to their interactions with the social institutions in which they are developed and applied<span><sup>2</sup></span>. For example, privacy and safety of digital psychiatry tools must be understood within the context of the specific regulatory environment and infrastructure (e.g., broadband, hardware) in which they are being used.</p>\n<p>Digital health tools and medical AI are often promoted for improving cost-effectiveness, but this business-oriented emphasis can obscure discussion of what trade-offs in costs are considered acceptable, such as whether lesser-quality services are deemed acceptable for low-income groups. Institutions that regulate medical devices often struggle when they have to deal with softwares or AI. Consumers and patients too often find it difficult to obtain information that can help them decide which digital psychiatry tools are appropriate and effective for their needs.</p>\n<p>There have been pioneering efforts to assist with evaluating effective digital mental health tools, such as American Psychiatric Association's mental health app evaluator<span><sup>3</sup></span>. However, new models for evaluation which are responsive to the ways in which clinicians and patients realistically engage with mental health care tools are still needed. For example, some of the measures that regulators or insurance companies use to evaluate and approve digital mental health tools may not capture the aspects of a tool that, from a consumer or patient perspective, offer meaningful improvements to their lives. There has also been growing recognition that meaningful evaluation of the effectiveness of digital health tools needs to look beyond the tool itself in order to evaluate the tool's effectiveness as it is used within a particular system<span><sup>4</sup></span>. More engagement of diverse communities and those with lived experience during the development of digital psychiatry tools is imperative for improving these tools.</p>\n<p>Unfortunately, the hype around digital mental health often goes hand-in-hand with rapid adoption of unproven technologies. For example, large language models (LLMs) and generative AI are being quickly taken up within health care, including psychiatry<span><sup>5</sup></span>. These digital tools are embraced as cost-effective time-savers before there is sufficient opportunity to determine the extent to which they are in fact ready for the purposes for which they are being used<span><sup>6</sup></span>. Potential problems with generative AI in health care continue to emerge, from the potential discriminatory biases in information, to the potential collection and disclosure of personal data<span><sup>7</sup></span>. There is a need to exercise more caution in the adoption of new digital tools in psychiatry, in order to give time for evaluation and guidance for specific purposes.</p>\n<p>Privacy continues to pose significant concerns for digital psychiatry. Digital mental health tools often gather information that psychiatrists and patients are not aware of, such as location data, which may seem insignificant, but can allow for behavioral analyses that infer sensitive or predictive information regarding users<span><sup>8</sup></span>. In today's data landscape, brokerage of personal data can generate billions of dollars. These data practices have repercussions on patients that they may not be able to anticipate. Even de-identified data can increasingly be re-identified, and user profiles that are compiled from such data can be utilized to target people for fraudulent marketing schemes, or lead to downstream implications for employment or educational opportunities. Furthermore, in countries such as the US, where mental health care may be unaffordable for many individuals, people may effectively be put in the position of trading data for health care.</p>\n<p>Because of fairness and bias issues, there are also real questions on how much digital and AI tools actually work for different populations. One common source of bias is that the data that are used to train and develop digital tools may be insufficiently representative of the target population, such as participants of diverse race and gender or with disability<span><sup>9</sup></span>. The potential for bias goes beyond the question of algorithmic bias, as tools may be simply designed in ways that do not work effectively for different populations, or the use of those tools in specific contexts may lead to unfair outcomes. Addressing fairness will require ensuring that researchers and clinicians from diverse backgrounds are included in the development and design of digital psychiatry tools.</p>\n<p>As Galderisi et al note, the discipline and tools of psychiatry have a long history of being used for social control, such as in the criminal justice and educational systems. The tools of digital psychiatry may be applied to put vulnerable and minoritized groups at particular risk of punitive interventions from government institutions. It is, therefore, important that members of the psychiatric profession put considered effort into anticipating and addressing the social and legal implications of the use of digital psychiatry tools in other domains of society.</p>\n<p>Development of digital psychiatry tools requires identifying specific ethical challenges, but also taking the time to reflect and envision the system and world that these tools will help create. Galderisi et al set out a number of action items that, taken together, envision a more equitable and inclusive future for psychiatry. This is an important moment to take these opportunities for building new frameworks and systems for psychiatry, in which digital tools can be used to support human empathy and creativity, allowing mental well-being to flourish.</p>","PeriodicalId":23858,"journal":{"name":"World Psychiatry","volume":null,"pages":null},"PeriodicalIF":73.3000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A broader approach to ethical challenges in digital mental health\",\"authors\":\"Nicole Martinez-Martin\",\"doi\":\"10.1002/wps.21237\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Galderisi et al<span><sup>1</sup></span> provide an insightful overview of current ethical challenges in psychiatry, including those presented by digital psychiatry, as well as recommendations for addressing these challenges. As they discuss, “digital psychiatry” encompasses an array of different digital tools, including mental health apps, chatbots, telehealth platforms, and artificial intelligence (AI). These tools hold promise for improving diagnosis and care, and could facilitate access to mental health services by marginalized populations. In particular, digital mental health tools can assist in expanding mental health support in lower-to-middle income countries.</p>\\n<p>Many of the ethical challenges identified by the authors in the use of digital tools reflect inequities and challenges within broader society. For example, in the US, lack of mental health insurance and insufficient representation of racialized minorities in medical research contribute to the difficulties with access and fairness in digital psychiatry. In many ways, the ethical challenges presented by digital psychiatry reflect long-standing concerns about who benefits, and who does not, from psychiatry. The array of forward-looking recommendations advanced by Galderisi et al show that these ethical challenges can also be seen as opportunities for moving towards greater equity and inclusion in psychiatry.</p>\\n<p>Discussions of the ethics of digital health benefit from broadening the scope of issues to include social context. Galderisi et al refer to inequities in how mental health care is researched, developed and accessed, and to historical power imbalances in psychiatry due to which patient voices are undervalued and overlooked. A broader approach to ethical challenges related to digital health technologies recognizes that issues affecting these technologies often emerge due to their interactions with the social institutions in which they are developed and applied<span><sup>2</sup></span>. For example, privacy and safety of digital psychiatry tools must be understood within the context of the specific regulatory environment and infrastructure (e.g., broadband, hardware) in which they are being used.</p>\\n<p>Digital health tools and medical AI are often promoted for improving cost-effectiveness, but this business-oriented emphasis can obscure discussion of what trade-offs in costs are considered acceptable, such as whether lesser-quality services are deemed acceptable for low-income groups. Institutions that regulate medical devices often struggle when they have to deal with softwares or AI. Consumers and patients too often find it difficult to obtain information that can help them decide which digital psychiatry tools are appropriate and effective for their needs.</p>\\n<p>There have been pioneering efforts to assist with evaluating effective digital mental health tools, such as American Psychiatric Association's mental health app evaluator<span><sup>3</sup></span>. However, new models for evaluation which are responsive to the ways in which clinicians and patients realistically engage with mental health care tools are still needed. For example, some of the measures that regulators or insurance companies use to evaluate and approve digital mental health tools may not capture the aspects of a tool that, from a consumer or patient perspective, offer meaningful improvements to their lives. There has also been growing recognition that meaningful evaluation of the effectiveness of digital health tools needs to look beyond the tool itself in order to evaluate the tool's effectiveness as it is used within a particular system<span><sup>4</sup></span>. More engagement of diverse communities and those with lived experience during the development of digital psychiatry tools is imperative for improving these tools.</p>\\n<p>Unfortunately, the hype around digital mental health often goes hand-in-hand with rapid adoption of unproven technologies. For example, large language models (LLMs) and generative AI are being quickly taken up within health care, including psychiatry<span><sup>5</sup></span>. These digital tools are embraced as cost-effective time-savers before there is sufficient opportunity to determine the extent to which they are in fact ready for the purposes for which they are being used<span><sup>6</sup></span>. Potential problems with generative AI in health care continue to emerge, from the potential discriminatory biases in information, to the potential collection and disclosure of personal data<span><sup>7</sup></span>. There is a need to exercise more caution in the adoption of new digital tools in psychiatry, in order to give time for evaluation and guidance for specific purposes.</p>\\n<p>Privacy continues to pose significant concerns for digital psychiatry. Digital mental health tools often gather information that psychiatrists and patients are not aware of, such as location data, which may seem insignificant, but can allow for behavioral analyses that infer sensitive or predictive information regarding users<span><sup>8</sup></span>. In today's data landscape, brokerage of personal data can generate billions of dollars. These data practices have repercussions on patients that they may not be able to anticipate. Even de-identified data can increasingly be re-identified, and user profiles that are compiled from such data can be utilized to target people for fraudulent marketing schemes, or lead to downstream implications for employment or educational opportunities. Furthermore, in countries such as the US, where mental health care may be unaffordable for many individuals, people may effectively be put in the position of trading data for health care.</p>\\n<p>Because of fairness and bias issues, there are also real questions on how much digital and AI tools actually work for different populations. One common source of bias is that the data that are used to train and develop digital tools may be insufficiently representative of the target population, such as participants of diverse race and gender or with disability<span><sup>9</sup></span>. The potential for bias goes beyond the question of algorithmic bias, as tools may be simply designed in ways that do not work effectively for different populations, or the use of those tools in specific contexts may lead to unfair outcomes. Addressing fairness will require ensuring that researchers and clinicians from diverse backgrounds are included in the development and design of digital psychiatry tools.</p>\\n<p>As Galderisi et al note, the discipline and tools of psychiatry have a long history of being used for social control, such as in the criminal justice and educational systems. The tools of digital psychiatry may be applied to put vulnerable and minoritized groups at particular risk of punitive interventions from government institutions. It is, therefore, important that members of the psychiatric profession put considered effort into anticipating and addressing the social and legal implications of the use of digital psychiatry tools in other domains of society.</p>\\n<p>Development of digital psychiatry tools requires identifying specific ethical challenges, but also taking the time to reflect and envision the system and world that these tools will help create. Galderisi et al set out a number of action items that, taken together, envision a more equitable and inclusive future for psychiatry. This is an important moment to take these opportunities for building new frameworks and systems for psychiatry, in which digital tools can be used to support human empathy and creativity, allowing mental well-being to flourish.</p>\",\"PeriodicalId\":23858,\"journal\":{\"name\":\"World Psychiatry\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":73.3000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"World Psychiatry\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1002/wps.21237\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Medicine\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Psychiatry","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/wps.21237","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Medicine","Score":null,"Total":0}
引用次数: 0

摘要

这些数据行为对患者造成的影响可能是他们无法预料的。即使是去标识化的数据也会越来越多地被重新标识,而根据这些数据编制的用户档案可能会被用于针对目标人群的欺诈性营销计划,或导致对就业或教育机会的下游影响。此外,在美国等国家,许多人可能负担不起心理健康护理,人们实际上可能处于用数据换取健康护理的境地。由于公平性和偏见问题,数字和人工智能工具对不同人群的实际作用有多大也是个现实问题。一个常见的偏见来源是,用于训练和开发数字工具的数据可能不足以代表目标人群,例如不同种族、性别或残疾的参与者9。出现偏见的可能性不仅仅是算法偏见的问题,因为工具的设计方式可能根本无法对不同人群有效发挥作用,或者在特定情况下使用这些工具可能导致不公平的结果。要解决公平问题,就必须确保来自不同背景的研究人员和临床医生都能参与数字精神病学工具的开发和设计。正如加尔德里西等人所指出的,精神病学的学科和工具长期以来一直被用于社会控制,比如在刑事司法和教育系统中。数字精神病学工具的应用可能会使弱势和少数群体特别容易受到政府机构的惩罚性干预。因此,重要的是,精神病学专业的成员要深思熟虑,努力预测并解决在社会其他领域使用数字精神病学工具所带来的社会和法律影响。开发数字精神病学工具需要确定具体的伦理挑战,同时也要花时间反思和设想这些工具将帮助创建的系统和世界。加尔德里西等人提出了一系列行动项目,这些项目合在一起,为精神病学设想了一个更加公平和包容的未来。这是一个重要的时刻,我们应该抓住这些机会,为精神病学建立新的框架和系统,利用数字工具来支持人类的同理心和创造力,让心理健康蓬勃发展。
本文章由计算机程序翻译,如有差异,请以英文原文为准。
查看原文
分享 分享
微信好友 朋友圈 QQ好友 复制链接
本刊更多论文
A broader approach to ethical challenges in digital mental health

Galderisi et al1 provide an insightful overview of current ethical challenges in psychiatry, including those presented by digital psychiatry, as well as recommendations for addressing these challenges. As they discuss, “digital psychiatry” encompasses an array of different digital tools, including mental health apps, chatbots, telehealth platforms, and artificial intelligence (AI). These tools hold promise for improving diagnosis and care, and could facilitate access to mental health services by marginalized populations. In particular, digital mental health tools can assist in expanding mental health support in lower-to-middle income countries.

Many of the ethical challenges identified by the authors in the use of digital tools reflect inequities and challenges within broader society. For example, in the US, lack of mental health insurance and insufficient representation of racialized minorities in medical research contribute to the difficulties with access and fairness in digital psychiatry. In many ways, the ethical challenges presented by digital psychiatry reflect long-standing concerns about who benefits, and who does not, from psychiatry. The array of forward-looking recommendations advanced by Galderisi et al show that these ethical challenges can also be seen as opportunities for moving towards greater equity and inclusion in psychiatry.

Discussions of the ethics of digital health benefit from broadening the scope of issues to include social context. Galderisi et al refer to inequities in how mental health care is researched, developed and accessed, and to historical power imbalances in psychiatry due to which patient voices are undervalued and overlooked. A broader approach to ethical challenges related to digital health technologies recognizes that issues affecting these technologies often emerge due to their interactions with the social institutions in which they are developed and applied2. For example, privacy and safety of digital psychiatry tools must be understood within the context of the specific regulatory environment and infrastructure (e.g., broadband, hardware) in which they are being used.

Digital health tools and medical AI are often promoted for improving cost-effectiveness, but this business-oriented emphasis can obscure discussion of what trade-offs in costs are considered acceptable, such as whether lesser-quality services are deemed acceptable for low-income groups. Institutions that regulate medical devices often struggle when they have to deal with softwares or AI. Consumers and patients too often find it difficult to obtain information that can help them decide which digital psychiatry tools are appropriate and effective for their needs.

There have been pioneering efforts to assist with evaluating effective digital mental health tools, such as American Psychiatric Association's mental health app evaluator3. However, new models for evaluation which are responsive to the ways in which clinicians and patients realistically engage with mental health care tools are still needed. For example, some of the measures that regulators or insurance companies use to evaluate and approve digital mental health tools may not capture the aspects of a tool that, from a consumer or patient perspective, offer meaningful improvements to their lives. There has also been growing recognition that meaningful evaluation of the effectiveness of digital health tools needs to look beyond the tool itself in order to evaluate the tool's effectiveness as it is used within a particular system4. More engagement of diverse communities and those with lived experience during the development of digital psychiatry tools is imperative for improving these tools.

Unfortunately, the hype around digital mental health often goes hand-in-hand with rapid adoption of unproven technologies. For example, large language models (LLMs) and generative AI are being quickly taken up within health care, including psychiatry5. These digital tools are embraced as cost-effective time-savers before there is sufficient opportunity to determine the extent to which they are in fact ready for the purposes for which they are being used6. Potential problems with generative AI in health care continue to emerge, from the potential discriminatory biases in information, to the potential collection and disclosure of personal data7. There is a need to exercise more caution in the adoption of new digital tools in psychiatry, in order to give time for evaluation and guidance for specific purposes.

Privacy continues to pose significant concerns for digital psychiatry. Digital mental health tools often gather information that psychiatrists and patients are not aware of, such as location data, which may seem insignificant, but can allow for behavioral analyses that infer sensitive or predictive information regarding users8. In today's data landscape, brokerage of personal data can generate billions of dollars. These data practices have repercussions on patients that they may not be able to anticipate. Even de-identified data can increasingly be re-identified, and user profiles that are compiled from such data can be utilized to target people for fraudulent marketing schemes, or lead to downstream implications for employment or educational opportunities. Furthermore, in countries such as the US, where mental health care may be unaffordable for many individuals, people may effectively be put in the position of trading data for health care.

Because of fairness and bias issues, there are also real questions on how much digital and AI tools actually work for different populations. One common source of bias is that the data that are used to train and develop digital tools may be insufficiently representative of the target population, such as participants of diverse race and gender or with disability9. The potential for bias goes beyond the question of algorithmic bias, as tools may be simply designed in ways that do not work effectively for different populations, or the use of those tools in specific contexts may lead to unfair outcomes. Addressing fairness will require ensuring that researchers and clinicians from diverse backgrounds are included in the development and design of digital psychiatry tools.

As Galderisi et al note, the discipline and tools of psychiatry have a long history of being used for social control, such as in the criminal justice and educational systems. The tools of digital psychiatry may be applied to put vulnerable and minoritized groups at particular risk of punitive interventions from government institutions. It is, therefore, important that members of the psychiatric profession put considered effort into anticipating and addressing the social and legal implications of the use of digital psychiatry tools in other domains of society.

Development of digital psychiatry tools requires identifying specific ethical challenges, but also taking the time to reflect and envision the system and world that these tools will help create. Galderisi et al set out a number of action items that, taken together, envision a more equitable and inclusive future for psychiatry. This is an important moment to take these opportunities for building new frameworks and systems for psychiatry, in which digital tools can be used to support human empathy and creativity, allowing mental well-being to flourish.

求助全文
通过发布文献求助,成功后即可免费获取论文全文。 去求助
来源期刊
World Psychiatry
World Psychiatry Nursing-Psychiatric Mental Health
CiteScore
64.10
自引率
7.40%
发文量
124
期刊介绍: World Psychiatry is the official journal of the World Psychiatric Association. It aims to disseminate information on significant clinical, service, and research developments in the mental health field. World Psychiatry is published three times per year and is sent free of charge to psychiatrists.The recipient psychiatrists' names and addresses are provided by WPA member societies and sections.The language used in the journal is designed to be understandable by the majority of mental health professionals worldwide.
期刊最新文献
The contribution of the WPA to the development of the ICD-11 CDDR. A report from the WPA Working Group on Providing Mental Health Care for Migrants and Refugees. Global launch of the ICD-11 Clinical Descriptions and Diagnostic Requirements (CDDR). Addictive disorders through the lens of the WPA Section on Addiction Psychiatry. Physician-assisted dying in people with mental health conditions - whose choice?
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
现在去查看 取消
×
提示
确定
0
微信
客服QQ
Book学术公众号 扫码关注我们
反馈
×
意见反馈
请填写您的意见或建议
请填写您的手机或邮箱
已复制链接
已复制链接
快去分享给好友吧!
我知道了
×
扫码分享
扫码分享
Book学术官方微信
Book学术文献互助
Book学术文献互助群
群 号:481959085
Book学术
文献互助 智能选刊 最新文献 互助须知 联系我们:info@booksci.cn
Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1