{"title":"A broader approach to ethical challenges in digital mental health","authors":"Nicole Martinez-Martin","doi":"10.1002/wps.21237","DOIUrl":null,"url":null,"abstract":"<p>Galderisi et al<span><sup>1</sup></span> provide an insightful overview of current ethical challenges in psychiatry, including those presented by digital psychiatry, as well as recommendations for addressing these challenges. As they discuss, “digital psychiatry” encompasses an array of different digital tools, including mental health apps, chatbots, telehealth platforms, and artificial intelligence (AI). These tools hold promise for improving diagnosis and care, and could facilitate access to mental health services by marginalized populations. In particular, digital mental health tools can assist in expanding mental health support in lower-to-middle income countries.</p>\n<p>Many of the ethical challenges identified by the authors in the use of digital tools reflect inequities and challenges within broader society. For example, in the US, lack of mental health insurance and insufficient representation of racialized minorities in medical research contribute to the difficulties with access and fairness in digital psychiatry. In many ways, the ethical challenges presented by digital psychiatry reflect long-standing concerns about who benefits, and who does not, from psychiatry. The array of forward-looking recommendations advanced by Galderisi et al show that these ethical challenges can also be seen as opportunities for moving towards greater equity and inclusion in psychiatry.</p>\n<p>Discussions of the ethics of digital health benefit from broadening the scope of issues to include social context. Galderisi et al refer to inequities in how mental health care is researched, developed and accessed, and to historical power imbalances in psychiatry due to which patient voices are undervalued and overlooked. A broader approach to ethical challenges related to digital health technologies recognizes that issues affecting these technologies often emerge due to their interactions with the social institutions in which they are developed and applied<span><sup>2</sup></span>. For example, privacy and safety of digital psychiatry tools must be understood within the context of the specific regulatory environment and infrastructure (e.g., broadband, hardware) in which they are being used.</p>\n<p>Digital health tools and medical AI are often promoted for improving cost-effectiveness, but this business-oriented emphasis can obscure discussion of what trade-offs in costs are considered acceptable, such as whether lesser-quality services are deemed acceptable for low-income groups. Institutions that regulate medical devices often struggle when they have to deal with softwares or AI. Consumers and patients too often find it difficult to obtain information that can help them decide which digital psychiatry tools are appropriate and effective for their needs.</p>\n<p>There have been pioneering efforts to assist with evaluating effective digital mental health tools, such as American Psychiatric Association's mental health app evaluator<span><sup>3</sup></span>. However, new models for evaluation which are responsive to the ways in which clinicians and patients realistically engage with mental health care tools are still needed. 
For example, some of the measures that regulators or insurance companies use to evaluate and approve digital mental health tools may not capture the aspects of a tool that, from a consumer or patient perspective, offer meaningful improvements to their lives. There has also been growing recognition that meaningful evaluation of the effectiveness of digital health tools needs to look beyond the tool itself in order to evaluate the tool's effectiveness as it is used within a particular system<span><sup>4</sup></span>. More engagement of diverse communities and those with lived experience during the development of digital psychiatry tools is imperative for improving these tools.</p>\n<p>Unfortunately, the hype around digital mental health often goes hand-in-hand with rapid adoption of unproven technologies. For example, large language models (LLMs) and generative AI are being quickly taken up within health care, including psychiatry<span><sup>5</sup></span>. These digital tools are embraced as cost-effective time-savers before there is sufficient opportunity to determine the extent to which they are in fact ready for the purposes for which they are being used<span><sup>6</sup></span>. Potential problems with generative AI in health care continue to emerge, from the potential discriminatory biases in information, to the potential collection and disclosure of personal data<span><sup>7</sup></span>. There is a need to exercise more caution in the adoption of new digital tools in psychiatry, in order to give time for evaluation and guidance for specific purposes.</p>\n<p>Privacy continues to pose significant concerns for digital psychiatry. Digital mental health tools often gather information that psychiatrists and patients are not aware of, such as location data, which may seem insignificant, but can allow for behavioral analyses that infer sensitive or predictive information regarding users<span><sup>8</sup></span>. In today's data landscape, brokerage of personal data can generate billions of dollars. These data practices have repercussions on patients that they may not be able to anticipate. Even de-identified data can increasingly be re-identified, and user profiles that are compiled from such data can be utilized to target people for fraudulent marketing schemes, or lead to downstream implications for employment or educational opportunities. Furthermore, in countries such as the US, where mental health care may be unaffordable for many individuals, people may effectively be put in the position of trading data for health care.</p>\n<p>Because of fairness and bias issues, there are also real questions on how much digital and AI tools actually work for different populations. One common source of bias is that the data that are used to train and develop digital tools may be insufficiently representative of the target population, such as participants of diverse race and gender or with disability<span><sup>9</sup></span>. The potential for bias goes beyond the question of algorithmic bias, as tools may be simply designed in ways that do not work effectively for different populations, or the use of those tools in specific contexts may lead to unfair outcomes. 
Addressing fairness will require ensuring that researchers and clinicians from diverse backgrounds are included in the development and design of digital psychiatry tools.</p>\n<p>As Galderisi et al note, the discipline and tools of psychiatry have a long history of being used for social control, such as in the criminal justice and educational systems. The tools of digital psychiatry may be applied to put vulnerable and minoritized groups at particular risk of punitive interventions from government institutions. It is, therefore, important that members of the psychiatric profession put considered effort into anticipating and addressing the social and legal implications of the use of digital psychiatry tools in other domains of society.</p>\n<p>Development of digital psychiatry tools requires identifying specific ethical challenges, but also taking the time to reflect and envision the system and world that these tools will help create. Galderisi et al set out a number of action items that, taken together, envision a more equitable and inclusive future for psychiatry. This is an important moment to take these opportunities for building new frameworks and systems for psychiatry, in which digital tools can be used to support human empathy and creativity, allowing mental well-being to flourish.</p>","PeriodicalId":23858,"journal":{"name":"World Psychiatry","volume":"1 1","pages":""},"PeriodicalIF":73.3000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Psychiatry","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/wps.21237","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Medicine","Score":null,"Total":0}
Abstract
Galderisi et al [1] provide an insightful overview of current ethical challenges in psychiatry, including those presented by digital psychiatry, as well as recommendations for addressing these challenges. As they discuss, “digital psychiatry” encompasses an array of different digital tools, including mental health apps, chatbots, telehealth platforms, and artificial intelligence (AI). These tools hold promise for improving diagnosis and care, and could facilitate access to mental health services by marginalized populations. In particular, digital mental health tools can assist in expanding mental health support in low- and middle-income countries.
Many of the ethical challenges identified by the authors in the use of digital tools reflect inequities and challenges within broader society. For example, in the US, lack of mental health insurance and insufficient representation of racialized minorities in medical research contribute to the difficulties with access and fairness in digital psychiatry. In many ways, the ethical challenges presented by digital psychiatry reflect long-standing concerns about who benefits, and who does not, from psychiatry. The array of forward-looking recommendations advanced by Galderisi et al shows that these ethical challenges can also be seen as opportunities for moving towards greater equity and inclusion in psychiatry.
Discussions of the ethics of digital health benefit from broadening the scope of issues to include social context. Galderisi et al refer to inequities in how mental health care is researched, developed and accessed, and to historical power imbalances in psychiatry due to which patient voices are undervalued and overlooked. A broader approach to ethical challenges related to digital health technologies recognizes that issues affecting these technologies often emerge due to their interactions with the social institutions in which they are developed and applied [2]. For example, privacy and safety of digital psychiatry tools must be understood within the context of the specific regulatory environment and infrastructure (e.g., broadband, hardware) in which they are being used.
Digital health tools and medical AI are often promoted as improving cost-effectiveness, but this business-oriented emphasis can obscure discussion of what trade-offs in costs are considered acceptable, such as whether lesser-quality services are deemed acceptable for low-income groups. Institutions that regulate medical devices often struggle when they have to deal with software or AI. Consumers and patients all too often find it difficult to obtain information that can help them decide which digital psychiatry tools are appropriate and effective for their needs.
There have been pioneering efforts to assist with evaluating the effectiveness of digital mental health tools, such as the American Psychiatric Association's mental health app evaluator [3]. However, new evaluation models that are responsive to the ways in which clinicians and patients realistically engage with mental health care tools are still needed. For example, some of the measures that regulators or insurance companies use to evaluate and approve digital mental health tools may not capture the aspects of a tool that, from a consumer or patient perspective, offer meaningful improvements to their lives. There is also growing recognition that meaningful evaluation of a digital health tool's effectiveness needs to look beyond the tool itself and assess how the tool performs within the particular system in which it is used [4]. Greater engagement of diverse communities and people with lived experience during the development of digital psychiatry tools is imperative for improving these tools.
Unfortunately, the hype around digital mental health often goes hand-in-hand with rapid adoption of unproven technologies. For example, large language models (LLMs) and generative AI are being quickly taken up within health care, including psychiatry [5]. These digital tools are embraced as cost-effective time-savers before there is sufficient opportunity to determine the extent to which they are in fact ready for the purposes for which they are being used [6]. Problems with generative AI in health care continue to emerge, from discriminatory biases in the information provided to the potential collection and disclosure of personal data [7]. There is a need to exercise more caution in the adoption of new digital tools in psychiatry, in order to allow time for evaluation and for guidance on specific uses.
Privacy continues to pose significant concerns for digital psychiatry. Digital mental health tools often gather information that psychiatrists and patients are not aware of, such as location data, which may seem insignificant but can allow for behavioral analyses that infer sensitive or predictive information about users [8]. In today's data landscape, brokerage of personal data can generate billions of dollars. These data practices have repercussions for patients that they may not be able to anticipate. Even de-identified data can increasingly be re-identified, and user profiles compiled from such data can be used to target people with fraudulent marketing schemes or can have downstream implications for employment and educational opportunities. Furthermore, in countries such as the US, where mental health care may be unaffordable for many individuals, people may effectively be put in the position of trading data for health care.
Because of fairness and bias issues, there are also real questions about how well digital and AI tools actually work for different populations. One common source of bias is that the data used to train and develop digital tools may be insufficiently representative of the target population, such as participants of diverse races and genders or people with disabilities [9]. The potential for bias goes beyond algorithmic bias alone: tools may simply be designed in ways that do not work effectively for different populations, or their use in specific contexts may lead to unfair outcomes. Addressing fairness will require ensuring that researchers and clinicians from diverse backgrounds are included in the design and development of digital psychiatry tools.
As Galderisi et al note, the discipline and tools of psychiatry have a long history of being used for social control, such as in the criminal justice and educational systems. The tools of digital psychiatry may be applied in ways that put vulnerable and minoritized groups at particular risk of punitive interventions by government institutions. It is therefore important that members of the psychiatric profession make a considered effort to anticipate and address the social and legal implications of the use of digital psychiatry tools in other domains of society.
Development of digital psychiatry tools requires identifying specific ethical challenges, but also taking the time to reflect on and envision the system and world that these tools will help create. Galderisi et al set out a number of action items that, taken together, envision a more equitable and inclusive future for psychiatry. This is an important moment to seize these opportunities to build new frameworks and systems for psychiatry, in which digital tools can be used to support human empathy and creativity, allowing mental well-being to flourish.