{"title":"Beyond Doomsday Fears: Why We Need to Consider the Potential Harms of AI Psychotherapy.","authors":"Şerife Tekin, Megan Delehanty","doi":"10.1080/15265161.2025.2457724","DOIUrl":null,"url":null,"abstract":"<p><p>There is increased enthusiasm about the use of Artificial Intelligence (AI) technologies in psychotherapy. Notably, AI psychotherapy chatbots are increasing in popularity, especially since the US Food and Drug Administration (FDA) gave one of these apps breakthrough device designation. This article raises concerns about the lack of consideration of potential harms of this technology for clinical trial participants, and current and future users. We outline what these harms might be, by turning to the Belmont Report and the existing literature on harms of (typical) psychotherapy and conclude with two recommendations. Note that our goal is not to articulate doomsday fears regarding the use of AI in psychotherapy contexts; rather we offer a constructive proposal in thinking about the potential harms of these tools and invite clinicians, patients, developers, researchers, policymakers and funding agencies to work together to augment the benefits of these tools and minimize their potential harms.</p>","PeriodicalId":50962,"journal":{"name":"American Journal of Bioethics","volume":" ","pages":"1-11"},"PeriodicalIF":17.0000,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Bioethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1080/15265161.2025.2457724","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0
Abstract
There is increased enthusiasm about the use of Artificial Intelligence (AI) technologies in psychotherapy. Notably, AI psychotherapy chatbots are growing in popularity, especially since the US Food and Drug Administration (FDA) gave one of these apps breakthrough device designation. This article raises concerns about the lack of consideration of the potential harms of this technology for clinical trial participants and for current and future users. We outline what these harms might be by turning to the Belmont Report and the existing literature on the harms of (typical) psychotherapy, and we conclude with two recommendations. Note that our goal is not to articulate doomsday fears regarding the use of AI in psychotherapy contexts; rather, we offer a constructive proposal for thinking about the potential harms of these tools and invite clinicians, patients, developers, researchers, policymakers, and funding agencies to work together to augment the benefits of these tools and minimize their potential harms.
Journal description:
The American Journal of Bioethics (AJOB) is a renowned global publication focused on bioethics. It tackles pressing ethical challenges in the realm of health sciences.
With a commitment to the original vision of bioethics, AJOB explores the social consequences of advances in biomedicine. It sparks meaningful discussions that have proved invaluable to a wide range of professionals, including judges, senators, journalists, scholars, and educators.
AJOB covers various areas of interest, such as the ethical implications of clinical research, ensuring access to healthcare services, and the responsible handling of medical records and data.
The journal welcomes contributions in the form of target articles presenting original research, open peer commentaries facilitating a dialogue, book reviews, and responses to open peer commentaries.
By presenting insightful and authoritative content, AJOB continues to shape the field of bioethics and engage diverse stakeholders in crucial conversations about the intersection of medicine, ethics, and society.