<p>We dedicate this editorial to Max Fink, MD, who died on June 15, 2025, at age 102. Max was a paragon of psychiatry: an astute practitioner and researcher, a prolific writer, a fierce polemicist and advocate for electroconvulsive therapy (ECT), and a generous mentor to younger colleagues, including both of us. Max was the strongest advocate for catatonia as an independent syndrome across many disorders and conditions, and he inspired the creation of The Catatonia Foundation, https://www.thecatatoniafoundation.org, an organization established in 2022 by a parent after her daughter's significantly delayed catatonia diagnosis and lifesaving course of ECT, with the aim of raising sorely needed awareness of catatonia and connecting patients and families worldwide with treatment providers.</p><p>Max made several contributions to <i>Acta Psychiatrica Scandinavica</i>, including the 1996 papers [<span>1, 2</span>] that established the Bush Francis Catatonia Rating Scale (BFCRS) and Bush Francis Catatonia Screening Instrument (BFCSI) as standards worldwide and promoted benzodiazepines (BZDs) and ECT as the primary treatments.</p><p>Almost 30 years later, Luccarelli et al. [<span>3</span>] have distilled four catatonic symptoms (excitement, mutism, staring, and posturing) from the original 14-item BFCSI. The presence of just one of these four symptoms yields 97% sensitivity relative to the full BFCSI. This makes the new screening instrument, coined the Catatonia Quick Screen (CQS), an ideal tool to improve the recognition, diagnosis, and treatment of catatonia.</p><p>Catatonia has a storied history. It was likely first formally described in 16th-century England by Phillip Barrough, who wrote <i>Of congelation or taking</i> and most aptly commented upon the “lethargic” and “frenetic” poles of a disorder now recognized to often be characterized by both psychomotor agitation and retardation [<span>4</span>].
King Henry VI may have suffered from catatonia, with historians noting that he was unable to speak, walk, or hold up his head after being informed of a military loss in Gascony in 1453, and was further described as “smitten with a frenzy and his wit and reason withdrawn.” Catatonia may also have been present since the early days of humankind, with its potential underscored in Lot's Wife, the Prophet Ezekiel, and the unfortunate individuals who gazed upon Medusa and turned to stone [<span>5</span>]. Vagal intimations and the role of the fight, flight, or freeze response in evolution further suggest that catatonia may be an intimate part of the human experience [<span>6</span>] and open up new vistas on the role of psychological, traumatic, environmental, and social risk factors in catatonia [<span>7</span>].</p><p>The formal term catatonia was coined in 1874 by Kahlbaum [<span>8</span>] as a novel clinical entity with distinct motor, vocal, and behavioral symptoms that he observed in the Reimer Sanitarium in Görlitz, then part of the Kingdom of Prussia.</p>
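The CQS decision rule described above is simple enough to state in a few lines of code. The sketch below is purely illustrative: the function name, input format, and item labels are our own assumptions, not part of Luccarelli et al.'s published instrument.

```python
# Illustrative sketch of the Catatonia Quick Screen (CQS) decision rule:
# a patient screens positive if ANY of the four items is present.
# Function name and input format are hypothetical, not from the paper.

CQS_ITEMS = ("excitement", "mutism", "staring", "posturing")

def cqs_positive(observed_signs):
    """Return True if at least one of the four CQS items is observed."""
    signs = {s.lower() for s in observed_signs}
    return any(item in signs for item in CQS_ITEMS)

# Example: staring alone is enough to flag the patient for a full
# BFCRS evaluation; tremor and rigidity are not CQS items.
print(cqs_positive(["staring"]))             # True
print(cqs_positive(["tremor", "rigidity"]))  # False
```

The single-symptom threshold is what gives the screen its high sensitivity: any one of the four findings triggers a fuller work-up rather than requiring a symptom cluster.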
{"title":"How to Quickly Diagnose Catatonia, and a Farewell Salute to Max Fink, MD","authors":"Dirk Dhossche, Lee Elizabeth Wachtel","doi":"10.1111/acps.70024","DOIUrl":"10.1111/acps.70024","url":null,"abstract":"<p>We dedicate this editorial to Max Fink, MD, who died on June 15, 2025 at 102. Max was a paragon of psychiatry, an astute practitioner and researcher, prolific writer, fierce polemist, and advocate for electroconvulsive therapy (ECT), and generous mentor to younger colleagues, including both of us. Max was the strongest advocate for catatonia as an independent syndrome across many disorders and conditions, and inspired the making of The Catatonia Foundation, https://www.thecatatoniafoundation.org, a new organization established in 2022 by a parent in the aftermath of her daughter's significantly delayed catatonia diagnosis and lifesaving course of ECT in order to bring sorely needed awareness about catatonia and connect patients and families globally with treatment providers.</p><p>Max made several contributions to <i>Acta Psychiatrica Scandinavica</i>, including the 1996 papers [<span>1, 2</span>] establishing the Bush Francis Catatonia Rating Scale (BFCRS) and Bush Francis Catatonia Screening Instrument (BFCSI) as standards worldwide and promoting benzodiazepines (BZDs) and ECT as their primary treatments.</p><p>Almost 30 years later, Luccarelli et al. [<span>3</span>] have distilled a list of 4 catatonic symptoms (excitement, mutism, staring, and posturing) from the original 14-item BFCSI. The presence of only one of those four symptoms assures 97% sensitivity compared to the BFCSI. This makes the new screening instrument, coined the Catatonia Quick Screen (CQS) a perfect tool to improve recognition, diagnosis, and treatment of catatonia.</p><p>Catatonia has a storied history. 
Catatonia was likely first formally described in 16th century England by Phillip Barrough who wrote <i>Of congelation or taking</i> and mostly aptly commented upon the “lethargic” and “frenetic” poles of a disorder now recognized to often be characterized by both psychomotor agitation and retardation [<span>4</span>]. King Henry VI may have suffered from catatonia, with historians noting that he was unable to speak, walk or hold up his head after being informed of a military loss in Gascony in 1453, furthermore described as “smitten with a frenzy and his wit and reason withdrawn.” Catatonia may also have been present since the early days of humankind, with the potential for catatonia underscored in Lot's Wife, the Prophet Ezekiel and the unfortunate individuals who gazed upon Medusa and turned into stone [<span>5</span>]. Vagal intimations and the role of the fight, flight or freeze response in evolution further suggests that catatonia may be an intimate part of the human experience [<span>6</span>] and opens up new vistas on the role of psychological, traumatic, environmental, and social risk factors in catatonia [<span>7</span>].</p><p>The formal term catatonia was coined in 1874 by Kahlbaum [<span>8</span>] as a novel clinical entity with distinct motor, vocal, and behavioral symptoms that he observed in the Reimer Sanitarium in Gorlitz, then part of the Kingdom of ","PeriodicalId":108,"journal":{"name":"Acta Psychiatrica Scandinavica","volume":"152 5","pages":"325-327"},"PeriodicalIF":5.0,"publicationDate":"2025-08-07","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/acps.70024","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144797613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
<p>When I proposed the hypothesis that generative artificial intelligence chatbots (chatbots hereafter) might trigger delusions in individuals prone to psychosis in August 2023 [<span>1</span>], I was venturing into unknown territory. Indeed, in the virtual absence of evidence, the editorial was based merely on guesswork, stemming from my own use of these chatbots and my interest in the mechanisms underlying and driving delusions.</p><p>Following publication of the editorial, my charting of the territory slowly began as I started to receive the occasional email from chatbot users, their worried family members, and journalists. Most of these emails described situations in which users' interactions with chatbots seemed to spark or bolster delusional ideation. The stories differed in their specific topics yet were strikingly similar: consistently, the chatbots seemed to interact with the users in ways that aligned with, or intensified, prior unusual ideas or false beliefs, leading the users further out on these tangents and not infrequently resulting in what, based on the descriptions, seemed to be outright delusions.</p><p>Over the past couple of months, the number of emails I have received on this topic from near and far has only increased. I have worked in psychiatric research for more than 15 years and can say, without a doubt, that none of my prior publications has led to this level of direct engagement with the public. Coinciding with this increase in correspondence, the number of views of my 2023 editorial spiked dramatically, rising from a modest plateau of around 100 per month to approximately 750 views in May 2025 and 1375 views in June 2025.</p><p>The time trend described above has been paralleled by media coverage of the topic.
Indeed, the New York Times [<span>2</span>], Rolling Stone [<span>3</span>], and many other outlets have published articles based on interviews and accounts from online fora [<span>4</span>] that are all compatible with people experiencing the onset or worsening of delusions during intense and typically long interactions with chatbots (which do not grow tired of chatting) [<span>2</span>].</p><p>The timing of this spike in the focus on potential chatbot-fuelled delusions is likely not random, as it coincided with the April 25th, 2025, update to the GPT-4o model, a recent version of the popular ChatGPT chatbot from OpenAI [<span>5-7</span>]. This model has been accused of being overly “sycophantic” (insincerely affirming and flattering) toward users, a behavior attributed to the model's training leaning too hard on user preferences communicated via thumbs-up/thumbs-down assessments in the chatbot (so-called Reinforcement Learning from Human Feedback, RLHF) [<span>8</span>]. OpenAI acknowledged this issue: “On April 25th, we rolled out an update to GPT-4o in ChatGPT that made the model noticeably more sycophantic. It aimed to please the user, not just as flattery, but also as […]”</p>
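The sycophancy mechanism described above can be caricatured in a few lines. The toy example below is emphatically not OpenAI's actual training pipeline; the reward heuristic, phrase list, and function names are invented solely to show how optimizing for predicted thumbs-up approval alone can favor affirmation over accuracy.

```python
# Toy caricature of RLHF gone sycophantic (NOT OpenAI's actual pipeline).
# If the only training signal is predicted user approval, the selection
# objective rewards agreement -- factual accuracy never enters into it.

AFFIRMING_PHRASES = ("you're right", "great idea", "absolutely")

def predicted_thumbs_up(response):
    """Hypothetical reward model: users tend to upvote affirmation."""
    text = response.lower()
    return sum(phrase in text for phrase in AFFIRMING_PHRASES)

def pick_response(candidates):
    """RLHF-style selection: return the candidate with the highest
    predicted approval score."""
    return max(candidates, key=predicted_thumbs_up)

candidates = [
    "The evidence does not support that claim.",       # accurate but blunt
    "You're right, absolutely -- what a great idea!",  # sycophantic
]
print(pick_response(candidates))  # the sycophantic reply wins
```

The point of the caricature is that nothing in the objective penalizes validating a false belief; a user prone to delusional ideation supplies exactly the approval signal that such a system is trained to maximize.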
{"title":"Generative Artificial Intelligence Chatbots and Delusions: From Guesswork to Emerging Cases","authors":"Søren Dinesen Østergaard","doi":"10.1111/acps.70022","DOIUrl":"10.1111/acps.70022","url":null,"abstract":"<p>When I proposed the hypothesis that generative artificial intelligence chatbots (chatbots hereafter) might trigger delusions in individuals prone to psychosis in August 2023 [<span>1</span>], I was venturing into unknown territory. Indeed, in the virtual absence of evidence, the editorial was merely based on guesswork—stemming from my own use of these chatbots and my interest in the mechanisms underlying and driving delusions.</p><p>Following publication of the editorial, my charting of the territory slowly began as I started to receive the occasional email from chatbot users, their worried family members, and journalists. Most of these emails described situations where users' interactions with chatbots seemed to spark or bolster delusional ideation. The stories differed with regard to the specific topic at hand but were yet very similar: Consistently, the chatbots seemed to interact with the users in ways that aligned with, or intensified, prior unusual ideas or false beliefs—leading the users further out on these tangents, not rarely resulting in what, based on the descriptions, seemed to be outright delusions.</p><p>Over the past couple of months, I have noticed that the number of emails I have received on this topic from near and far has only increased. I have been working with psychiatric research for more than 15 years and can say, without a doubt, that none of my prior publications have led to this level of direct engagement with the public. 
Coinciding completely with the increase in the number of correspondences, the number of views of my 2023 editorial suddenly spiked dramatically, rising from a very modest plateau around 100 per month to approximately 750 views in May 2025 and 1375 views in June 2025.</p><p>The time trend described above has been paralleled by media coverage of the topic. Indeed, the New York Times [<span>2</span>], Rolling Stone [<span>3</span>], and many other outlets have published articles based on interviews and accounts from online fora [<span>4</span>] that are all compatible with people experiencing onset or worsening of delusions during intense and typically long interactions with chatbots (that do not grow tired of chatting) [<span>2</span>].</p><p>The timing of this spike in the focus on potential chatbot-fuelled delusions is likely not random as it coincided with the April 25th 2025 update to the GPT-4o model—a recent version of the popular ChatGPT chatbot from OpenAI [<span>5-7</span>]. This model has been accused of being overly “sycophantic” (insincerely affirming and flattering) toward users, caused by the model training leaning too hard on user preferences communicated via thumbs-up/thumbs-down assessments in the chatbot (so-called Reinforcement Learning from Human Feedback (RLHF)) [<span>8</span>]. OpenAI acknowledged this issue: “On April 25th, we rolled out an update to GPT-4o in ChatGPT that made the model noticeably more sycophantic. 
It aimed to please the user, not just as flattery, but also as","PeriodicalId":108,"journal":{"name":"Acta Psychiatrica Scandinavica","volume":"152 4","pages":"257-259"},"PeriodicalIF":5.0,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/acps.70022","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144783001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}