Background: The use of generative artificial intelligence (GenAI) has grown explosively in recent years. Many have noted its substantial potential to increase access to scalable digital mental health interventions or to provide companionship for socially isolated individuals. At the same time, seeking mental health support from mainstream GenAI models may carry risks. Several recent examples of exacerbated delusions have received attention in the popular press, prompting calls for empirical research to document the scope of interactions with GenAI among individuals experiencing symptoms of psychosis.
Objective: This study aimed to evaluate associations between psychosis risk and GenAI use frequency, motivations for use, and GenAI interactions involving potential delusions.
Methods: We conducted a large-scale cross-sectional survey of 1003 young adults in the United States, divided the subsample of individuals who had used GenAI into "elevated risk" (Prodromal Questionnaire, Brief Version [PQ-B] Distress Score ≥20; N=267, 28%) and "low risk" (PQ-B Distress Score <20; N=685, 72%) groups, and compared the groups on several assessments related to GenAI use.
Results: We found that although members of the elevated risk group were no more likely to have ever used GenAI, they were significantly more likely to report intensive use (odds ratios 1.70 to 2.56; ie, several times per day, more than 30 minutes per day, or 6 or more chatbot conversations per day). Those at elevated risk were more likely to report using GenAI for social and emotional support and were significantly more likely to ascribe human-like roles to their chatbot interactions (odds ratios 1.76 to 3.08; ie, companion, friend, therapist, or romantic partner). Delusion-related interactions were also commonly reported among those at risk for psychosis (item endorsements from 13.3% to 30.7%). A minimal sketch of this style of odds ratio comparison appears after this abstract.
Conclusions: Although it remains unclear whether their overall impact is positive or negative, GenAI chatbots may have the potential to influence symptom-related experiences among young adults at risk for psychosis.
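To make the group comparison reported above concrete, the following is a minimal Python sketch of how an odds ratio for one binary outcome (eg, intensive use) could be computed across the two risk groups. The PQ-B cutoff matches the Methods, but the participant records and the choice of the Fisher exact test are illustrative assumptions, not the study's actual data or analysis.

import numpy as np
from scipy.stats import fisher_exact

CUTOFF = 20  # PQ-B Distress Score threshold defining the elevated-risk group

# Hypothetical records, NOT study data: (pqb_distress_score, intensive_use)
records = [(25, True), (12, False), (31, True), (8, False), (22, False),
           (15, True), (27, True), (5, False), (19, False), (34, True)]

# Build a 2x2 contingency table: rows = risk group, columns = outcome
table = np.zeros((2, 2), dtype=int)
for score, intensive in records:
    row = 0 if score >= CUTOFF else 1  # 0 = elevated risk, 1 = low risk
    col = 0 if intensive else 1        # 0 = intensive use, 1 = not
    table[row, col] += 1

# Sample odds ratio with an exact test of association
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")

An odds ratio above 1 here would indicate that the outcome is more common in the elevated-risk group; the reported ranges (eg, 1.70 to 2.56) summarize several such comparisons, one per outcome.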
Artificial intelligence (AI) promises efficiency and equity in health care. However, adoption remains fragmented because the foundations of trust are weak. This Viewpoint highlights the gap between intrinsic trust, based on interpretability, and extrinsic trust, based on functional validation. We propose a contractual framework between AI systems and users defined by 3 promises: reliability, scope and equity, and shift and uncertainty. Using a vignette, we show how health systems can operationalize these promises through structured evidence and governance, translating trustworthy AI into accountable clinical deployment.
Artificial intelligence triage in general practice is developing rapidly as part of the digital transformation of primary care, promising efficiency gains and safety standardization in overwhelmed primary care systems. However, current evidence is drawn from retrospective validations, emergency settings, or vignettes, with scant evaluation of real-world outcomes and almost no equity-stratified safety data, despite known disparities across age, ethnicity, language, and deprivation. From a sociotechnical standpoint, which considers the fit between people, tasks, technology, and organizational context, risks arise not only from algorithmic bias and undertriage but also from human factors, workflow misalignment, governance gaps, and inadequate postdeployment monitoring. We argue that ensuring artificial intelligence triage is safe and equitable requires real-world evaluations in primary care settings, equity-focused performance reporting using theoretically informed frameworks, and rigorous postmarket surveillance. Without these, deployment may widen existing health inequalities rather than narrow them.

