Pub Date: 2026-02-02 | DOI: 10.1038/s44277-025-00054-9
Bridget Dwyer, Matthew Flathers, Akane Sano, Allison Dempsey, Andrea Cipriani, Asim H Gazi, Bryce Hill, Carla Gorban, Carolyn I Rodriguez, Charles Stromeyer, Darlene King, Eden Rozenblit, Gillian Strudwick, Jake Linardon, Jiaee Cheong, Joseph Firth, Julian Herpertz, Julian Schwarz, Khai Truong, Margaret Emerson, Martin P Paulus, Michelle Patriquin, Yining Hua, Soumya Choudhary, Steven Siddals, Laura Ospina Pinillos, Jason Bantjes, Stephen M Schueller, Xuhai Xu, Ken Duckworth, Daniel H Gillison, Michael Wood, John Torous
"Correction: Mindbench.ai: an actionable platform to evaluate the profile and performance of large language models in a mental healthcare context." NPP-Digital Psychiatry and Neuroscience 4(1): 3. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12865003/pdf/
Pub Date: 2026-01-30 | DOI: 10.1038/s44277-026-00056-1
Filippo Bargagna, Thomas M Morin, Ya-Chin Chen, Ylind Lila, Chieh-En J Tseng, Maria F Santarelli, Nicola Vanello, Christopher J McDougle, Jacob M Hooker, Nicole R Zürcher
The choroid plexus serves as the primary barrier between the brain's blood and cerebrospinal fluid and mediates neuroimmune function. A subset of individuals with autism spectrum disorder (ASD) may exhibit morphological alterations of the choroid plexus. However, to power larger population analyses, an automated tool capable of accurately segmenting the choroid plexus from magnetic resonance imaging (MRI) is needed. Automated Segmentation of CHOroid PLEXus (ASCHOPLEX) is a deep learning tool that supports finetuning on new, patient-specific training data, allowing its use on cohorts on which the model was not originally trained. We evaluated ASCHOPLEX's generalizability to individuals with ASD by finetuning on a local dataset of ASD and control (CON) participants. To assess generalizability, we implemented a probabilistic version of the algorithm, which allowed us to quantify the uncertainty in choroid plexus segmentation and evaluate the model's confidence. ASCHOPLEX generalized well to our local dataset, in which all participants were adults. To further assess its performance, we tested the algorithm on the Autism Brain Imaging Data Exchange (ABIDE) dataset, which includes data from children and adults. While ASCHOPLEX performed well in adults, its accuracy declined in children, suggesting limited generalizability to different age groups without additional finetuning. Our findings show that incorporating a probabilistic approach during finetuning can strengthen the use of this deep learning tool by providing confidence metrics that allow assessment of model reliability. Overall, our findings demonstrate that ASCHOPLEX can generate accurate choroid plexus segmentations in previously unseen data.
"A probabilistic deep learning approach for choroid plexus segmentation in autism spectrum disorder." NPP-Digital Psychiatry and Neuroscience 4(1): 2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858829/pdf/
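The probabilistic step described above, quantifying per-voxel uncertainty to judge model confidence, can be sketched in a few lines of NumPy: average several stochastic forward passes and read out predictive entropy. Everything here (array shapes, the toy probabilities, the entropy threshold) is illustrative, not ASCHOPLEX's actual implementation.

```python
import numpy as np

def predictive_uncertainty(prob_maps):
    """Given T stochastic forward passes of per-voxel foreground
    probabilities (shape [T, ...]), return the mean segmentation
    probability and the binary predictive entropy per voxel."""
    mean_p = prob_maps.mean(axis=0)  # ensemble average over passes
    eps = 1e-12                      # avoid log(0)
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1 - mean_p) * np.log(1 - mean_p + eps))
    return mean_p, entropy

# Toy example: 5 stochastic passes over a 4-voxel "volume".
rng = np.random.default_rng(0)
passes = np.clip(rng.normal([0.9, 0.9, 0.5, 0.1], 0.05, size=(5, 4)), 0, 1)
mean_p, entropy = predictive_uncertainty(passes)
mask = mean_p > 0.5       # binarized segmentation
uncertain = entropy > 0.6  # near-chance voxels flagged for review
```

High entropy marks voxels where the passes disagree (here, the third voxel at p ≈ 0.5), which is the kind of confidence metric the abstract describes for assessing reliability on unseen data.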
Pub Date: 2026-01-12 | DOI: 10.1038/s44277-025-00053-w
Santiago López Pereyra, Diego R Mazzotti, Desmond Oathes, Jennifer R Goldschmied
No validated biomarker currently exists for early detection or personalized treatment of major depressive disorder (MDD). Transcranial magnetic stimulation (TMS) is widely used in clinical and research settings and holds promise for biomarker discovery. We assessed two novel TMS-derived cortical excitability metrics, δ and ϱ, for distinguishing individuals with MDD from healthy controls. Motor-evoked potentials (MEPs) were recorded from the left abductor pollicis brevis during TMS of the right primary motor cortex in twenty-six unmedicated MDD patients and seventeen never-depressed controls. δ and ϱ were computed from peak-to-peak MEP amplitudes. A Gradient Boosting classifier predicted diagnostic status using raw MEPs, δ and ϱ, or their combination. While MEPs alone were non-predictive, δ and ϱ significantly improved accuracy. Combining MEPs with δ and ϱ yielded 83.3% accuracy and 82.3% balanced accuracy. These results suggest δ and ϱ effectively capture neurophysiological alterations in MDD and support their potential as candidate biomarkers for MDD.
"Novel TMS-derived metrics enable machine learning classification of major depressive disorder." NPP-Digital Psychiatry and Neuroscience 4(1): 1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12796298/pdf/
Pub Date: 2025-12-09 | DOI: 10.1038/s44277-025-00051-y
Samuel J Abplanalp, Joseph S Maimone, Michael F Green
Social isolation is a major public health concern linked to increased risk for both psychiatric and physical health conditions. Yet despite the potential consequences of social isolation, our understanding of its nature and how it emerges and evolves over time remains limited. We propose that social isolation should be understood and analyzed as a complex dynamical system. First, we introduce core principles of dynamical systems theory and describe how they can be applied to better understand social isolation. Second, we formalize a dynamical systems model using differential equations. Third, we present simulations based on the differential equations showing how changes in system dynamics may increase or decrease the likelihood of individuals entering a state of social isolation. Fourth, we provide a brief simulation-recovery analysis demonstrating model parameter identifiability from intensive longitudinal data designs. Finally, we offer a simulated example of how intensive longitudinal data could be used to identify signs of transitions between healthy and isolated states. Overall, this framework, both theoretical and computational, helps elucidate the dynamic nature of social isolation and may ultimately inform empirical research and personalized interventions capable of identifying those at risk for transitioning into a state of isolation.
"Viewing social isolation as a complex dynamical system: A theoretical and computational framework." NPP-Digital Psychiatry and Neuroscience 3(1): 31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12689641/pdf/
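The simulation logic described above, a differential-equation model whose parameters tip the system between a connected and an isolated state, can be illustrated with a generic bistable ODE integrated by Euler's method. The equation below is a standard double-well toy model chosen for illustration; it is not the authors' actual system, and "drive" is a hypothetical stand-in for net social pressure.

```python
def simulate(drive, x0=1.0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = x - x**3 + drive.
    x > 0 ~ 'connected' attractor, x < 0 ~ 'isolated' attractor."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + drive)
    return x

# Weak negative drive: the system stays near the connected state.
print(simulate(drive=-0.2))  # settles at a positive equilibrium
# Strong negative drive: the connected attractor vanishes (a saddle-node
# bifurcation) and the same starting point flows into the isolated state.
print(simulate(drive=-0.6))  # settles at a negative equilibrium
```

This is the qualitative behavior the abstract's simulations describe: a gradual parameter change produces an abrupt transition between healthy and isolated states, which is why early-warning signs in intensive longitudinal data are of interest.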
Pub Date: 2025-11-14 | DOI: 10.1038/s44277-025-00049-6
Bridget Dwyer, Matthew Flathers, Akane Sano, Allison Dempsey, Andrea Cipriani, Asim H Gazi, Bryce Hill, Carla Gorban, Carolyn I Rodriguez, Charles Stromeyer, Darlene King, Eden Rozenblit, Gillian Strudwick, Jake Linardon, Jiaee Cheong, Joseph Firth, Julian Herpertz, Julian Schwarz, Khai Truong, Margaret Emerson, Martin P Paulus, Michelle Patriquin, Yining Hua, Soumya Choudhary, Steven Siddals, Laura Ospina Pinillos, Jason Bantjes, Stephen M Schueller, Xuhai Xu, Ken Duckworth, Daniel H Gillison, Michael Wood, John Torous
Individuals are increasingly utilizing large language model (LLM)-based tools for mental health guidance and crisis support in place of human experts. While AI technology has great potential to improve health outcomes, insufficient empirical evidence exists to suggest that it can be deployed as a clinical replacement; thus, there is an urgent need to assess and regulate such tools. Regulatory efforts have been made and multiple evaluation frameworks have been proposed; however, field-wide assessment metrics have yet to be formally integrated. In this paper, we introduce a comprehensive online platform that aggregates evaluation approaches and serves as a dynamic online resource to simplify the assessment of LLMs and LLM-based tools: MindBench.ai. At its core, MindBench.ai is designed to provide easily accessible, interpretable information for diverse stakeholders (patients, clinicians, developers, regulators, etc.). To create MindBench.ai, we built on our work developing MINDapps.org, which supports informed decision-making around smartphone app use for mental health, and expanded the technical MINDapps.org framework to encompass novel LLM functionalities through benchmarking approaches. The MindBench.ai platform is designed as a partnership with the National Alliance on Mental Illness (NAMI) to provide assessment tools that systematically evaluate LLMs and LLM-based tools against objective and transparent criteria from a healthcare standpoint, assessing both profile (i.e., technical features, privacy protections, and conversational style) and performance characteristics (i.e., clinical reasoning skills). With infrastructure designed to scale through community and expert contributions and to adapt to technological advances, this platform establishes a critical foundation for the dynamic, empirical evaluation of LLM-based mental health tools, transforming assessment into a living, continuously evolving resource rather than a static snapshot.
"Mindbench.ai: an actionable platform to evaluate the profile and performance of large language models in a mental healthcare context." NPP-Digital Psychiatry and Neuroscience 3(1): 28. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12624894/pdf/
Pub Date: 2025-11-13 | eCollection Date: 2025-01-01 | DOI: 10.1038/s44277-025-00048-7
Uma R Chatterjee, Maya C Schumer, Devin P Effinger, Nev Jones, Noel A Vest, Michael E Cahill, Brandon K Staglin, Eric J Nestler
Researchers with lived experience (RWLE) of serious mental illness or substance use disorders (SMI/SUD) bring critical dual expertise to psychiatric neuroscience as both scientists and individuals directly affected by the conditions they study. Yet their participation and leadership remain profoundly limited by entrenched stigma, disclosure risks that can obstruct promising career trajectories, lack of mentorship from senior RWLE, and the absence of structural protections against discrimination and exclusion. These systemic barriers silence voices that can help transform the field's understanding of mental illness and its biological underpinnings. Drawing on the authors' lived and/or professional experiences, this Perspective challenges the assumption that lived experience introduces bias, reframing it as a source of empirical strength, innovation, and epistemic diversity. Here, the authors propose structural reforms to reshape admissions, mentorship, and leadership pathways. Centering RWLE is both a scientific necessity and an ethical imperative for advancing a more equitable and representative psychiatric neuroscience.
"Breaking barriers: centering researchers with lived experience in psychiatric neuroscience." NPP-Digital Psychiatry and Neuroscience 3: 26. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12615256/pdf/
Mental health care faces a significant gap in service availability, with demand substantially surpassing the supply of care. As such, building scalable and objective measurement tools for mental health evaluation is of primary concern. Because spoken and written language are already central to diagnostics and treatment, language stands out as a promising measurement channel, and this study leverages large language models to help bridge the gap. Here, a RoBERTa-based transformer model is fine-tuned for mental health status evaluation using natural language processing. The model analyzes written language without access to the prosodic, motor, or visual cues commonly used in clinical mental status exams. Using non-clinical data from online forums and clinical data from a board-reviewed online psychotherapy trial, this study provides preliminary evidence that large language models can support symptom identification, classifying sentences with accuracy comparable to human experts. The text dataset is expanded through backtranslation augmentation, and model performance is optimized through hyperparameter tuning. Specifically, a RoBERTa-based model is fine-tuned on psychotherapy session text to predict whether individual sentences are symptomatic of anxiety or depression, with prediction accuracy on par with clinical evaluations at 74%.
"Using large language models as a scalable mental status evaluation technique." Margot Wagner, Callum Stephenson, Jasleen Jagayat, Anchan Kumar, Amir Shirazi, Nazanin Alavi, Mohsen Omrani. Pub Date: 2025-11-13 | DOI: 10.1038/s44277-025-00042-z. NPP-Digital Psychiatry and Neuroscience 3(1): 27. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12624874/pdf/
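A natural question for a sentence-level classifier like the one above is how per-sentence labels roll up into a session-level signal. One simple aggregation, a symptomatic-sentence rate over a probability threshold, is sketched below; the aggregation rule, threshold, and probabilities are hypothetical and are not described in the paper.

```python
def session_symptom_rate(sentence_probs, threshold=0.5):
    """Fraction of sentences whose predicted probability of being
    symptomatic (of anxiety or depression) exceeds the threshold."""
    flags = [p >= threshold for p in sentence_probs]
    return sum(flags) / len(flags)

# Hypothetical per-sentence probabilities from a fine-tuned classifier
# applied to one psychotherapy session transcript.
session = [0.91, 0.12, 0.77, 0.05, 0.64, 0.33]
rate = session_symptom_rate(session)
print(f"{rate:.2f} of sentences flagged symptomatic")  # prints "0.50 ..."
```

Tracking such a rate across sessions is one plausible way a sentence classifier could function as the scalable status-evaluation instrument the title proposes.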
Pub Date: 2025-10-24 | DOI: 10.1038/s44277-025-00046-9
Sunwoo Kim, Yunyi Huang, Uday Singla, Andrew Hu, Sumay Kalra, Alex A Morgan, Benjamin Sichel, Dyar Othman, Lieselot L G Carrette
Operant behavior paradigms are essential in preclinical models of neuropsychiatric disorders, such as substance use disorders, enabling the study of complex behaviors including learning, salience, motivation, and preference. These tasks often involve repeated, time-resolved interactions over extended periods, producing large behavioral datasets with rich temporal structure. To support genome-wide association studies (GWAS), the Preclinical Addiction Research Consortium (PARC) has phenotyped over 3000 rats for oxycodone and cocaine addiction-like behaviors using extended access self-administration, producing over 100,000 data files. To manage, store, and process this data efficiently, we leveraged Dropbox, Microsoft Azure Cloud Services, and other widely available computational tools to develop a robust, automated data processing pipeline. Raw MedPC operant output files are automatically converted into structured Excel files using custom scripts, then integrated with standardized experimental, behavioral, and metadata spreadsheets, all uploaded from Dropbox into a relational SQL database on Azure. The pipeline enables automated quality control, data backups, daily summary reports, and interactive visualizations. This approach has dramatically improved PARC's high-throughput phenotyping capabilities by reducing human workload and error, while improving data quality, richness, and accessibility. We here share our approach, as these streamlined workflows can deliver benefits to operant studies of any scale, supporting more efficient, transparent, reproducible, and collaborative preclinical research.
"Automated pipeline for operant behavior phenotyping for high-throughput data management, processing, and visualization." NPP-Digital Psychiatry and Neuroscience 3(1): 25. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12624926/pdf/
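The flow described above, raw operant output files parsed into structured records and loaded into a relational database, can be sketched with Python's standard library. The file format below is a simplified stand-in for MedPC output, and an in-memory SQLite database stands in for the Azure SQL database; both are assumptions for illustration, not the consortium's pipeline.

```python
import sqlite3

RAW = """\
Subject: R123
Box: 4
R: 10 12 9
"""

def parse_session(text):
    """Parse a simplified key/value operant output file into one record.
    'R:' is treated here as per-bin response counts (illustrative)."""
    rec = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        rec[key.strip()] = value.strip()
    rec["responses"] = [int(x) for x in rec.pop("R").split()]
    return rec

# Load the parsed record into a relational table, as the pipeline does
# at much larger scale with automated quality control on top.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (subject TEXT, box INTEGER, total_responses INTEGER)")
rec = parse_session(RAW)
conn.execute(
    "INSERT INTO sessions VALUES (?, ?, ?)",
    (rec["Subject"], int(rec["Box"]), sum(rec["responses"])),
)
row = conn.execute("SELECT subject, total_responses FROM sessions").fetchone()
print(row)  # ('R123', 31)
```

Once sessions live in a queryable table like this, the daily summary reports and visualizations the abstract mentions reduce to ordinary SQL aggregation.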
Pub Date: 2025-10-13 | eCollection Date: 2025-01-01 | DOI: 10.1038/s44277-025-00045-w
Hadar Fisher, Nigel M Jaffe, Habiballah Rahimi-Eichi, Erika E Forbes, Diego A Pizzagalli, Justin T Baker, Christian A Webb
{"title":"Measuring activation during behavioral activation therapy: a proof-of-concept study using smartphone sensors and LLM-derived ratings in adolescents with anhedonia.","authors":"Hadar Fisher, Nigel M Jaffe, Habiballah Rahimi-Eichi, Erika E Forbes, Diego A Pizzagalli, Justin T Baker, Christian A Webb","doi":"10.1038/s44277-025-00045-w","DOIUrl":"10.1038/s44277-025-00045-w","url":null,"abstract":"<p><p>Adolescent depression remains a major public health concern, and Behavioral Activation (BA), a brief therapeutic intervention designed to reduce depression-related avoidance and boost engagement in rewarding activities, has shown encouraging results. Still, few studies directly measure the hypothesized mechanism of \"activation\" in daily life, especially using low-burden, ecologically valid methods. This proof-of-concept study evaluates the validity of two technology-based approaches to measuring activation in adolescents receiving BA: smartphone-based mobility sensing and large language model (LLM) ratings of free-response text. Adolescents (<i>n</i> = 38, ages 13-18) receiving 12-week BA therapy for anhedonia completed daily ecological momentary assessment (EMA) reporting on positive and negative affect. GPT-4o was used to rate behavioral activation from EMA free-text entries. A subsample (<i>n</i> = 13) contributed passive smartphone sensing data (e.g., accelerometer activity, GPS-derived mobility). Activation and symptoms were assessed weekly via self-report. GPT-derived activation ratings correlated positively with passive sensing indicators (number of places visited, time away from home) and self-reported activation. Within-person increases in GPT-rated activation were associated with higher daily positive affect and lower negative affect. Passive sensing features also forecasted weekly improvements in anhedonia and depressive symptoms. Associations emerged primarily at the within-person level, suggesting that changes in activation relative to one's own baseline are clinically meaningful. This study demonstrates the feasibility and validity of passively measuring behavioral activation in adolescents' daily lives using smartphone data and LLMs. These tools hold promise for advancing data-informed psychotherapy by tracking therapeutic processes in real time, reducing reliance on self-report, and enabling personalized, adaptive interventions. Clinical Trial Registry: NCT02498925.</p>","PeriodicalId":520008,"journal":{"name":"NPP-digital psychiatry and neuroscience","volume":"3","pages":"24"},"PeriodicalIF":0.0,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12518126/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145305137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-08-20DOI: 10.1038/s44277-025-00043-y
Albert Garcia-Romeu
{"title":"Deconstructing the trip treatment: are hallucinogenic effects critical to the therapeutic benefits of psychedelics?","authors":"Albert Garcia-Romeu","doi":"10.1038/s44277-025-00043-y","DOIUrl":"10.1038/s44277-025-00043-y","url":null,"abstract":"","PeriodicalId":520008,"journal":{"name":"NPP-digital psychiatry and neuroscience","volume":"3 1","pages":"22"},"PeriodicalIF":0.0,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12624876/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145710680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}