Large language models and psychiatry
Graziella Orrù, Giulia Melis, Giuseppe Sartori
International Journal of Law and Psychiatry, Volume 101, Article 102086
DOI: 10.1016/j.ijlp.2025.102086
Published: 2025-02-27
URL: https://www.sciencedirect.com/science/article/pii/S0160252725000196
Abstract
The integration of Generative Artificial Intelligence and Large Language Models (LLMs) such as GPT-4 is transforming clinical medicine and cognitive psychology. These models exhibit remarkable capabilities in understanding and generating human-like language, which can enhance various aspects of healthcare, including clinical decision-making and psychological counseling.
LLMs, trained on vast datasets, function by predicting the next word in a sequence, endowing them with extensive knowledge and reasoning abilities. Their adaptability allows them to perform a wide range of language-related tasks, significantly contributing to advancements in cognitive psychology and psychiatry. These models demonstrate proficiency in tasks such as analogical reasoning, metaphor comprehension, and problem-solving, often achieving performance comparable to that of neurotypical humans. Despite their impressive capabilities, LLMs still exhibit limitations in causal reasoning and complex planning. However, their continuous improvement, exemplified by the enhanced performance of GPT-4 over its predecessors, suggests a trajectory towards overcoming these challenges. The ongoing debate about the “intelligence” of LLMs revolves around their ability to mimic human-like reasoning and understanding, a focal point of contemporary research.
This paper explores the cognitive abilities of LLMs, comparing them with human cognitive processes and examining their performance on various psychological tests. It highlights the emergent properties of LLMs, their potential to transform cognitive psychology, and their applications in psychiatry, noting the limitations, the ethical considerations, and the importance of scaling and fine-tuning these models to enhance their capabilities. We also explore the parallels between LLMs and human error patterns, underscoring the significance of using LLMs as models for human cognition.
Overall, this paper provides substantial evidence supporting the role of LLMs in reviving associationism as a viable framework for understanding human cognition while acknowledging the current limitations and the need for further research to fully realize their potential.
About the Journal
The International Journal of Law and Psychiatry is intended to provide a multi-disciplinary forum for the exchange of ideas and information among professionals concerned with the interface of law and psychiatry. There is a growing awareness of the need for exploring the fundamental goals of both the legal and psychiatric systems and the social implications of their interaction. The journal seeks to enhance understanding and cooperation in the field through the varied approaches represented, not only by law and psychiatry, but also by the social sciences and related disciplines.