Understanding, Predicting, and Preventing Suicide: Recent Advances Using Digital and Computational Methods
Pub Date: 2026-02-07 | DOI: 10.1177/09637214251414021
Matthew K. Nock, Shirley B. Wang
Suicide is among the most perplexing of all human behaviors. It has been a leading cause of death for decades, and despite significant study it continues unabated. Over the past few years, the development of new digital and computational methods has provided tools that are helping to overcome many long-standing challenges to studying suicide. Here we review recent advances in the understanding, prediction, and prevention of suicidal behaviors using such methods. Examples include the use of mathematical and computational modeling to build and test more precise theories of suicidal thoughts and behaviors, large-scale electronic databases to better detect and predict those at risk for suicide (e.g., health-care networks, social media, and other web-based platforms), smartphones and wearable biosensors to identify person-specific high-risk periods, and digital devices and platforms to deliver and test just-in-time adaptive interventions. Although suicide is a long-standing problem, these advances are facilitating significant progress and hope for the future of suicide prevention.
{"title":"Understanding, Predicting, and Preventing Suicide: Recent Advances Using Digital and Computational Methods","authors":"Matthew K. Nock, Shirley B. Wang","doi":"10.1177/09637214251414021","DOIUrl":"https://doi.org/10.1177/09637214251414021","url":null,"abstract":"Suicide is among the most perplexing of all human behaviors. It has been a leading cause of death for decades, and despite significant study it continues unabated. Over the past few years, the development of new digital and computational methods has provided tools that are helping to overcome many long-standing challenges to studying suicide. Here we review recent advances in the understanding, prediction, and prevention of suicidal behaviors using such methods. Examples include the use of mathematical and computational modeling to build and test more precise theories of suicidal thoughts and behaviors, large-scale electronic databases to better detect and predict those at risk for suicide (e.g., health-care networks, social media, and other web-based platforms), smartphones and wearable biosensors to identify person-specific high-risk periods, and digital devices and platforms to deliver and test just-in-time adaptive interventions. Although suicide is a long-standing problem, these advances are facilitating significant progress and hope for the future of suicide prevention.","PeriodicalId":10802,"journal":{"name":"Current Directions in Psychological Science","volume":"6 1","pages":""},"PeriodicalIF":7.2,"publicationDate":"2026-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146138489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dyadic Decisions About Effort: How Caregivers Shape Young Children’s Persistence
Pub Date: 2026-01-27 | DOI: 10.1177/09637214251401848
Julia A. Leonard, Reut Shachnai
Persistence is essential for learning, but children cannot and should not persist at everything. How do young children decide what is worth their effort? We build a theory of young children’s state persistence as the outcome of a socially guided decision-making process between children and caregivers. Integrating research from metacognition, decision-making, and social learning, we show how caregivers shape two key beliefs that guide children’s effort: what children think they are capable of and whether their effort is worthwhile. Caregivers’ actions, in turn, are guided by their own beliefs about children’s abilities and the value of tasks, creating a dynamic social system of effort calibration. By reframing persistence as a dynamic, coconstructed process, we uncover how motivation is built—and where it can break down.
{"title":"Dyadic Decisions About Effort: How Caregivers Shape Young Children’s Persistence","authors":"Julia A. Leonard, Reut Shachnai","doi":"10.1177/09637214251401848","DOIUrl":"https://doi.org/10.1177/09637214251401848","url":null,"abstract":"Persistence is essential for learning, but children cannot and should not persist at everything. How do young children decide what is worth their effort? We build a theory of young children’s state persistence as the outcome of a socially guided decision-making process between children and caregivers. Integrating research from metacognition, decision-making, and social learning, we show how caregivers shape two key beliefs that guide children’s effort: What children think they are capable of and whether their effort is worthwhile. Caregivers’ actions, in turn, are guided by their own beliefs about children’s abilities and the value of tasks, creating a dynamic social system of effort calibration. By reframing persistence as a dynamic coconstructed process, we uncover how motivation is built—and where it can break down.","PeriodicalId":10802,"journal":{"name":"Current Directions in Psychological Science","volume":"7 1","pages":""},"PeriodicalIF":7.2,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146056186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Complementary Intelligence: Integrating Cognitive and Machine AI
Pub Date: 2026-01-27 | DOI: 10.1177/09637214251407571
Cleotilde Gonzalez, Tailia Malloy
This article calls for complementary human-AI intelligence. Rather than redefining intelligence to fit machine capabilities, we argue for designing AI that complements and extends human cognition. We distinguish between cognitive AI, which is grounded in cognitive science to model human perception, learning, and decision-making, and machine AI, which achieves large-scale performance through data-driven optimization. Building on advances in machine learning alignment and human-AI complementarity, we propose an integrative framework that connects cognitive and machine AI across four routes: embedding integration, aligning human and machine representations; instruction encoding, using machine AI to translate goals into cognitive AI; training agents, using cognitive AI to guide and train machine AI through human-like data; and coevolving agents, enabling cognitive and machine AI to coadapt and improve together over time. These integration routes provide a foundation for complementary intelligence: systems that combine human interpretability with machine scalability and precision to enhance trust, adaptability, and human agency in complex sociotechnical environments.
{"title":"Toward Complementary Intelligence: Integrating Cognitive and Machine AI","authors":"Cleotilde Gonzalez, Tailia Malloy","doi":"10.1177/09637214251407571","DOIUrl":"https://doi.org/10.1177/09637214251407571","url":null,"abstract":"This article calls for complementary human-AI intelligence. Rather than redefining intelligence to fit machine capabilities, we argue for designing AI that complements and extends human cognition. We distinguish between <jats:italic toggle=\"yes\">cognitive AI</jats:italic> , which is grounded in cognitive science to model human perception, learning, and decision-making, and <jats:italic toggle=\"yes\">machine AI</jats:italic> , which achieves large-scale performance through data-driven optimization. Building on advances in machine learning alignment and human-AI complementarity, we propose an integrative framework that connects cognitive and machine AI across four routes: <jats:italic toggle=\"yes\">embedding integration</jats:italic> , aligning human and machine representations; <jats:italic toggle=\"yes\">instruction encoding</jats:italic> , using machine AI to translate goals into cognitive AI; <jats:italic toggle=\"yes\">training agents</jats:italic> , using cognitive AI to guide and train machine AI through human-like data; and <jats:italic toggle=\"yes\">coevolving agents</jats:italic> , enabling cognitive and machine AI to coadapt and improve together over time. These integration routes provide a foundation for <jats:italic toggle=\"yes\">complementary intelligence</jats:italic> : systems that combine human interpretability with machine scalability and precision to enhance trust, adaptability, and human agency in complex sociotechnical environments.","PeriodicalId":10802,"journal":{"name":"Current Directions in Psychological Science","volume":"38 1","pages":""},"PeriodicalIF":7.2,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146056187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Historical Change in Midlife Development From a Cross-National Perspective
Pub Date: 2026-01-26 | DOI: 10.1177/09637214251410195
Frank J. Infurna, Yesenia Cruz-Carrillo, Nutifafa E. Y. Dey, Markus Wettstein, Margie E. Lachman, Denis Gerstorf
We summarize empirical evidence documenting that (a) U.S. middle-aged adults have shown historical increases in loneliness and depressive symptoms, along with declines in memory and physical health, and (b) this pattern is largely confined to the United States and is not observed in peer nations. A conceptual model is provided to detail possible explanations for these historical trends. We also discuss future directions to explore whether similar historical trends are emerging across population subgroups and in low- and middle-income nations, and we identify psychosocial resources for promoting resilience. This timely article sheds light on midlife development from a cross-national and historical perspective.
{"title":"Historical Change in Midlife Development From a Cross-National Perspective","authors":"Frank J. Infurna, Yesenia Cruz-Carrillo, Nutifafa E. Y. Dey, Markus Wettstein, Margie E. Lachman, Denis Gerstorf","doi":"10.1177/09637214251410195","DOIUrl":"https://doi.org/10.1177/09637214251410195","url":null,"abstract":"We summarize empirical evidence documenting that (a) U.S. middle-aged adults have displayed historical trends of elevations in loneliness and depressive symptoms and declining memory and physical health and (b) this pattern is largely confined to the United States and not observed in peer nations. A conceptual model is provided to detail possible explanations for these historical trends. We also discuss future directions to explore whether similar historical trends are transpiring across population subgroups and low- and middle-income nations, and we identify psychosocial resources for promoting resilience. This timely article sheds light on midlife development from a cross-national and historical perspective.","PeriodicalId":10802,"journal":{"name":"Current Directions in Psychological Science","volume":"291 1","pages":""},"PeriodicalIF":7.2,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146048517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Does Altruism Exist? Implications of Selective Investment Theory for Solving Social Problems
Pub Date: 2025-12-26 | DOI: 10.1177/09637214251382091
Stephanie L. Brown, R. Michael Brown, David Cavallino
This article provides an overview of the debate within social psychology concerning the possible existence of altruistic motivation. After presenting the social-psychological background, we describe selective investment theory, an evolutionary theory of altruistic motivation, and discuss the underlying neurobiology. We describe evidence of the theory’s generativity within health psychology and consider its implications for solving social problems in the areas of economics, overpopulation, peace negotiations, and environmental protection.
{"title":"Does Altruism Exist? Implications of Selective Investment Theory for Solving Social Problems","authors":"Stephanie L. Brown, R. Michael Brown, David Cavallino","doi":"10.1177/09637214251382091","DOIUrl":"https://doi.org/10.1177/09637214251382091","url":null,"abstract":"This article provides an overview of the debate within social psychology concerning the possible existence of altruistic motivation. After presenting the social-psychological background, we describe <jats:italic toggle=\"yes\">selective investment theory</jats:italic> , an evolutionary theory of altruistic motivation, and discuss the underlying neurobiology. We describe evidence of the theory’s generativity within health psychology and consider its implications for solving social problems in the areas of economics, overpopulation, peace negotiations, and environmental protection.","PeriodicalId":10802,"journal":{"name":"Current Directions in Psychological Science","volume":"8 1","pages":""},"PeriodicalIF":7.2,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145829842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Structure-Mapping Engine: A Multidecade Interaction Between Psychology and Artificial Intelligence
Pub Date: 2025-12-06 | DOI: 10.1177/09637214251395678
Dedre Gentner, Kenneth Forbus
This article describes the structure-mapping engine (SME) and its relation to psychological theory and research. SME was created in 1986 as a simulation of structure-mapping theory (SMT) and is still in use, both on its own and as part of larger-scale simulations such as CogSketch and Companion that capture analogy’s roles in other cognitive processing. Over the 4 decades since artificial intelligence (AI) first appeared, there has been continual interaction between AI research and psychological research. We begin by briefly reviewing SMT and the basic construction of SME. After comparing SME with other simulations, we then describe some specific contributions of SME to our understanding of human analogical processing. We close by proposing that these psychological models can become a new technology for AI.
{"title":"The Structure-Mapping Engine: A Multidecade Interaction Between Psychology and Artificial Intelligence","authors":"Dedre Gentner, Kenneth Forbus","doi":"10.1177/09637214251395678","DOIUrl":"https://doi.org/10.1177/09637214251395678","url":null,"abstract":"This article describes the structure-mapping engine (SME) and its relation to psychological theory and research. SME was created in 1986 as a simulation of structure-mapping theory (SMT) and is still in use, both on its own and as part of larger scale simulations such as CogSketch and Companion that capture analogy’s roles in other cognitive processing. Over the 4 decades since artificial intelligence (AI) first appeared, there has been continual interaction between AI research and human research. We begin by briefly reviewing SMT and the basic construction of SME. After comparing SME with other simulations, we then describe some specific contributions of SME to our understanding of human analogical processing. We close by proposing that these psychological models can become a new technology for AI.","PeriodicalId":10802,"journal":{"name":"Current Directions in Psychological Science","volume":"36 1","pages":""},"PeriodicalIF":7.2,"publicationDate":"2025-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145680174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empathy for and From Embodied Robots: An Interdisciplinary Review
Pub Date: 2025-12-05 | DOI: 10.1177/09637214251392861
C. Daryl Cameron, Alan R. Wagner, Martina Orlandi, Eliana Hadjiandreou, India G. Oates, Stephen Anderson
Several years ago, the world was stunned when the cute robot HitchBOT was destroyed. Does empathy for robots—sharing experiences and feeling compassion—make sense for humans? How do people empathize with robots, and what are the ethical and practical implications of doing so? How do people react when robots seem to be empathizing with them? In this review, we detail empirical work on empathy for robots, discuss the ethics of extending empathy toward robots, and consider how to engineer robots that elicit empathy. We then review empirical work on empathy received from robots to explore psychological, philosophical, and engineering implications. In our final section, we suggest how interactions with robots might cultivate human empathy. Can interactions with a robot build human empathy and help it to become more resilient and reliable?
{"title":"Empathy for and From Embodied Robots: An Interdisciplinary Review","authors":"C. Daryl Cameron, Alan R. Wagner, Martina Orlandi, Eliana Hadjiandreou, India G. Oates, Stephen Anderson","doi":"10.1177/09637214251392861","DOIUrl":"https://doi.org/10.1177/09637214251392861","url":null,"abstract":"Several years ago, the world was stunned when the cute robot HitchBOT was destroyed. Does empathy for robots—sharing experiences and feeling compassion—make sense for humans? How do people empathize with robots, and what are the ethical and practical implications of doing so? How do people react when robots seem to be empathizing <jats:italic toggle=\"yes\">with</jats:italic> them? In this review, we detail empirical work on empathy for robots, discuss the ethics of extending empathy toward robots, and consider how to engineer robots that elicit empathy. We then review empirical work on empathy received <jats:italic toggle=\"yes\">from</jats:italic> robots to explore psychological, philosophical, and engineering implications. In our final section, we suggest how interactions with robots might cultivate human empathy. Can interactions with a robot build human empathy and help it to become more resilient and reliable?","PeriodicalId":10802,"journal":{"name":"Current Directions in Psychological Science","volume":"53 1","pages":""},"PeriodicalIF":7.2,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metacognition and Uncertainty Communication in Humans and Large Language Models
Pub Date: 2025-11-19 | DOI: 10.1177/09637214251391158
Mark Steyvers, Megan A. K. Peters
Metacognition—the capacity to monitor and evaluate one’s own knowledge and performance—is foundational to human decision-making, learning, and communication. As large language models (LLMs) become increasingly embedded in both high-stakes and widespread low-stakes contexts, it is important to assess whether, how, and to what extent they exhibit metacognitive abilities. Here, we provide an overview of the current knowledge of LLMs’ metacognitive capacities, how they might be studied, and how they relate to our knowledge of metacognition in humans. We show that although humans and LLMs can sometimes appear quite aligned in their metacognitive capacities and behaviors, it is clear that many differences remain; attending to these differences is important for enhancing collaboration between humans and artificial intelligence. Last, we discuss how endowing future LLMs with more sensitive and better calibrated metacognition may also help them develop new capacities such as more efficient learning, self-direction, and curiosity.
{"title":"Metacognition and Uncertainty Communication in Humans and Large Language Models","authors":"Mark Steyvers, Megan A. K. Peters","doi":"10.1177/09637214251391158","DOIUrl":"https://doi.org/10.1177/09637214251391158","url":null,"abstract":"Metacognition—the capacity to monitor and evaluate one’s own knowledge and performance—is foundational to human decision-making, learning, and communication. As large language models (LLMs) become increasingly embedded in both high-stakes and widespread low-stakes contexts, it is important to assess whether, how, and to what extent they exhibit metacognitive abilities. Here, we provide an overview of the current knowledge of LLMs’ metacognitive capacities, how they might be studied, and how they relate to our knowledge of metacognition in humans. We show that although humans and LLMs can sometimes appear quite aligned in their metacognitive capacities and behaviors, it is clear many differences remain; attending to these differences is important for enhancing the collaboration between humans and artificial intelligence. Last, we discuss how endowing future LLMs with more sensitive and more calibrated metacognition may also help them develop new capacities such as more efficient learning, self-direction, and curiosity.","PeriodicalId":10802,"journal":{"name":"Current Directions in Psychological Science","volume":"24 1","pages":""},"PeriodicalIF":7.2,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145545726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Framework for Automation in Psychotherapy
Pub Date: 2025-11-08 | DOI: 10.1177/09637214251386047
Zac E. Imel, Torrey Creed, Brent Kious, Tim Althoff, Dana Atzil-Slonim, Vivek Srikumar
Psychotherapy is a conversational intervention that has relied on humans to manage its implementation. Improvements in conversational artificial intelligence (AI) have been accompanied by speculation about how technologies might automate components of psychotherapy, most often by replacing human therapists. However, there is a spectrum of opportunities for human collaboration with autonomous systems in psychotherapy, including evaluation, documentation, training, and assistance. Clarity about what is being automated is necessary to understand the affordances and limitations of specific technologies. In this article, we present a framework for categories of autonomous systems in psychotherapy as a guidepost for empirical and ethical inquiry. Categories include scripted or rule-based conversations; collaborative systems in which humans are evaluated by, supervise, or are assisted by AI; and agents that generate interventions. These categories highlight considerations for key stakeholders as psychotherapy moves from unmediated human-to-human conversation to various forms of automation.
{"title":"A Framework for Automation in Psychotherapy","authors":"Zac E. Imel, Torrey Creed, Brent Kious, Tim Althoff, Dana Atzil-Slonim, Vivek Srikumar","doi":"10.1177/09637214251386047","DOIUrl":"https://doi.org/10.1177/09637214251386047","url":null,"abstract":"Psychotherapy is a conversational intervention that has relied on humans to manage its implementation. Improvements in conversational artificial intelligence (AI) have accompanied speculation on how technologies might automate components of psychotherapy, most often the replacement of human therapists. However, there is a spectrum of opportunities for human collaboration with autonomous systems in psychotherapy, including evaluation, documentation, training, and assistance. Clarity about what is being automated is necessary to understand the affordances and limitations of specific technologies. In this article we present a framework for categories of autonomous systems in psychotherapy as a guidepost for empirical and ethical inquiry. Categories include scripted or rule-based conversations; collaborative systems in which humans are evaluated by, supervise, or are assisted by AI; and agents that generate interventions. These categories highlight considerations for key stakeholders as psychotherapy moves from unmediated human-to-human conversation to various forms of automation.","PeriodicalId":10802,"journal":{"name":"Current Directions in Psychological Science","volume":"80 1","pages":""},"PeriodicalIF":7.2,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145472994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}