{"title":"Beyond humanism: telling response-able stories about significant otherness in human-chatbot relations.","authors":"Michael Holohan, Ruth Müller","doi":"10.3389/fpsyg.2024.1357572","DOIUrl":null,"url":null,"abstract":"<p><p>AI-enabled chatbots intended to build social relations with humans are becoming increasingly common in the marketplace, with millions of registered users using these chatbots as virtual companions or therapists. These chatbots make use of what is often called the \"Eliza effect\"-the tendency of human users to attribute human-like knowledge and understanding to a computer program. A common interpretation of this phenomenon is to consider this form of relating in terms of delusion, error, or deception, where the user misunderstands or forgets they are talking to a computer. As an alternative, we draw on the work of feminist Science and Technology Studies scholars as providing a robust and capacious tradition of thinking and engaging with human-nonhuman relationships in non-reductive ways. We closely analyze two different stories about encounters with chatbots, taking up the feminist STS challenge to attend to the agency of significant otherness in the encounter. The first is Joseph Weizenbaum's story about rejecting the ELIZA chatbot technology he designed to mimic a therapist as a monstrosity, based on his experiences watching others engage with it. The second is a story about Julie, who experiences a mental health crisis, and her chatbot Navi, as told through her descriptions of her experiences with Navi in the recent podcast <i>Radiotopia presents: Bot Love</i>. We argue that a reactionary humanist narrative, as presented by Weizenbaum, is incapable of attending to the possibilities of pleasure, play, or even healing that might occur in human-chatbot relatings. Other forms of engaging with, understanding, and making sense of this new technology and its potentialities are needed both in research and mental health practice, particularly as more and more patients will begin to use these technologies alongside engaging in traditional human-led psychotherapy.</p>","PeriodicalId":12525,"journal":{"name":"Frontiers in Psychology","volume":"15 ","pages":"1357572"},"PeriodicalIF":2.6000,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543445/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Psychology","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3389/fpsyg.2024.1357572","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
Cited by: 0
Abstract
AI-enabled chatbots intended to build social relations with humans are becoming increasingly common in the marketplace, with millions of registered users turning to these chatbots as virtual companions or therapists. These chatbots make use of what is often called the "Eliza effect": the tendency of human users to attribute human-like knowledge and understanding to a computer program. A common interpretation of this phenomenon is to consider this form of relating in terms of delusion, error, or deception, in which the user misunderstands or forgets that they are talking to a computer. As an alternative, we draw on the work of feminist Science and Technology Studies (STS) scholars, which provides a robust and capacious tradition of thinking about and engaging with human-nonhuman relationships in non-reductive ways. We closely analyze two different stories about encounters with chatbots, taking up the feminist STS challenge to attend to the agency of significant otherness in the encounter. The first is Joseph Weizenbaum's story of rejecting, as a monstrosity, the ELIZA chatbot technology he designed to mimic a therapist, based on his experiences watching others engage with it. The second is the story of Julie, who experiences a mental health crisis, and her chatbot Navi, as told through her descriptions of her experiences with Navi in the recent podcast Radiotopia presents: Bot Love. We argue that a reactionary humanist narrative, such as the one Weizenbaum presents, is incapable of attending to the possibilities of pleasure, play, or even healing that might occur in human-chatbot relatings. Other forms of engaging with, understanding, and making sense of this new technology and its potentialities are needed in both research and mental health practice, particularly as more and more patients begin to use these technologies alongside traditional human-led psychotherapy.
About the journal
Frontiers in Psychology is the largest journal in its field, publishing rigorously peer-reviewed research across the psychological sciences, from clinical research to cognitive science, from perception to consciousness, from imaging studies to human factors, and from animal cognition to social psychology. Field Chief Editor Axel Cleeremans at the Free University of Brussels is supported by an outstanding Editorial Board of international researchers. This multidisciplinary open-access journal is at the forefront of disseminating and communicating scientific knowledge and impactful discoveries to researchers, academics, clinicians, and the public worldwide. The journal publishes the best research across the entire field of psychology. Today, psychological science is becoming increasingly important at all levels of society, from the treatment of clinical disorders to our basic understanding of how the mind works. It is highly interdisciplinary, borrowing questions from philosophy, methods from neuroscience, and insights from clinical practice, all with the goal of furthering our grasp of human nature and society, as well as our ability to develop new intervention methods.