Editorial: Do we really interact with artificial agents as if they are human?

Evelien Heyselaar, N. Caruana, Mincheol Shin, L. Schilbach, Emily S. Cross

Frontiers in Virtual Reality, 3 May 2023. DOI: 10.3389/frvir.2023.1201385
{"title":"Editorial: Do we really interact with artificial agents as if they are human?","authors":"Evelien Heyselaar, N. Caruana, Mincheol Shin, L. Schilbach, Emily S. Cross","doi":"10.3389/frvir.2023.1201385","DOIUrl":null,"url":null,"abstract":"Social interactions with artificial agents, such as voice agents, physically-embodied robots and avatars in virtual reality, are becoming increasingly normalised. As we strive to understand and optimise these social interactions–and human interactions in general–a pertinent question is: Do we really interact with artificial agents as if they are human?Awealth of related questions that are ripe for exploration concern the factors or conditions that might make this more or less likely. In this Research Topic, we propose that this line of empirical enquiry is important, not only in informing how we can best design and position artificial agents in various applied contexts (e.g., education, entertainment, healthcare delivery), but also so we can inform how artificial agents can continue to be used as a valid tool in human social neuroscience research. Over the past decade, artificial agents have become a critical tool in experimental social neuroscience. In particular, virtual agent and virtual interaction paradigms have enabled social neuroscientists to achieve a balance between the need for 1) ecological validity on the one hand, with paradigms that capture the dynamic and reciprocal complexity of social interactions; and 2) experimental control and objectivity, with the ability to deploy paradigms in controlled laboratory and neuroimaging settings (that are typically designed to test one person at a time), with objective measures of social attention, behaviour and corresponding neural processes. Historically, studies of human social interaction have either used naturalistic and observational approaches that achieve 1) but not 2), or contrived and simplistic experimental studies–typically involving the passive observation of social information from a third person perspective–that achieve 2) but not 1). Recent calls for more interactive, second person neuroscience approaches have been met with the use of artificial agents and virtual interaction paradigms (Schilbach et al., 2013; Caruana et al., 2017c). Across this nascent body of research, it has largely been assumed that the neural, cognitive, and psychological mechanisms supporting social interactions between humans flexibly generalize to interactions with artificial agents and that they therefore can provide an ecologically-valid analogue for investigating these mechanisms. 
However, emerging research has highlighted that there are many factors, such as agent features (Cross and Ramsey, 2021; Henschel et al., 2021; Marchesi et al., 2021) or our beliefs and expectations about the agency and intentions of artificial agents (Klapper et al., 2014; Cross et al., 2016; Caruana et al., 2017a; Caruana et al., 2017b; Caruana and OPEN ACCESS","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2023-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in virtual reality","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frvir.2023.1201385","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Abstract
Social interactions with artificial agents, such as voice agents, physically embodied robots, and avatars in virtual reality, are becoming increasingly normalised. As we strive to understand and optimise these social interactions, and human interactions in general, a pertinent question is: do we really interact with artificial agents as if they are human? A wealth of related questions that are ripe for exploration concern the factors or conditions that might make this more or less likely. In this Research Topic, we propose that this line of empirical enquiry is important, not only to inform how we can best design and position artificial agents in various applied contexts (e.g., education, entertainment, healthcare delivery), but also to establish whether artificial agents can continue to be used as a valid tool in human social neuroscience research.

Over the past decade, artificial agents have become a critical tool in experimental social neuroscience. In particular, virtual agent and virtual interaction paradigms have enabled social neuroscientists to balance 1) ecological validity, using paradigms that capture the dynamic and reciprocal complexity of social interactions, with 2) experimental control and objectivity, deploying paradigms in controlled laboratory and neuroimaging settings (typically designed to test one person at a time) with objective measures of social attention, behaviour, and corresponding neural processes. Historically, studies of human social interaction have either used naturalistic and observational approaches that achieve 1) but not 2), or contrived and simplistic experimental studies, typically involving the passive observation of social information from a third-person perspective, that achieve 2) but not 1). Recent calls for more interactive, second-person neuroscience approaches have been met with the use of artificial agents and virtual interaction paradigms (Schilbach et al., 2013; Caruana et al., 2017c). Across this nascent body of research, it has largely been assumed that the neural, cognitive, and psychological mechanisms supporting social interactions between humans generalize flexibly to interactions with artificial agents, and that such agents can therefore provide an ecologically valid analogue for investigating these mechanisms.

However, emerging research has highlighted many factors, such as agent features (Cross and Ramsey, 2021; Henschel et al., 2021; Marchesi et al., 2021) and our beliefs and expectations about the agency and intentions of artificial agents (Klapper et al., 2014; Cross et al., 2016; Caruana et al., 2017a; Caruana et al., 2017b), that influence whether we interact with artificial agents as if they are human.