
Latest publications from Computers in Human Behavior: Artificial Humans

Social robots are good for me, but better for other people: The presumed allo-enhancement effect of social robot perceptions
Pub Date : 2024-07-02 DOI: 10.1016/j.chbah.2024.100079
Xun Sunny Liu , Jeff Hancock

This research proposes and investigates the presumed allo-enhancement effect of social robot perceptions, a tendency for individuals to view social robots as more beneficial for others than for themselves. We discuss this as a systematic bias in the perception of the utility of social robots. Through two survey studies, we test and replicate self-other perceptual differences, obtain effect sizes of these perceptual differences, and trace the impact of this presumed allo-enhancement effect on individuals' attitudes and behaviors. Analyses revealed strong perceptual differences, where individuals consistently believed social robots to be more enhancing for others than for themselves (d = −0.69, d = −0.62). These perceptual differences predicted individuals’ attitudes and endorsed behaviors towards social robots. By identifying this bias, we offer a new theoretical lens for understanding how people perceive and respond to emergent technologies.

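Assuming a within-subjects (paired) design, a minimal sketch of how a self-other effect size like the reported d = −0.69 and d = −0.62 could be computed; the ratings and sample size below are illustrative, not the study's data:

```python
# Illustrative paired-samples Cohen's d for self-vs-other ratings.
# The numbers are made up; only the computation is the point.
from statistics import mean, stdev

self_ratings  = [3.0, 2.5, 4.0, 3.5, 2.0, 3.0]   # "good for me"
other_ratings = [4.0, 3.5, 4.5, 4.0, 3.0, 4.5]   # "good for others"

# Paired differences; a negative mean means others are rated higher.
diffs = [s - o for s, o in zip(self_ratings, other_ratings)]
d = mean(diffs) / stdev(diffs)  # Cohen's d for paired samples (d_z)
print(round(d, 2))
```

With these illustrative numbers the sign comes out negative, matching the direction of the reported effect (others rated as benefiting more than the self).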
Citations: 0
Do realistic avatars make virtual reality better? Examining human-like avatars for VR social interactions
Pub Date : 2024-07-02 DOI: 10.1016/j.chbah.2024.100082
Alan D. Fraser, Isabella Branson, Ross C. Hollett, Craig P. Speelman, Shane L. Rogers
Citations: 0
“Naughty Japanese Babe:” An analysis of racialized sex tech designs
Pub Date : 2024-07-02 DOI: 10.1016/j.chbah.2024.100080
Kenneth R. Hanson , Chloé Locatelli PhD

Recent technological developments and the growing acceptance of sex tech have brought increased scholarly attention to sex tech entrepreneurs, personified sex tech devices and applications, and the adult industry. Drawing on qualitative case studies of a sex doll brothel named “Cybrothel” and the virtual entertainer, or “V-Tuber,” known as Projekt Melody, as well as quantitative sex doll advertisement data, this study examines the racialization of personified sex technologies. Attention to the racialization of personified sex tech is long overdue: much scholarship to date has focused on how sex tech reproduces specific gendered meanings, despite decades of intersectional feminist scholarship demonstrating that gendered and racialized meanings are mutually constituted. General trends in the industry are shown, but particular emphasis is placed on the overrepresentation of Asianized femininity in personified sex tech industries.

Citations: 0
Feasibility assessment of using ChatGPT for training case conceptualization skills in psychological counseling
Pub Date : 2024-07-02 DOI: 10.1016/j.chbah.2024.100083
Lih-Horng Hsieh , Wei-Chou Liao , En-Yu Liu

This study investigates the feasibility and effectiveness of using ChatGPT for training case conceptualization skills in psychological counseling. The novelty of this research lies in the application of an AI-based model, ChatGPT, to enhance the professional development of prospective counselors, particularly in the realm of case conceptualization—a core competence in psychotherapy. Traditional training methods are often limited by time and resources, while ChatGPT offers a scalable and interactive alternative. Through a single-blind assessment, this study explores the accuracy, completeness, feasibility, and consistency of OpenAI's ChatGPT for case conceptualization in psychological counseling. Results show that using ChatGPT to generate case conceptualizations is acceptable in terms of accuracy, completeness, feasibility, and consistency, as evaluated by experts. Therefore, counseling educators can encourage trainees to use ChatGPT as an auxiliary method for developing case conceptualization skills during supervision. The social implications of this research are significant, as the integration of AI in psychological counseling could address the growing need for mental health services and support. By improving the accuracy and efficiency of case conceptualization, ChatGPT can contribute to better counseling outcomes, potentially reducing the societal burden of mental health issues. Moreover, the use of AI in this context prompts important discussions on ethical considerations and the evolving role of technology in human services. Overall, this study highlights the potential of ChatGPT to serve as a valuable tool in counselor training, ultimately aiming to enhance the quality and accessibility of psychological support services.

Citations: 0
AI in relationship counselling: Evaluating ChatGPT's therapeutic capabilities in providing relationship advice
Pub Date : 2024-06-21 DOI: 10.1016/j.chbah.2024.100078
Laura M. Vowels , Rachel R.R. Francois-Walcott , Joëlle Darwiche

Recent advancements in AI have led to chatbots, such as ChatGPT, capable of providing therapeutic responses. Early research evaluating chatbots' ability to provide relationship advice and single-session relationship interventions has shown that both laypeople and relationship therapists rate them highly on attributes such as empathy and helpfulness. In the present study, 20 participants engaged in a single-session relationship intervention with ChatGPT and were interviewed about their experiences. We evaluated the performance of ChatGPT on technical outcomes, such as error rate and linguistic accuracy, and on therapeutic quality, such as empathy and therapeutic questioning. The interviews were analysed using reflexive thematic analysis, which generated four themes: light at the end of the tunnel; clearing the fog; clinical skills; and therapeutic setting. The analyses of technical and feasibility outcomes, as coded by researchers and perceived by users, show that ChatGPT provides a realistic single-session intervention: it was consistently rated highly on attributes such as therapeutic skills, human-likeness, exploration, and usability, and it provided clarity and next steps for users' relationship problems. Limitations include a poor assessment of risk and difficulty reaching collaborative solutions with the participant. This study extends AI acceptance theories and highlights the potential capabilities of ChatGPT in providing relationship advice and support.

Citations: 0
The gendered nature of AI: Men and masculinities through the lens of ChatGPT and GPT4
Pub Date : 2024-06-21 DOI: 10.1016/j.chbah.2024.100076
Andreas Walther , Flora Logoz , Lukas Eggenberger

Because artificial intelligence-powered language models such as the GPT series have most certainly come to stay and will permanently change the way individuals all over the world access information and form opinions, there is a need to highlight potential risks for the understanding and perception of men and masculinities. It is important to understand whether ChatGPT or its successor versions, such as GPT4, are biased—and if so, in which direction and to what degree. In the specific research field on men and masculinities, it seems paramount to understand the grounds upon which these language models respond to seemingly simple questions such as “What is a man?” or “What is masculine?”. In the following, we provide interactions with ChatGPT and GPT4 in which we asked such questions, in an effort to better understand the quality and potential biases of their answers. We then critically reflect on the output of ChatGPT, compare it to the output of GPT4, and draw conclusions for future actions.

Citations: 0
Exploring people's perceptions of LLM-generated advice
Pub Date : 2024-06-07 DOI: 10.1016/j.chbah.2024.100072
Joel Wester, Sander de Jong, Henning Pohl, Niels van Berkel

When searching and browsing the web, more and more of the information we encounter is generated or mediated by large language models (LLMs)—whether we are looking for a recipe, getting help on an essay, or seeking relationship advice. Yet, there is limited understanding of how individuals perceive advice provided by these LLMs. In this paper, we explore people's perception of LLM-generated advice, and what role diverse user characteristics (i.e., personality and technology readiness) play in shaping their perception. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess the perceived creepiness of such advice. To investigate this, we ran an exploratory study (N = 91) in which participants rated advice in different styles (generated by GPT-3.5 Turbo). Notably, our findings suggest that individuals who identify as more agreeable tend to like the advice more and find it more useful. Further, individuals with higher technological insecurity are more likely to follow the advice, find it more useful, and deem it more likely that a friend could have given it. Lastly, we see that advice given in a ‘skeptical’ style was rated most unpredictable, and advice given in a ‘whimsical’ style was rated least malicious—indicating that LLM advice styles influence user perceptions. Our results also provide an overview of people's considerations on likelihood, receptiveness, and what advice they are likely to seek from these digital assistants. Based on our results, we provide design takeaways for LLM-generated advice and outline future research directions to further inform the design of LLM-generated advice for support applications targeting people with diverse expectations and needs.

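One way a style manipulation like the study's could be operationalized is with style-conditioned prompt templates fed to the model; a hypothetical sketch (the instruction wording and function name are invented for illustration, not the study's materials):

```python
# Hypothetical style-conditioned prompts in the spirit of the study's
# advice styles (e.g. 'skeptical', 'whimsical'). Wording is invented.
ADVICE_STYLES = {
    "skeptical": "Give advice cautiously, questioning the asker's assumptions.",
    "whimsical": "Give advice playfully, with a light, fanciful tone.",
}

def build_prompt(style: str, question: str) -> str:
    """Prepend the style instruction to the user's question."""
    instruction = ADVICE_STYLES[style]
    return f"{instruction}\n\nQuestion: {question}\nAdvice:"

print(build_prompt("whimsical", "How do I ask a colleague for help?"))
```

Each template would then be sent to the generation model (GPT-3.5 Turbo in the study), holding the question constant so that only the style varies across conditions.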
Citations: 0
Are chatbots the new relationship experts? Insights from three studies
Pub Date : 2024-06-07 DOI: 10.1016/j.chbah.2024.100077
Laura M. Vowels

Relationship distress is among the most important predictors of individual distress. Over one in three couples report distress in their relationships, yet couples only rarely seek help from couple therapists, preferring instead to seek information and advice online. Recent breakthroughs in the development of humanlike, artificial intelligence-powered chatbots such as ChatGPT have made it possible to develop chatbots that respond therapeutically. Early research suggests that they outperform physicians in helpfulness and empathy when answering health-related questions. However, we do not yet know how well chatbots respond to questions about relationships. Across three studies, we evaluated the performance of chatbots in responding to relationship-related questions and in engaging in single-session relationship therapy. In Studies 1 and 2, we demonstrated that chatbots are perceived as more helpful and empathic than relationship experts, and in Study 3, we showed that relationship therapists rate single sessions with a chatbot highly on attributes such as empathy, active listening, and exploration. Limitations include repetitive responding and inadequate assessment of risk. The findings show the potential of using chatbots in relationship support and highlight the limitations which need to be addressed before they can be safely adopted for interventions.

Citations: 0
Am I still human? Wearing an exoskeleton impacts self-perceptions of warmth, competence, attractiveness, and machine-likeness
Pub Date : 2024-05-31 DOI: 10.1016/j.chbah.2024.100073
Sandra Maria Siedl, Martina Mara

Occupational exoskeletons are body-worn technologies capable of enhancing a wearer's naturally given strength at work. Despite increasing interest in their physical effects, their implications for user self-perception have been largely overlooked. Addressing common concerns about body-enhancing technologies, our study explored how real-world use of a robotic exoskeleton affects a wearer's mechanistic dehumanization and perceived attractiveness of the self. In a within-subjects laboratory experiment, n = 119 participants performed various practical work tasks (carrying, screwing, riveting) with and without the Ironhand active hand exoskeleton. After each condition, they completed a questionnaire. We expected that in the exoskeleton condition self-perceptions of warmth and attractiveness would be less pronounced and self-perceptions of being competent and machine-like would be more pronounced. Study data supported these hypotheses and showed perceived competence, machine-likeness, and attractiveness to be relevant to technology acceptance. Our findings provide the first evidence that body-enhancement technologies may be associated with tendencies towards self-dehumanization, and underline the multifaceted role of exoskeleton-induced competence gain. By examining user self-perceptions that relate to mechanistic dehumanization and aesthetic appeal, our research highlights the need to better understand psychological impacts of exoskeletons on human wearers.
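The within-subjects design described above compares each wearer's self-ratings with and without the exoskeleton. For such paired data, a common effect size divides the mean per-participant difference by the standard deviation of the differences. A minimal sketch with hypothetical ratings (not the study's data):

```python
import statistics

def paired_cohens_d(cond_a, cond_b):
    """Cohen's d for a within-subjects design: mean difference / SD of differences."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

# Hypothetical 1-7 self-rated warmth per participant (illustrative only):
with_exo    = [3, 4, 3, 5, 4, 3, 4, 2]
without_exo = [5, 5, 4, 6, 5, 4, 6, 4]
print(round(paired_cohens_d(with_exo, without_exo), 2))  # → -2.66
```

A negative value here would mean lower self-rated warmth while wearing the exoskeleton, matching the direction of the hypothesis in the abstract.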

Citations: 0
On trust in humans and trust in artificial intelligence: A study with samples from Singapore and Germany extending recent research
Pub Date : 2024-05-10 DOI: 10.1016/j.chbah.2024.100070
Christian Montag , Benjamin Becker , Benjamin J. Li

The AI revolution is shaping societies around the world. People interact daily with a growing number of products and services that feature AI integration. Rapid developments in AI will no doubt bring positive outcomes, but also challenges. In this realm it is important to understand whether people trust this omni-use technology, because trust is an essential prerequisite for being willing to use AI products, and this in turn likely affects how much AI will be embraced by national economies, with consequences for local work forces. To shed more light on trusting AI, the present work aims to understand how much the variables trust in AI and trust in humans overlap. This is important because much is already known about trust in humans, and if the concepts overlap, much of our understanding of trust in humans might transfer to trusting AI. In samples from Singapore (n = 535) and Germany (n = 954) we observed varying degrees of positive relations between the trust in AI/humans variables. Whereas trust in AI/humans showed a small positive association in Germany, there was a moderate positive association in Singapore. Further, this paper revisits associations between individual differences in the Big Five of Personality and general attitudes towards AI, including trust.

The present work shows that trust in humans and trust in AI share only small amounts of variance, but this depends on culture (varying here from about 4 to 11 percent of shared variance). Future research should further investigate such associations but by also considering assessments of trust in specific AI-empowered-products and AI-empowered-services, where things might be different.
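Shared variance between two measures is the square of their Pearson correlation, so the 4 to 11 percent reported above corresponds to correlations of roughly r = 0.20 to 0.33. A small sketch of the conversion (illustrative only, not the study's computation):

```python
def shared_variance_to_r(r_squared):
    """Convert shared variance (r^2) back to the Pearson correlation magnitude."""
    return r_squared ** 0.5

# 4% and 11% shared variance correspond to modest correlations:
for pct in (0.04, 0.11):
    print(f"{pct:.0%} shared variance -> r = {shared_variance_to_r(pct):.2f}")
# → 4% shared variance -> r = 0.20
# → 11% shared variance -> r = 0.33
```

Note that the square root recovers only the magnitude of r, not its sign; the abstract's "positive relations" indicates the correlations were positive.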

Citations: 0