Pub Date: 2024-07-02 | DOI: 10.1016/j.chbah.2024.100082
Do realistic avatars make virtual reality better? Examining human-like avatars for VR social interactions
Alan D. Fraser, Isabella Branson, Ross C. Hollett, Craig P. Speelman, Shane L. Rogers
{"title":"Do realistic avatars make virtual reality better? Examining human-like avatars for VR social interactions","authors":"Alan D. Fraser, Isabella Branson, Ross C. Hollett, Craig P. Speelman, Shane L. Rogers","doi":"10.1016/j.chbah.2024.100082","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100082","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100082"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000422/pdfft?md5=1eeb2a30b6d620464af52d1066c159d7&pid=1-s2.0-S2949882124000422-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141541427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-02 | DOI: 10.1016/j.chbah.2024.100080
“Naughty Japanese Babe:” An analysis of racialized sex tech designs
Kenneth R. Hanson, Chloé Locatelli PhD
Recent technological developments and the growing acceptance of sex tech have brought increased scholarly attention to sex tech entrepreneurs, personified sex tech devices and applications, and the adult industry. Drawing on qualitative case studies of a sex doll brothel named “Cybrothel” and the virtual entertainer, or “V-Tuber,” known as Projekt Melody, as well as quantitative sex doll advertisement data, this study examines the racialization of personified sex technologies. Attention to the racialization of personified sex tech is long overdue: much scholarship to date has focused on how sex tech reproduces specific gendered meanings, despite decades of intersectional feminist scholarship demonstrating that gendered and racialized meanings are mutually constituted. General trends in the industry are shown, with particular emphasis placed on the overrepresentation of Asianized femininity in personified sex tech industries.
{"title":"“Naughty Japanese Babe:” An analysis of racialized sex tech designs","authors":"Kenneth R. Hanson , Chloé Locatelli PhD","doi":"10.1016/j.chbah.2024.100080","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100080","url":null,"abstract":"<div><p>Recent technological developments and growing acceptance of sex tech has brought increased scholarly attention to sex tech entrepreneurs, personified sex tech devices and applications, and the adult industry. Drawing on qualitative case studies of a sex doll brothel named “Cybrothel” and the virtual entertainer, or “V-Tuber,” known as Projekt Melody, as well as quantitative sex doll advertisement data, this study examines the racialization of personified sex technologies. Bringing attention to the racialization of personified sex tech is long overdue, as much scholarship to date has focused on how sex tech reproduces specific gendered meanings, despite decades of intersectional feminist scholarship demonstrating that gendered and racialized meanings are mutually constituted. General trends in the industry are shown, but particular emphasis is placed on the overrepresentation of Asianized femininity among personified sex tech industries.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100080"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000409/pdfft?md5=ed1675bc2b43859a5c660ea84708964a&pid=1-s2.0-S2949882124000409-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141541428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-02 | DOI: 10.1016/j.chbah.2024.100083
Feasibility assessment of using ChatGPT for training case conceptualization skills in psychological counseling
Lih-Horng Hsieh, Wei-Chou Liao, En-Yu Liu
This study investigates the feasibility and effectiveness of using ChatGPT for training case conceptualization skills in psychological counseling. The novelty of this research lies in the application of an AI-based model, ChatGPT, to enhance the professional development of prospective counselors, particularly in the realm of case conceptualization—a core competence in psychotherapy. Traditional training methods are often limited by time and resources, while ChatGPT offers a scalable and interactive alternative. Through a single-blind assessment, this study explores the accuracy, completeness, feasibility, and consistency of OpenAI's ChatGPT for case conceptualization in psychological counseling. Results show that using ChatGPT to generate case conceptualizations is acceptable in terms of accuracy, completeness, feasibility, and consistency, as evaluated by experts. Therefore, counseling educators can encourage trainees to use ChatGPT as an auxiliary method for developing case conceptualization skills during supervision processes. The social implications of this research are significant, as the integration of AI in psychological counseling could address the growing need for mental health services and support. By improving the accuracy and efficiency of case conceptualization, ChatGPT can contribute to better counseling outcomes, potentially reducing the societal burden of mental health issues. Moreover, the use of AI in this context prompts important discussions on ethical considerations and the evolving role of technology in human services. Overall, this study highlights the potential of ChatGPT to serve as a valuable tool in counselor training, ultimately aiming to enhance the quality and accessibility of psychological support services.
{"title":"Feasibility assessment of using ChatGPT for training case conceptualization skills in psychological counseling","authors":"Lih-Horng Hsieh , Wei-Chou Liao , En-Yu Liu","doi":"10.1016/j.chbah.2024.100083","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100083","url":null,"abstract":"<div><p>This study investigates the feasibility and effectiveness of using ChatGPT for training case conceptualization skills in psychological counseling. The novelty of this research lies in the application of an AI-based model, ChatGPT, to enhance the professional development of prospective counselors, particularly in the realm of case conceptualization—a core competence in psychotherapy. Traditional training methods are often limited by time and resources, while ChatGPT offers a scalable and interactive alternative. Through a single-blind assessment, this study explores the accuracy, completeness, feasibility, and consistency of OpenAI's ChatGPT for case conceptualization in psychological counseling. Results show that using ChatGPT for generating case conceptualization is acceptable in terms of accuracy, completeness, feasibility, and consistency, as evaluated by experts. Therefore, counseling educators can encourage trainees to use ChatGPT as auxiliary methods for developing case conceptualization skills during supervision processes. The social implications of this research are significant, as the integration of AI in psychological counseling could address the growing need for mental health services and support. By improving the accuracy and efficiency of case conceptualization, ChatGPT can contribute to better counseling outcomes, potentially reducing the societal burden of mental health issues. Moreover, the use of AI in this context prompts important discussions on ethical considerations and the evolving role of technology in human services. Overall, this study highlights the potential of ChatGPT to serve as a valuable tool in counselor training, ultimately aiming to enhance the quality and accessibility of psychological support services.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100083"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000434/pdfft?md5=10d95ea221c1a752e8cf6ff0aab8ba5e&pid=1-s2.0-S2949882124000434-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141541429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-21 | DOI: 10.1016/j.chbah.2024.100078
AI in relationship counselling: Evaluating ChatGPT's therapeutic capabilities in providing relationship advice
Laura M. Vowels, Rachel R.R. Francois-Walcott, Joëlle Darwiche
Recent advancements in AI have led to chatbots, such as ChatGPT, that are capable of providing therapeutic responses. Early research evaluating chatbots' ability to provide relationship advice and single-session relationship interventions has shown that both laypeople and relationship therapists rate them highly on attributes such as empathy and helpfulness. In the present study, 20 participants engaged in a single-session relationship intervention with ChatGPT and were interviewed about their experiences. We evaluated ChatGPT's performance on technical outcomes, such as error rate and linguistic accuracy, and on therapeutic qualities, such as empathy and therapeutic questioning. The interviews were analysed using reflexive thematic analysis, which generated four themes: light at the end of the tunnel; clearing the fog; clinical skills; and therapeutic setting. The analyses of technical and feasibility outcomes, as coded by researchers and perceived by users, show that ChatGPT provides a realistic single-session intervention: it was consistently rated highly on attributes such as therapeutic skills, human-likeness, exploration, and usability, and it provided clarity and next steps for users' relationship problems. Limitations include poor assessment of risk and difficulty reaching collaborative solutions with the participant. This study extends AI acceptance theories and highlights the potential capabilities of ChatGPT in providing relationship advice and support.
{"title":"AI in relationship counselling: Evaluating ChatGPT's therapeutic capabilities in providing relationship advice","authors":"Laura M. Vowels , Rachel R.R. Francois-Walcott , Joëlle Darwiche","doi":"10.1016/j.chbah.2024.100078","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100078","url":null,"abstract":"<div><p>Recent advancements in AI have led to chatbots, such as ChatGPT, capable of providing therapeutic responses. Early research evaluating chatbots' ability to provide relationship advice and single-session relationship interventions has showed that both laypeople and relationship therapists rate them high on attributed such as empathy and helpfulness. In the present study, 20 participants engaged in single-session relationship intervention with ChatGPT and were interviewed about their experiences. We evaluated the performance of ChatGPT comprising of technical outcomes such as error rate and linguistic accuracy and therapeutic quality such as empathy and therapeutic questioning. The interviews were analysed using reflexive thematic analysis which generated four themes: light at the end of the tunnel; clearing the fog; clinical skills; and therapeutic setting. The analyses of technical and feasibility outcomes, as coded by researchers and perceived by users, show ChatGPT provides realistic single-session intervention with it consistently rated highly on attributes such as therapeutic skills, human-likeness, exploration, and useability, and providing clarity and next steps for users’ relationship problem. Limitations include a poor assessment of risk and reaching collaborative solutions with the participant. This study extends on AI acceptance theories and highlights the potential capabilities of ChatGPT in providing relationship advice and support.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100078"},"PeriodicalIF":0.0,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000380/pdfft?md5=d4b9aa843c4d16b685ded5378e52197c&pid=1-s2.0-S2949882124000380-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141444216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-21 | DOI: 10.1016/j.chbah.2024.100076
The gendered nature of AI: Men and masculinities through the lens of ChatGPT and GPT4
Andreas Walther, Flora Logoz, Lukas Eggenberger
Because AI-powered language models such as the GPT series are almost certainly here to stay and will permanently change the way individuals all over the world access information and form opinions, there is a need to highlight potential risks for the understanding and perception of men and masculinities. It is important to understand whether ChatGPT or its successor versions such as GPT4 are biased – and if so, in which direction and to what degree. In the specific research field on men and masculinities, it seems paramount to understand the grounds upon which these language models respond to seemingly simple questions such as “What is a man?” or “What is masculine?”. In the following, we present interactions with ChatGPT and GPT4 in which we asked such questions, in an effort to better understand the quality and potential biases of their answers. We then critically reflect on the output of ChatGPT, compare it to the output of GPT4, and draw conclusions for future actions.
{"title":"The gendered nature of AI: Men and masculinities through the lens of ChatGPT and GPT4","authors":"Andreas Walther , Flora Logoz , Lukas Eggenberger","doi":"10.1016/j.chbah.2024.100076","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100076","url":null,"abstract":"<div><p>Because artificial intelligence powered language models such as the GPT series have most certainly come to stay and will permanently change the way individuals all over the world access information and form opinions, there is a need to highlight potential risks for the understanding and perception of men and masculinities. It is important to understand whether ChatGPT or its following versions such as GPT4 are biased – and if so, in which direction and to which degree. In the specific research field on men and masculinities, it seems paramount to understand the grounds upon which these language models respond to seemingly simple questions such as “What is a man?” or “What is masculine?”. In the following, we provide interactions with ChatGPT and GPT4 where we asked such questions, in an effort to better understand the quality and potential biases of the answers from ChatGPT and GPT4. We then critically reflect on the output by ChatGPT, compare it to the output by GPT4 and draw conclusions for future actions.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100076"},"PeriodicalIF":0.0,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000367/pdfft?md5=00f26a01ff331a51e5085db5eba8195a&pid=1-s2.0-S2949882124000367-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141486735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-07 | DOI: 10.1016/j.chbah.2024.100072
Exploring people's perceptions of LLM-generated advice
Joel Wester, Sander de Jong, Henning Pohl, Niels van Berkel
When searching and browsing the web, more and more of the information we encounter is generated or mediated by large language models (LLMs). This might be looking for a recipe, getting help on an essay, or seeking relationship advice. Yet, there is limited understanding of how individuals perceive the advice provided by these LLMs. In this paper, we explore people's perception of LLM-generated advice and the role that diverse user characteristics (i.e., personality and technology readiness) play in shaping that perception. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess the perceived creepiness of such advice. To investigate this, we run an exploratory study (N = 91) in which participants rate advice presented in different styles (generated by GPT-3.5 Turbo). Notably, our findings suggest that individuals who identify as more agreeable tend to like the advice more and find it more useful. Further, individuals with higher technological insecurity are more likely to follow the advice, find it more useful, and deem it more likely that a friend could have given it. Lastly, we see that advice given in a ‘skeptical’ style was rated most unpredictable, while advice given in a ‘whimsical’ style was rated least malicious—indicating that LLM advice styles influence user perceptions. Our results also provide an overview of people's considerations on likelihood, receptiveness, and what advice they are likely to seek from these digital assistants. Based on our results, we provide design takeaways for LLM-generated advice and outline future research directions to further inform the design of LLM-generated advice for support applications targeting people with diverse expectations and needs.
{"title":"Exploring people's perceptions of LLM-generated advice","authors":"Joel Wester, Sander de Jong, Henning Pohl, Niels van Berkel","doi":"10.1016/j.chbah.2024.100072","DOIUrl":"10.1016/j.chbah.2024.100072","url":null,"abstract":"<div><p>When searching and browsing the web, more and more of the information we encounter is generated or mediated through large language models (LLMs). This can be looking for a recipe, getting help on an essay, or looking for relationship advice. Yet, there is limited understanding of how individuals perceive advice provided by these LLMs. In this paper, we explore people's perception of LLM-generated advice, and what role diverse user characteristics (i.e., personality and technology readiness) play in shaping their perception. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess the perceived creepiness of such advice. To investigate this, we run an exploratory study (<em>N</em> = 91), where participants rate advice in different styles (generated by GPT-3.5 Turbo). Notably, our findings suggest that individuals who identify as more agreeable tend to like the advice more and find it more useful. Further, individuals with higher technological insecurity are more likely to follow and find the advice more useful, and deem it more likely that a friend could have given the advice. Lastly, we see that advice given in a ‘skeptical’ style was rated most unpredictable, and advice given in a ‘whimsical’ style was rated least malicious—indicating that LLM advice styles influence user perceptions. Our results also provide an overview of people's considerations on <em>likelihood</em>, <em>receptiveness</em>, and <em>what advice</em> they are likely to seek from these digital assistants. Based on our results, we provide design takeaways for LLM-generated advice and outline future research directions to further inform the design of LLM-generated advice for support applications targeting people with diverse expectations and needs.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100072"},"PeriodicalIF":0.0,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212400032X/pdfft?md5=ed36391afd77ad6dce64841705e4cd1b&pid=1-s2.0-S294988212400032X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141390960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-07 | DOI: 10.1016/j.chbah.2024.100077
Are chatbots the new relationship experts? Insights from three studies
Laura M. Vowels
Relationship distress is among the most important predictors of individual distress. Over one in three couples report relationship distress, yet couples only rarely seek help from couple therapists and instead prefer to seek information and advice online. Recent breakthroughs in the development of humanlike, artificial intelligence-powered chatbots such as ChatGPT have made it possible to develop chatbots that respond therapeutically. Early research suggests that they outperform physicians in helpfulness and empathy when answering health-related questions. However, we do not yet know how well chatbots respond to questions about relationships. Across three studies, we evaluated the performance of chatbots in responding to relationship-related questions and in engaging in a single-session relationship therapy. In Studies 1 and 2, we demonstrated that chatbots are perceived as more helpful and empathic than relationship experts, and in Study 3, we showed that relationship therapists rate single sessions with a chatbot highly on attributes such as empathy, active listening, and exploration. Limitations include repetitive responding and inadequate assessment of risk. The findings show the potential of using chatbots in relationship support and highlight the limitations that need to be addressed before they can be safely adopted for interventions.
{"title":"Are chatbots the new relationship experts? Insights from three studies","authors":"Laura M. Vowels","doi":"10.1016/j.chbah.2024.100077","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100077","url":null,"abstract":"<div><p>Relationship distress is among the most important predictors of individual distress. Over one in three couples report distress in relationships but despite the distress, couples only rarely seek help from couple therapists and instead prefer to seek information and advice online. The recent breakthroughs in the development of humanlike artificial intelligence-powered chatbots such as ChatGPT have recently made it possible to develop chatbots which respond therapeutically. Early research suggests that they outperform physicians in helpfulness and empathy in answering health-related questions. However, we do not yet know how well chatbots respond to questions about relationships. Across three studies, we evaluated the performance of chatbots in responding to relationship-related questions and in engaging in a single session relationship therapy. In Studies 1 and 2, we demonstrated that chatbots are perceived as more helpful and empathic than relationship experts and in Study 3, we showed that relationship therapists rate single sessions with a chatbot high on attributes such as empathy, active listening, and exploration. Limitations include repetitive responding and inadequate assessment of risk. The findings show the potential of using chatbots in relationship support and highlight the limitations which need to be addressed before they can be safely adopted for interventions.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100077"},"PeriodicalIF":0.0,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000379/pdfft?md5=dfd93f67d4fda22de40804a5b5727726&pid=1-s2.0-S2949882124000379-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141298251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-31 | DOI: 10.1016/j.chbah.2024.100073
Am I still human? Wearing an exoskeleton impacts self-perceptions of warmth, competence, attractiveness, and machine-likeness
Sandra Maria Siedl, Martina Mara
Occupational exoskeletons are body-worn technologies capable of enhancing a wearer's naturally given strength at work. Despite increasing interest in their physical effects, their implications for user self-perception have been largely overlooked. Addressing common concerns about body-enhancing technologies, our study explored how real-world use of a robotic exoskeleton affects a wearer's mechanistic dehumanization and perceived attractiveness of the self. In a within-subjects laboratory experiment, n = 119 participants performed various practical work tasks (carrying, screwing, riveting) with and without the Ironhand active hand exoskeleton. After each condition, they completed a questionnaire. We expected that in the exoskeleton condition self-perceptions of warmth and attractiveness would be less pronounced and self-perceptions of being competent and machine-like would be more pronounced. Study data supported these hypotheses and showed perceived competence, machine-likeness, and attractiveness to be relevant to technology acceptance. Our findings provide the first evidence that body-enhancement technologies may be associated with tendencies towards self-dehumanization, and underline the multifaceted role of exoskeleton-induced competence gain. By examining user self-perceptions that relate to mechanistic dehumanization and aesthetic appeal, our research highlights the need to better understand psychological impacts of exoskeletons on human wearers.
{"title":"Am I still human? Wearing an exoskeleton impacts self-perceptions of warmth, competence, attractiveness, and machine-likeness","authors":"Sandra Maria Siedl, Martina Mara","doi":"10.1016/j.chbah.2024.100073","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100073","url":null,"abstract":"<div><p>Occupational exoskeletons are body-worn technologies capable of enhancing a wearer's naturally given strength at work. Despite increasing interest in their physical effects, their implications for user self-perception have been largely overlooked. Addressing common concerns about body-enhancing technologies, our study explored how real-world use of a robotic exoskeleton affects a wearer's mechanistic dehumanization and perceived attractiveness of the self. In a within-subjects laboratory experiment, n = 119 participants performed various practical work tasks (carrying, screwing, riveting) with and without the <em>Ironhand</em> active hand exoskeleton. After each condition, they completed a questionnaire. We expected that in the exoskeleton condition self-perceptions of warmth and attractiveness would be less pronounced and self-perceptions of being competent and machine-like would be more pronounced. Study data supported these hypotheses and showed perceived competence, machine-likeness, and attractiveness to be relevant to technology acceptance. Our findings provide the first evidence that body-enhancement technologies may be associated with tendencies towards self-dehumanization, and underline the multifaceted role of exoskeleton-induced competence gain. By examining user self-perceptions that relate to mechanistic dehumanization and aesthetic appeal, our research highlights the need to better understand psychological impacts of exoskeletons on human wearers.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100073"},"PeriodicalIF":0.0,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000331/pdfft?md5=cdbc3d3a9a85f6c53c5c3975b75c6aa2&pid=1-s2.0-S2949882124000331-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141315054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-10 | DOI: 10.1016/j.chbah.2024.100070
On trust in humans and trust in artificial intelligence: A study with samples from Singapore and Germany extending recent research
Christian Montag, Benjamin Becker, Benjamin J. Li
The AI revolution is shaping societies around the world. People interact daily with a growing number of products and services that feature AI integration. Without doubt, rapid developments in AI will bring positive outcomes, but also challenges. In this realm it is important to understand whether people trust this omni-use technology, because trust is an essential prerequisite for being willing to use AI products, and this in turn likely affects how much AI will be embraced by national economies, with consequences for local workforces. To shed more light on trusting AI, the present work aims to understand how much the variables trust in AI and trust in humans overlap. This is important because much is already known about trust in humans, and if the concepts overlap, much of our understanding of trust in humans might be transferable to trusting AI. In samples from Singapore (n = 535) and Germany (n = 954), we observed varying degrees of positive relations between the trust in AI/humans variables. Whereas trust in AI/humans showed a small positive association in Germany, there was a moderate positive association in Singapore. Further, this paper revisits associations between individual differences in the Big Five of Personality and general attitudes towards AI, including trust.
The present work shows that trust in humans and trust in AI share only a small amount of variance, but this depends on culture (varying here from about 4 to 11 percent of shared variance). Future research should further investigate such associations, but should also consider assessments of trust in specific AI-empowered products and AI-empowered services, where things might be different.
{"title":"On trust in humans and trust in artificial intelligence: A study with samples from Singapore and Germany extending recent research","authors":"Christian Montag , Benjamin Becker , Benjamin J. Li","doi":"10.1016/j.chbah.2024.100070","DOIUrl":"10.1016/j.chbah.2024.100070","url":null,"abstract":"<div><p>The AI revolution is shaping societies around the world. People interact daily with a growing number of products and services that feature AI integration. Without doubt rapid developments in AI will bring positive outcomes, but also challenges. In this realm it is important to understand if people trust this omni-use technology, because trust represents an essential prerequisite (to be willing) to use AI products and this in turn likely has an impact on how much AI will be embraced by national economies with consequences for the local work forces. To shed more light on trusting AI, the present work aims to understand how much the variables <em>trust in AI</em> and <em>trust in humans</em> overlap. This is important to understand, because much is already known about trust in humans, and if the concepts overlap, much of our understanding of trust in humans might be transferable to trusting AI. In samples from Singapore (n = 535) and Germany (n = 954) we could observe varying degrees of positive relations between the <em>trust in AI/humans</em> variables. Whereas <em>trust in AI/humans</em> showed a small positive association in Germany, there was a moderate positive association in Singapore. Further, this paper revisits associations between individual differences in the Big Five of Personality and general attitudes towards AI including trust.</p><p>The present work shows that <em>trust in humans</em> and <em>trust in AI</em> share only small amounts of variance, but this depends on culture (varying here from about 4 to 11 percent of shared variance). Future research should further investigate such associations but by also considering assessments of trust in specific AI-empowered-products and AI-empowered-services, where things might be different.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100070"},"PeriodicalIF":0.0,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000306/pdfft?md5=79d1e52e0296b5cc72a13b7bfacaaf35&pid=1-s2.0-S2949882124000306-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141042698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1016/j.chbah.2024.100062
AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effects
Marc Pinski, Alexander Benlian
The rapid advancement of artificial intelligence (AI) has brought transformative changes to various aspects of human life, leading to an exponential increase in the number of AI users. The broad access to and usage of AI enable immense benefits but also give rise to significant challenges. One way for AI users to address these challenges is to develop AI literacy, referring to human proficiency in different subject areas of AI that enables purposeful, efficient, and ethical usage of AI technologies. This study aims to comprehensively understand and structure the research on AI literacy for AI users through a systematic, scoping literature review. To that end, we synthesize the literature, provide a conceptual framework, and develop a research agenda. Our review paper holistically assesses the fragmented AI literacy research landscape (68 papers) while critically examining its specificity to different user groups and its distinction from other technology literacies, revealing that research efforts are not yet well integrated. We organize our findings in an overarching conceptual framework structured along the learning methods leading to, the components constituting, and the effects stemming from AI literacy. Our research agenda – aligned with the developed conceptual framework – sheds light on the most promising research opportunities to prepare AI users for an AI-powered future of work and society.
{"title":"AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effects","authors":"Marc Pinski, Alexander Benlian","doi":"10.1016/j.chbah.2024.100062","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100062","url":null,"abstract":"<div><p>The rapid advancement of artificial intelligence (AI) has brought transformative changes to various aspects of human life, leading to an exponential increase in the number of AI users. The broad access and usage of AI enable immense benefits but also give rise to significant challenges. One way for AI users to address these challenges is to develop AI literacy, referring to human proficiency in different subject areas of AI that enable purposeful, efficient, and ethical usage of AI technologies. This study aims to comprehensively understand and structure the research on AI literacy for AI users through a systematic, scoping literature review. Therefore, we synthesize the literature, provide a conceptual framework, and develop a research agenda. Our review paper holistically assesses the fragmented AI literacy research landscape (68 papers) while critically examining its specificity to different user groups and its distinction from other technology literacies, exposing that research efforts are partly not well integrated. We organize our findings in an overarching conceptual framework structured along the learning methods leading to, the components constituting, and the effects stemming from AI literacy. Our research agenda – oriented along the developed conceptual framework – sheds light on the most promising research opportunities to prepare AI users for an AI-powered future of work and society.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100062"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000227/pdfft?md5=67048bb47ad6e81dd544c466338d703f&pid=1-s2.0-S2949882124000227-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140163093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}