Theory suggests that robots with human-like mental capabilities (i.e., high agency and experience) evoke stronger aversion than robots without these capabilities. Yet, while several studies support this prediction, there is also evidence that the mental prowess of robots may be evaluated positively, at least by some individuals. To help resolve this ambivalence, we focused on relatively stable individual differences that may shape users' responses to machines with different levels of (perceived) mental ability. Specifically, we explored four key variables as potential moderators: monotheistic religiosity, the tendency to anthropomorphize, prior attitudes towards robots, and general affinity for complex technology. Two pre-registered online experiments (N1 = 391, N2 = 617) used text vignettes to introduce participants to a robot with or without complex, human-like capabilities. Results showed that negative attitudes towards robots increased the relative aversion to machines with (vs. without) complex minds, whereas technology affinity weakened the difference between conditions. Results for monotheistic religiosity were mixed, and the tendency to anthropomorphize had no significant effect on the evoked aversion. Overall, we conclude that certain individual differences play an important role in perceptions of machines with complex minds and should be considered in future research.
The interplay between artificial intelligence (AI) and psychology, particularly in personality assessment, represents an important emerging area of research. Accurate personality trait estimation is crucial not only for enhancing personalization in human-computer interaction but also for a wide variety of applications ranging from mental health to education. This paper analyzes the capability of a general-purpose chatbot, ChatGPT, to infer personality traits from short texts. We report the results of a comprehensive user study featuring texts written in Czech by a representative population sample of 155 participants, whose self-assessments on the Big Five Inventory (BFI) questionnaire serve as the ground truth. We compare the personality trait estimates made by ChatGPT against those of human raters and find that ChatGPT performs competitively in inferring personality traits from text. We also uncover a 'positivity bias' in ChatGPT's assessments across all personality dimensions and explore the impact of prompt composition on accuracy. This work contributes to the understanding of AI capabilities in psychological assessment, highlighting both the potential and the limitations of using large language models for personality inference. Our research underscores the importance of responsible AI development, considering ethical implications such as privacy, consent, autonomy, and bias in AI applications.
Mental disorders affect a large proportion of individuals worldwide, with young adults being particularly susceptible to poor mental health. Past research shows that help-seeking self-stigma plays a vital role in deterring help-seeking among young adults; however, this relationship has primarily been examined in the context of human-delivered psychotherapy. The present study aimed to understand how young adults' perceptions of help-seeking self-stigma associated with different modes of psychotherapy, specifically human-delivered and artificial intelligence (AI)-delivered psychotherapy, influence attitudes towards using AI chatbots for psychotherapy. The study employed a cross-sectional survey design to measure perceived help-seeking self-stigma and attitudes towards both human- and AI-delivered psychotherapy. The results demonstrated that high help-seeking self-stigma associated with human-delivered psychotherapy was linked to more negative attitudes towards human-delivered psychotherapy but more positive attitudes towards AI-delivered psychotherapy. Conversely, high help-seeking self-stigma associated with AI-delivered psychotherapy was linked to more negative attitudes towards AI-delivered psychotherapy but more positive attitudes towards human-delivered psychotherapy. These findings have important real-world implications for future clinical practice and mental health service delivery: young adults who are reluctant to engage with human-delivered psychotherapy due to help-seeking self-stigma may be more inclined to seek help through alternative modes of psychotherapy, such as AI chatbots. Limitations and future directions are discussed.
With the increasing influence of artificial intelligence (AI) on various aspects of society, understanding public attitudes towards AI becomes crucial. This study investigated attitudes towards AI among middle-aged and older adults in Hong Kong. In June 2023, an online survey was conducted among a sample of 740 smartphone users aged 45 years or older (maximum age = 78) in Hong Kong. Using exploratory factor analysis, we identified three factors in the General Attitude to Artificial Intelligence Scale (GAAIS): Perils, Power, and Promises. Subsequently, latent profile analysis revealed three latent profiles: (i) Enthusiasts (18.4%; high on Promises and Power but low on Perils); (ii) Skeptics (12.3%; high on Perils but low on Promises and Power); and (iii) the Indecisive (69.3%; moderate on all three factors). Compared with the Indecisive, and even more so with the Skeptics, Enthusiasts were more likely to be male and reported higher socio-economic status, better self-rated health, greater mobile device proficiency, optimism, and innovativeness, as well as less insecurity with technology. Our findings suggest that most middle-aged and older adults in Hong Kong hold an ambivalent view towards AI, appreciating its power and potential while also remaining cognizant of the perils it may entail. Our findings are timely given recent debates on the ethical use of AI evoked by smartphone applications such as ChatGPT, and will be valuable for practitioners and scholars developing inclusive AI-facilitated services and applications.