Internet users are inundated with attempts to persuade, including digital nudges like defaults, friction, and reinforcement. When these nudges fail to be transparent, optional, and beneficial, they can become ‘dark patterns’, categorised here under the acronym FORCES (Frame, Obstruct, Ruse, Compel, Entangle, Seduce). Elsewhere, psychological principles like negativity bias, the curiosity gap, and fluency are exploited to make social content viral, while more covert tactics including astroturfing, meta-nudging, and inoculation are used to manufacture consensus. The power of these techniques is set to increase in line with technological advances such as predictive algorithms, generative AI, and virtual reality. Digital nudges can be used for altruistic purposes including protection against manipulation, but behavioural interventions have mixed effects at best.
The peak-end rule, a memory heuristic in which the most emotionally salient moment of an experience (i.e., the peak) and its conclusion (i.e., the end) are weighted more heavily in summary evaluations, has been understudied in mental health contexts. The recent growth of intensive longitudinal methods has provided new opportunities for examining the peak-end rule in the retrospective recall of mental health symptoms, including on measures often used in measurement-based care initiatives. Additionally, principles of the peak-end rule have significant potential for application to exposure-based therapy procedures. Further research is needed to better understand the contexts in which, and the persons for whom, the peak-end rule presents a greater risk of bias, to ultimately improve assessment strategies and clinical care.
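To make the heuristic concrete, the minimal sketch below (not drawn from the article) contrasts a peak-end summary with the true mean of a series of daily symptom ratings. The equal weighting of the peak and the end is one common operationalization of the rule, and the ratings are hypothetical.

```python
# Illustrative sketch: how a peak-end summary can diverge from the true mean
# of an intensive longitudinal record. The daily_ratings values are
# hypothetical, and equal weighting of peak and end is one common
# operationalization of the peak-end rule.

def peak_end_summary(ratings):
    """Average of the most intense (peak) and final (end) ratings."""
    return (max(ratings) + ratings[-1]) / 2

def true_mean(ratings):
    """Arithmetic mean of all ratings, the unbiased benchmark."""
    return sum(ratings) / len(ratings)

# Hypothetical daily anxiety ratings (0-10) over two weeks, with a brief
# spike mid-week and an elevated final day.
daily_ratings = [2, 3, 2, 9, 3, 2, 2, 2, 3, 2, 2, 3, 2, 6]

print(f"True mean:        {true_mean(daily_ratings):.1f}")        # ~3.1
print(f"Peak-end summary: {peak_end_summary(daily_ratings):.1f}") # 7.5
```

Under these assumed values, a retrospective report shaped by the peak and the end would more than double the typical symptom burden captured by the day-to-day record, which is the kind of recall bias the abstract flags for measurement-based care.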
By blurring the boundaries between digital and physical realities, Augmented Reality (AR) is transforming consumers' perceptions of themselves and their environments. This review demonstrates AR's capacity to influence psychology and behavior in profound ways. We begin by providing a concise introduction to AR, considering its technical, practical, and theoretical properties. Next, we showcase a multi-disciplinary set of recent studies that explore AR's impact on psychological processes and behavioral outcomes. We conclude by offering a selection of potential future research directions designed to deepen our understanding of the psychological and behavioral implications of AR experiences.
This article synthesizes recent research on how cultural identity can determine responses to artificial intelligence. National differences in AI adoption suggest that culturally driven psychological differences may offer a more nuanced understanding of these responses and inform interventions. Our review suggests that cultural identity shapes how individuals incorporate AI into the construction of the self in relation to others and determines the effect of AI on key decision-making processes. Individualists may be more prone to view AI as external to the self and to interpret AI features as infringing upon their uniqueness, autonomy, and privacy. In contrast, collectivists may be more prone to view AI as an extension of the self and to interpret AI features as facilitating conformity to consensus, responsiveness to their environment, and the protection of privacy.
Algorithmic bias has emerged as a critical challenge for the responsible production of artificial intelligence (AI). This article reviews recent research on algorithmic bias and proposes increased engagement of psychological and social science research to understand its antecedents and consequences. Through the lens of the 3-D Dependable AI Framework, it explores how social science disciplines, such as psychology, can contribute to identifying and mitigating bias at the Design, Develop, and Deploy stages of the AI life cycle. Finally, we propose future research directions to further address the complexities of algorithmic bias and its societal implications.
Chatbots, virtual AI entities designed to emulate human conversation, are gaining prominence in business and consumer domains. This research aims to consolidate the extant literature on a pivotal aspect: the human-likeness of chatbots. Employing three fundamental themes as organizational pillars – the chatbot as a non-human entity, as a human-like entity, and as an ambiguous agent – we aim not only to spotlight important findings but also to chart potential trajectories for future exploration. By delving into the intricacies of chatbot–consumer interaction, we seek to shed light on unexplored dimensions of marketing research, ultimately enhancing our understanding of this evolving field.
Integrating artificial intelligence (AI) into human teams to form human-AI teams (HATs) is a rapidly evolving area of research. This overview examines the complexities of team constellations and dynamics, trust in AI teammates, and shared cognition within HATs. Adding an AI teammate often reduces coordination, communication, and trust. Further, trust in AI tends to decline over time because its capabilities are initially overestimated, impairing teamwork. Despite AI's potential to enhance performance in contexts such as chess and medicine, HATs frequently underperform owing to poor team cognition and inadequate mutual understanding. Future research must address these issues through interdisciplinary collaboration between computer science and psychology, and must advance robust theoretical frameworks to realize the full potential of human-AI teaming.
Inspired by significant technical advancements, a rapidly growing stream of research explores lay beliefs about, and reactions to, AI tools, which employ algorithms to mimic elements of human intelligence. This literature predominantly documents negative reactions to these tools or to the underlying algorithms, often referred to as algorithm aversion or, alternatively, as a preference for humans. This article proposes a third interpretation: people may be averse to the labels of these tools but appreciative of their output. This perspective offers three core insights for how we study people's reactions to algorithms. Research would benefit from (1) carefully considering the labeling of AI tools, (2) broadening the scope of study to include interactions with these tools, and (3) accounting for their technical configuration.
As the popularity and adoption of Artificial Intelligence (AI) systems continue to rise, this article presents a promising proposition: the use of AI dialects to enhance how AI is perceived. By delving into the potential of personalized AI dialects to augment user perceptions of warmth, competence, and authenticity, the article underscores the pivotal role of anthropomorphism in fortifying trust, satisfaction, and loyalty toward AI systems. A comprehensive research framework is put forth to explore these potential mechanisms and outcomes of AI dialect introduction, shedding light on how these impacts might vary based on AI modality (text, voice, and video), industry adoption, and user demographics.