Social media plays a powerful role in accelerating the spread of misinformation, especially in the mental health domain, where misleading content can cause serious harm. Given the extensive coverage and complexity of social media data, manually moderating online misinformation is infeasible. The present study therefore proposes an integrated framework that combines qualitative analysis and deep learning to automatically detect and evaluate mental health misinformation. Guided by expert interviews and grounded theory, we developed a fine-grained, 21-level credibility assessment framework covering seven dimensions. Using this framework, we manually annotated 814 Chinese social media posts to construct a high-quality dataset. On this dataset, we trained and evaluated three deep learning models, namely Gated Recurrent Unit (GRU), Bidirectional Encoder Representations from Transformers (BERT), and Robustly Optimized BERT Approach (RoBERTa), to automatically assess the credibility of mental health content. The results show that all three models effectively leverage clear sentiment-related cues and surface-level patterns to evaluate mental health misinformation on social media, particularly on dimensions such as Inflammatory Expression and One-sidedness of Expression. However, all three models struggle to evaluate evidence quality and to detect context-dependent misinformation. On these harder cases, BERT and GRU outperform RoBERTa, particularly on dimensions such as Logical Rigor. This study provides a robust, scalable, and expert-informed approach to improving the credibility of mental health information online.
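To illustrate how such a credibility classifier might be implemented, the sketch below fine-tunes a Chinese BERT checkpoint to score posts on several credibility dimensions at once. It is a minimal, hypothetical example rather than the study's actual pipeline: the model name (bert-base-chinese), the seven-dimension multi-label setup, and the toy data are assumptions introduced here for illustration.

```python
# Hypothetical sketch: fine-tuning a Chinese BERT to score posts on seven
# credibility dimensions (multi-label setup). Model name, label scheme, and
# toy data are illustrative assumptions, not the study's actual pipeline.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

DIMENSIONS = 7  # e.g., Inflammatory Expression, One-sidedness, Logical Rigor, ...

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=DIMENSIONS,
    problem_type="multi_label_classification",  # one score per dimension
)

# Toy annotated posts: each label vector marks which dimensions are problematic.
posts = ["示例帖子一", "示例帖子二"]
labels = torch.tensor([[1, 0, 0, 0, 1, 0, 0],
                       [0, 1, 0, 0, 0, 0, 1]], dtype=torch.float)

batch = tokenizer(posts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")

model.train()
optimizer = AdamW(model.parameters(), lr=2e-5)
for _ in range(3):  # a few illustrative passes over the toy batch
    outputs = model(**batch, labels=labels)  # BCE loss computed internally
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: sigmoid probabilities per credibility dimension for a new post.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("新的待评估帖子", return_tensors="pt")).logits
    scores = torch.sigmoid(logits).squeeze().tolist()
print(scores)
```

A GRU or RoBERTa variant would follow the same pattern with the encoder swapped out; in practice, the 814 annotated posts would be batched with a DataLoader and held-out splits used for evaluation.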
The cognitive processes of the hypnotized mind and the computational operations of large language models (LLMs) share deep functional parallels. Both systems generate sophisticated, contextually appropriate behavior through automatic pattern-completion mechanisms operating with limited or unreliable executive oversight. This review examines this convergence across three principles: automaticity, in which responses emerge from associative rather than deliberative processes; suppressed monitoring, leading to errors such as confabulation in hypnosis and hallucination in LLMs; and heightened contextual dependency, in which immediate cues (a therapist's suggestion or a user's prompt) override stable knowledge. These mechanisms reveal an observer-relative meaning gap: both systems produce coherent but ungrounded outputs that require an external interpreter to supply meaning. Hypnosis and LLMs also exemplify functional agency (the capacity for complex, goal-directed, context-sensitive behavior) without subjective agency, the conscious awareness of intention and ownership that defines human action. This distinction clarifies how purposive behavior can emerge without self-reflective consciousness, governed instead by structural and contextual dynamics. Finally, both domains illuminate the phenomenon of scheming: automatic, goal-directed pattern generation that unfolds without reflective awareness. Hypnosis provides an experimental model for understanding how intention can become dissociated from conscious deliberation, offering insights into the hidden motivational dynamics of artificial systems. Recognizing these parallels suggests that the future of reliable artificial intelligence lies in hybrid architectures that integrate generative fluency with mechanisms of executive monitoring, an approach inspired by the complex, self-regulating architecture of the human mind.
Although geographically dispersed organizations increasingly rely on virtual platforms to collaborate, virtual communication can undermine key team processes and outcomes. Prior research has largely focused on individual-level explanations, such as cognitive strain or "Zoom fatigue," for these challenges. We extend this literature by proposing that virtual communication also reinforces hierarchical structures by amplifying disparities in member influence during decision-making. In a controlled experiment comparing video conferencing and face-to-face teams, we find that disparities in member influence are significantly greater in virtual teams, which in turn reduces task performance. These findings highlight a critical, group-level mechanism through which virtual communication shapes team processes and outcomes, beyond previously identified individual-level factors. By identifying disparity in member influence as a key mediator, this study advances theory on virtual communication, group hierarchy, and decision-making and offers practical implications for reducing hierarchical distortions and fostering more egalitarian conversations that enhance team effectiveness.
Despite the emergence of the dark web more than 20 years ago, little scholarly attention has focused on identifying potential mental health differences between dark web users and surface web users. Yet, given the pseudo-anonymous nature of the dark web and the purported privacy it provides, individuals with mental health vulnerabilities may be inclined to use the dark web. In the present study, we investigate this matter by drawing on survey data collected in 2024 from a national sample of 2,000 U.S. adults. The results of both bivariate and multivariate analyses indicate that dark web users exhibit greater depressive symptoms and have more paranoid thoughts than surface web users. Likewise, dark web users are more likely than surface web users to report suicidal thoughts, nonsuicidal self-injury, and engagement in digital self-harm. Discussion centers on the implications of these findings for practice as well as avenues for future research.
Virtual reality (VR) and biofeedback have emerged as promising tools for mindfulness training. However, their combined effectiveness, compared with traditional mindfulness formats, remains understudied. This study investigated the short-term effects of a brief mindfulness intervention delivered via VR, with and without biofeedback, on psychological and physiological outcomes. Seventy-two participants (64.7 percent women; aged 18-57 years, M = 24.0) were randomly assigned to one of three groups (n = 24 per group): mindfulness with VR and biofeedback, mindfulness with VR only, and traditional audio-guided mindfulness. Self-report measures assessed negative emotional symptoms, state anxiety, affect, and present-moment awareness before and after the intervention. Heart rate was recorded as a psychophysiological index of arousal. Results indicated significant reductions in stress, anxiety, and heart rate and an increase in positive affect across all groups. The VR + biofeedback group showed significantly greater improvements in receptive awareness and attentional focus compared with the other conditions. These findings support the use of VR-based mindfulness and suggest that integrating biofeedback may enhance present-moment engagement.
First responders (FRs) are routinely exposed to traumatic events, increasing risk for posttraumatic stress disorder (PTSD). This study compared heart rate variability (HRV) and skin conductance level (SCL) between FRs with and without probable PTSD at baseline and during a virtual reality (VR) task. Eighty-four FRs completed questionnaires and physiological assessments. Participants with probable PTSD showed significantly lower baseline HRV, indicating reduced parasympathetic modulation. No group differences emerged for HRV during VR or for SCL at either time point. The results confirm reduced HRV at rest in PTSD, but further work is needed to clarify why this difference was not observed during the task and why SCL showed no group effects. To advance understanding of these results, future studies should include larger samples, longer baselines, recovery phases, and clinical interviews.

