Revealing consumer hearts − the impact of emotional appeals in movie marketing: evidence from marketer-generated content on Weibo
Zhien Piao, Saerom Lee, Hyunmi Baek
Pub Date: 2025-09-01  DOI: 10.1016/j.entcom.2025.101026
This study aims to investigate the effectiveness of emotional appeals in driving user engagement with marketer-generated content (MGC), specifically analyzing variations across the pre- and post-release phases of a movie. The study analyzed 23,998 marketing messages from Sina Weibo related to 356 movies released in China. The findings show that emotional appeals in MGC significantly enhance user engagement, with a stronger impact before a movie’s release, especially when audiences lack prior knowledge about the movie. This study contributes to the marketing and film literature by empirically demonstrating the strategic importance of emotional appeals in MGC strategies, particularly during the pre-release period, in effectively capturing audience attention and fostering engagement.
{"title":"Revealing consumer hearts − the impact of emotional appeals in movie marketing: evidence from marketer-generated content on Weibo","authors":"Zhien Piao , Saerom Lee , Hyunmi Baek","doi":"10.1016/j.entcom.2025.101026","DOIUrl":"10.1016/j.entcom.2025.101026","url":null,"abstract":"<div><div>This study aims to investigate the effectiveness of emotional appeals in driving user engagement with marketer-generated content (MGC), specifically analyzing variations across the pre- and post-release phases of a movie. The study analyzed 23,998 marketing messages from Sina Weibo related to 356 movies released in China. The findings show that emotional appeals in MGC significantly enhance user engagement, with a stronger impact before a movie’s release, especially when audiences lack prior knowledge about the movie. This study contributes to the marketing and film literature by empirically demonstrating the strategic importance of emotional appeals in MGC strategies, particularly during the pre-release period, in effectively capturing audience attention and fostering engagement.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101026"},"PeriodicalIF":2.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smartphone addiction among children, adolescents and teenagers: mapping emerging and future direction
Muhammad Ashraf Fauzi
Pub Date: 2025-09-01  DOI: 10.1016/j.entcom.2025.101025
This study explores the fundamental knowledge structure of smartphone addiction among young consumers. Smartphone use has become ubiquitous and a norm in today’s society. This ubiquity has led to smartphone addiction among users, particularly youngsters, with negative consequences for their growth, development, and social life. Employing a state-of-the-art science mapping approach based on bibliometric analysis, the current and future trends of smartphone addiction among young consumers were analyzed. A total of 665 documents were retrieved from the Web of Science (WoS) database and analyzed using bibliographic coupling and co-word analysis. Four emerging research streams were identified: 1) negative consequences of smartphone addiction among youngsters, 2) parental phubbing and its impact on adolescent smartphone addiction, 3) smartphone and social media addiction, and 4) the smartphone addiction scale. Future trends relate to the risks and impacts of smartphone addiction among young consumers.
{"title":"Smartphone addiction among children, adolescents and teenagers: mapping emerging and future direction","authors":"Muhammad Ashraf Fauzi","doi":"10.1016/j.entcom.2025.101025","DOIUrl":"10.1016/j.entcom.2025.101025","url":null,"abstract":"<div><div>This study explores the fundamental knowledge structure of smartphone addiction among young consumers. Smartphone use has been ubiquitous and a norm in today’s society. This phenomenon led to smartphone addiction among users, particularly youngsters, as it imparts negative consequences to their growth, development, and social life. Employing a novel state-of-the-art science mapping approach through bibliometric analysis, the current and future trends of smartphone addiction among young consumers were analyzed. 665 documents were retrieved from the Web of Science (WoS) database and were analyzed using bibliographic coupling and co-word analysis. Four emerging research streams produced: 1) Negative consequences of smartphone addiction among youngsters, 2) Parental phubbing and its impact on adolescent smartphone addiction, 3) Smartphone and social media addiction, and 4) Smartphone addiction scale. At the same time, future trends are related to the risk and impact of smartphone addiction among young consumers.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101025"},"PeriodicalIF":2.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145157427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MUSYNERGY: A framework for music collaboration discovery based on neural networks and graph analysis
Alejandro Fernandez-Sanchez, Pedro J. Navarro, Fernando Terroso-Saenz
Pub Date: 2025-09-01  DOI: 10.1016/j.entcom.2025.101033
The music industry has been reshaped by the rise of artist collaborations, driven by digital technologies, streaming platforms, and the globalization of music. While existing research has examined the cultural and commercial impact of collaborations, few efforts have focused on recommendation systems to assist musicians in discovering potential creative partners. Moreover, most approaches rely on proprietary data, limiting scalability and reproducibility. This paper presents MUSYNERGY, a novel framework for music collaboration discovery based on neural networks and graph analysis. MUSYNERGY builds a Heterogeneous Knowledge Graph (HKG) using open data from MusicBrainz, representing relationships among artists, tracks, and musical attributes over five decades. By formulating collaboration discovery as a link prediction task, the system identifies new, plausible collaborations between artists with no prior joint work. This open, scalable framework addresses current limitations in data accessibility and supports innovation, transparency, and cultural exchange in the global music landscape through data-driven collaboration discovery.
{"title":"MUSYNERGY: A framework for music collaboration discovery based on neural networks and graph analysis","authors":"Alejandro Fernandez-Sanchez, Pedro J. Navarro, Fernando Terroso-Saenz","doi":"10.1016/j.entcom.2025.101033","DOIUrl":"10.1016/j.entcom.2025.101033","url":null,"abstract":"<div><div>The music industry has been reshaped by the rise of artist collaborations, driven by digital technologies, streaming platforms, and the globalization of music. While existing research has examined the cultural and commercial impact of collaborations, few efforts have focused on recommendation systems to assist musicians in discovering potential creative partners. Moreover, most approaches rely on proprietary data, limiting scalability and reproducibility. This paper presents MUSYNERGY, a novel framework for music collaboration discovery based on neural networks and graph analysis. MUSYNERGY builds a Heterogeneous Knowledge Graph (HKG) using open data from MusicBrainz, representing relationships among artists, tracks, and musical attributes over five decades. By formulating collaboration discovery as a link prediction task, the system identifies new, plausible collaborations between artists with no prior joint work. This open, scalable framework addresses current limitations in data accessibility and supports innovation, transparency, and cultural exchange in the global music landscape through data-driven collaboration discovery.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101033"},"PeriodicalIF":2.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Imputation of missing pixel art character poses with differentiable palette quantization
Flávio Coutinho, Lucas G.S. Chaves, Luiz Chaimowicz
Pub Date: 2025-09-01  DOI: 10.1016/j.entcom.2025.101021
Designing pixel art character sprites with numerous animation frames is a labor-intensive process that often involves repetitive work. To streamline this task, we propose a method that automates sprite generation, allowing artists to focus on the creative aspects. Our work addresses the challenge of synthesizing a character sprite in a target pose using reference images from other viewing angles. We formulate this as a missing data imputation problem and introduce a generative adversarial network that reconstructs the desired pose from those already available among the back, left, front, and right directions. Unlike baseline models, our proposed generator utilizes all available poses of a character to enhance the quality of the generated image. Additionally, it does not introduce small variations of the same colors and instead produces images that strictly follow a predefined color palette. Crucially, our model ensures adherence to the palette by incorporating a novel operation of differentiable quantization of pixel values, making it suitable for end-to-end training. Compared to baseline models proposed for generating a new pose from a single one, our approach produces images with better FID (34.09% lower) and L1 distance to the ground truth (22.66% lower). It also shows superior quality through visual inspection. Additionally, as the generator selects colors from a desired palette, similar to how human artists create pixel art, the generated images are more readily useful, eliminating the need for a post-processing step to restrict the colors.
{"title":"Imputation of missing pixel art character poses with differentiable palette quantization","authors":"Flávio Coutinho , Lucas G.S. Chaves , Luiz Chaimowicz","doi":"10.1016/j.entcom.2025.101021","DOIUrl":"10.1016/j.entcom.2025.101021","url":null,"abstract":"<div><div>Designing pixel art character sprites with numerous animation frames is a labor-intensive process that often involves repetitive work. To streamline this task, we propose a method that automates sprite generation, allowing artists to focus on the creative aspects. Our work addresses the challenge of synthesizing a character sprite in a target pose using reference images from other viewing angles. We formulate this as a missing data imputation problem and introduce a generative adversarial network that reconstructs the desired pose from those already available among the back, left, front, and right directions. Unlike baseline models, our proposed generator utilizes all available poses of a character to enhance the quality of the generated image. Additionally, it does not introduce small variations of the same colors and instead produces images that strictly follow a predefined color palette. Crucially, our model ensures adherence to the palette by incorporating a novel operation of differentiable quantization of pixel values, making it suitable for end-to-end training. Compared to baseline models proposed for generating a new pose from a single one, our approach produces images with better FID (34.09% lower) and <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> distance to the ground truth (22.66% lower). It also shows superior quality through visual inspection. Additionally, as the generator selects colors from a desired palette, similar to how human artists create pixel art, the generated images are more readily useful, eliminating the need for a post-processing step to restrict the color.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101021"},"PeriodicalIF":2.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145157428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotion recognition using deep learning models in Chinese speech
Ko-Chun Hung, Ammar Amjad, Yi-Ping Chao, Hsien-Tsung Chang
Pub Date: 2025-09-01  DOI: 10.1016/j.entcom.2025.101039
This paper presents a novel multi-model approach for emotion recognition in Chinese speech using deep learning. We propose separating neutral and non-neutral emotion detection to improve accuracy, while combining speech acoustic features with text sentiment analysis. Our experimental results show significant improvements over baseline methods, achieving 89.1% accuracy with our integrated approach, a substantial gain over the 67.71% baseline obtained when training on emotions directly. This demonstrates the effectiveness of both our neutral/non-neutral separation strategy and our text sentiment integration.
{"title":"Emotion recognition using deep learning models in Chinese speech","authors":"Ko-Chun Hung , Ammar Amjad , Yi-Ping Chao , Hsien-Tsung Chang","doi":"10.1016/j.entcom.2025.101039","DOIUrl":"10.1016/j.entcom.2025.101039","url":null,"abstract":"<div><div>This paper presents a novel multi-model approach for emotion recognition in Chinese speech using deep learning. We propose separating neutral and non-neutral emotion detection to improve accuracy, while combining speech acoustic features with text sentiment analysis. Our experimental results show significant improvements over baseline methods, achieving 89.1% accuracy through our integrated approach. This represents a substantial gain from baseline 67.71% when training emotions directly, demonstrating the effectiveness of both our neutral/non-neutral separation strategy and text sentiment integration.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101039"},"PeriodicalIF":2.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145415065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of personalized hand-drawn illustration generation strategy supported by reinforcement learning
Jifeng Zhong
Pub Date: 2025-09-01  DOI: 10.1016/j.entcom.2025.101053
Personalized hand-drawn illustrations are becoming more popular in digital media, helping to make content more engaging. Traditional generation methods often lack adaptability. To address this gap, this paper proposes PersonaSketch-RL, a reinforcement learning-based strategy for optimizing the generation of hand-drawn illustrations tailored to user preferences. The objective is to produce illustrations that maintain content relevance, stylistic fidelity, and visual quality. The hand-drawn illustration process is modeled as a sequence of continuous drawing actions, including stroke length and pressure. Proximal Policy Optimization (PPO), a stable and efficient reinforcement learning algorithm, is employed to train an agent to generate these actions. A style encoder extracts representative features from user-provided reference sketches, while a semantic module aligns the generated content with textual prompts. The reward function is designed to balance multiple objectives: visual coherence, stylistic similarity, and alignment with semantic intent. Experimental evaluations demonstrate that PersonaSketch-RL achieves superior performance in personalized style reproduction and drawing quality compared to baseline generative methods. Human assessments further validate improvements in satisfaction and perceived personal relevance. This approach confirms the potential of reinforcement learning for optimizing personalized hand-drawn illustration systems, enabling the generation of adaptive, high-quality visual content.
{"title":"Optimization of personalized hand-drawn illustration generation strategy supported by reinforcement learning","authors":"Jifeng Zhong","doi":"10.1016/j.entcom.2025.101053","DOIUrl":"10.1016/j.entcom.2025.101053","url":null,"abstract":"<div><div>Personalized hand-drawn illustrations are becoming more popular in digital media, helping to make content more engaging. Traditional generation methods often lack adaptability. To address this gap, this paper proposes PersonaSketch-RL as a reinforcement learning-based strategy for optimizing the generation of hand-drawn illustrations tailored to user preferences. The objective is to produce illustrations that maintain content relevance, stylistic fidelity, and visual quality. The hand-drawn illustration process is modeled as a sequence of continuous drawing actions, including stroke length, and pressure. Proximal Policy Optimization (PPO), a stable and efficient reinforcement learning algorithm, is employed to train an agent to generate these actions. A style encoder extracts representative features from user-provided reference sketches, while a semantic module aligns the generated content with textual prompts. The reward function is designed to balance multiple objectives: visual coherence, stylistic similarity, and alignment with semantic intent. Experimental evaluations demonstrate that PersonaSketch–RL achieves superior performance in personalized style reproduction and drawing quality when compared to baseline generative methods. Human assessments further validate improvements in satisfaction and perceived personal relevance. This approach confirms the potential of reinforcement learning for optimizing personalized hand-drawn illustration systems, enabling the generation of adaptive and high-quality visual content.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101053"},"PeriodicalIF":2.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145520020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verbal harassment detection in online games using machine learning methods
Helmi Hibatullah, Tuğçe Ballı, E. Fatih Yetkin
Pub Date: 2025-08-25  DOI: 10.1016/j.entcom.2025.101009
Video games have been an inseparable part of many people’s upbringing. The widespread adoption of the internet in the early 2000s brought video games from traditional offline media into the online environment. Consequently, people from different parts of the world can play together and communicate with each other in-game. Nowadays, most massively multiplayer online games (MMOs) incorporate voice communication features. Playing video games online with a degree of anonymity, combined with the ability to communicate verbally, has proven to be a dangerous combination that can breed toxic and abusive behavior if left unmoderated. This paper proposes a new approach that integrates Whisper, a pre-trained automatic speech recognition (ASR) model, with the well-researched task of text-based abusive behavior detection. Our proposed verbal harassment detection pipelines yielded an average F-score of 0.899 across all variants tested.
{"title":"Verbal harassment detection in online games using machine learning methods","authors":"Helmi Hibatullah , Tuğçe Ballı , E. Fatih Yetkin","doi":"10.1016/j.entcom.2025.101009","DOIUrl":"10.1016/j.entcom.2025.101009","url":null,"abstract":"<div><div>Video games have been an inseparable aspect for many throughout their upbringing. The widespread adoption of the internet in the early 2000s has brought video games from the traditional offline media to the online environment. Consequently, people from different parts of the world can play together and communicate in-game with each other. Nowadays, most massively multiplayer online games (MMOs) incorporate voice communication features. Playing video games online with a certain degree of anonymity, along with the ability to verbally communicate with each other, has proven to be a dangerous combination that can breed toxic and abusive behaviors if left unmoderated. This paper proposes a new approach to integrating Whisper, a pre-trained automatic speech recognition (ASR) model, with the well-researched topic of text-based abusive behavior detection. Our proposed verbal harassment detection pipelines yielded an average F-score of 0.899 for all variants tested.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101009"},"PeriodicalIF":2.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144913241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain-computer interface controlled robotic choreography based on motor imagery EEG
Jing Li, Xinyi Min, Tian-jian Luo, Haoyang Peng, Huosheng Hu, Shen-rui Wu, Xin-jie Lu, Hua Peng
Pub Date: 2025-08-23  DOI: 10.1016/j.entcom.2025.101016
Brain-computer interfaces (BCIs) allow the human brain to control external devices directly, and human-robot interaction (HRI) can develop a robot’s autonomous, cognitive, and social abilities. Building on BCI and HRI, this study proposes a brain-controlled robotic choreography approach based on motor imagery, which belongs to the category of cooperative human-robot dance. The system comprises two parts: offline training and online calibration. In offline training, a robot-guided experimental paradigm of motor imagery (MI) was constructed, and electroencephalogram (EEG) samples of MI were collected to train a convolutional neural network (CNN) model. During online calibration, a biped humanoid robot named “Yanshee” served as the carrier of the robotic dance, and a corresponding dance motion library and mapping rules were designed. Based on the trained CNN model, a majority voting strategy was used to ensure robust recognition, and the recognized MI command drove the robotic choreography through this library and these rules. Experimental results show an average offline classification accuracy of 74.71% across seven subjects. Three online control strategies were applied to the seven subjects, achieving an average classification accuracy of 75.40%. To evaluate the brain-controlled robotic choreography, four invited experts gave 21 robotic dance works an overall average score of 7.67 on a 10-point scale. The constructed framework offers a novel perspective on integrating science and art and opens a new entertainment application for social robots.
{"title":"Brain-computer interface controlled robotic choreography based on motor imagery EEG","authors":"Jing Li , Xinyi Min , Tian-jian Luo , Haoyang Peng , Huosheng Hu , Shen-rui Wu , Xin-jie Lu , Hua Peng","doi":"10.1016/j.entcom.2025.101016","DOIUrl":"10.1016/j.entcom.2025.101016","url":null,"abstract":"<div><div>Brain-computer interface (BCI) provides the ability of the human brain to control external devices directly, and human-robot interaction (HRI) can develop a robot’s autonomous, cognitive, and social abilities. Based on BCI and HRI, this study proposes a brain-controlled robotic choreography approach based on motor imagery, and it belongs to the category of cooperative human-robot dance. Moreover, the whole system includes two parts: offline training and online calibrating. In offline training, a robot-guided experimental paradigm of motor imagery (MI) was first constructed, and electroencephalogram (EEG) samples of MI were collected for training a convolutional neural network (CNN) model. During online calibration, to achieve the robotic choreography, we used a biped humanoid robot named “Yanshee” as the carrier of robotic dance, and the corresponding dance motion library and mapping rules were designed. Based on the well-trained CNN model, a majority voting strategy was used to keep robust recognition, and the recognized MI command was used to drive robotic choreography based on such library and rules. Experimental results have shown an average accuracy of 74.71% for offline classification among seven subjects. Three online controlling strategies have been applied to seven subjects, and an average of 75.40% classification accuracy has been achieved. To measure the brain-controlled robotic choreography, four invited experts gave an overall average score of 7.67 on 21 robotic dance works using a 10-point scale. The constructed framework gives a novel view to integrate science and art, further developing a new entertainment application of social robots.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101016"},"PeriodicalIF":2.4,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144895546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Twitch vs YouTube: Exploring how synchronicity is associated with social interaction and positive emotion among gaming video viewers
Seung Woo Chae, Mark Alberta, Sung Hyun Lee
Pub Date: 2025-08-21  DOI: 10.1016/j.entcom.2025.101014
This study explores how synchronicity is associated with users’ social interaction and positive emotion when watching gaming content on video-based social media platforms. YouTube and Twitch were selected as comparable venues wherein asynchronous and synchronous communication can be observed, respectively. To control the effect of video content, we found a gaming video on YouTube, the content of which had originally been streamed on Twitch. From the identical videos on the two platforms, the asynchronous comments on YouTube and the synchronous chat messages on Twitch were collected. We analyzed the two datasets using the text analysis program LIWC. The results showed that users are more likely to use social words in the asynchronous setting than in the synchronous setting. Meanwhile, positive emotion was more frequently observed in Twitch’s synchronous texts compared to YouTube’s asynchronous texts. These findings suggest the possibility that Twitch users are more like eSports spectators than chatters.
{"title":"Twitch vs YouTube: Exploring how synchronicity is associated with social interaction and positive emotion among gaming video viewers","authors":"Seung Woo Chae , Mark Alberta , Sung Hyun Lee","doi":"10.1016/j.entcom.2025.101014","DOIUrl":"10.1016/j.entcom.2025.101014","url":null,"abstract":"<div><div>This study explores how synchronicity is associated with users’ social interaction and positive emotion when watching gaming content on video-based social media platforms. YouTube and Twitch were selected as comparable venues wherein asynchronous and synchronous communication can be observed, respectively. To control the effect of video content, we found a gaming video on YouTube, the content of which had originally been streamed on Twitch. From the identical videos on the two platforms, the asynchronous comments on YouTube and the synchronous chat messages on Twitch were collected. We analyzed the two datasets using the text analysis program LIWC. The results showed that users are more likely to use social words in the asynchronous setting than in the synchronous setting. Meanwhile, positive emotion was more frequently observed in Twitch’s synchronous texts compared to YouTube’s asynchronous texts. These findings suggest the possibility that Twitch users are more like eSports spectators than chatters.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101014"},"PeriodicalIF":2.4,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144888792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Paykan: Virtual reality gaming as a therapeutic tool for target panic disorder
Hesam Sakian Mohamadi, Faraz Bakhshi, Yoones A. Sekhavat, Mallesham Dasari, Kazem Gobadi Ansaroudi, Mahdi Ahmadzadeh Haji Alilou
Pub Date: 2025-08-19  DOI: 10.1016/j.entcom.2025.101013
The detrimental effects of stress and anxiety on mental and physical health are well-documented. In professional athletics, high levels of pressure and anxiety can lead to a decline in performance and a regression of well-honed skills, a phenomenon known as performance blocks. While traditional methods have been employed to mitigate these blocks, recent studies have demonstrated the efficacy of digital approaches, especially virtual reality and serious games, in addressing mental and physical disorders. The application of digital treatments to target panic, a type of performance block prevalent among professional archers, remains understudied. Furthermore, the relationship between personality traits and treatment efficacy remains unclear. This study reports the findings of a formal user study involving 30 archers over a four-week period. A comprehensive system was designed and developed, integrating a virtual reality game as the software component alongside the necessary hardware, serving as the medium for treatment. Additionally, a tailored questionnaire addressing target panic was formulated to facilitate data collection. The results reveal a statistically significant difference between the proposed method and traditional approaches, as well as a strong positive correlation between achievements in the digital environment and real-world performance. The findings suggest that digital treatment can be a viable tool for archers experiencing target panic and that certain personality traits, such as conscientiousness, are closely tied to treatment effectiveness.
{"title":"Paykan: Virtual reality gaming as a therapeutic tool for target panic disorder","authors":"Hesam Sakian Mohamadi , Faraz Bakhshi , Yoones A. Sekhavat , Mallesham Dasari , Kazem Gobadi Ansaroudi , Mahdi Ahmadzadeh Haji Alilou","doi":"10.1016/j.entcom.2025.101013","DOIUrl":"10.1016/j.entcom.2025.101013","url":null,"abstract":"<div><div>The detrimental effects of stress and anxiety on mental and physical health are well-documented. In the context of professional athletics, high levels of pressure and anxiety can lead to a decline in performance and a regression in well-honed skills, a phenomenon known as performance blocks. While traditional methods have been employed to mitigate these blocks, recent studies have demonstrated the efficacy of digital approaches, especially virtual reality and serious games, in addressing mental and physical disorders. The application of digital treatments to target panic, a specific type of performance block which is a prevalent issue among professional archers, remains understudied. Furthermore, the relationship between personality traits and treatment efficacy remains unclear. This study reports on the findings of a formal user study involving 30 archers over a four-week period. A comprehensive system was designed and developed, integrating a virtual reality game as the software component alongside the necessary hardware, serving as the medium for treatment. Additionally, a tailored questionnaire addressing target panic was formulated to facilitate data collection. The results reveal a statistically significant difference between the proposed method and traditional approaches, as well as a strong positive correlation between achievements in the digital environment and real-world performance. The findings suggest that digital treatment can be a viable tool for archers experiencing target panic disorder, and that certain personality traits, such as conscientiousness, are closely tied to treatment effectiveness.</div></div>","PeriodicalId":55997,"journal":{"name":"Entertainment Computing","volume":"55 ","pages":"Article 101013"},"PeriodicalIF":2.4,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144887211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}