In the intelligence of social things (IoST) paradigm, where interconnected devices and social networks create a dynamic ecosystem, understanding information diffusion is essential. IoST integrates user interactions, device behaviors, and contextual factors, adding complexity to information networks and necessitating accurate prediction models. This work analyzes user behavior in terms of both group and individual relationships and presents an information propagation prediction model that combines information propagation topology features with user relationship representations. Information diffusion prediction analyzes patterns of spread in networks to understand and forecast propagation processes. Existing studies emphasize social and dynamic influence relationships within user groups but often neglect user similarity in group relations and the intrinsic factors affecting individual sharing decisions. To address these gaps, a novel model is proposed that combines user relationship representations with diffusion topological features. At the group level, a user cooccurrence graph captures similarity relationships, which are integrated with the diffusion topology to analyze group interactions. At the individual level, user-specific feature representations and influence factor vectors address intrinsic motivations for sharing. Experimental results validate the model’s efficacy, achieving performance improvements on public datasets. On the Memetracker dataset, the model increased MAP@k by 6.54% and hits@k by 2.75%, demonstrating its ability to capture both group and individual dynamics for enhanced diffusion prediction.
"Integrating User Relationships and Features for Intelligence of Social Things Aware Information Diffusion Prediction," by Bhawani Sankar Panigrahi; Mohammed E. Seno; Balasubramani Murugesan; Omar Isam; Vemula Jasmine Sowmya; K.D.V. Prasad; Deepak Gupta; Jumaniyazov Inomjon Turayevich; Richard Rivera. IEEE Transactions on Computational Social Systems, vol. 13, no. 1, pp. 1069-1078. DOI: 10.1109/TCSS.2025.3588781. Pub Date: 2025-08-11.
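For readers unfamiliar with the reported metrics, here is a minimal, dependency-free sketch of how MAP@k and hits@k are typically computed for next-user diffusion prediction. The function names and toy rankings are illustrative, not taken from the paper.

```python
def hits_at_k(ranked_users, true_user, k):
    # Hits@k: 1 if the ground-truth next user appears in the top-k ranking.
    return 1.0 if true_user in ranked_users[:k] else 0.0

def map_at_k(ranked_lists, true_users, k):
    # MAP@k over a set of prediction steps: average of 1/rank for
    # ground-truth users found within the top-k, 0 otherwise.
    total = 0.0
    for ranked, truth in zip(ranked_lists, true_users):
        topk = ranked[:k]
        total += 1.0 / (topk.index(truth) + 1) if truth in topk else 0.0
    return total / len(ranked_lists)

# Toy example: two prediction steps over a user vocabulary.
ranked = [["u3", "u1", "u7"], ["u2", "u5", "u9"]]
truth = ["u1", "u9"]
print(map_at_k(ranked, truth, k=3))  # (1/2 + 1/3) / 2
```

A 6.54% MAP@k gain, as reported above, would correspond to this average reciprocal-rank score rising by that relative amount across test cascades.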
Pub Date: 2025-08-11. DOI: 10.1109/TCSS.2025.3591887
Huiyu Ren;Junya Lv;Lingyu Ren;Chi Zhang
There is very little research at the small scale of residential areas, and most studies have not taken community-embedded resources into account. This article employs two scales, subdistricts and residential areas, and includes the elderly care beds embedded in communities. In 2020, there were clear differences between the spatial layout and regional characteristics of elderly care resources and those of the elderly population in downtown Shanghai. From the perspective of constructing the 15-minute community elderly care life circle, the spatial accessibility of residential areas to elderly care facilities also differed significantly across downtown Shanghai in 2020. This study presents a novel approach to elderly care resource allocation in Shanghai by integrating social things-enabled cooperative learning with spatial optimization methods. Our methodology focuses on optimizing facility location and scale through hybrid spatial–social synergy analysis, aiming to reduce spatial inequality in the allocation of elderly care resources and to approach the planning target values in downtown Shanghai. These insights might contribute to enhancing the planning layout and the service system of elderly care resources in Shanghai.
"Spatial–Social Synergy: GIS-Genetic Algorithm Optimization With Social Things-Enabled Cooperative Learning for Elderly Care Resource Allocation in Shanghai." IEEE Transactions on Computational Social Systems, vol. 13, no. 1, pp. 1194-1203.
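The GIS-genetic algorithm pairing named in the title can be illustrated with a toy sketch: a genetic algorithm choosing facility sites along a one-dimensional corridor to minimize residents' travel distance to the nearest site. All function names, parameters, and data below are invented for illustration; the paper's actual spatial model is far richer.

```python
import random

def total_distance(sites, residents):
    # Cost: each resident travels to the nearest selected facility (toy 1-D geometry).
    return sum(min(abs(r - s) for s in sites) for r in residents)

def ga_select_sites(candidates, residents, n_sites, generations=80, pop_size=30, seed=1):
    rng = random.Random(seed)
    # Individual = a list of n_sites distinct candidate locations.
    pop = [rng.sample(candidates, n_sites) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: total_distance(ind, residents))
        survivors = pop[: pop_size // 2]          # selection: keep the fitter half
        next_pop = list(survivors)
        while len(next_pop) < pop_size:
            a, b = rng.sample(survivors, 2)
            pool = sorted(set(a) | set(b))        # crossover: union of parent sites
            child = rng.sample(pool, n_sites)
            if rng.random() < 0.3:                # mutation: swap in a random candidate
                swap_in = rng.choice(candidates)
                if swap_in not in child:
                    child[rng.randrange(n_sites)] = swap_in
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=lambda ind: total_distance(ind, residents))

candidates = list(range(0, 101, 5))               # candidate sites along a corridor
residents = [2, 3, 4, 48, 50, 52, 95, 97]         # resident locations (toy data)
best = ga_select_sites(candidates, residents, n_sites=3)
print(sorted(best), total_distance(best, residents))
```

In the study, the fitness function would instead encode GIS-derived accessibility and planning targets rather than straight-line distance.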
Pub Date: 2025-07-16. DOI: 10.1109/TCSS.2025.3583322
Ying Zhao;Sung-Ki Kim;Xianju Wang
The intelligence of social things (IoST) changes how we monitor and respond to environmental concerns. However, many of the present link prediction techniques ignore the intricate multievolutionary patterns defining sustainable device interactions. These networks show special traits motivated by environmental monitoring demands, energy efficiency requirements, and sustainability goals, so accurate future connection projections are essential for the best use of resources and environmental awareness. This article proposes a multievolutionary feature learning algorithm for IoST networks (MEF-IoST) that enhances environmental awareness through sophisticated link prediction. First, we design a time-aware extreme learning model that efficiently processes temporal patterns in IoST device interactions and environmental sensing data through gated networks and self-encoders. Then, we construct multiple deep extreme learning machines to map temporal features from different perspectives, extracting various evolutionary patterns that reflect device collaboration dynamics and environmental monitoring requirements. Finally, we employ an environmental-aware extreme learning machine classifier to predict future IoST network links while considering ecological constraints. Experiments on four real-world IoST networks demonstrate MEF-IoST's effectiveness, achieving 15%–21% lower RMSE and 2.3%–2.7% higher AUC compared to state-of-the-art methods while reducing computational costs by 20%–35%.
"Multievolutionary Feature Learning for Intelligence of Social Things Networks: Enhancing Environmental Awareness Through Link Prediction." IEEE Transactions on Computational Social Systems, vol. 13, no. 1, pp. 1079-1090.
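The entry above evaluates link prediction with AUC. As a reference point, AUC reduces to the probability that a true link outscores a non-link (ties counted as one half), which can be computed directly from predicted scores. A minimal sketch with made-up scores:

```python
def auc(pos_scores, neg_scores):
    # AUC = P(score(true link) > score(non-link)), ties counted as 1/2.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores: three held-out true links vs. two sampled non-links.
print(auc([0.9, 0.8, 0.4], [0.7, 0.3]))  # 5/6
```

A 2.3%–2.7% AUC improvement, as reported, means this pairwise-ranking probability rises by that margin over the baselines.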
Major depressive disorder (MDD) is a global mental health problem that significantly affects individuals’ daily activities. Diagnosing MDD is challenging because of stigma and people’s reluctance to visit clinics for healthcare assistance. People prefer to share their thoughts and feelings through text posts on social media platforms. The main aim of this article is to bridge the gap between medical experts and depressed individuals by identifying the symptoms of MDD early, so that effective treatment can be provided before the condition reaches a critical stage. This article creates a “hybrid model of DistilBERT with a convolutional neural network (CNN)—(HDC),” combining the power of two different deep learning architectures, DistilBERT and CNN, along with advances in natural language processing (NLP) to detect symptoms of MDD in alignment with DSM-5 through analyzing content from social networks. Experiments are conducted using standard online tweet data. A data augmentation technique addresses data imbalance and avoids model-biased predictions. Precision, recall, F1-score, and accuracy are the metrics used to compare the proposed technique with other baseline models. Experimental results show that the “HDC” model achieved 94.13% accuracy and outperformed cutting-edge methodologies for detecting depression symptoms.
"Major Depressive Disorder Symptoms Detection System Through Text in Social Media Platforms Using Hybrid Deep Learning Models," by Vankayala Tejaswini; Bibhudatta Sahoo; Korra Sathya Babu. IEEE Transactions on Computational Social Systems, vol. 13, no. 1, pp. 1031-1043. DOI: 10.1109/TCSS.2025.3579300. Pub Date: 2025-07-10.
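The DistilBERT half of the hybrid cannot be reproduced in a few lines, but the CNN half of such text classifiers conventionally applies a 1-D convolution over token-embedding sequences followed by max-pooling into a single feature per filter. A dependency-free toy of that single operation, with invented embeddings and kernel weights:

```python
def conv1d_maxpool(embeddings, kernel):
    # embeddings: list of token vectors; kernel: list of weight vectors
    # (window size = len(kernel)). Slide the window over the sequence,
    # take a dot product per position, then max-pool into one feature.
    w = len(kernel)
    feats = []
    for i in range(len(embeddings) - w + 1):
        s = 0.0
        for j in range(w):
            s += sum(a * b for a, b in zip(embeddings[i + j], kernel[j]))
        feats.append(s)
    return max(feats)

emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]  # 4 tokens, dim 2
kern = [[1.0, 0.0], [0.0, 1.0]]                          # window of 2 tokens
print(conv1d_maxpool(emb, kern))  # 2.0
```

In the actual model, the embeddings would come from DistilBERT's contextual output and many such filters would run in parallel before a classification head.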
Pub Date: 2025-07-10. DOI: 10.1109/TCSS.2025.3579570
Xin Cheng;Lei Yang;Rui Li
Generative adversarial networks (GANs) have demonstrated potential in enhancing keyframe selection and video reconstruction via adversarial training among unsupervised approaches. Nevertheless, GANs struggle to encapsulate the intricate spatiotemporal dynamics in videos, which is essential for producing coherent and informative summaries. To address these challenges, we introduce an unsupervised video summarization framework that synergistically integrates temporal–spatial semantic graphs (TSSGraphs) with a bilinear additive attention (BAA) mechanism. TSSGraphs are designed to effectively model temporal and spatial relationships among video frames by combining temporal convolution and dynamic edge convolution, thereby extracting salient features while mitigating model complexity. The BAA mechanism enhances the framework’s ability to capture critical motion information by addressing feature sparsity and eliminating redundant parameters, ensuring robust attention to significant motion dynamics. Experimental assessments on the SumMe and TVSum benchmark datasets reveal that our method attains improvements of up to 4.0% and 3.3% in F-score, respectively, compared to current methodologies. Moreover, our system demonstrates diminished parameter overhead throughout training and inference stages, particularly excelling in contexts with significant motion content.
"Unsupervised Video Summarization Based on Spatiotemporal Semantic Graph and Enhanced Attention Mechanism." IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 3751-3764.
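SumMe and TVSum evaluation conventionally scores a summary by the F-score between predicted and ground-truth keyframe selections, which is what the 4.0% and 3.3% gains above refer to. A minimal sketch (frame indices invented):

```python
def fscore(pred_frames, true_frames):
    # Precision/recall over selected frame indices, combined into F1.
    pred, true = set(pred_frames), set(true_frames)
    overlap = len(pred & true)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(true)
    return 2 * precision * recall / (precision + recall)

print(fscore([1, 2, 3, 4], [3, 4, 5]))  # 4/7
```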
Pub Date: 2025-07-08. DOI: 10.1109/TCSS.2025.3575939
Yutian Li;Zhuopan Yang;Zhenguo Yang;Xiaoping Li;Wenyin Liu;Qing Li
Addressing the bias problem in multimodal zero-shot learning tasks is challenging due to the domain shift between seen and unseen classes, as well as the semantic gap across different modalities. To tackle these challenges, we propose a multimodal disentangled fusion network (MDFN) that unifies the class embedding space for multimodal zero-shot learning. MDFN exploits a feature disentangled variational autoencoder (FD-VAE) in two branches to disentangle unimodal features into semantically consistent and semantically unrelated modality-specific representations, where semantics are shared within classes. In particular, semantically consistent representations and unimodal features are integrated to retain the semantics of the original features in the form of residuals. Furthermore, a multimodal conditional VAE (MC-VAE) in two branches is adopted to learn cross-modal interactions with modality-specific conditions. Finally, the complementary multimodal representations achieved by MC-VAE are encoded into a fusion network (FN) with a self-adaptive margin center loss (SAMC-loss) to predict target class labels in embedding form. By learning the distance among domain samples, SAMC-loss promotes intraclass compactness and interclass separability. Experiments on zero-shot and news event datasets demonstrate the superior performance of MDFN, with the harmonic mean improved by 27.2% on the MMED dataset and 5.1% on the SUN dataset.
"Multimodal Disentangled Fusion Network via VAEs for Multimodal Zero-Shot Learning." IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 3684-3697.
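The SAMC-loss is not specified in the abstract, but it builds on the center-loss idea it names: pull embeddings toward their class center (intraclass compactness) and keep different-class centers at least a margin apart (interclass separability). A toy sketch of those two terms, with a fixed rather than self-adaptive margin and invented centers:

```python
def center_loss(embeddings, labels, centers):
    # Intraclass compactness: mean squared distance to own class center.
    total = 0.0
    for e, y in zip(embeddings, labels):
        total += sum((a - b) ** 2 for a, b in zip(e, centers[y]))
    return total / len(embeddings)

def margin_center_separation(centers, margin):
    # Interclass separability: hinge penalty when two class centers
    # sit closer together than the margin.
    keys = sorted(centers)
    loss = 0.0
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            d = sum((a - b) ** 2
                    for a, b in zip(centers[keys[i]], centers[keys[j]])) ** 0.5
            loss += max(0.0, margin - d)
    return loss

centers = {"cat": [0.0, 0.0], "dog": [3.0, 4.0]}
emb = [[1.0, 0.0], [3.0, 3.0]]
print(center_loss(emb, ["cat", "dog"], centers))      # (1 + 1) / 2 = 1.0
print(margin_center_separation(centers, margin=6.0))  # 6 - 5 = 1.0
```

The paper's self-adaptive variant would learn the margin rather than fix it; this sketch only shows the two forces the loss balances.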
In the current era of AI technology, where systems increasingly rely on big data to process vast amounts of societal information, efficient methods for integrating and utilizing diverse datasets are essential. This article presents a novel approach for transforming the feature space of different datasets through singular value decomposition (SVD) to extract common and hidden features using prior domain knowledge. Specifically, we apply this method to two datasets: 1) one related to physical and cognitive frailty in the elderly; and 2) another focusing on identifying IKIGAI (happiness, self-efficacy, and sense of contribution) in volunteer staff of a civic health promotion activity. Both datasets consist of multiple subdatasets measured using different modalities, such as facial expressions, sound, activity, and heart rates. By defining feature extraction methods for each subdataset, we compare and integrate the overlapping data. The results demonstrated that our method can effectively preserve common characteristics across different data types, offering a more interpretable solution than traditional dimensionality reduction methods based on linear and nonlinear transformations. This approach has significant implications for data integration in multidisciplinary fields and opens the door for future applications to a wide range of datasets.
"Coordinate System Transformation Method for Comparing Different Types of Data in Different Dataset Using Singular Value Decomposition," by Emiko Uchiyama; Wataru Takano; Yoshihiko Nakamura; Tomoki Tanaka; Katsuya Iijima; Gentiane Venture; Vincent Hernandez; Kenta Kamikokuryo; Ken-ichiro Yabu; Takahiro Miura; Kimitaka Nakazawa; Bo-Kyung Son. IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 3610-3626. DOI: 10.1109/TCSS.2025.3561078. Pub Date: 2025-07-08. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11073557
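As a dependency-free illustration of the SVD idea above: power iteration on X^T X recovers the top right-singular vector of a data matrix, and projecting each dataset's samples onto its own top direction yields coordinates that can be compared across datasets. The matrix below is invented (and exactly rank-1, so convergence is immediate); the paper's transformation is of course more elaborate.

```python
def top_singular_vector(X, iters=200):
    # Power iteration on X^T X yields the top right-singular vector of X.
    d = len(X[0])
    v = [1.0 / d] * d
    for _ in range(iters):
        Xv = [sum(row[j] * v[j] for j in range(d)) for row in X]      # X v
        w = [sum(X[i][j] * Xv[i] for i in range(len(X)))              # X^T (X v)
             for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(X, v):
    # One-dimensional coordinates of each sample along direction v.
    return [sum(a * b for a, b in zip(row, v)) for row in X]

X = [[2.0, 0.1], [4.0, 0.2], [6.0, 0.3]]  # variance dominated by axis 0
v = top_singular_vector(X)
print([round(abs(c), 2) for c in v])  # [1.0, 0.05]
```

Repeating this for a second dataset's matrix gives a second set of 1-D coordinates, which is the sense in which a shared low-rank space permits cross-dataset comparison.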
Pub Date: 2025-06-17. DOI: 10.1109/TCSS.2025.3561073
Mazin Abdalla;Parya Abadeh;Zeinab Noorian;Amira Ghenai;Fattane Zarrinkalam;Soroush Zamani Alavijeh
The intersection of music and mental health has gained increasing attention, with previous studies highlighting music’s potential to reduce stress and anxiety. Despite these promising findings, many of these studies are limited by small sample sizes and traditional observational methods, leaving a gap in our understanding of music’s broader impact on mental health. In response to these limitations, this study introduces a novel approach that combines generalized linear mixed models (GLMM) with propensity score matching (PSM) to explore the relationship between music listening and stress levels among social media users diagnosed with anxiety, depression, and posttraumatic stress disorder (PTSD). Our research not only identifies associative patterns between music listening and stress but also provides a more rigorous examination of potential causal effects, taking into account demographic factors such as education level, gender, and age. Our findings reveal that across all mental health conditions, music listening is significantly associated with reduced stress levels, with an observed 21.3% reduction for anxiety, 15.4% for depression, and 19.3% for PTSD. Additionally, users who listened to music were more likely to report a zero stress score, indicating a stronger relaxation effect. Further, our analysis of demographic variations shows that age and education level influence the impact of music on stress reduction, highlighting the potential for personalized interventions. These findings contribute to a deeper understanding of music’s therapeutic potential, particularly in crafting interventions tailored to the diverse needs of different populations.
"The Impact of Listening to Music on Stress Level for Anxiety, Depression, and PTSD: Mixed-Effect Models and Propensity Score Analysis." IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 3816-3830.
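The propensity-score-matching step described above can be sketched as greedy 1:1 nearest-neighbor matching on precomputed propensity scores, pairing each music listener with the most comparable non-listener before outcomes are compared. IDs and scores below are invented, and a real PSM pipeline would first fit a model (e.g., logistic regression on the demographic covariates) to estimate the scores:

```python
def match_nearest(treated, controls):
    # Greedy 1:1 nearest-neighbor matching on propensity score, without
    # replacement: each treated unit gets the closest remaining control.
    pairs = []
    remaining = list(controls)
    for t_id, t_score in treated:
        c_id, c_score = min(remaining, key=lambda c: abs(c[1] - t_score))
        pairs.append((t_id, c_id))
        remaining.remove((c_id, c_score))
    return pairs

treated = [("t1", 0.80), ("t2", 0.30)]            # (unit id, propensity score)
controls = [("c1", 0.78), ("c2", 0.33), ("c3", 0.55)]
print(match_nearest(treated, controls))  # [('t1', 'c1'), ('t2', 'c2')]
```

Stress outcomes would then be compared only within these matched pairs, which is what supports the causal reading of the reported reductions.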
The advent of the Intelligence of Social Things (IoST) paradigm has created new prospects for improving false news detection by utilizing interconnected social networks, facilitating the amalgamation of many data sources including user behaviors, social interactions, and contextual information. Multiple techniques exist for identifying false information, with individual methods often concentrating on aspects such as news content, social context, or external veracity. Prior work has established dissemination networks and examined the structural traits and mechanisms of fake news spread on Weibo and Twitter. Nonetheless, existing approaches are limited in enabling the two modes to concentrate more effectively on their individual preferences. By using entity linking to expand the entity terminology in news content and semantic mining to augment the style vocabulary in news material, the Pref-FEND model was developed. The graph neural network’s capacity to capture node properties effectively was improved by learning and using five different types of words as node representations in the graph network. A heterogeneous degree-aware graph convolutional network was concurrently incorporated, yielding enhancements of 2.8% and 1.9% in F1-score relative to the fact-based singular model GET. Additionally, when integrated with LDAVAE+GET for concurrent detection, the F1-scores were enhanced by 1.1% and 1.3%, respectively, in comparison to Pref-FEND. The experimental findings confirm the efficacy of the suggested enhancements to the model.
{"title":"Unified Fake News Detection Based on IoST-Driven Joint Detection Models","authors":"Janjhyam Venkata Naga Ramesh;Sachin Gupta;Aadam Quraishi;Ashit Kumar Dutta;Kumari Priyanka Sinha;G Siva Nageswara Rao;Nasiba Sherkuziyeva;Divya Nimma;Jagdish Chandra Patni","doi":"10.1109/TCSS.2025.3568872","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3568872","url":null,"abstract":"The advent of the Intelligence of Social Things (IoST) paradigm has created new prospects for improving false news detection by utilizing interconnected social networks, facilitating the amalgamation of many data sources including user behaviors, social interactions, and contextual information. Multiple techniques exist for identifying false information, with individual methods often concentrating on aspects such as news substance, social context, or external veracity. Establishing dissemination networks, examining the structural traits and methods of fake news spread on Weibo and Twitter. Nonetheless, it possesses limitations in enabling the two modes to concentrate more efficiently on their individual preferences. By using entity linking to expand the entity terminology in news content and semantic mining to augment the style vocabulary in news material, the Pref-FEND model was developed. The graph neural network’s capacity to effectively capture node properties was improved by learning and using five different types of words as node representations in the graph network. A heterogeneous degree-aware graph convolutional network was concurrently incorporated, yielding enhancements of 2.8% and 1.9% in F1-score relative to the fact-based singular model GET. Additionally, when integrated with LDAVAE+GET for concurrent detection, the F1-scores were enhanced by 1.1% and 1.3%, respectively, in comparison to Pref-FEND. 
The experimental findings confirm the efficacy of the proposed enhancements to the model.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"13 1","pages":"1058-1068"},"PeriodicalIF":4.5,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
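The degree-aware graph convolution mentioned in the abstract above can be illustrated with a minimal sketch: symmetrically degree-normalize an adjacency matrix (with self-loops) and aggregate neighbor features. The node types, features, and edges below are invented for illustration; this shows only the generic normalized-aggregation step, not the heterogeneous Pref-FEND variant the paper describes.

```python
import numpy as np

def degree_aware_conv(adj, feats):
    """One propagation step: add self-loops, symmetrically normalize the
    adjacency by node degree, and aggregate neighbor features."""
    a = adj + np.eye(adj.shape[0])               # add self-loops
    d = a.sum(axis=1)                            # node degrees
    d_inv_sqrt = 1.0 / np.sqrt(d)
    a_norm = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return a_norm @ feats

# Tiny word graph: 3 nodes (e.g., an entity word, a style word, a plain
# word -- hypothetical types) with 2-dim one-hot-style features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.eye(3, 2)
out = degree_aware_conv(adj, feats)
print(out.shape)  # (3, 2)
```

Because the normalization divides each edge weight by the square roots of both endpoint degrees, high-degree nodes contribute less per edge, which is the basic intuition behind making the convolution degree-aware.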
Pub Date : 2025-06-06 DOI: 10.1109/TCSS.2025.3565414
Ye Zhang;Qing Gao;Rong Hu;Qingtang Ding;Boyang Li;Yulan Guo
Sensor-based human activity recognition (HAR) usually suffers from the problem of insufficient annotated data, due to the difficulty of labeling the raw signals of wearable sensors. To this end, recent advances have adopted handcrafted operations or generative models for data augmentation. The handcrafted operations are driven by some physical priors of human activities, e.g., action distortion and strength fluctuations. However, these approaches may face challenges in maintaining semantic data properties. Although the generative models have better data adaptability, it is difficult for them to incorporate important action priors into data generation. This article proposes a differentiable prior-driven data augmentation framework for HAR. First, we embed the handcrafted augmentation operations into a differentiable module, which adaptively selects and optimizes the operations to be combined. Then, we construct a generative module to add controllable perturbations to the data derived by the handcrafted operations and further improve the diversity of data augmentation. By integrating the handcrafted operation module and the generative module into one learnable framework, the generalization performance of the recognition models is enhanced effectively. Extensive experimental results with three different classifiers on five public datasets demonstrate the effectiveness of the proposed framework. Project page: https://github.com/crocodilegogogo/DriveData-Under-Review.
{"title":"Differentiable Prior-Driven Data Augmentation for Sensor-Based Human Activity Recognition","authors":"Ye Zhang;Qing Gao;Rong Hu;Qingtang Ding;Boyang Li;Yulan Guo","doi":"10.1109/TCSS.2025.3565414","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3565414","url":null,"abstract":"Sensor-based human activity recognition (HAR) usually suffers from the problem of insufficient annotated data, due to the difficulty in labeling the intuitive signals of wearable sensors. To this end, recent advances have adopted handcrafted operations or generative models for data augmentation. The handcrafted operations are driven by some physical priors of human activities, e.g., action distortion and strength fluctuations. However, these approaches may face challenges in maintaining semantic data properties. Although the generative models have better data adaptability, it is difficult for them to incorporate important action priors into data generation. This article proposes a differentiable prior-driven data augmentation framework for HAR. First, we embed the handcrafted augmentation operations into a differentiable module, which adaptively selects and optimizes the operations to be combined together. Then, we construct a generative module to add controllable perturbations to the data derived by the handcrafted operations and further improve the diversity of data augmentation. By integrating the handcrafted operation module and the generative module into one learnable framework, the generalization performance of the recognition models is enhanced effectively. Extensive experimental results with three different classifiers on five public datasets demonstrate the effectiveness of the proposed framework. 
Project page: <uri>https://github.com/crocodilegogogo/DriveData-Under-Review</uri>.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"3778-3790"},"PeriodicalIF":4.5,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
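The core idea of the HAR abstract above, softly combining handcrafted, prior-driven augmentations through learnable selection weights, can be sketched minimally. The two operations (jitter for measurement noise, magnitude scaling for strength fluctuations), the softmax-weighted blend, and all names, shapes, and values here are illustrative assumptions, not the paper's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(x, sigma=0.05):
    """Additive Gaussian noise: a prior for sensor measurement noise."""
    return x + rng.normal(0.0, sigma, x.shape)

def scale(x, sigma=0.1):
    """Per-channel magnitude scaling: a prior for strength fluctuations."""
    return x * rng.normal(1.0, sigma, (1, x.shape[1]))

def soft_augment(x, logits):
    """Blend the augmented views with softmax weights. Because the blend is
    a weighted sum, gradients could flow into the selection logits, which is
    what makes the operation selection differentiable in spirit."""
    w = np.exp(logits) / np.exp(logits).sum()
    views = np.stack([jitter(x), scale(x)])      # (n_ops, T, C)
    return np.tensordot(w, views, axes=1)        # weighted sum over ops

x = np.zeros((128, 3))                           # 128 timesteps, 3 accel axes
out = soft_augment(x, np.array([0.0, 0.0]))      # equal weights -> 0.5 each
print(out.shape)  # (128, 3)
```

In the paper's full framework a generative module then perturbs the output of the handcrafted operations; here only the differentiable selection step is sketched.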