Converging Real and Virtual: Embodied Intelligence-Driven Immersive VR Biofeedback for Brain Health Modulation
Yingying She;Fang Liu;Baorong Yang;Bin Hu
Pub Date : 2025-03-30  DOI: 10.1109/TCSS.2025.3567776
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, pp. 938–946
IEEE Transactions on Computational Social Systems Publication Information
Pub Date : 2025-03-30  DOI: 10.1109/TCSS.2025.3567690
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, p. C2
IEEE Systems, Man, and Cybernetics Society Information
Pub Date : 2025-03-30  DOI: 10.1109/TCSS.2025.3567692
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, p. C3
IEEE Transactions on Computational Social Systems Information for Authors
Pub Date : 2025-03-30  DOI: 10.1109/TCSS.2025.3567694
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, p. C4
Deep Learning-Driven Behavioral Modeling in IoST for Mental Health Monitoring and Intervention
Jialin Li;Muhammad Azeem Akbar;Syed Hassan Shah;Zhi Wang;Jing Yang
Pub Date : 2025-03-28  DOI: 10.1109/TCSS.2025.3550419
IEEE Transactions on Computational Social Systems, vol. 13, no. 1, pp. 1044–1057
Multimodal data have emerged as a cornerstone for understanding and analyzing complex human behaviors, particularly in mental health monitoring. In this study, we propose a deep learning-driven behavioral modeling framework for intelligence of social things (IoST)-based mental health monitoring and intervention, designed to integrate and analyze multimodal data—including text, speech, and physiological signals—captured from interconnected IoST devices. The framework incorporates an adaptive attention-based fusion mechanism that dynamically adjusts the contribution of each modality based on contextual relevance, enhancing the robustness of multimodal integration. Additionally, we employ a temporal-aware recurrent neural network with an attention mechanism to capture long-term dependencies and evolving behavioral patterns, ensuring precise prediction of mental health states. To validate the framework, extensive experiments were conducted on three publicly available datasets: DAIC-WOZ, SEED, and MELD. Comparative experiments demonstrate the superior performance of the proposed framework, which achieves state-of-the-art results with an accuracy of 93.5%, an F1-score of 92.9%, and an AUC-ROC of 0.95. Ablation studies highlight the critical roles of the attention mechanisms and multimodal integration, showing significant performance improvements over single-modality and simplified fusion approaches. These findings underscore the framework's potential as a reliable and efficient tool for real-time mental health monitoring in IoST environments, paving the way for scalable and personalized interventions.
{"title":"Deep Learning-Driven Behavioral Modeling in IoST for Mental Health Monitoring and Intervention","authors":"Jialin Li;Muhammad Azeem Akbar;Syed Hassan Shah;Zhi Wang;Jing Yang","doi":"10.1109/TCSS.2025.3550419","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3550419","url":null,"abstract":"Multimodal data have emerged as a cornerstone for understanding and analyzing complex human behaviors, particularly in mental health monitoring. In this study, we propose a deep learning-driven behavioral modeling framework for intelligence of social things (IoST)-based mental health monitoring and intervention, designed to integrate and analyze multimodal data—including text, speech, and physiological signals—captured from interconnected IoST devices. The framework incorporates an adaptive attention-based fusion mechanism that dynamically adjusts the contribution of each modality based on contextual relevance, enhancing the robustness of multimodal integration. Additionally, we employ a temporal-aware recurrent neural network with an attention mechanism to capture long-term dependencies and evolving behavioral patterns, ensuring precise mental health state prediction. To validate the framework, extensive experiments were conducted using three publicly available datasets: DAIC-WOZ, SEED, and MELD. Comparative experiments demonstrate the superior performance of the proposed framework, achieving state-of-the-art accuracy of 93.5%, F1-scores of 92.9%, and AUC-ROC of 0.95 values. Ablation studies highlight the critical roles of attention mechanisms and multimodal integration, showcasing significant performance improvements over single-modality and simplified fusion approaches. These findings underscore the framework's potential as a reliable and efficient tool for real-time mental health monitoring in IoST environments, paving the way for scalable and personalized interventions.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"13 1","pages":"1044-1057"},"PeriodicalIF":4.5,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bidirectional Patch-Aware Attention Network for Few-Shot Learning
Yu Mao;Shaojie Lin;Zilong Lin;Yaojin Lin
Pub Date : 2025-03-21  DOI: 10.1109/TCSS.2025.3548057
IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 3698–3708
Few-shot learning (FSL) aims to train a model using a minimal number of samples and subsequently apply this model to recognize unseen classes. Recently, metric-based methods have mainly focused on exploring the relationship between the support set and the query set through attention mechanisms when solving FSL problems. However, these methods typically employ unidirectional computation when calculating the attention relationship between support and query. This unidirectional approach not only limits the depth and breadth of knowledge acquisition but may also lead to mismatched patches between support and query, thereby affecting the overall performance of the model. In this article, we propose a bidirectional patch-aware attention network for few-shot learning (BPAN) to address this issue. First, we extract subimages via grid cropping and feed them into the learned feature extractor to obtain patch features. Moreover, self-attention is used to assign different weights to the patch features and reconstruct them. Then, PFCAM is proposed to mutually explore the patch-feature relationship between the support set and the query set, further reconstruct the patch features, and aggregate the multiple patch features of each image into one feature through a learnable parameter matrix for prediction. Finally, a template for each class is constructed to extend the results of PFCAM to the few-shot classification scenario. Experiments on three benchmark datasets show that BPAN achieves superior performance.
{"title":"Bidirectional Patch-Aware Attention Network for Few-Shot Learning","authors":"Yu Mao;Shaojie Lin;Zilong Lin;Yaojin Lin","doi":"10.1109/TCSS.2025.3548057","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3548057","url":null,"abstract":"Few-shot learning (FSL) aims to train a model using a minimal number of samples and subsequently apply this model to recognize unseen classes. Recently, metric-based methods mainly focus on exploring the relationship between the support set and the query set through attention mechanism in solving FSL problems. However, these methods typically employ unidirectional computation when calculating the attention relationship between support and query. This unidirectional approach not only limits the depth and breadth of knowledge acquisition but may also lead to mismatched patches between support and query, thereby affecting the overall performance of the model. In this article, we propose a bidirectional patch-aware attention network for few-shot learning (BPAN) to address this issue. First, we extract subimages via grid cropping and feed them into the learned feature extractor to obtain patch features. Moreover, self-attention is used to assign different weights to patch features and reconstruct them. Then, PFCAM is proposed to mutually explore the patch feature relationship between the support set and the support set, further reconstruct the patch features, and aggregate multiple patch features of each image into one feature through a learnable parameter matrix for the purpose of prediction. Finally, the template for each class is constructed to extend the results of PFCAM to the few-shot classification scenario. Experiments on three benchmark datasets show that BPAN achieves superior performance.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"3698-3708"},"PeriodicalIF":4.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EmoGif: A Multimodal Approach to Detect Emotional Support in Animated GIFs
Aakash Singh;Deepawali Sharma;Vivek Kumar Singh
Pub Date : 2025-03-07  DOI: 10.1109/TCSS.2025.3544263
IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 3791–3803
The massive expansion of social media and the rapid growth of multimedia content on it have resulted in a growing interest in visual content analysis and classification. A good number of studies now focus on identifying hateful and offensive content in social media posts, and such content is often analyzed through automated algorithmic approaches to determine whether it is unsuitable or harmful for groups such as women and children. There is, however, a noticeable gap in the exploration of positive content, particularly in the case of multimodal content such as GIFs. The present work addresses this gap by introducing a high-quality annotated dataset of animated GIFs. The dataset supports two subtasks: 1) subtask 1 involves binary classification, determining whether a GIF provides emotional support; and 2) subtask 2 involves multiclass classification, wherein the GIFs are categorized into three different emotional-support categories. The quality of the data annotation is assessed using Fleiss' kappa. Various unimodal models, utilizing text-only and image-only approaches, are implemented. Additionally, an effective multimodal approach is proposed that combines visual and textual information for detecting emotional support in animated GIFs. Both sequence- and frame-level visual features are extracted from the animated GIFs and utilized for the classification tasks. The proposed multimodal long-term spatiotemporal (LTST) model employs a weighted late-fusion technique. The results show that the proposed multimodal model outperformed the implemented unimodal models on both subtasks, with the LTST model achieving weighted F1-scores of 0.8304 and 0.7180 for subtask 1 and subtask 2, respectively. The experimental work and analysis confirm the suitability of the dataset and the proposed algorithmic model for the task.
{"title":"EmoGif: A Multimodal Approach to Detect Emotional Support in Animated GIFs","authors":"Aakash Singh;Deepawali Sharma;Vivek Kumar Singh","doi":"10.1109/TCSS.2025.3544263","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3544263","url":null,"abstract":"The massive expansion of social media and the rapid growth in multimedia content on it has resulted in a growing interest in visual content analysis and classification. There are now a good number of studies that focus on identifying hateful and offensive content in social media posts. The social media content is often analyzed through automated algorithmic approaches, with respect to being unsuitable or harmful for different groups such as women and children. There is, however, a noticeable gap in the exploration of positive content, particularly in the case of multimodal content such as GIFs. Therefore, the present work attempted to address this gap by introducing a high-quality annotated dataset of animated GIFs. The dataset provides for two subtasks: 1) subtask 1 involves binary classification, determining whether a GIF provides emotional support; and 2) subtask 2 involves multiclass classification, wherein the GIFs are categorized into three different emotional support categories. The data annotation quality is assessed using Fleiss' kappa. Various unimodal models, utilizing text-only and image-only approaches, are implemented. Additionally, an effective multimodal approach is proposed that combines visual and textual information for detecting emotional support in animated GIFs. Both sequence and frame-level visual features are extracted from animated GIFs and utilized for classification tasks. The proposed multimodal long-term spatiotemporal model employs a weighted late fusion technique. The results obtained show that the proposed multimodal model outperformed the implemented unimodal models for both subtasks. The proposed LTST model achieved a weighted F1-score of 0.8304 and 0.7180 for subtask 1 and subtask 2, respectively. The experimental work and analysis confirm the suitability of the dataset and proposed algorithmic model for the task.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"3791-3803"},"PeriodicalIF":4.5,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MusicAOG: An Energy-Based Model for Learning and Sampling a Hierarchical Representation of Symbolic Music
Yikai Qian;Tianle Wang;Jishang Chen;Peiyang Yu;Duo Xu;Xin Jin;Feng Yu;Song-Chun Zhu
Pub Date : 2025-02-27  DOI: 10.1109/TCSS.2024.3521445
IEEE Transactions on Computational Social Systems, vol. 12, no. 2, pp. 873–889
In addressing the challenge of interpretability and generalizability in artificial music intelligence, this article introduces a novel symbolic representation that amalgamates both explicit and implicit musical information across diverse traditions and granularities. Utilizing a hierarchical And-Or graph representation, the model employs nodes and edges to encapsulate a broad spectrum of musical elements, including structures, textures, rhythms, and harmonies. This hierarchical approach expands representability across various scales of music. The representation serves as the foundation for an energy-based model, uniquely tailored to learn musical concepts through a flexible algorithmic framework relying on the minimax entropy principle. Using an adapted Metropolis–Hastings sampling technique, the model enables fine-grained control over music generation. Through a comprehensive empirical evaluation, this novel approach demonstrates significant improvements in interpretability and controllability compared to existing methodologies. This study marks a substantial contribution to the fields of music analysis, composition, and computational musicology.
{"title":"MusicAOG: An Energy-Based Model for Learning and Sampling a Hierarchical Representation of Symbolic Music","authors":"Yikai Qian;Tianle Wang;Jishang Chen;Peiyang Yu;Duo Xu;Xin Jin;Feng Yu;Song-Chun Zhu","doi":"10.1109/TCSS.2024.3521445","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3521445","url":null,"abstract":"In addressing the challenge of interpretability and generalizability of artificial music intelligence, this article introduces a novel symbolic representation that amalgamates both explicit and implicit musical information across diverse traditions and granularities. Utilizing a hierarchical and-or graph representation, the model employs nodes and edges to encapsulate a broad spectrum of musical elements, including structures, textures, rhythms, and harmonies. This hierarchical approach expands the representability across various scales of music. This representation serves as the foundation for an energy-based model, uniquely tailored to learn musical concepts through a flexible algorithm framework relying on the minimax entropy principle. Utilizing an adapted Metropolis–Hastings sampling technique, the model enables fine-grained control over music generation. Through a comprehensive empirical evaluation, this novel approach demonstrates significant improvements in interpretability and controllability compared to existing methodologies. This study marks a substantial contribution to the fields of music analysis, composition, and computational musicology.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 2","pages":"873-889"},"PeriodicalIF":4.5,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explainable Dual-Branch Combination Network With Key Words Embedding and Position Attention for Sentimental Analytics of Social Media Short Comments
Zixuan Wang;Pan Wang;Lianyong Qi;Zhixin Sun;Xiaokang Zhou
Pub Date : 2025-02-06  DOI: 10.1109/TCSS.2025.3532984
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, pp. 1376–1389
Social media platforms such as Weibo and TikTok have become more influential than traditional media. Sentiment in social media comments reflects users’ attitudes and impacts society, making sentiment analysis (SA) crucial. AI-driven models, especially deep-learning models, have achieved excellent results in SA tasks. However, most existing models are not interpretable enough. First, deep-learning models have numerous parameters and insufficient transparency; people cannot easily understand how the models extract features from input data and make sentiment judgments. Second, most models lack intuitive explanations: they cannot clearly indicate which words or phrases are key to emotion prediction. Moreover, extracting sentiment factors from comments is challenging because a comment often contains multiple sentiment characteristics. To address these issues, we propose a dual-branch combination network (DCN) for SA of social media short comments, achieving both word-level and sentence-level interpretability. The network includes a key word feature extraction network (KWFEN) and a key word order feature extraction network (KWOFEN). KWFEN uses popular emotional words and SHAP for word-level interpretability. KWOFEN employs position embedding and an attention layer to visualize attention weights for sentence-level interpretability. We validated our method on the public datasets weibo2018 and TSATC. The results show that our method effectively extracts positive and negative sentiment factors and establishes a clear mapping between model inputs and outputs, demonstrating good interpretability.
{"title":"Explainable Dual-Branch Combination Network With Key Words Embedding and Position Attention for Sentimental Analytics of Social Media Short Comments","authors":"Zixuan Wang;Pan Wang;Lianyong Qi;Zhixin Sun;Xiaokang Zhou","doi":"10.1109/TCSS.2025.3532984","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3532984","url":null,"abstract":"Social media platforms such as Weibo and TikTok have become more influential than traditional media. Sentiment in social media comments reflects users’ attitudes and impacts society, making sentiment analysis (SA) crucial. AI driven models, especially deep-learning models, have achieved excellent results in SA tasks. However, most existing models are not interpretable enough. First, deep learning models have numerous parameters, and their transparency is insufficient. People cannot easily understand how the models extract features from input data and make sentiment judgments. Second, most models lack intuitive explanations. They cannot clearly indicate which words or phrases are key for emotion prediction. Moreover, extracting sentiment factors from comments is challenging because a comment often contains multiple sentiment characteristics. To address these issues, we propose a dual-branch combination network (DCN) for SA of social media short comments, achieving both word-level and sentence-level interpretability. The network includes a key word feature extraction network (KWFEN) and a key word order feature extraction network (KWOFEN). KWFEN uses popular emotional words and SHAP for word-level interpretability. KWOFEN employs position embedding and an attention layer to visualize attention weights for sentence-level interpretability. We validated our method on the public dataset weibo2018 and TSATC. The results show that our method effectively extracts positive and negative sentiment factors, establishing a clear mapping between model inputs and outputs, demonstrating good interpretability performance.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1376-1389"},"PeriodicalIF":4.5,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144185941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence Maximization in Sentiment Propagation With Multisearch Particle Swarm Optimization Algorithm
Qiang He;Xin Yan;Alireza Jolfaei;Amr Tolba;Keping Yu;Yu-Kai Fu;Yuliang Cai
Pub Date : 2025-02-05  DOI: 10.1109/TCSS.2025.3528890
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, pp. 1365–1375
Sentiment propagation plays a crucial role in the continuous emergence of social public opinion and network group events. By analyzing the maximum influence of sentiment propagation, we can gain a better understanding of how network group events arise and evolve. Influence maximization (IM) is a fundamental problem in informatics whose purpose is to identify the set of individuals that maximizes a given piece of information's influence in real-world social networks; the sentiments expressed by the most influential nodes can significantly affect the emotions of the entire group. The IM problem is known to be NP-hard. Although some methods based on the greedy framework can achieve ideal results, they incur unacceptable computational overhead, while the performance of other methods is unsatisfactory. In this article, we formulate the IM problem and design a local influence evaluation function as the objective function of IM to estimate the influence spread under cascade diffusion models. We redefine the particle parameters and update rules for the IM problem and introduce learning automata to realize multiple search modes. We then propose a multisearch particle swarm optimization (MSPSO) algorithm to optimize the objective function. The algorithm incorporates a heuristic-based initialization strategy and a local search scheme to expedite convergence. Experimental results on five real-world social network datasets consistently demonstrate MSPSO's superior efficiency and performance compared with baseline algorithms.
{"title":"Influence Maximization in Sentiment Propagation With Multisearch Particle Swarm Optimization Algorithm","authors":"Qiang He;Xin Yan;Alireza Jolfaei;Amr Tolba;Keping Yu;Yu-Kai Fu;Yuliang Cai","doi":"10.1109/TCSS.2025.3528890","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3528890","url":null,"abstract":"Sentiment propagation plays a crucial role in the continuous emergence of social public opinion and network group events. By analyzing the maximum Influence of sentiment propagation, we can gain a better understanding of how network group events arise and evolve. Influence maximization (IM) is a critical fundamental issue in the field of informatics, whose purpose is to identify the collection of individuals and maximize the specific information's influence in real-world social networks, and the sentiments expressed by nodes with the greatest influence can significantly impact the emotions of the entire group. The IM issue has been established to be an NP-hard (nondeterministic polynomial) challenge. Although some methods based on the greedy framework can achieve ideal results, they bring unacceptable computational overhead, while the performance of other methods is unsatisfactory. In this article, we explicate the IM problem and design a local influence evaluation function as the objective function of the IM to estimate the influence spread in the cascade diffusion models. We redefine particle parameters, update rules for IM problems, and introduce learning automata to realize multiple search modes. Then, we propose a multisearch particle Swarm optimization algorithm (MSPSO) to optimize the objective function. This algorithm incorporates a heuristic-based initialization strategy and a local search scheme to expedite MSPSO convergence. Experimental results on five real-world social network datasets consistently demonstrate MSPSO's superior efficiency and performance compared with baseline algorithms.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1365-1375"},"PeriodicalIF":4.5,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144185940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}