Latest Articles: IEEE Transactions on Computational Social Systems
Converging Real and Virtual: Embodied Intelligence-Driven Immersive VR Biofeedback for Brain Health Modulation
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-03-30 | DOI: 10.1109/TCSS.2025.3567776
Yingying She;Fang Liu;Baorong Yang;Bin Hu
Vol. 12, No. 3, pp. 938-946 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11018521 | Citations: 0
IEEE Transactions on Computational Social Systems Publication Information
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-03-30 | DOI: 10.1109/TCSS.2025.3567690
Vol. 12, No. 3, p. C2 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11018522 | Citations: 0
IEEE Systems, Man, and Cybernetics Society Information
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-03-30 | DOI: 10.1109/TCSS.2025.3567692
Vol. 12, No. 3, p. C3 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11018523 | Citations: 0
IEEE Transactions on Computational Social Systems Information for Authors
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-03-30 | DOI: 10.1109/TCSS.2025.3567694
Vol. 12, No. 3, p. C4 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11018520 | Citations: 0
Deep Learning-Driven Behavioral Modeling in IoST for Mental Health Monitoring and Intervention
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-03-28 | DOI: 10.1109/TCSS.2025.3550419
Jialin Li;Muhammad Azeem Akbar;Syed Hassan Shah;Zhi Wang;Jing Yang
Multimodal data have emerged as a cornerstone for understanding and analyzing complex human behaviors, particularly in mental health monitoring. In this study, we propose a deep learning-driven behavioral modeling framework for intelligence of social things (IoST)-based mental health monitoring and intervention, designed to integrate and analyze multimodal data—including text, speech, and physiological signals—captured from interconnected IoST devices. The framework incorporates an adaptive attention-based fusion mechanism that dynamically adjusts the contribution of each modality based on contextual relevance, enhancing the robustness of multimodal integration. Additionally, we employ a temporal-aware recurrent neural network with an attention mechanism to capture long-term dependencies and evolving behavioral patterns, ensuring precise mental health state prediction. To validate the framework, extensive experiments were conducted using three publicly available datasets: DAIC-WOZ, SEED, and MELD. Comparative experiments demonstrate the superior performance of the proposed framework, achieving state-of-the-art results: 93.5% accuracy, a 92.9% F1-score, and an AUC-ROC of 0.95. Ablation studies highlight the critical roles of attention mechanisms and multimodal integration, showcasing significant performance improvements over single-modality and simplified fusion approaches. These findings underscore the framework's potential as a reliable and efficient tool for real-time mental health monitoring in IoST environments, paving the way for scalable and personalized interventions.
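The adaptive attention-based fusion idea in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the modality vectors, the context vector, and the dot-product relevance scoring are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_fusion(modality_feats, context):
    # Score each modality by its dot-product relevance to a context vector,
    # then fuse with softmax weights so contributions adapt to context.
    scores = np.array([feat @ context for feat in modality_feats])
    weights = softmax(scores)
    fused = sum(w * f for w, f in zip(weights, modality_feats))
    return fused, weights

# toy modality embeddings (text, speech, physiological), all length-3
text = np.array([1.0, 0.2, 0.1])
speech = np.array([0.1, 1.0, 0.3])
physio = np.array([0.2, 0.1, 1.0])
context = np.array([1.0, 0.5, 0.1])  # this context favors the text modality
fused, weights = adaptive_fusion([text, speech, physio], context)
```

With a different context vector, the weight mass shifts to another modality, which is the "dynamic adjustment based on contextual relevance" the abstract describes.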
Vol. 13, No. 1, pp. 1044-1057 | Citations: 0
Bidirectional Patch-Aware Attention Network for Few-Shot Learning
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-03-21 | DOI: 10.1109/TCSS.2025.3548057
Yu Mao;Shaojie Lin;Zilong Lin;Yaojin Lin
Few-shot learning (FSL) aims to train a model using a minimal number of samples and subsequently apply this model to recognize unseen classes. Recently, metric-based methods mainly focus on exploring the relationship between the support set and the query set through attention mechanism in solving FSL problems. However, these methods typically employ unidirectional computation when calculating the attention relationship between support and query. This unidirectional approach not only limits the depth and breadth of knowledge acquisition but may also lead to mismatched patches between support and query, thereby affecting the overall performance of the model. In this article, we propose a bidirectional patch-aware attention network (BPAN) for few-shot learning to address this issue. First, we extract subimages via grid cropping and feed them into the learned feature extractor to obtain patch features. Moreover, self-attention is used to assign different weights to patch features and reconstruct them. Then, PFCAM is proposed to mutually explore the patch feature relationship between the support set and the query set, further reconstruct the patch features, and aggregate multiple patch features of each image into one feature through a learnable parameter matrix for prediction. Finally, the template for each class is constructed to extend the results of PFCAM to the few-shot classification scenario. Experiments on three benchmark datasets show that BPAN achieves superior performance.
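The bidirectional support/query patch attention can be sketched as scaled dot-product cross-attention run in both directions. This is a toy illustration under assumed random patch features, not the paper's PFCAM module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(a, b):
    # Reconstruct each patch in `a` as an attention-weighted mix of patches in `b`.
    scores = a @ b.T / np.sqrt(a.shape[1])   # (Na, Nb) similarity matrix
    return softmax(scores, axis=1) @ b       # (Na, dim) reconstruction

def bidirectional_patch_attention(support, query):
    # Unlike a unidirectional scheme, attention runs both ways:
    # support patches attend to query patches, and vice versa.
    return cross_attend(support, query), cross_attend(query, support)

rng = np.random.default_rng(0)
support = rng.normal(size=(9, 16))   # 9 patches from a 3x3 grid crop, 16-dim features
query = rng.normal(size=(9, 16))
s_recon, q_recon = bidirectional_patch_attention(support, query)
```

Running the attention in both directions is what lets mismatched patches on either side be down-weighted, which the unidirectional variant cannot do.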
Vol. 12, No. 5, pp. 3698-3708 | Citations: 0
EmoGif: A Multimodal Approach to Detect Emotional Support in Animated GIFs
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-03-07 | DOI: 10.1109/TCSS.2025.3544263
Aakash Singh;Deepawali Sharma;Vivek Kumar Singh
The massive expansion of social media and the rapid growth in multimedia content on it have resulted in a growing interest in visual content analysis and classification. There are now a good number of studies that focus on identifying hateful and offensive content in social media posts. The social media content is often analyzed through automated algorithmic approaches, with respect to being unsuitable or harmful for different groups such as women and children. There is, however, a noticeable gap in the exploration of positive content, particularly in the case of multimodal content such as GIFs. Therefore, the present work attempted to address this gap by introducing a high-quality annotated dataset of animated GIFs. The dataset defines two subtasks: 1) subtask 1 involves binary classification, determining whether a GIF provides emotional support; and 2) subtask 2 involves multiclass classification, wherein the GIFs are categorized into three different emotional support categories. The data annotation quality is assessed using Fleiss' kappa. Various unimodal models, utilizing text-only and image-only approaches, are implemented. Additionally, an effective multimodal approach is proposed that combines visual and textual information for detecting emotional support in animated GIFs. Both sequence and frame-level visual features are extracted from animated GIFs and utilized for classification tasks. The proposed multimodal long-term spatiotemporal (LTST) model employs a weighted late fusion technique. The results obtained show that the proposed multimodal model outperformed the implemented unimodal models for both subtasks. The proposed LTST model achieved a weighted F1-score of 0.8304 and 0.7180 for subtask 1 and subtask 2, respectively. The experimental work and analysis confirm the suitability of the dataset and proposed algorithmic model for the task.
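Weighted late fusion, as named in the abstract, combines the class-probability outputs of separate branches after each has made its own prediction. The branch probabilities and weights below are made-up toy values, not the paper's learned weights.

```python
import numpy as np

def weighted_late_fusion(branch_probs, weights):
    # Normalize branch weights, then average the per-branch class
    # probability vectors; the fused argmax is the predicted class.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * p for wi, p in zip(w, branch_probs))
    return fused, int(np.argmax(fused))

# toy 3-class probabilities from a text branch, a frame-level visual
# branch, and a sequence-level visual branch (values are illustrative)
text_probs = np.array([0.7, 0.2, 0.1])
frame_probs = np.array([0.3, 0.5, 0.2])
seq_probs = np.array([0.4, 0.4, 0.2])
fused, label = weighted_late_fusion(
    [text_probs, frame_probs, seq_probs], weights=[0.5, 0.25, 0.25])
# fused[0] = 0.5*0.7 + 0.25*0.3 + 0.25*0.4 = 0.525, so class 0 wins
```

Late fusion keeps the branches independent, so a weak modality (e.g., sparse GIF captions) cannot corrupt the visual features before the final combination.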
Vol. 12, No. 5, pp. 3791-3803 | Citations: 0
MusicAOG: An Energy-Based Model for Learning and Sampling a Hierarchical Representation of Symbolic Music
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-02-27 | DOI: 10.1109/TCSS.2024.3521445
Yikai Qian;Tianle Wang;Jishang Chen;Peiyang Yu;Duo Xu;Xin Jin;Feng Yu;Song-Chun Zhu
In addressing the challenge of interpretability and generalizability of artificial music intelligence, this article introduces a novel symbolic representation that amalgamates both explicit and implicit musical information across diverse traditions and granularities. Utilizing a hierarchical and-or graph representation, the model employs nodes and edges to encapsulate a broad spectrum of musical elements, including structures, textures, rhythms, and harmonies. This hierarchical approach expands the representability across various scales of music. This representation serves as the foundation for an energy-based model, uniquely tailored to learn musical concepts through a flexible algorithm framework relying on the minimax entropy principle. Utilizing an adapted Metropolis–Hastings sampling technique, the model enables fine-grained control over music generation. Through a comprehensive empirical evaluation, this novel approach demonstrates significant improvements in interpretability and controllability compared to existing methodologies. This study marks a substantial contribution to the fields of music analysis, composition, and computational musicology.
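The Metropolis–Hastings sampling the abstract relies on can be shown in heavily simplified form on a one-dimensional toy energy. The quadratic "pitch energy" and single-semitone proposal here are assumptions for illustration only, not the paper's learned energy model over and-or graphs.

```python
import math
import random

def metropolis_hastings(energy, propose, init, steps, rng):
    # Draw samples whose stationary distribution is proportional to
    # exp(-energy(x)), using a symmetric proposal and the standard
    # accept/reject rule of an energy-based model.
    state = init
    samples = []
    for _ in range(steps):
        cand = propose(state, rng)
        # accept with probability min(1, exp(E(state) - E(cand)))
        if math.log(rng.random() + 1e-300) < energy(state) - energy(cand):
            state = cand
        samples.append(state)
    return samples

# toy energy: prefer pitches near MIDI 60; proposals move one semitone
energy = lambda p: 0.1 * (p - 60) ** 2
propose = lambda p, r: p + r.choice([-1, 1])
rng = random.Random(42)
samples = metropolis_hastings(energy, propose, 60, 5000, rng)
```

Lowering the energy of desired musical configurations raises their sampling probability, which is how an energy-based formulation gives fine-grained control over generation.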
Vol. 12, No. 2, pp. 873-889 | Citations: 0
Explainable Dual-Branch Combination Network With Key Words Embedding and Position Attention for Sentimental Analytics of Social Media Short Comments
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-02-06 | DOI: 10.1109/TCSS.2025.3532984
Zixuan Wang;Pan Wang;Lianyong Qi;Zhixin Sun;Xiaokang Zhou
Social media platforms such as Weibo and TikTok have become more influential than traditional media. Sentiment in social media comments reflects users’ attitudes and impacts society, making sentiment analysis (SA) crucial. AI-driven models, especially deep-learning models, have achieved excellent results in SA tasks. However, most existing models are not interpretable enough. First, deep learning models have numerous parameters, and their transparency is insufficient. People cannot easily understand how the models extract features from input data and make sentiment judgments. Second, most models lack intuitive explanations. They cannot clearly indicate which words or phrases are key for emotion prediction. Moreover, extracting sentiment factors from comments is challenging because a comment often contains multiple sentiment characteristics. To address these issues, we propose a dual-branch combination network (DCN) for SA of social media short comments, achieving both word-level and sentence-level interpretability. The network includes a key word feature extraction network (KWFEN) and a key word order feature extraction network (KWOFEN). KWFEN uses popular emotional words and SHAP for word-level interpretability. KWOFEN employs position embedding and an attention layer to visualize attention weights for sentence-level interpretability. We validated our method on the public datasets weibo2018 and TSATC. The results show that our method effectively extracts positive and negative sentiment factors, establishing a clear mapping between model inputs and outputs, demonstrating good interpretability performance.
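The position-embedding-plus-attention mechanism that KWOFEN uses for sentence-level interpretability can be sketched as follows. The sinusoidal embedding, random token features, and all-ones context vector are illustrative assumptions, not the network's learned components.

```python
import numpy as np

def sinusoidal_positions(seq_len, dim):
    # Fixed sin/cos position embeddings, so the model can use word order.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(dim)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def word_attention(token_feats, context):
    # Softmax attention over tokens; the weights are directly inspectable,
    # showing which words drive the sentence-level sentiment score.
    scores = token_feats @ context
    e = np.exp(scores - scores.max())
    weights = e / e.sum()
    sentence = weights @ token_feats
    return sentence, weights

seq_len, dim = 6, 8
rng = np.random.default_rng(0)
tokens = rng.normal(size=(seq_len, dim)) + sinusoidal_positions(seq_len, dim)
context = np.ones(dim)  # stand-in for a learned sentiment query vector
sentence, weights = word_attention(tokens, context)
```

Plotting `weights` against the comment's tokens is exactly the kind of attention-weight visualization the abstract describes for sentence-level interpretability.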
Vol. 12, No. 3, pp. 1376-1389 | Citations: 0
Influence Maximization in Sentiment Propagation With Multisearch Particle Swarm Optimization Algorithm
IF 4.5 | CAS Tier 2, Computer Science | JCR Q1 (Computer Science, Cybernetics) | Pub Date: 2025-02-05 | DOI: 10.1109/TCSS.2025.3528890
Qiang He;Xin Yan;Alireza Jolfaei;Amr Tolba;Keping Yu;Yu-Kai Fu;Yuliang Cai
Sentiment propagation plays a crucial role in the continuous emergence of social public opinion and network group events. By analyzing the maximum influence of sentiment propagation, we can gain a better understanding of how network group events arise and evolve. Influence maximization (IM) is a critical fundamental issue in the field of informatics, whose purpose is to identify a set of individuals that maximizes the influence of specific information in real-world social networks, and the sentiments expressed by nodes with the greatest influence can significantly impact the emotions of the entire group. The IM issue has been established to be an NP-hard (nondeterministic polynomial) challenge. Although some methods based on the greedy framework can achieve ideal results, they bring unacceptable computational overhead, while the performance of other methods is unsatisfactory. In this article, we explicate the IM problem and design a local influence evaluation function as the objective function of the IM to estimate the influence spread in the cascade diffusion models. We redefine particle parameters, update rules for IM problems, and introduce learning automata to realize multiple search modes. Then, we propose a multisearch particle swarm optimization algorithm (MSPSO) to optimize the objective function. This algorithm incorporates a heuristic-based initialization strategy and a local search scheme to expedite MSPSO convergence. Experimental results on five real-world social network datasets consistently demonstrate MSPSO's superior efficiency and performance compared with baseline algorithms.
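A particle-swarm approach to seed selection can be sketched with a standard continuous PSO whose particles score nodes and whose top-k scores decode to a seed set. The one-hop coverage objective, the decode scheme, and the toy graph are all assumptions for illustration; the paper's MSPSO adds redefined update rules, learning automata, and local search that are not reproduced here.

```python
import random

def one_hop_coverage(graph, seeds):
    # Local influence proxy: a seed "covers" itself and its direct neighbors.
    covered = set(seeds)
    for s in seeds:
        covered.update(graph.get(s, ()))
    return len(covered)

def pso_seed_selection(graph, k, n_particles=10, iters=40, seed=0):
    rng = random.Random(seed)
    nodes = sorted(graph)
    dim = len(nodes)

    def decode(p):
        # Continuous particle position -> seed set: take the k highest-scored nodes.
        return [n for _, n in sorted(zip(p, nodes), reverse=True)[:k]]

    pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [one_hop_coverage(graph, decode(p)) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fit = one_hop_coverage(graph, decode(pos[i]))
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit > gbest_fit:
                    gbest, gbest_fit = pos[i][:], fit
    return decode(gbest), gbest_fit

# hub-and-spoke graph plus a separate 2-node component
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0], 5: [6], 6: [5]}
seeds, fit = pso_seed_selection(star, k=2)
```

The swarm avoids the per-candidate simulation cost of greedy methods, which is the efficiency argument the abstract makes; the price is that convergence to the optimal seed set is not guaranteed.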
Vol. 12, No. 3, pp. 1365-1375 | Citations: 0