Pub Date : 2024-12-17 | DOI: 10.1109/TCSS.2024.3514148
Shuxin Qin;Yongcan Luo;Jing Zhu;Gaofeng Tao;Jingya Zheng;Zhongjun Ma
Detecting abnormal nodes in attributed networks plays an important role in various applications, including cybersecurity, finance, and social networks. Most existing methods focus on learning graphs at different scales or on using augmented data to improve the quality of feature representations. However, performance is limited by two critical problems. First, attributed networks are highly sensitive to modification, so conventional data augmentation is hard to control and introduces uncertainty, yielding only limited gains in representation and generalization. Second, under the unsupervised paradigm, anomalous nodes mixed into the training data may interfere with the learning of normal patterns and weaken discrimination ability. In this work, we propose a novel multiview and multiscale contrastive learning framework to address these two issues. Specifically, a network augmentation method based on parameter perturbation is introduced to generate augmented views for both the node–node and node–subgraph contrast branches. Then, cross-view graph contrastive learning is employed to improve the representation without the need for augmented data. We also provide a cycle training strategy in which normal samples detected in the previous step are collected for an additional training step, enhancing the ability to learn normal patterns. Extensive experiments on six benchmark datasets demonstrate that our method outperforms existing state-of-the-art baselines.
Title: "Anomaly Detection on Attributed Networks via Multiview and Multiscale Contrastive Learning"
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, pp. 1038-1051.
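The abstract's two key ingredients, parameter-perturbation augmentation and node–subgraph contrast, can be sketched as follows. This is a toy illustration under assumed simplifications (an elementwise linear "encoder" standing in for a GNN, mean-pooled subgraphs), not the paper's implementation: a normal node should remain close to its local subgraph even when the encoder weights are perturbed, so the cross-view distance can serve as an anomaly score.

```python
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def perturb(weights, sigma, rng):
    # Augmented view via parameter perturbation: noise is added to the
    # encoder weights rather than to the (sensitive) graph data itself.
    return [w + rng.gauss(0.0, sigma) for w in weights]

def encode(x, weights):
    # Toy elementwise linear encoder standing in for a GNN layer.
    return [wi * xi for wi, xi in zip(weights, x)]

def anomaly_score(node_x, neighbor_xs, weights, sigma=0.05, rng=None):
    # Node-subgraph contrast across two views: a normal node should agree
    # with its local subgraph even under a perturbed encoder.
    rng = rng or random.Random(0)
    z_node = encode(node_x, weights)              # view 1: clean encoder
    w2 = perturb(weights, sigma, rng)             # view 2: perturbed encoder
    neigh = [encode(x, w2) for x in neighbor_xs]
    z_sub = [sum(col) / len(neigh) for col in zip(*neigh)]  # mean-pool subgraph
    return 1.0 - cosine(z_node, z_sub)            # high score = anomalous
```

A node whose features contradict its neighborhood scores high in both views, which is the signal the contrast branches exploit.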
Pub Date : 2024-12-17 | DOI: 10.1109/TCSS.2024.3514186
Liping Tao;Yang Lu;Yuqi Fan;Chee Wei Tan;Zhen Wei
Sharding presents an enticing path toward improving blockchain scalability. However, the consensus mechanism within individual shards faces mounting security challenges due to the restricted number of consensus nodes and the reliance on conventional, unchanging nodes for consensus. Common strategies to enhance shard consensus security often involve increasing the number of consensus nodes per shard. While effective in bolstering security, this approach also leads to a notable rise in consensus delay within each shard, potentially offsetting the scalability advantages of sharding. Hence, it becomes imperative to strategically select nodes to form dedicated consensus groups for each shard. These groups should not only enhance shard consensus security but also do so without exacerbating consensus delay. In this article, we propose a novel consensus group selection based on transmission delay between nodes (CGSTD) to address this challenge, with the goal of minimizing the overall consensus delay across the system. CGSTD intelligently selects nodes from various shards to form distinct consensus groups for each shard, thereby enhancing shard security while maintaining optimal system-wide consensus efficiency. We conduct a rigorous theoretical analysis to evaluate the security properties of CGSTD and derive approximation ratios under various operational scenarios. Simulation results validate the superior performance of CGSTD compared to baseline algorithms, showcasing reductions in total consensus delay, mitigated increases in shard-specific delay, optimized block storage utilization per node, and streamlined participation of nodes in consensus groups.
Title: "Optimized Consensus Group Selection Focused on Node Transmission Delay in Sharding Blockchains"
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, pp. 1052-1067.
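The selection objective can be illustrated with a greedy heuristic. This is an assumed simplification for intuition, not the paper's CGSTD algorithm or its approximation analysis: given a pairwise transmission-delay matrix, repeatedly add the outside node whose total delay to the shard's current members is smallest.

```python
def select_consensus_group(delays, shard, k):
    # delays[i][j]: transmission delay between nodes i and j.
    # Greedily pick k nodes outside the shard that minimize the total
    # delay to the shard's members (a stand-in for the paper's rule).
    group = []
    candidates = [n for n in range(len(delays)) if n not in shard]
    while len(group) < k and candidates:
        best = min(candidates, key=lambda n: sum(delays[n][m] for m in shard))
        group.append(best)
        candidates.remove(best)
    return group
```

The actual optimization also balances security constraints across shards, which this sketch omits.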
Pub Date : 2024-12-16 | DOI: 10.1109/TCSS.2024.3509340
Manjary P. Gangan;Anoop Kadan;Lajish V. L.
Image forensics research has recently witnessed considerable advancement toward computational models capable of accurately distinguishing natural images captured by cameras from generative adversarial network (GAN) generated images. However, it is also important to ensure that these models are fair and do not produce biased outcomes that could harm certain societal groups or cause serious security threats. Exploring fairness in image forensic algorithms is an initial step toward mitigating such biases. This study explores bias in visual transformer based image forensic algorithms that classify natural and GAN images, since visual transformers are now widely used in image classification tasks, including image forensics. The study procures bias evaluation corpora to analyze bias in the gender, racial, affective, and intersectional domains using a wide set of individual and pairwise bias evaluation measures. Because robustness against image compression is an important factor in forensic tasks, the study also analyzes the impact of compression on model bias through a two-phase evaluation setting, with experiments carried out on both uncompressed and compressed data. The study identifies biases in the visual transformer based models distinguishing natural and GAN images, and observes that image compression affects model biases, predominantly amplifying biases in predictions for the GAN class.
Title: "Toward Exploring Fairness in Visual Transformer Based Natural and GAN Image Detection Systems"
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, pp. 1068-1079.
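One simple instance of a pairwise bias evaluation measure, assumed here for illustration (the study uses a much wider set of measures), is the largest accuracy gap between any two demographic groups:

```python
def group_accuracy(preds, labels, groups, g):
    # Accuracy of the detector restricted to samples from group g.
    idx = [i for i, gi in enumerate(groups) if gi == g]
    return sum(preds[i] == labels[i] for i in idx) / len(idx)

def pairwise_bias(preds, labels, groups):
    # Max pairwise accuracy gap across groups: 0.0 means the detector
    # performs identically on every group; larger values indicate bias.
    gs = sorted(set(groups))
    accs = {g: group_accuracy(preds, labels, groups, g) for g in gs}
    return max(abs(accs[a] - accs[b]) for a in gs for b in gs)
```

Running the same measure on compressed and uncompressed versions of the corpus is the two-phase comparison the abstract describes.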
Pub Date : 2024-12-16 | DOI: 10.1109/TCSS.2024.3508089
Harshal Janjani;Tanmay Agarwal;M. P. Gopinath;Vimoh Sharma;S. P. Raja
With the rapid proliferation of machine learning applications in cloud computing environments, addressing their high power consumption and the broader challenge of energy efficiency has become pressing. This work develops an energy-aware scheduling and task assignment algorithm that optimizes energy consumption while maintaining the performance standards required for deploying machine learning applications in cloud environments. The approach leverages online reinforcement learning (RL) to deduce an optimal planning and allocation strategy, exploiting RL's capability for sequential decision-making aimed at maximizing cumulative reward. The algorithm's design and implementation are examined in detail, considering the nature of the workloads and how computational resources are utilized, and its behavior is analyzed across several performance metrics. The results indicate that energy-aware scheduling combined with task assignment can substantially reduce energy consumption while meeting the performance requirements of large-scale workloads. These results hold promise for more sustainable cloud computing infrastructures and, consequently, for energy-efficient machine learning. Future research directions involve enhancing the algorithm's generalization capabilities and addressing challenges related to scalability and convergence.
Title: "Designing Energy-Aware Scheduling and Task Allocation Algorithms for Online Reinforcement Learning Applications in Cloud Environments"
IEEE Transactions on Computational Social Systems, vol. 12, no. 3, pp. 1218-1232.
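The core idea, an online RL agent that learns task-to-machine assignments from an energy-aware reward, can be sketched as a bandit-style Q-learner. The cost table, epsilon-greedy rule, and single-step episodes are assumptions for illustration, not the paper's agent:

```python
import random

def learn_assignment(cost, episodes=3000, alpha=0.2, eps=0.3, seed=0):
    # cost[t][m]: combined energy use and performance penalty of running
    # task type t on machine m; the reward is its negation, so the agent
    # learns the assignment minimizing energy subject to performance.
    rng = random.Random(seed)
    n_tasks, n_machines = len(cost), len(cost[0])
    q = [[0.0] * n_machines for _ in range(n_tasks)]
    for _ in range(episodes):
        t = rng.randrange(n_tasks)                 # a task of some type arrives
        if rng.random() < eps:                     # epsilon-greedy exploration
            m = rng.randrange(n_machines)
        else:
            m = max(range(n_machines), key=lambda j: q[t][j])
        r = -cost[t][m]                            # energy-aware reward
        q[t][m] += alpha * (r - q[t][m])           # one-step value update
    # Greedy policy: best machine for each task type.
    return [max(range(n_machines), key=lambda j: q[t][j]) for t in range(n_tasks)]
```

A full scheduler would add queueing state and delayed rewards, but the learned table already illustrates how exploration discovers the low-energy placement.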
In the context of rapid urbanization, traditional manual guidance and static evacuation signs are increasingly inadequate for addressing complex and dynamic emergencies. This study proposes an innovative emergency evacuation framework that optimizes crowd evacuation by integrating multiagent reinforcement learning (MARL) with adversarial reinforcement learning (ARL). The developed simulation environment models realistic human behavior in complex buildings and incorporates robotic navigation and intelligent path planning. A novel simulated human behavior model was integrated, capable of complex human–robot interaction and independent escape-route searching, and exhibiting herd mentality and memory mechanisms. We also propose a multiagent framework that combines MARL and ARL to enhance overall evacuation efficiency and robustness, together with a new ARL evaluation framework that provides a novel method for quantifying agents' performance. Experiments at various difficulty levels demonstrate that the proposed framework exhibits advantages in emergency evacuation scenarios. Specifically, our ARLR approach increased survival rates by 1.8 percentage points in low-difficulty evacuation tasks compared to the RLR approach using only MARL algorithms. In high-difficulty evacuation tasks, the ARLR approach raised survival rates from 46.7% without robots to 64.4%, exceeding the RLR approach by 1.7 percentage points.
This study aims to enhance the efficiency and safety of human–robot collaborative fire evacuations and provides theoretical support for evaluating and improving the performance and robustness of ARL agents.
Title: "Adversarial Reinforcement Learning for Enhanced Decision-Making of Evacuation Guidance Robots in Intelligent Fire Scenarios"
Authors: Hantao Zhao;Zhihao Liang;Tianxing Ma;Xiaomeng Shi;Mubbasir Kapadia;Tyler Thrash;Christoph Hoelscher;Jinyuan Jia;Bo Liu;Jiuxin Cao
Pub Date : 2024-12-12 | DOI: 10.1109/TCSS.2024.3502420
IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 2030-2046.
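The adversarial-training intuition can be reduced to a minimax choice over a small payoff table. This is a deliberately simplified, hypothetical sketch (the paper trains neural policies in simulation): the adversary picks the hazard scenario that minimizes survival, and the guidance policy maximizes that worst case.

```python
def minimax_policy(survival):
    # survival[a][s]: survival rate when guidance policy a faces
    # adversarial hazard scenario s. The adversary minimizes over s,
    # so the robust policy maximizes its worst-case row value.
    worst = [min(row) for row in survival]
    best = max(range(len(survival)), key=lambda a: worst[a])
    return best, worst[best]
```

Here a policy that is slightly weaker on average but never collapses under the worst scenario is preferred, which is exactly the robustness property ARL training targets.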
Temporal dynamic graphs (TDGs), representing the dynamic evolution of entities and their relationships over time with intricate temporal features, are widely used in various real-world domains. Existing methods typically rely on mainstream techniques such as transformers and graph neural networks (GNNs) to capture the spatiotemporal information of TDGs. However, despite their advanced capabilities, these methods often struggle with significant computational complexity and limited ability to capture temporal dynamic contextual relationships. Recently, a new model architecture called Mamba has emerged, noted for its capability to capture complex dependencies in sequences while significantly reducing computational complexity. Building on this, we propose a novel method, TDG-Mamba, which integrates Mamba for TDG learning. TDG-Mamba introduces deep semantic spatiotemporal embeddings into the Mamba architecture through a specially designed spatiotemporal prior tokenization module (SPTM). Furthermore, to better leverage temporal information differences and enhance the modeling of dynamic changes in graph structures, we separately design a bidirectional Mamba and a directed GNN for improved spatiotemporal embedding learning.
Link prediction experiments on multiple public datasets demonstrate that our method delivers superior performance, with an average improvement of 5.11% over baseline methods across various settings.
Title: "TDG-Mamba: Advanced Spatiotemporal Embedding for Temporal Dynamic Graph Learning via Bidirectional Information Propagation"
Authors: Mengran Li;Junzhou Chen;Bo Li;Yong Zhang;Ronghui Zhang;Siyuan Gong;Xiaolei Ma;Zhihong Tian
Pub Date : 2024-12-12 | DOI: 10.1109/TCSS.2024.3509399
IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 2014-2029.
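The sequence operator underlying Mamba is a linear state-space recurrence. A minimal one-dimensional sketch with fixed (rather than input-dependent) parameters, plus the bidirectional combination the abstract describes, looks like this; it illustrates only the shape of the recurrence, not TDG-Mamba itself:

```python
def ssm_scan(xs, a=0.9, b=1.0):
    # Linear state-space recurrence h_t = a * h_{t-1} + b * x_t,
    # the building block that selective state-space models generalize
    # (Mamba makes a and b input-dependent; here they are fixed).
    h, out = 0.0, []
    for x in xs:
        h = a * h + b * x
        out.append(h)
    return out

def bidirectional(xs):
    # Bidirectional variant as in TDG-Mamba's design: run the scan in
    # both temporal directions and sum, so every step sees context from
    # both the past and the future of the dynamic graph sequence.
    fwd = ssm_scan(xs)
    bwd = ssm_scan(xs[::-1])[::-1]
    return [f + b for f, b in zip(fwd, bwd)]
```

The linear recurrence costs O(n) per sequence, which is the complexity advantage over attention that motivates the architecture.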
Pub Date : 2024-12-11 | DOI: 10.1109/TCSS.2024.3508452
Víctor A. Vargas-Pérez;Jesús Giráldez-Cru;Pablo Mesejo;Oscar Cordón
Opinion Dynamics models in social networks are a valuable tool to study how opinions evolve within a population. However, these models often rely on agent-level parameters that are difficult to measure in a real population. This is the case of the confidence threshold in opinion dynamics models based on bounded confidence, where agents are only influenced by other agents having a similar opinion (given by this confidence threshold). Consequently, a common practice is to apply a universal threshold to the entire population and calibrate its value to match observed real-world data, despite being an unrealistic assumption. In this work, we propose an alternative approach using graph neural networks to infer agent-level confidence thresholds in the opinion dynamics of the Hegselmann-Krause model of bounded confidence. This eliminates the need for additional simulations when faced with new case studies. To this end, we construct a comprehensive synthetic training dataset that includes different network topologies and configurations of thresholds and opinions. Through multiple training runs utilizing different architectures, we identify GraphSAGE as the most effective solution, achieving a coefficient of determination $R^{2}$ above 0.7 in test datasets derived from real-world topologies. Remarkably, this performance holds even when the test topologies differ in size from those considered during training.
Title: "Unveiling Agents’ Confidence in Opinion Dynamics Models via Graph Neural Networks"
IEEE Transactions on Computational Social Systems, vol. 12, no. 2, pp. 725-737.
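GraphSAGE's core operator is mean aggregation over a node's neighbors. A minimal scalar-feature sketch of one such layer follows; the weights and graph are hypothetical, and the actual model stacks several learned layers before regressing each agent's confidence threshold:

```python
def sage_layer(features, neighbors, w_self, w_neigh):
    # One GraphSAGE-style step with scalar features for brevity:
    # each node combines its own feature with the mean of its
    # neighbors' features via two weights.
    out = []
    for v, x in enumerate(features):
        ns = neighbors[v]
        agg = sum(features[u] for u in ns) / len(ns) if ns else 0.0
        out.append(w_self * x + w_neigh * agg)
    return out
```

Because the aggregation depends only on local neighborhoods, the same trained layer applies to graphs of any size, which is consistent with the reported transfer to test topologies larger than those seen in training.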
Pub Date : 2024-12-11 | DOI: 10.1109/TCSS.2024.3508733
Jothi Prakash V;Arul Antran Vijay S
In the dynamic landscape of social media, the strategic use of hashtags has emerged as a crucial tool for enhancing content discoverability and engagement. This research introduces the neurosymbolic contrastive framework (NSCF), an innovative methodology designed to address the multifaceted challenges inherent in automated hashtag recommendation, such as the integration of multimodal data, the context sensitivity of content, and the dynamic nature of social media trends. By combining deep learning's representational strengths with the deductive prowess of symbolic artificial intelligence (AI), NSCF crafts contextually relevant and logically coherent hashtag suggestions. Its dual-stream architecture meticulously processes and aligns textual and visual content through contrastive learning, ensuring a comprehensive understanding of multimodal social media data. The framework's neurosymbolic integration leverages structured knowledge and logical inference, significantly enhancing the relevance and coherence of its recommendations. Evaluated against a variety of datasets, including MM-INS, NUS-WIDE, and HARRISON, NSCF has demonstrated exceptional performance, outshining existing models and baseline methods across key metrics such as precision (0.721–0.701), recall (0.736–0.716), and F1 score (0.728–0.708). This research represents a major advancement in social media analytics as it not only demonstrates NSCF's novel approach but also sheds light on its potential to transform hashtag recommendation systems.
{"title":"A Comprehensive Multimodal Framework for Optimizing Social Media Hashtag Recommendations","authors":"Jothi Prakash V;Arul Antran Vijay S","doi":"10.1109/TCSS.2024.3508733","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3508733","url":null,"abstract":"In the dynamic landscape of social media, the strategic use of hashtags has emerged as a crucial tool for enhancing content discoverability and engagement. This research introduces the neurosymbolic contrastive framework (NSCF), an innovative methodology designed to address the multifaceted challenges inherent in automated hashtag recommendation, such as the integration of multimodal data, the context sensitivity of content, and the dynamic nature of social media trends. By combining deep learning's representational strengths with the deductive prowess of symbolic artificial Intelligence (AI), NSCF crafts contextually relevant and logically coherent hashtag suggestions. Its dual-stream architecture meticulously processes and aligns textual and visual content through contrastive learning, ensuring a comprehensive understanding of multimodal social media data. The framework's neurosymbolic integration leverages structured knowledge and logical inference, significantly enhancing the relevance and coherence of its recommendations. Evaluated against a variety of datasets, including MM-INS, NUS-WIDE, and HARRISON, NSCF has demonstrated exceptional performance, outshining existing models and baseline methods across key metrics such as precision (0.721–0.701), recall (0.736–0.716), and F1 score (0.728–0.708). 
This research represents a major advancement in social media analytics as it not only demonstrates NSCF's novel approach but also sheds light on its potential to transform hashtag recommendation systems.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"2144-2155"},"PeriodicalIF":4.5,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
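The cross-modal contrastive alignment described in the NSCF abstract can be sketched as a symmetric InfoNCE-style loss over paired text/image embeddings, where matched pairs are positives and all other in-batch pairs serve as negatives. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the function names and the temperature value are illustrative assumptions.

```python
import numpy as np

def _log_softmax(x):
    # Numerically stable row-wise log-softmax.
    m = x.max(axis=1, keepdims=True)
    return x - m - np.log(np.exp(x - m).sum(axis=1, keepdims=True))

def contrastive_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss: row i of each matrix is a matched
    text/image pair (positive); all other in-batch pairs are negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature          # (batch, batch) similarity matrix
    n = logits.shape[0]
    diag = np.arange(n)
    # Cross-entropy in both directions, with the diagonal as the target.
    loss_t2v = -_log_softmax(logits)[diag, diag].mean()
    loss_v2t = -_log_softmax(logits.T)[diag, diag].mean()
    return 0.5 * (loss_t2v + loss_v2t)
```

Training the two streams to minimize this loss pulls matched text and image embeddings together while pushing mismatched in-batch pairs apart, which is the alignment property the dual-stream architecture relies on.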
With the growing popularity of social platforms, rumors on the Web have become a significant threat to society. Existing rumor detection methods neglect to model and analyze the community structure of the rumor propagation network. This article proposes a new community-enhanced dynamic graph convolutional network (CDGCN) for effective rumor detection on online social networks, which utilizes the communities formed during rumor propagation to improve detection accuracy. CDGCN identifies these communities and learns the community features of rumors using a method that combines node features and topology features. Following this, a graph convolutional network (GCN) with a community-aware attention mechanism is proposed so that each node dynamically aggregates information from its neighbors' global and community features, prioritizing critical neighborhood information and enhancing the representation of both local community structures and global network patterns. The final rumor representations generated by the GCN are processed by a classifier to detect false rumors. Comprehensive experiments and comparison studies are conducted on four real-world datasets to validate the effectiveness of CDGCN.
{"title":"Community-Enhanced Dynamic Graph Convolutional Networks for Rumor Detection on Social Networks","authors":"Wei Zhou;Chenzhan Wang;Fengji Luo;Yu Wang;Min Gao;Junhao Wen","doi":"10.1109/TCSS.2024.3505892","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3505892","url":null,"abstract":"Along with the increasing popularization of social platforms, rumors in the Web environment have become one of the significant threats to human society. Existing rumor detection methods ignore modeling and analyzing the community structure of the rumor propagation network. This article proposes a new community-enhanced dynamic graph convolutional network (CDGCN) for effective rumor detection on online social networks, which utilize the communities formed in a rumor propagation process to improve rumor detection accuracy. CDGCN uses a designed method that combines node features and topology features to identify the communities and learn the community features of rumors. Following this, a graph convolutional network (GCN) with a community-aware attention mechanism is proposed to enable the nodes to dynamically aggregate information from their neighboring nodes’ global and community features, effectively prioritizing critical neighborhood information, enhancing the representation of both local community structures and global network patterns for improved analytical performance. The final rumor representations generated by the GCN are processed by a classifier to detect false rumors. 
Comprehensive experiments and comparison studies are conducted on four real-world datasets to validate the effectiveness of CDGCN.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 2","pages":"818-831"},"PeriodicalIF":4.5,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143769473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
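A minimal sketch of what a community-aware attention step could look like: each node scores its neighbors, boosts the scores of neighbors in its own community, and aggregates their features with the resulting softmax weights. This is a hypothetical simplification for illustration, not CDGCN's actual layer; the dot-product scoring and the `boost` parameter are assumptions.

```python
import numpy as np

def community_aware_aggregate(X, adj, community, boost=2.0):
    """One message-passing step where same-community neighbors receive
    extra attention weight. X: (n, d) node features; adj: (n, n)
    0/1 adjacency matrix; community: (n,) community labels.
    """
    n = X.shape[0]
    out = np.zeros_like(X, dtype=float)
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        if nbrs.size == 0:
            out[i] = X[i]               # isolated node keeps its own features
            continue
        # Raw attention score: dot-product similarity with the center node.
        scores = X[nbrs] @ X[i]
        # Additive log-boost for neighbors in the same community.
        scores = scores + np.log(np.where(community[nbrs] == community[i],
                                          boost, 1.0))
        w = np.exp(scores - scores.max())   # stable softmax
        w /= w.sum()
        out[i] = w @ X[nbrs]                # weighted neighbor aggregation
    return out
```

Because the boost enters the scores additively in log-space, a same-community neighbor's softmax weight is exactly `boost` times larger than an otherwise identical neighbor from a different community.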
Pub Date : 2024-12-11 DOI: 10.1109/TCSS.2024.3504398
Keyi Chen;Tianxing Wang;Haibin Zhu;Bing Huang
Role-based collaboration (RBC) is a role-centered computational approach designed to solve collaboration problems. Group role assignment is an essential and extensively studied part of this research. Building on group multirole assignment (GMRA), this article addresses several issues in current research. First, managers often seek to obtain the highest overall benefit rather than to maximize team performance, which is the objective emphasized in traditional RBC research; this article therefore introduces expected utility theory into role assignment to maximize team effectiveness. Second, existing studies lack formal expressions of agent and role conflicts, which have yet to be reasonably addressed; this article classifies conflicts by combining agent and role capabilities with three-way conflict analysis theory. On this basis, the article formulates the utility-based GMRA problem with conflicting agents and roles. Its validity is verified through several experiments and comparative analyses, which opens further possibilities for future research.
{"title":"Maximizing Group Utilities While Avoiding Conflicts Through Agent Qualifications","authors":"Keyi Chen;Tianxing Wang;Haibin Zhu;Bing Huang","doi":"10.1109/TCSS.2024.3504398","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3504398","url":null,"abstract":"Role-based collaboration (RBC) is a role-centered computational approach designed to solve collaboration problems. Group role assignment is an essential and extensive part of this research. Based on group multirole assignment (GMRA), this article addresses some issues in the current research. First, managers often hope to obtain the highest benefits rather than maximizing the team performance, which is emphasized in the traditional RBC research. This article introduces the use of expected utility theory to assign roles in order to maximize team effectiveness. Second, the existing studies need to provide expressions of agent and role conflicts, which have yet to be reasonably addressed. This article classifies conflicts by employing agent and role capability combined with the three-way conflict analysis theory. Based on these, this article puts forward the utility-based GMRA with conflicting agent and role problems. The validity is verified through several experiments and comparative analysis, which provides more possibilities for future research.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 2","pages":"552-562"},"PeriodicalIF":4.5,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143783262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}