Exploring Risk Sharing in Stochastic Exchange Networks
Pub Date: 2024-12-24 | DOI: 10.1109/TCSS.2024.3508803
Arnaud Z. Dragicevic
This study examines the dynamics of bargaining in a social system that incorporates risk sharing, using exchange network models and stochastic matching between agents. The analysis explores three scenarios: convergent expectations, divergent expectations, and social preferences among the players. The study introduces stochastic shocks through a Poisson process, which can disrupt coordination within the decentralized exchange mechanism. Despite these shocks, agents can employ a risk-sharing protocol based on Pareto weights to mitigate their effects. In all three scenarios, the model outcomes fail to coincide with the generalized Nash bargaining solution. Over a sufficiently long time frame, however, the dynamics consistently converge to a fixed point that deviates slightly from the balanced outcome, or Nash equilibrium. This small deviation represents the risk premium required for hedging against mutual risk. The risk premium is smallest in the scenario with convergent expectations and remains unchanged in the case involving social preferences.
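The result can be made concrete with a stylized simulation (a sketch under invented assumptions, not the paper's model): a generalized Nash split x1 = d1 + alpha*(S - d1 - d2), Poisson-style shocks arriving with small per-step probability, and a Pareto-weighted risk-sharing transfer. Every parameter value below is a placeholder, chosen only so that the long-run share settles slightly below the Nash split, the analog of the risk premium.

import numpy as np

rng = np.random.default_rng(0)

S, d1, d2 = 10.0, 1.0, 2.0   # joint surplus and disagreement payoffs (invented)
alpha = 0.5                  # bargaining power of agent 1
lam, eta = 0.01, 0.05        # per-step shock probability, adjustment speed

# generalized Nash bargaining: maximize (x1 - d1)^alpha * (x2 - d2)^(1 - alpha),
# which for a fixed surplus S gives the split below
x1_nash = d1 + alpha * (S - d1 - d2)

x1, w = d1, 0.5              # agent 1's share; its Pareto weight in risk sharing
for _ in range(50_000):
    loss = rng.exponential(0.1) if rng.random() < lam else 0.0  # Poisson shock
    x1 -= w * loss                    # risk-sharing protocol splits the loss
    x1 += eta * (x1_nash - x1)        # bargaining drift toward the Nash split

print(f"Nash split for agent 1: {x1_nash:.3f}")
print(f"long-run share:         {x1:.3f}  (gap plays the role of a risk premium)")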
{"title":"Exploring Risk Sharing in Stochastic Exchange Networks","authors":"Arnaud Z. Dragicevic","doi":"10.1109/TCSS.2024.3508803","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3508803","url":null,"abstract":"This study examines the dynamics of bargaining in a social system that incorporates risk sharing through exchange network models and stochastic matching between agents. The analysis explores three scenarios: convergent expectations, divergent expectations, and social preferences among model players. The study introduces stochastic shocks through a Poisson process, which can disrupt coordination within the decentralized exchange mechanism. Despite these shocks, agents can employ a risk-sharing protocol utilizing Pareto weights to mitigate their effects. The model outcomes do not align with the generalized Nash bargaining solutions across all scenarios. However, over a sufficiently long time frame, the dynamics consistently converge to a fixed point that slightly deviates from the balanced outcome or Nash equilibrium. This minor deviation represents the risk premium necessary for hedging against mutual risk. The risk premium is at its minimum in the scenario with convergent expectations and remains unchanged in the case involving social preferences.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1181-1192"},"PeriodicalIF":4.5,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multibranch Attentive Transformer With Joint Temporal and Social Correlations for Traffic Agents Trajectory Prediction
Pub Date: 2024-12-24 | DOI: 10.1109/TCSS.2024.3517656
Xiaobo Chen;Yuwen Liang;Junyu Wang;Qiaolin Ye;Yingfeng Cai
Accurately predicting the future trajectories of traffic agents is paramount for autonomous unmanned systems such as self-driving cars and mobile robots. Extracting rich temporal and social features from trajectory data, and integrating the resulting features effectively, poses great challenges for predictive models. To address these issues, this article proposes a novel multibranch attentive transformer (MBAT) trajectory prediction network for traffic agents. Specifically, to explore and reveal the diverse correlations among agents, we propose a decoupled multibranch temporal and spatial feature learning module that extracts temporal, spatial, and spatiotemporal features. This design allows each branch to be tailored to a different type of correlation, enhancing the flexibility and representational ability of the features. In addition, we put forward an attentive transformer architecture that simultaneously models the complex correlations occurring across historical and future timesteps. The temporal, spatial, and spatiotemporal features are then integrated through different types of attention mechanisms. Empirical results demonstrate that our model achieves outstanding performance on the public ETH, UCY, SDD, and INTERACTION datasets, and detailed ablation studies verify the effectiveness of the model components.
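The decoupled-branch idea can be sketched in NumPy: one self-attention pass per agent over its own timesteps (temporal branch), one per timestep across agents (spatial/social branch), and an attention-based fusion standing in for the spatiotemporal branch. The single-head attention, weights, and shapes are illustrative assumptions, not the MBAT architecture itself.

import numpy as np

def attention(q, k, v):
    # scaled dot-product attention, used here as the branch and fusion primitive
    s = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max(-1, keepdims=True))
    return (w / w.sum(-1, keepdims=True)) @ v

rng = np.random.default_rng(0)
N, T, D = 5, 8, 16                       # agents, timesteps, feature dim
traj = rng.normal(size=(N, T, 2))        # toy (x, y) trajectories
Wt = rng.normal(size=(2, D))             # temporal-branch projection
Ws = rng.normal(size=(2, D))             # spatial-branch projection

# temporal branch: each agent attends over its own history
temporal = np.stack([attention(a @ Wt, a @ Wt, a @ Wt) for a in traj])

# spatial branch: at each timestep, agents attend over one another
spatial = np.stack([attention(x @ Ws, x @ Ws, x @ Ws)
                    for x in traj.transpose(1, 0, 2)]).transpose(1, 0, 2)

# cross-branch fusion by attention, in place of a dedicated spatiotemporal branch
q = temporal.reshape(N * T, D)
kv = spatial.reshape(N * T, D)
fused = attention(q, kv, kv).reshape(N, T, D)
print(fused.shape)                       # (5, 8, 16)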
{"title":"Multibranch Attentive Transformer With Joint Temporal and Social Correlations for Traffic Agents Trajectory Prediction","authors":"Xiaobo Chen;Yuwen Liang;Junyu Wang;Qiaolin Ye;Yingfeng Cai","doi":"10.1109/TCSS.2024.3517656","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3517656","url":null,"abstract":"Accurately predicting the future trajectories of traffic agents is paramount for autonomous unmanned systems, such as self-driving cars and mobile robotics. Extracting abundant temporal and social features from trajectory data and integrating the resulting features effectively pose great challenges for predictive models. To address these issues, this article proposes a novel multibranch attentive transformer (MBAT) trajectory prediction network for traffic agents. Specifically, to explore and reveal diverse correlations of agents, we propose a decoupled temporal and spatial feature learning module with multibranch to extract temporal, spatial, as well as spatiotemporal features. Such design ensures each branch can be specifically tailored for different types of correlations, thus enhancing the flexibility and representation ability of features. Besides, we put forward an attentive transformer architecture that simultaneously models the complex correlations possibly occurring in historical and future timesteps. Moreover, the temporal, spatial, and spatiotemporal features can be effectively integrated based on different types of attention mechanisms. Empirical results demonstrate that our model achieves outstanding performance on public ETH, UCY, SDD, and INTERACTION datasets. Detailed ablation studies are conducted to verify the effectiveness of the model components.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 2","pages":"525-538"},"PeriodicalIF":4.5,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143783268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural-Network-Adaptive Event-Triggered Control for Stochastic Nonlinear Systems With Sensor Attacks
Pub Date: 2024-12-19 | DOI: 10.1109/TCSS.2024.3502798
Yuelei Yu;Shuai Sui;Zhihong Zhao;C. L. Philip Chen
This article studies the adaptive neural network (NN) event-triggered secure control problem for stochastic nonlinear systems subject to sensor attacks. NNs are adopted to identify unknown nonlinear dynamics, and an NN state estimator is established to handle unmeasurable states. An NN observer is proposed to estimate unknown sensor attack signals. To save limited communication resources and reduce the number of controller updates, an event-triggered control (ETC) scheme is introduced. An adaptive NN event-triggered secure control algorithm is then designed via the backstepping control method. The results establish the stability of the closed-loop system and the convergence of the tracking errors under sensor attacks. Finally, simulations verify the effectiveness of the proposed approach.
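Event triggering commonly follows a relative-threshold rule: the control signal is held constant and recomputed only when the deviation since the last update grows too large. The sketch below shows that generic rule on a toy scalar system; the plant, gain, and threshold parameters sigma and m are assumptions for illustration, not the controller designed in the article.

# toy unstable plant x' = x + u, stabilized with held (event-triggered) feedback
dt, steps = 0.01, 1000
sigma, m = 0.1, 0.01            # relative threshold and offset (assumed values)
x, u, x_last = 2.0, 0.0, 2.0    # state, held control, state at last trigger
updates = 0

for _ in range(steps):
    if abs(x_last - x) >= sigma * abs(x) + m:   # event-triggering condition
        u = -2.0 * x                            # recompute the feedback law
        x_last = x
        updates += 1
    x += dt * (x + u)                           # Euler step of x' = x + u

print(f"final state {x:.4f}, controller updates {updates} of {steps} steps")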
{"title":"Neural-Network-Adaptive Event-Triggered Control for Stochastic Nonlinear Systems With Sensor Attacks","authors":"Yuelei Yu;Shuai Sui;Zhihong Zhao;C. L. Philip Chen","doi":"10.1109/TCSS.2024.3502798","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3502798","url":null,"abstract":"This article studies the adaptive neural network (NN) event-triggered secure control issue for stochastic nonlinear systems subject to sensor attacks. NNs are adopted to identify unknown nonlinear dynamics, and an NN state estimator is established to address the issue resulting from unmeasurable states. An NN observer is proposed to estimate unknown sensor attack signals. To save limited communication resources and reduce the number of controller updates, an event-triggered control (ETC) scheme is introduced. Then, an adaptive NN event-triggered secure control algorithm is designed by backstepping control method. The results demonstrate the stability of the control system and its consistent convergence in tracking errors under sensor attacks. Finally, simulations are shown to verify the effectiveness of the investigated theory.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"2062-2071"},"PeriodicalIF":4.5,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anomaly Detection on Attributed Networks via Multiview and Multiscale Contrastive Learning
Pub Date: 2024-12-17 | DOI: 10.1109/TCSS.2024.3514148
Shuxin Qin;Yongcan Luo;Jing Zhu;Gaofeng Tao;Jingya Zheng;Zhongjun Ma
Detecting abnormal nodes in attributed networks plays an important role in various applications, including cybersecurity, finance, and social networks. Most existing methods focus on learning graphs at different scales or on using augmented data to improve the quality of feature representations. However, their performance is limited by two critical problems. First, the high sensitivity of attributed networks makes conventional data augmentation uncontrollable and unreliable, yielding limited gains in representation and generalization. Second, under the unsupervised paradigm, anomalous nodes mixed into the training data may interfere with the learning of normal patterns and weaken discrimination ability. In this work, we propose a novel multiview and multiscale contrastive learning framework to address these two issues. Specifically, a network augmentation method based on parameter perturbation is introduced to generate augmented views for both the node–node and node–subgraph contrast branches. Cross-view graph contrastive learning is then employed to improve the representations without the need for augmented input data. We also provide a cycle training strategy in which normal samples detected in the previous step are collected for an additional training step, strengthening the ability to learn normal patterns. Extensive experiments on six benchmark datasets demonstrate that our method outperforms existing state-of-the-art baselines.
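The distinctive augmentation step, perturbing the encoder's parameters rather than the graph or its attributes, fits in a few lines. The one-layer encoder, noise scale, and cosine-disagreement anomaly score below are simplifications assumed for illustration, not the proposed framework.

import numpy as np

rng = np.random.default_rng(0)

def encode(W, x, adj):
    # one-layer GNN-style encoder: aggregate neighbors, then project
    return np.tanh((adj @ x) @ W)

n, d, h = 20, 8, 4
x = rng.normal(size=(n, d))                      # node attributes
adj = (rng.random((n, n)) < 0.15).astype(float)  # toy attributed network
adj = np.maximum(adj, adj.T) + np.eye(n)
adj /= adj.sum(1, keepdims=True)                 # row-normalized propagation

W = rng.normal(size=(d, h)) * 0.1
W_aug = W + rng.normal(size=W.shape) * 0.01      # parameter-perturbation view

z1, z2 = encode(W, x, adj), encode(W_aug, x, adj)

# node-level contrast: nodes whose embeddings disagree across the two views
# are flagged as candidate anomalies under this simplified criterion
cos = (z1 * z2).sum(1) / (np.linalg.norm(z1, axis=1) * np.linalg.norm(z2, axis=1))
print(np.argsort(1.0 - cos)[-3:])                # three most anomalous nodes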
{"title":"Anomaly Detection on Attributed Networks via Multiview and Multiscale Contrastive Learning","authors":"Shuxin Qin;Yongcan Luo;Jing Zhu;Gaofeng Tao;Jingya Zheng;Zhongjun Ma","doi":"10.1109/TCSS.2024.3514148","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3514148","url":null,"abstract":"Detecting abnormal nodes from attributed networks plays an important role in various applications, including cybersecurity, finance, and social networks. Most existing methods focus on learning different scales of graphs or using augmented data to improve the quality of feature representation. However, the performance is limited due to two critical problems. First, the high sensitivity of attributed networks makes it uncontrollable and uncertain to use conventional methods for data augmentation, leading to limited improvement in representation and generalization capabilities. Second, under the unsupervised paradigm, anomalous nodes mixed in the training data may interfere with the learning of normal patterns and weaken the discrimination ability. In this work, we propose a novel multiview and multiscale contrastive learning framework to address these two issues. Specifically, a network augmentation method based on parameter perturbation is introduced to generate augmented views for both node–node and node–subgraph level contrast branches. Then, cross-view graph contrastive learning is employed to improve the representation without the need for augmented data. We also provide a cycle training strategy where normal samples detected in the former step are collected for an additional training step. In this way, the ability to learn normal patterns is enhanced. Extensive experiments on six benchmark datasets demonstrate that our method outperforms the existing state-of-the-art baselines.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1038-1051"},"PeriodicalIF":4.5,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimized Consensus Group Selection Focused on Node Transmission Delay in Sharding Blockchains
Pub Date: 2024-12-17 | DOI: 10.1109/TCSS.2024.3514186
Liping Tao;Yang Lu;Yuqi Fan;Chee Wei Tan;Zhen Wei
Sharding presents an enticing path toward improving blockchain scalability. However, the consensus mechanism within individual shards faces mounting security challenges due to the restricted number of consensus nodes and the reliance on conventional, unchanging nodes for consensus. Common strategies to enhance shard consensus security often involve increasing the number of consensus nodes per shard. While effective in bolstering security, this approach also leads to a notable rise in consensus delay within each shard, potentially offsetting the scalability advantages of sharding. Hence, it becomes imperative to strategically select nodes to form dedicated consensus groups for each shard, groups that enhance shard consensus security without exacerbating consensus delay. In this article, we propose a novel consensus group selection method based on transmission delay between nodes (CGSTD) to address this challenge, with the goal of minimizing the overall consensus delay across the system. CGSTD selects nodes from various shards to form a distinct consensus group for each shard, thereby enhancing shard security while maintaining system-wide consensus efficiency. We conduct a rigorous theoretical analysis of the security properties of CGSTD and derive approximation ratios under various operational scenarios. Simulation results validate the superior performance of CGSTD compared to baseline algorithms, showing reductions in total consensus delay, mitigated increases in shard-specific delay, better block storage utilization per node, and streamlined participation of nodes in consensus groups.
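A greedy heuristic conveys the flavor of delay-aware group formation (an illustrative stand-in, not CGSTD itself; the delays and sizes are synthetic): seed each shard's group with a node of low average delay, then repeatedly add the free node with the smallest total delay to the members chosen so far.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_shards, group_size = 30, 3, 5
delay = rng.uniform(1, 50, size=(n_nodes, n_nodes))
delay = (delay + delay.T) / 2                 # symmetric transmission delays (ms)
np.fill_diagonal(delay, 0)

free = set(range(n_nodes))
groups = []
for _ in range(n_shards):
    seed = min(free, key=lambda i: delay[i, list(free)].mean())
    group = [seed]
    free.discard(seed)
    while len(group) < group_size:
        # greedily add the free node with least total delay to current members
        nxt = min(free, key=lambda i: delay[i, group].sum())
        group.append(nxt)
        free.discard(nxt)
    groups.append(group)

for s, g in enumerate(groups):
    worst = max(delay[i, j] for i in g for j in g if i != j)
    print(f"shard {s}: group {sorted(g)}, worst intra-group delay {worst:.1f} ms")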
{"title":"Optimized Consensus Group Selection Focused on Node Transmission Delay in Sharding Blockchains","authors":"Liping Tao;Yang Lu;Yuqi Fan;Chee Wei Tan;Zhen Wei","doi":"10.1109/TCSS.2024.3514186","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3514186","url":null,"abstract":"Sharding presents an enticing path toward improving blockchain scalability. However, the consensus mechanism within individual shards faces mounting security challenges due to the restricted number of consensus nodes and the reliance on conventional, unchanging nodes for consensus. Common strategies to enhance shard consensus security often involve increasing the number of consensus nodes per shard. While effective in bolstering security, this approach also leads to a notable rise in consensus delay within each shard, potentially offsetting the scalability advantages of sharding. Hence, it becomes imperative to strategically select nodes to form dedicated consensus groups for each shard. These groups should not only enhance shard consensus security but also do so without exacerbating consensus delay. In this article, we propose a novel consensus group selection based on transmission delay between nodes (CGSTD) to address this challenge, with the goal of minimizing the overall consensus delay across the system. CGSTD intelligently selects nodes from various shards to form distinct consensus groups for each shard, thereby enhancing shard security while maintaining optimal system-wide consensus efficiency. We conduct a rigorous theoretical analysis to evaluate the security properties of CGSTD and derive approximation ratios under various operational scenarios. Simulation results validate the superior performance of CGSTD compared to baseline algorithms, showcasing reductions in total consensus delay, mitigated increases in shard-specific delay, optimized block storage utilization per node, and streamlined participation of nodes in consensus groups.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1052-1067"},"PeriodicalIF":4.5,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Exploring Fairness in Visual Transformer Based Natural and GAN Image Detection Systems
Pub Date: 2024-12-16 | DOI: 10.1109/TCSS.2024.3509340
Manjary P. Gangan;Anoop Kadan;Lajish V. L.
Image forensics research has recently witnessed substantial advances toward computational models that accurately distinguish natural images captured by cameras from images generated by generative adversarial networks (GANs). However, it is equally important to ensure that these models are fair and do not produce biased outcomes that could harm certain societal groups or create serious security threats. Exploring fairness in image forensic algorithms is an initial step toward mitigating such biases. This study examines bias in visual transformer based image forensic algorithms that classify natural and GAN images, since visual transformers are now widely used in image classification tasks, including image forensics. The study procures bias evaluation corpora to analyze bias across gender, racial, affective, and intersectional domains using a wide set of individual and pairwise bias evaluation measures. Because robustness to image compression is an important consideration in forensic tasks, the study also analyzes the impact of image compression on model bias, following a two-phase evaluation setting in which experiments are carried out on both uncompressed and compressed data. The study identifies biases in the visual transformer based models that distinguish natural and GAN images, and observes that image compression affects model biases, predominantly amplifying biases in the GAN-class predictions.
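One of the simplest pairwise measures in this family is the accuracy gap between two demographic groups, computed once per compression setting. The sketch below evaluates that measure on synthetic detector outputs, with compression modeled only as a group-skewed increase in error rate; the data are invented, and the measure itself is the point.

import numpy as np

rng = np.random.default_rng(0)

def group_accuracy(y_true, y_pred, groups):
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

n = 2000
groups = rng.choice(["A", "B"], size=n)        # e.g., two gender labels
y_true = rng.integers(0, 2, size=n)            # 1 = GAN image, 0 = natural

for setting, err in [("uncompressed", 0.05), ("compressed", 0.12)]:
    # compression is modeled here as a higher error rate that hits group B harder
    flip_p = np.where(groups == "A", err, err * 1.5)
    y_pred = np.where(rng.random(n) < flip_p, 1 - y_true, y_true)
    acc = group_accuracy(y_true, y_pred, groups)
    print(f"{setting:12s} acc_A={acc['A']:.3f} acc_B={acc['B']:.3f} "
          f"gap={abs(acc['A'] - acc['B']):.3f}")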
{"title":"Toward Exploring Fairness in Visual Transformer Based Natural and GAN Image Detection Systems","authors":"Manjary P. Gangan;Anoop Kadan;Lajish V. L.","doi":"10.1109/TCSS.2024.3509340","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3509340","url":null,"abstract":"Image forensics research has recently witnessed a lot of advancements toward developing computational models capable of accurately detecting natural images captured by cameras and generative adversarial network (GAN) generated images. However, it is also important to ensure whether these computational models are fair enough and do not produce biased outcomes that could eventually harm certain societal groups or cause serious security threats. Exploring fairness in image forensic algorithms is an initial step toward mitigating these biases. This study explores bias in visual transformer based image forensic algorithms that classify natural and GAN images, since visual transformers are recently being widely used in image classification based tasks, including in the area of image forensics. The proposed study procures bias evaluation corpora to analyze bias in gender, racial, affective, and intersectional domains using a wide set of individual and pairwise bias evaluation measures. Since the robustness of the algorithms against image compression is an important factor to be considered in forensic tasks, this study also analyzes the impact of image compression on model bias. Hence, to study the impact of image compression on model bias, a two-phase evaluation setting is followed, where the experiments are carried out in uncompressed and compressed evaluation settings. The study could identify bias existences in the visual transformer based models distinguishing natural and GAN images, and also observes that image compression impacts model biases, predominantly amplifying the presence of biases in class GAN predictions.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1068-1079"},"PeriodicalIF":4.5,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Energy-Aware Scheduling and Task Allocation Algorithms for Online Reinforcement Learning Applications in Cloud Environments
Pub Date: 2024-12-16 | DOI: 10.1109/TCSS.2024.3508089
Harshal Janjani;Tanmay Agarwal;M. P. Gopinath;Vimoh Sharma;S. P. Raja
With the rapid proliferation of machine learning applications in cloud computing environments, addressing their energy efficiency, and in particular their high power consumption, has become pressing. This work develops an energy-aware scheduling and task assignment algorithm that optimizes energy consumption while maintaining the performance standards required for deploying machine learning applications in cloud environments. The approach leverages online reinforcement learning (RL) to deduce an optimal planning and allocation strategy, exploiting RL's capability for sequential decision-making aimed at maximizing cumulative rewards. The algorithm's design and implementation are examined in detail, taking into account the nature of the workloads and how computational resources are utilized, and its performance is analyzed through several metrics that assess the success of the model. The results indicate that energy-aware scheduling combined with task assignment can substantially reduce energy consumption while meeting the performance requirements of large-scale workloads. These findings hold promise for more sustainable cloud computing infrastructures and, consequently, for energy-efficient machine learning. Future research directions include enhancing the proposed algorithm's generalization capabilities and addressing challenges related to scalability and convergence.
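As a minimal sketch of the reward design such a scheduler might use (a single-state, bandit-style simplification with invented machine characteristics, not the proposed algorithm), the loop below assigns tasks to machines and is rewarded with the negative energy cost minus a penalty for missed deadlines.

import numpy as np

rng = np.random.default_rng(0)
power = np.array([2.0, 1.2, 0.8])   # watts per second of work (assumed)
speed = np.array([3.0, 2.0, 1.0])   # units of work per second (assumed)
deadline = 2.0                      # SLA latency bound in seconds

Q = np.zeros(len(power))            # one Q-value per machine (single state)
eps, lr = 0.1, 0.05

for _ in range(5000):
    a = rng.integers(len(Q)) if rng.random() < eps else int(Q.argmax())
    work = rng.uniform(1.0, 4.0)    # size of the incoming task
    latency = work / speed[a]
    energy = power[a] * latency
    # reward trades energy consumption against an SLA-violation penalty
    reward = -energy - (5.0 if latency > deadline else 0.0)
    Q[a] += lr * (reward - Q[a])

print("learned Q-values:", np.round(Q, 2), "-> preferred machine", int(Q.argmax()))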
{"title":"Designing Energy-Aware Scheduling and Task Allocation Algorithms for Online Reinforcement Learning Applications in Cloud Environments","authors":"Harshal Janjani;Tanmay Agarwal;M. P. Gopinath;Vimoh Sharma;S. P. Raja","doi":"10.1109/TCSS.2024.3508089","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3508089","url":null,"abstract":"With the rapid proliferation of machine learning applications in cloud computing environments, addressing crucial challenges concerning energy efficiency becomes pressing, including addressing the high power consumption of such workloads. In this regard, this work focuses much on the development of an energy-aware scheduling and task assignment algorithm that, while optimizing energy consumption, maintains required performance standards in deploying machine-learning applications in cloud environments. It therefore, pivots on leveraging online reinforcement learning to deduce an optimal planning and allocation strategy. This proposed algorithm leverages the capability of RL in making sequential decisions with the aim of achieving maximum cumulative rewards. The algorithm design and its implementation are examined in detail, considering the nature of workloads and how the computational resources are utilized. The algorithm’s performance is analyzed by looking into different performance metrics that assess the success of the model. All the results indicate that energy-aware scheduling combined with task assignment algorithms are bound to reduce energy consumption by a great margin while meeting the required performance for large-scale workloads. These results hold much promise for the improvement of sustainable cloud computing infrastructures and consequently, to energy-efficient machine learning. The future research directions involve enhancing the proposed algorithm’s generalization capabilities and addressing challenges related to scalability and convergence.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1218-1232"},"PeriodicalIF":4.5,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the context of rapid urbanization, traditional manual guidance and static evacuation signs are increasingly inadequate for addressing complex and dynamic emergencies. This study proposes an innovative emergency evacuation framework that optimizes crowd evacuation by integrating multiagent reinforcement learning (MARL) with adversarial reinforcement learning (ARL). The developed simulation environment models realistic human behavior in complex buildings and incorporates robotic navigation and intelligent path planning. A novel simulated human behavior model was integrated, capable of complex human–robot interaction and independent escape-route searching, and exhibiting herd mentality and memory mechanisms. We also propose a multiagent framework that combines MARL and ARL to enhance overall evacuation efficiency and robustness, along with a new ARL evaluation framework that provides a novel method for quantifying agents’ performance. Experiments at several difficulty levels demonstrate that the proposed framework is advantageous in emergency evacuation scenarios. Specifically, our ARLR approach increased survival rates by 1.8 percentage points in low-difficulty evacuation tasks compared to the RLR approach using only MARL algorithms. In high-difficulty evacuation tasks, the ARLR approach raised survival rates from 46.7% without robots to 64.4%, exceeding the RLR approach by 1.7 percentage points. This study aims to enhance the efficiency and safety of human–robot collaborative fire evacuations and provides theoretical support for evaluating and improving the performance and robustness of ARL agents.
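The adversarial ingredient can be illustrated with a toy zero-sum loop, a placeholder rather than the paper's framework: a guidance agent picks an evacuation route while an adversary picks a route to block, and their value estimates are updated with opposite rewards, so the guidance policy is pushed toward routes that remain good under disruption.

import numpy as np

rng = np.random.default_rng(0)
base_time = np.array([10.0, 12.0, 15.0, 18.0])  # nominal route times (invented)
n_routes = len(base_time)
block_cost = 20.0                  # extra evacuation time if the route is blocked

Qp = np.zeros(n_routes)   # protagonist (guidance robot): minimize evacuation time
Qa = np.zeros(n_routes)   # adversary (fire/hazard): maximize evacuation time
eps, lr = 0.2, 0.05

for _ in range(20_000):
    r = rng.integers(n_routes) if rng.random() < eps else int(Qp.argmax())
    b = rng.integers(n_routes) if rng.random() < eps else int(Qa.argmax())
    t = base_time[r] + (block_cost if r == b else 0.0)
    Qp[r] += lr * (-t - Qp[r])    # protagonist sees the negative time
    Qa[b] += lr * (t - Qa[b])     # adversary sees the positive time

print("robot route values:    ", np.round(Qp, 1))
print("adversary route values:", np.round(Qa, 1))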
{"title":"Adversarial Reinforcement Learning for Enhanced Decision-Making of Evacuation Guidance Robots in Intelligent Fire Scenarios","authors":"Hantao Zhao;Zhihao Liang;Tianxing Ma;Xiaomeng Shi;Mubbasir Kapadia;Tyler Thrash;Christoph Hoelscher;Jinyuan Jia;Bo Liu;Jiuxin Cao","doi":"10.1109/TCSS.2024.3502420","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3502420","url":null,"abstract":"In the context of rapid urbanization, traditional manual guidance and static evacuation signs are increasingly inadequate for addressing complex and dynamic emergencies. This study proposes an innovative emergency evacuation framework that optimizes the crowd evacuation by integrating multiagent reinforcement learning (MARL) with adversarial reinforcement learning (ARL). The developed simulation environment models realistic human behavior in complex buildings and incorporates robotic navigation and intelligent path planning. A novel simulated human behavior model was integrated, capable of complex human–robot interaction, independent escape route searching, and exhibiting herd mentality and memory mechanisms. We also proposed a multiagent framework that combines MARL and ARL to enhance overall evacuation efficiency and robustness. Additionally, we developed a new ARL evaluation framework that provides a novel method for quantifying agents’ performance. Various experiments of differing difficulty levels were conducted, and the results demonstrate that the proposed framework exhibits advantages in emergency evacuation scenarios. Specifically, our ARLR approach increased survival rates by 1.8% points in low-difficulty evacuation tasks compared to the RLR approach using only MARL algorithms. In high-difficulty evacuation tasks, the ARLR approach raised survival rates from 46.7% without robots to 64.4%, exceeding the RLR approach by 1.7% points. This study aims to enhance the efficiency and safety of human–robot collaborative fire evacuations and provides theoretical support for evaluating and improving the performance and robustness of ARL agents.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"2030-2046"},"PeriodicalIF":4.5,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal dynamic graphs (TDGs), representing the dynamic evolution of entities and their relationships over time with intricate temporal features, are widely used in various real-world domains. Existing methods typically rely on mainstream techniques such as transformers and graph neural networks (GNNs) to capture the spatiotemporal information of TDGs. However, despite their advanced capabilities, these methods often struggle with significant computational complexity and a limited ability to capture temporal dynamic contextual relationships. Recently, a new model architecture called Mamba has emerged, noted for its capability to capture complex dependencies in sequences while significantly reducing computational complexity. Building on this, we propose a novel method, TDG-Mamba, which integrates Mamba for TDG learning. TDG-Mamba introduces deep semantic spatiotemporal embeddings into the Mamba architecture through a specially designed spatiotemporal prior tokenization module (SPTM). Furthermore, to better leverage temporal information differences and enhance the modeling of dynamic changes in graph structures, we separately design a bidirectional Mamba and a directed GNN for improved spatiotemporal embedding learning. Link prediction experiments on multiple public datasets demonstrate that our method delivers superior performance, with an average improvement of 5.11% over baseline methods across various settings.
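The bidirectional propagation can be sketched with two linear recurrent scans over per-snapshot GNN tokens, a crude stand-in for the two Mamba directions and the directed GNN; the dimensions, decay factor, and tokenization below are assumptions, not TDG-Mamba itself.

import numpy as np

rng = np.random.default_rng(0)
T, n, d = 6, 10, 8                     # snapshots, nodes, hidden dim (toy sizes)

adjs = [(rng.random((n, n)) < 0.2).astype(float) for _ in range(T)]
feats = rng.normal(size=(T, n, d))
W = rng.normal(size=(d, d)) * 0.1
decay = 0.9                            # retention of the linear recurrences

def spatial_token(adj, x):
    # directed GNN step: propagate along edge direction, normalize, project
    deg = adj.sum(1, keepdims=True) + 1e-9
    return np.tanh((adj @ x) / deg @ W)

tokens = np.stack([spatial_token(a, x) for a, x in zip(adjs, feats)])  # (T, n, d)

# bidirectional scan: each snapshot embedding accumulates past (forward pass)
# and future (backward pass) context, mimicking two sequence-model directions
h_f = np.zeros((n, d)); h_b = np.zeros((n, d))
fwd = np.empty_like(tokens); bwd = np.empty_like(tokens)
for t in range(T):
    h_f = decay * h_f + tokens[t];         fwd[t] = h_f
    h_b = decay * h_b + tokens[T - 1 - t]; bwd[T - 1 - t] = h_b

embeddings = fwd + bwd                 # fused spatiotemporal node embeddings
print(embeddings.shape)                # (6, 10, 8)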
{"title":"TDG-Mamba: Advanced Spatiotemporal Embedding for Temporal Dynamic Graph Learning via Bidirectional Information Propagation","authors":"Mengran Li;Junzhou Chen;Bo Li;Yong Zhang;Ronghui Zhang;Siyuan Gong;Xiaolei Ma;Zhihong Tian","doi":"10.1109/TCSS.2024.3509399","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3509399","url":null,"abstract":"Temporal dynamic graphs (TDGs), representing the dynamic evolution of entities and their relationships over time with intricate temporal features, are widely used in various real-world domains. Existing methods typically rely on mainstream techniques such as transformers and graph neural networks (GNNs) to capture the spatiotemporal information of TDGs. However, despite their advanced capabilities, these methods often struggle with significant computational complexity and limited ability to capture temporal dynamic contextual relationships. Recently, a new model architecture called mamba has emerged, noted for its capability to capture complex dependencies in sequences while significantly reducing computational complexity. Building on this, we propose a novel method, TDG-mamba, which integrates mamba for TDG learning. TDG-mamba introduces deep semantic spatiotemporal embeddings into the mamba architecture through a specially designed spatiotemporal prior tokenization module (SPTM). Furthermore, to better leverage temporal information differences and enhance the modeling of dynamic changes in graph structures, we separately design a bidirectional mamba and a directed GNN for improved spatiotemporal embedding learning. Link prediction experiments on multiple public datasets demonstrate that our method delivers superior performance, with an average improvement of 5.11% over baseline methods across various settings.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"2014-2029"},"PeriodicalIF":4.5,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unveiling Agents’ Confidence in Opinion Dynamics Models via Graph Neural Networks
Pub Date: 2024-12-11 | DOI: 10.1109/TCSS.2024.3508452
Víctor A. Vargas-Pérez;Jesús Giráldez-Cru;Pablo Mesejo;Oscar Cordón
Opinion dynamics models in social networks are a valuable tool to study how opinions evolve within a population. However, these models often rely on agent-level parameters that are difficult to measure in a real population. This is the case for the confidence threshold in bounded-confidence opinion dynamics models, where agents are influenced only by other agents holding sufficiently similar opinions (as determined by this threshold). Consequently, a common practice is to apply a universal threshold to the entire population and calibrate its value to match observed real-world data, despite this being an unrealistic assumption. In this work, we propose an alternative approach using graph neural networks to infer agent-level confidence thresholds in the opinion dynamics of the Hegselmann-Krause model of bounded confidence. This eliminates the need for additional simulations when faced with new case studies. To this end, we construct a comprehensive synthetic training dataset that includes different network topologies and configurations of thresholds and opinions. Through multiple training runs with different architectures, we identify GraphSAGE as the most effective solution, achieving a coefficient of determination $R^{2}$ above 0.7 on test datasets derived from real-world topologies. Remarkably, this performance holds even when the test topologies differ in size from those seen during training.
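For reference, the forward model whose agent-level thresholds the GNN learns to recover is the Hegselmann-Krause update: at every step, each agent moves to the mean opinion of all agents lying within its own confidence threshold. A minimal simulation, with an invented population size and threshold range:

import numpy as np

rng = np.random.default_rng(0)
n, steps = 50, 30
opinions = rng.random(n)                 # initial opinions in [0, 1]
eps = rng.uniform(0.05, 0.3, size=n)     # heterogeneous confidence thresholds

for _ in range(steps):
    new = np.empty(n)
    for i in range(n):
        near = np.abs(opinions - opinions[i]) <= eps[i]   # bounded confidence
        new[i] = opinions[near].mean()                    # Hegselmann-Krause step
    opinions = new

clusters = np.unique(np.round(opinions, 3))
print(len(clusters), "opinion clusters:", clusters)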
{"title":"Unveiling Agents’ Confidence in Opinion Dynamics Models via Graph Neural Networks","authors":"Víctor A. Vargas-Pérez;Jesús Giráldez-Cru;Pablo Mesejo;Oscar Cordón","doi":"10.1109/TCSS.2024.3508452","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3508452","url":null,"abstract":"Opinion Dynamics models in social networks are a valuable tool to study how opinions evolve within a population. However, these models often rely on agent-level parameters that are difficult to measure in a real population. This is the case of the confidence threshold in opinion dynamics models based on bounded confidence, where agents are only influenced by other agents having a similar opinion (given by this confidence threshold). Consequently, a common practice is to apply a universal threshold to the entire population and calibrate its value to match observed real-world data, despite being an unrealistic assumption. In this work, we propose an alternative approach using graph neural networks to infer agent-level confidence thresholds in the opinion dynamics of the Hegselmann-Krause model of bounded confidence. This eliminates the need for additional simulations when faced with new case studies. To this end, we construct a comprehensive synthetic training dataset that includes different network topologies and configurations of thresholds and opinions. Through multiple training runs utilizing different architectures, we identify GraphSAGE as the most effective solution, achieving a coefficient of determination <inline-formula><tex-math>$R^{2}$</tex-math></inline-formula> above 0.7 in test datasets derived from real-world topologies. Remarkably, this performance holds even when the test topologies differ in size from those considered during training.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 2","pages":"725-737"},"PeriodicalIF":4.5,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10792931","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143783288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}