
Latest publications in IEEE Transactions on Computational Social Systems

Exploring Risk Sharing in Stochastic Exchange Networks
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-24. DOI: 10.1109/TCSS.2024.3508803
Arnaud Z. Dragicevic
This study examines the dynamics of bargaining in a social system that incorporates risk sharing through exchange network models and stochastic matching between agents. The analysis explores three scenarios: convergent expectations, divergent expectations, and social preferences among model players. The study introduces stochastic shocks through a Poisson process, which can disrupt coordination within the decentralized exchange mechanism. Despite these shocks, agents can employ a risk-sharing protocol utilizing Pareto weights to mitigate their effects. The model outcomes do not align with the generalized Nash bargaining solutions across all scenarios. However, over a sufficiently long time frame, the dynamics consistently converge to a fixed point that slightly deviates from the balanced outcome or Nash equilibrium. This minor deviation represents the risk premium necessary for hedging against mutual risk. The risk premium is at its minimum in the scenario with convergent expectations and remains unchanged in the case involving social preferences.
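The abstract's mechanism can be pictured with a toy simulation: two agents repeatedly split a surplus, shocks arriving through a Poisson process occasionally perturb the split, and a Pareto-weighted sharing rule pulls it back toward a benchmark allocation, so the long-run split sits near the symmetric Nash benchmark up to a small shock-induced deviation. The sketch below is only an illustration under these assumptions; the shock rate, weights, and adjustment rule are invented for clarity and do not reproduce the paper's model.

```python
# Toy two-agent risk-sharing simulation inspired by the abstract; all symbols
# (Pareto weights, shock rate, adjustment rule) are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

T = 10_000                         # bargaining rounds
lam = 0.1                          # Poisson rate of disruptive shocks (assumed)
pareto_w = np.array([0.5, 0.5])    # Pareto weights used by the sharing protocol
surplus = 1.0                      # total surplus split each round

shares = np.array([0.7, 0.3])      # arbitrary initial split
step = 0.05                        # adjustment speed toward the weighted allocation

for _ in range(T):
    # Poisson-driven shocks perturb the current split and threaten coordination.
    n_shocks = rng.poisson(lam)
    if n_shocks > 0:
        shares = np.clip(shares + rng.normal(0, 0.05, size=2) * n_shocks, 0.01, 0.99)
        shares = shares / shares.sum() * surplus
    # Risk-sharing protocol: pull the split toward the Pareto-weighted allocation.
    shares = (1 - step) * shares + step * (pareto_w * surplus)

nash_split = np.array([0.5, 0.5]) * surplus        # symmetric Nash bargaining benchmark
print("long-run split:", shares.round(4),
      "deviation from Nash (risk-premium analogue):", np.abs(shares - nash_split).round(4))
```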
{"title":"Exploring Risk Sharing in Stochastic Exchange Networks","authors":"Arnaud Z. Dragicevic","doi":"10.1109/TCSS.2024.3508803","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3508803","url":null,"abstract":"This study examines the dynamics of bargaining in a social system that incorporates risk sharing through exchange network models and stochastic matching between agents. The analysis explores three scenarios: convergent expectations, divergent expectations, and social preferences among model players. The study introduces stochastic shocks through a Poisson process, which can disrupt coordination within the decentralized exchange mechanism. Despite these shocks, agents can employ a risk-sharing protocol utilizing Pareto weights to mitigate their effects. The model outcomes do not align with the generalized Nash bargaining solutions across all scenarios. However, over a sufficiently long time frame, the dynamics consistently converge to a fixed point that slightly deviates from the balanced outcome or Nash equilibrium. This minor deviation represents the risk premium necessary for hedging against mutual risk. The risk premium is at its minimum in the scenario with convergent expectations and remains unchanged in the case involving social preferences.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1181-1192"},"PeriodicalIF":4.5,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multibranch Attentive Transformer With Joint Temporal and Social Correlations for Traffic Agents Trajectory Prediction
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-24. DOI: 10.1109/TCSS.2024.3517656
Xiaobo Chen;Yuwen Liang;Junyu Wang;Qiaolin Ye;Yingfeng Cai
Accurately predicting the future trajectories of traffic agents is paramount for autonomous unmanned systems, such as self-driving cars and mobile robots. Extracting abundant temporal and social features from trajectory data and integrating the resulting features effectively pose great challenges for predictive models. To address these issues, this article proposes a novel multibranch attentive transformer (MBAT) trajectory prediction network for traffic agents. Specifically, to explore and reveal diverse correlations among agents, we propose a decoupled temporal and spatial feature learning module with multiple branches to extract temporal, spatial, and spatiotemporal features. This design ensures that each branch can be tailored to a specific type of correlation, thus enhancing the flexibility and representation ability of the features. Besides, we put forward an attentive transformer architecture that simultaneously models the complex correlations that may occur across historical and future timesteps. Moreover, the temporal, spatial, and spatiotemporal features can be effectively integrated through different types of attention mechanisms. Empirical results demonstrate that our model achieves outstanding performance on the public ETH, UCY, SDD, and INTERACTION datasets. Detailed ablation studies are conducted to verify the effectiveness of the model components.
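As a rough illustration of the multibranch idea, the sketch below computes a separate temporal feature (from each agent's own history) and a social feature (agents attending to each other's latest positions) and fuses them with plain scaled dot-product attention. The shapes, random weights, and fusion rule are assumptions; this is not the MBAT architecture.

```python
# Minimal multibranch trajectory sketch: temporal branch + social branch,
# fused by scaled dot-product attention. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def attention(q, k, v):
    """Plain scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

n_agents, T, d = 4, 8, 16
traj = rng.normal(size=(n_agents, T, 2))                  # (x, y) history per agent

W_temporal = rng.normal(scale=0.1, size=(2 * T, d))
W_social = rng.normal(scale=0.1, size=(2, d))

# Temporal branch: embed each agent's own flattened history.
temporal_feat = traj.reshape(n_agents, -1) @ W_temporal   # (n_agents, d)

# Social branch: embed last positions and let agents attend to one another.
last_pos = traj[:, -1, :] @ W_social                      # (n_agents, d)
social_feat = attention(last_pos, last_pos, last_pos)

# Fusion: temporal queries read the social context via a second attention pass.
fused = attention(temporal_feat, social_feat, social_feat)

W_out = rng.normal(scale=0.1, size=(d, 2))
pred_next = traj[:, -1, :] + fused @ W_out                # residual one-step prediction
print("predicted next positions:\n", pred_next.round(3))
```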
{"title":"Multibranch Attentive Transformer With Joint Temporal and Social Correlations for Traffic Agents Trajectory Prediction","authors":"Xiaobo Chen;Yuwen Liang;Junyu Wang;Qiaolin Ye;Yingfeng Cai","doi":"10.1109/TCSS.2024.3517656","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3517656","url":null,"abstract":"Accurately predicting the future trajectories of traffic agents is paramount for autonomous unmanned systems, such as self-driving cars and mobile robotics. Extracting abundant temporal and social features from trajectory data and integrating the resulting features effectively pose great challenges for predictive models. To address these issues, this article proposes a novel multibranch attentive transformer (MBAT) trajectory prediction network for traffic agents. Specifically, to explore and reveal diverse correlations of agents, we propose a decoupled temporal and spatial feature learning module with multibranch to extract temporal, spatial, as well as spatiotemporal features. Such design ensures each branch can be specifically tailored for different types of correlations, thus enhancing the flexibility and representation ability of features. Besides, we put forward an attentive transformer architecture that simultaneously models the complex correlations possibly occurring in historical and future timesteps. Moreover, the temporal, spatial, and spatiotemporal features can be effectively integrated based on different types of attention mechanisms. Empirical results demonstrate that our model achieves outstanding performance on public ETH, UCY, SDD, and INTERACTION datasets. Detailed ablation studies are conducted to verify the effectiveness of the model components.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 2","pages":"525-538"},"PeriodicalIF":4.5,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143783268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neural-Network-Adaptive Event-Triggered Control for Stochastic Nonlinear Systems With Sensor Attacks
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-19. DOI: 10.1109/TCSS.2024.3502798
Yuelei Yu;Shuai Sui;Zhihong Zhao;C. L. Philip Chen
This article studies the adaptive neural network (NN) event-triggered secure control issue for stochastic nonlinear systems subject to sensor attacks. NNs are adopted to identify unknown nonlinear dynamics, and an NN state estimator is established to address the issue resulting from unmeasurable states. An NN observer is proposed to estimate unknown sensor attack signals. To save limited communication resources and reduce the number of controller updates, an event-triggered control (ETC) scheme is introduced. Then, an adaptive NN event-triggered secure control algorithm is designed by backstepping control method. The results demonstrate the stability of the control system and its consistent convergence in tracking errors under sensor attacks. Finally, simulations are shown to verify the effectiveness of the investigated theory.
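A minimal sketch of these ingredients, assuming a scalar plant, a constant sensor-attack bias, and invented gains and thresholds: an observer estimates the attack signal, and the control input is recomputed only when the state estimate drifts past an event-triggering threshold. This is not the paper's backstepping or NN design.

```python
# Toy event-triggered control with an attack estimator; gains, thresholds, and
# the scalar plant are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

dt, T = 0.01, 2000
x, x_hat, attack_hat = 1.0, 0.0, 0.0
u, last_trigger_state = 0.0, 0.0
attack = 0.5                      # unknown constant sensor bias
k, k_obs, k_att = 2.0, 5.0, 1.0   # controller / observer / attack-estimator gains
trigger_eps = 0.05                # event-triggering threshold
updates = 0

for _ in range(T):
    y = x + attack                            # attacked measurement
    # Observer: correct the state and attack estimates using the output error.
    err = y - (x_hat + attack_hat)
    x_hat += dt * (-x_hat + u + k_obs * err)
    attack_hat += dt * (k_att * err)

    # Event-triggered control: recompute u only when the estimate drifts enough.
    if abs(x_hat - last_trigger_state) > trigger_eps:
        u = -k * x_hat
        last_trigger_state = x_hat
        updates += 1

    # Plant: dx = (-x + u) dt plus a small stochastic disturbance.
    x += dt * (-x + u) + 0.01 * np.sqrt(dt) * rng.normal()

print(f"final state {x:.3f}, attack estimate {attack_hat:.3f}, controller updates {updates}/{T}")
```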
{"title":"Neural-Network-Adaptive Event-Triggered Control for Stochastic Nonlinear Systems With Sensor Attacks","authors":"Yuelei Yu;Shuai Sui;Zhihong Zhao;C. L. Philip Chen","doi":"10.1109/TCSS.2024.3502798","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3502798","url":null,"abstract":"This article studies the adaptive neural network (NN) event-triggered secure control issue for stochastic nonlinear systems subject to sensor attacks. NNs are adopted to identify unknown nonlinear dynamics, and an NN state estimator is established to address the issue resulting from unmeasurable states. An NN observer is proposed to estimate unknown sensor attack signals. To save limited communication resources and reduce the number of controller updates, an event-triggered control (ETC) scheme is introduced. Then, an adaptive NN event-triggered secure control algorithm is designed by backstepping control method. The results demonstrate the stability of the control system and its consistent convergence in tracking errors under sensor attacks. Finally, simulations are shown to verify the effectiveness of the investigated theory.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"2062-2071"},"PeriodicalIF":4.5,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Anomaly Detection on Attributed Networks via Multiview and Multiscale Contrastive Learning
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-17. DOI: 10.1109/TCSS.2024.3514148
Shuxin Qin;Yongcan Luo;Jing Zhu;Gaofeng Tao;Jingya Zheng;Zhongjun Ma
Detecting abnormal nodes from attributed networks plays an important role in various applications, including cybersecurity, finance, and social networks. Most existing methods focus on learning different scales of graphs or using augmented data to improve the quality of feature representation. However, the performance is limited due to two critical problems. First, the high sensitivity of attributed networks makes it uncontrollable and uncertain to use conventional methods for data augmentation, leading to limited improvement in representation and generalization capabilities. Second, under the unsupervised paradigm, anomalous nodes mixed in the training data may interfere with the learning of normal patterns and weaken the discrimination ability. In this work, we propose a novel multiview and multiscale contrastive learning framework to address these two issues. Specifically, a network augmentation method based on parameter perturbation is introduced to generate augmented views for both node–node and node–subgraph level contrast branches. Then, cross-view graph contrastive learning is employed to improve the representation without the need for augmented data. We also provide a cycle training strategy where normal samples detected in the former step are collected for an additional training step. In this way, the ability to learn normal patterns is enhanced. Extensive experiments on six benchmark datasets demonstrate that our method outperforms the existing state-of-the-art baselines.
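The node-versus-subgraph contrast and the parameter-perturbation augmentation can be illustrated with a small sketch: two views are produced by a random weight matrix and its perturbed copy, and each node is scored by how strongly its embedding disagrees with the embedding of its neighborhood summary. The graph, weights, and scoring rule below are assumptions, not the proposed framework.

```python
# Toy node-vs-neighborhood anomaly scoring with a perturbed-parameter second view.
import numpy as np

rng = np.random.default_rng(3)

n, d = 60, 8
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                 # undirected adjacency, no self-loops
X = rng.normal(size=(n, d))                    # node attributes
X[:3] += 4.0                                   # inject three attribute anomalies
deg = A.sum(1, keepdims=True) + 1e-9

W = rng.normal(scale=0.3, size=(d, d))
W_pert = W + rng.normal(scale=0.05, size=W.shape)   # "augmented" view via parameter perturbation

def disagreement(Wv):
    """Distance between a node's embedding and its neighborhood-average embedding
    (a simple stand-in for a node-subgraph contrast score; larger = more suspicious)."""
    node_emb = X @ Wv
    ctx_emb = ((A @ X) / deg) @ Wv
    return np.linalg.norm(node_emb - ctx_emb, axis=1)

score = 0.5 * (disagreement(W) + disagreement(W_pert))   # average over the two views
print("top-5 anomaly candidates:", np.argsort(-score)[:5])
```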
{"title":"Anomaly Detection on Attributed Networks via Multiview and Multiscale Contrastive Learning","authors":"Shuxin Qin;Yongcan Luo;Jing Zhu;Gaofeng Tao;Jingya Zheng;Zhongjun Ma","doi":"10.1109/TCSS.2024.3514148","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3514148","url":null,"abstract":"Detecting abnormal nodes from attributed networks plays an important role in various applications, including cybersecurity, finance, and social networks. Most existing methods focus on learning different scales of graphs or using augmented data to improve the quality of feature representation. However, the performance is limited due to two critical problems. First, the high sensitivity of attributed networks makes it uncontrollable and uncertain to use conventional methods for data augmentation, leading to limited improvement in representation and generalization capabilities. Second, under the unsupervised paradigm, anomalous nodes mixed in the training data may interfere with the learning of normal patterns and weaken the discrimination ability. In this work, we propose a novel multiview and multiscale contrastive learning framework to address these two issues. Specifically, a network augmentation method based on parameter perturbation is introduced to generate augmented views for both node–node and node–subgraph level contrast branches. Then, cross-view graph contrastive learning is employed to improve the representation without the need for augmented data. We also provide a cycle training strategy where normal samples detected in the former step are collected for an additional training step. In this way, the ability to learn normal patterns is enhanced. Extensive experiments on six benchmark datasets demonstrate that our method outperforms the existing state-of-the-art baselines.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1038-1051"},"PeriodicalIF":4.5,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimized Consensus Group Selection Focused on Node Transmission Delay in Sharding Blockchains
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-17. DOI: 10.1109/TCSS.2024.3514186
Liping Tao;Yang Lu;Yuqi Fan;Chee Wei Tan;Zhen Wei
Sharding presents an enticing path toward improving blockchain scalability. However, the consensus mechanism within individual shards faces mounting security challenges due to the restricted number of consensus nodes and the reliance on conventional, unchanging nodes for consensus. Common strategies to enhance shard consensus security often involve increasing the number of consensus nodes per shard. While effective in bolstering security, this approach also leads to a notable rise in consensus delay within each shard, potentially offsetting the scalability advantages of sharding. Hence, it becomes imperative to strategically select nodes to form dedicated consensus groups for each shard. These groups should not only enhance shard consensus security but also do so without exacerbating consensus delay. In this article, we propose a novel consensus group selection based on transmission delay between nodes (CGSTD) to address this challenge, with the goal of minimizing the overall consensus delay across the system. CGSTD intelligently selects nodes from various shards to form distinct consensus groups for each shard, thereby enhancing shard security while maintaining optimal system-wide consensus efficiency. We conduct a rigorous theoretical analysis to evaluate the security properties of CGSTD and derive approximation ratios under various operational scenarios. Simulation results validate the superior performance of CGSTD compared to baseline algorithms, showcasing reductions in total consensus delay, mitigated increases in shard-specific delay, optimized block storage utilization per node, and streamlined participation of nodes in consensus groups.
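To make the objective concrete, the sketch below greedily assembles one low-delay consensus group per shard from a synthetic pairwise transmission-delay matrix and reports a delay proxy per group. It is only a stand-in under assumed sizes and delays, not the CGSTD algorithm or its approximation guarantees.

```python
# Greedy delay-aware consensus-group selection on synthetic pairwise delays.
import itertools
import numpy as np

rng = np.random.default_rng(4)

n_nodes, n_shards, group_size = 20, 4, 4
delay = rng.uniform(5, 50, size=(n_nodes, n_nodes))
delay = (delay + delay.T) / 2
np.fill_diagonal(delay, 0.0)                       # symmetric pairwise delays (ms)

available = set(range(n_nodes))
groups = []
for _ in range(n_shards):
    # Seed with the available node whose total delay to the others is lowest,
    # then repeatedly add the node closest (in delay) to the current group.
    seed = min(available, key=lambda i: delay[i, list(available)].sum())
    group = [seed]
    while len(group) < group_size:
        cand = min(available - set(group), key=lambda i: delay[i, group].sum())
        group.append(cand)
    groups.append(group)
    available -= set(group)                        # each node serves a single group

def group_consensus_delay(g):
    """Proxy for intra-group consensus delay: the worst pairwise delay in the group."""
    return max(delay[i, j] for i, j in itertools.combinations(g, 2))

for s, g in enumerate(groups):
    print(f"shard {s}: nodes {g}, delay proxy {group_consensus_delay(g):.1f} ms")
print(f"total consensus delay proxy: {sum(group_consensus_delay(g) for g in groups):.1f} ms")
```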
{"title":"Optimized Consensus Group Selection Focused on Node Transmission Delay in Sharding Blockchains","authors":"Liping Tao;Yang Lu;Yuqi Fan;Chee Wei Tan;Zhen Wei","doi":"10.1109/TCSS.2024.3514186","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3514186","url":null,"abstract":"Sharding presents an enticing path toward improving blockchain scalability. However, the consensus mechanism within individual shards faces mounting security challenges due to the restricted number of consensus nodes and the reliance on conventional, unchanging nodes for consensus. Common strategies to enhance shard consensus security often involve increasing the number of consensus nodes per shard. While effective in bolstering security, this approach also leads to a notable rise in consensus delay within each shard, potentially offsetting the scalability advantages of sharding. Hence, it becomes imperative to strategically select nodes to form dedicated consensus groups for each shard. These groups should not only enhance shard consensus security but also do so without exacerbating consensus delay. In this article, we propose a novel consensus group selection based on transmission delay between nodes (CGSTD) to address this challenge, with the goal of minimizing the overall consensus delay across the system. CGSTD intelligently selects nodes from various shards to form distinct consensus groups for each shard, thereby enhancing shard security while maintaining optimal system-wide consensus efficiency. We conduct a rigorous theoretical analysis to evaluate the security properties of CGSTD and derive approximation ratios under various operational scenarios. Simulation results validate the superior performance of CGSTD compared to baseline algorithms, showcasing reductions in total consensus delay, mitigated increases in shard-specific delay, optimized block storage utilization per node, and streamlined participation of nodes in consensus groups.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1052-1067"},"PeriodicalIF":4.5,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward Exploring Fairness in Visual Transformer Based Natural and GAN Image Detection Systems
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-16. DOI: 10.1109/TCSS.2024.3509340
Manjary P. Gangan;Anoop Kadan;Lajish V. L.
Image forensics research has recently witnessed a lot of advancements toward developing computational models capable of accurately detecting natural images captured by cameras and generative adversarial network (GAN) generated images. However, it is also important to ensure that these computational models are fair enough and do not produce biased outcomes that could eventually harm certain societal groups or cause serious security threats. Exploring fairness in image forensic algorithms is an initial step toward mitigating these biases. This study explores bias in visual transformer based image forensic algorithms that classify natural and GAN images, since visual transformers are now widely used in image classification tasks, including in the area of image forensics. The proposed study procures bias evaluation corpora to analyze bias in gender, racial, affective, and intersectional domains using a wide set of individual and pairwise bias evaluation measures. Since the robustness of the algorithms against image compression is an important factor in forensic tasks, this study also analyzes the impact of image compression on model bias; to do so, a two-phase evaluation protocol is followed, with experiments carried out in both uncompressed and compressed settings. The study identifies biases in the visual transformer based models that distinguish natural and GAN images, and observes that image compression affects model biases, predominantly amplifying biases in GAN-class predictions.
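The kind of pairwise bias measurement described here can be sketched on synthetic detector outputs: compute the detection error rate per demographic group, take the gap between groups as a bias score, and repeat under simulated compression. The group labels, per-group accuracies, and compression effect below are all invented for illustration; they are not the paper's corpora or measures.

```python
# Group-wise error-rate gap on synthetic GAN-vs-natural detector predictions,
# evaluated in an "uncompressed" and a "compressed" setting. Numbers are assumed.
import numpy as np

rng = np.random.default_rng(5)

n = 2000
group = rng.integers(0, 2, size=n)                 # two demographic groups (assumed)
is_gan = rng.integers(0, 2, size=n)                # ground truth: 1 = GAN image

def simulate_predictions(acc_by_group):
    """Draw detector predictions with an assumed per-group accuracy."""
    correct = rng.random(n) < acc_by_group[group]
    return np.where(correct, is_gan, 1 - is_gan)

def group_gap(pred):
    """Pairwise bias measure: gap in detection error rate between the two groups."""
    errs = [np.mean(pred[group == g] != is_gan[group == g]) for g in (0, 1)]
    return abs(errs[0] - errs[1]), errs

pred_raw = simulate_predictions(np.array([0.95, 0.93]))    # uncompressed setting
pred_jpeg = simulate_predictions(np.array([0.90, 0.82]))   # compressed setting (assumed
                                                           # larger accuracy drop for group 1)
for name, pred in [("uncompressed", pred_raw), ("compressed", pred_jpeg)]:
    gap, errs = group_gap(pred)
    print(f"{name}: error by group = {np.round(errs, 3)}, bias gap = {gap:.3f}")
```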
{"title":"Toward Exploring Fairness in Visual Transformer Based Natural and GAN Image Detection Systems","authors":"Manjary P. Gangan;Anoop Kadan;Lajish V. L.","doi":"10.1109/TCSS.2024.3509340","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3509340","url":null,"abstract":"Image forensics research has recently witnessed a lot of advancements toward developing computational models capable of accurately detecting natural images captured by cameras and generative adversarial network (GAN) generated images. However, it is also important to ensure whether these computational models are fair enough and do not produce biased outcomes that could eventually harm certain societal groups or cause serious security threats. Exploring fairness in image forensic algorithms is an initial step toward mitigating these biases. This study explores bias in visual transformer based image forensic algorithms that classify natural and GAN images, since visual transformers are recently being widely used in image classification based tasks, including in the area of image forensics. The proposed study procures bias evaluation corpora to analyze bias in gender, racial, affective, and intersectional domains using a wide set of individual and pairwise bias evaluation measures. Since the robustness of the algorithms against image compression is an important factor to be considered in forensic tasks, this study also analyzes the impact of image compression on model bias. Hence, to study the impact of image compression on model bias, a two-phase evaluation setting is followed, where the experiments are carried out in uncompressed and compressed evaluation settings. The study could identify bias existences in the visual transformer based models distinguishing natural and GAN images, and also observes that image compression impacts model biases, predominantly amplifying the presence of biases in class GAN predictions.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1068-1079"},"PeriodicalIF":4.5,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Designing Energy-Aware Scheduling and Task Allocation Algorithms for Online Reinforcement Learning Applications in Cloud Environments
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-16. DOI: 10.1109/TCSS.2024.3508089
Harshal Janjani;Tanmay Agarwal;M. P. Gopinath;Vimoh Sharma;S. P. Raja
With the rapid proliferation of machine learning applications in cloud computing environments, addressing crucial challenges concerning energy efficiency becomes pressing, including the high power consumption of such workloads. This work therefore focuses on the development of an energy-aware scheduling and task assignment algorithm that maintains the required performance standards while optimizing the energy consumption of machine learning applications deployed in cloud environments. It pivots on leveraging online reinforcement learning to derive an optimal planning and allocation strategy. The proposed algorithm exploits the capability of RL to make sequential decisions that maximize cumulative reward. The algorithm design and its implementation are examined in detail, considering the nature of the workloads and how the computational resources are utilized. The algorithm's performance is analyzed using several metrics that assess the success of the model. The results indicate that energy-aware scheduling combined with task assignment algorithms can reduce energy consumption by a large margin while meeting the required performance for large-scale workloads. These results hold much promise for sustainable cloud computing infrastructures and, consequently, for energy-efficient machine learning. Future research directions involve enhancing the proposed algorithm's generalization capabilities and addressing challenges related to scalability and convergence.
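A minimal sketch of the idea, assuming a tabular Q-learning scheduler, three machines with invented power and speed figures, and a reward that penalizes a weighted sum of energy and delay; it is not the proposed algorithm.

```python
# Tabular Q-learning task allocator trading off energy against delay (all numbers assumed).
import numpy as np

rng = np.random.default_rng(6)

n_machines = 3
power = np.array([100.0, 60.0, 40.0])     # watts while busy (faster machine draws more)
speed = np.array([3.0, 2.0, 1.0])         # work units processed per second
alpha, gamma, eps = 0.1, 0.9, 0.1
w_energy, w_delay = 1.0, 5.0              # reward weights

# State: discretized queue length per machine (0..4); action: machine chosen for the task.
Q = np.zeros((5,) * n_machines + (n_machines,))
queues = np.zeros(n_machines)

def state():
    return tuple(np.minimum(queues, 4).astype(int))

for _ in range(20_000):
    task = rng.uniform(1.0, 3.0)                       # incoming task size (work units)
    s = state()
    a = int(rng.integers(n_machines)) if rng.random() < eps else int(np.argmax(Q[s]))

    run_time = task / speed[a]
    delay_cost = queues[a] / speed[a] + run_time       # waiting + execution time
    energy = power[a] * run_time
    reward = -(w_energy * energy + w_delay * delay_cost)

    queues[a] += task
    queues -= np.minimum(queues, speed * 0.5)          # machines drain work each step
    s2 = state()
    Q[s][a] += alpha * (reward + gamma * Q[s2].max() - Q[s][a])

print("preferred machine when all queues are empty:", int(np.argmax(Q[(0, 0, 0)])))
```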
{"title":"Designing Energy-Aware Scheduling and Task Allocation Algorithms for Online Reinforcement Learning Applications in Cloud Environments","authors":"Harshal Janjani;Tanmay Agarwal;M. P. Gopinath;Vimoh Sharma;S. P. Raja","doi":"10.1109/TCSS.2024.3508089","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3508089","url":null,"abstract":"With the rapid proliferation of machine learning applications in cloud computing environments, addressing crucial challenges concerning energy efficiency becomes pressing, including addressing the high power consumption of such workloads. In this regard, this work focuses much on the development of an energy-aware scheduling and task assignment algorithm that, while optimizing energy consumption, maintains required performance standards in deploying machine-learning applications in cloud environments. It therefore, pivots on leveraging online reinforcement learning to deduce an optimal planning and allocation strategy. This proposed algorithm leverages the capability of RL in making sequential decisions with the aim of achieving maximum cumulative rewards. The algorithm design and its implementation are examined in detail, considering the nature of workloads and how the computational resources are utilized. The algorithm’s performance is analyzed by looking into different performance metrics that assess the success of the model. All the results indicate that energy-aware scheduling combined with task assignment algorithms are bound to reduce energy consumption by a great margin while meeting the required performance for large-scale workloads. These results hold much promise for the improvement of sustainable cloud computing infrastructures and consequently, to energy-efficient machine learning. The future research directions involve enhancing the proposed algorithm’s generalization capabilities and addressing challenges related to scalability and convergence.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 3","pages":"1218-1232"},"PeriodicalIF":4.5,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adversarial Reinforcement Learning for Enhanced Decision-Making of Evacuation Guidance Robots in Intelligent Fire Scenarios
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-12. DOI: 10.1109/TCSS.2024.3502420
Hantao Zhao;Zhihao Liang;Tianxing Ma;Xiaomeng Shi;Mubbasir Kapadia;Tyler Thrash;Christoph Hoelscher;Jinyuan Jia;Bo Liu;Jiuxin Cao
In the context of rapid urbanization, traditional manual guidance and static evacuation signs are increasingly inadequate for addressing complex and dynamic emergencies. This study proposes an innovative emergency evacuation framework that optimizes crowd evacuation by integrating multiagent reinforcement learning (MARL) with adversarial reinforcement learning (ARL). The developed simulation environment models realistic human behavior in complex buildings and incorporates robotic navigation and intelligent path planning. A novel simulated human behavior model is integrated, capable of complex human–robot interaction and independent escape-route searching, and exhibiting herd mentality and memory mechanisms. We also propose a multiagent framework that combines MARL and ARL to enhance overall evacuation efficiency and robustness. Additionally, we develop a new ARL evaluation framework that provides a novel method for quantifying agents' performance. Experiments of differing difficulty levels were conducted, and the results demonstrate that the proposed framework exhibits advantages in emergency evacuation scenarios. Specifically, our ARLR approach increased survival rates by 1.8 percentage points in low-difficulty evacuation tasks compared to the RLR approach using only MARL algorithms. In high-difficulty evacuation tasks, the ARLR approach raised survival rates from 46.7% without robots to 64.4%, exceeding the RLR approach by 1.7 percentage points. This study aims to enhance the efficiency and safety of human–robot collaborative fire evacuations and provides theoretical support for evaluating and improving the performance and robustness of ARL agents.
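The MARL/ARL interplay can be caricatured in a tiny gridworld: a guide agent learns evacuation routes with Q-learning while an adversary picks where to place a fire so as to slow the evacuation down, and the two are trained against each other. The grid, rewards, and training schedule are assumptions; the snippet does not reflect the paper's simulation environment or human-behavior model.

```python
# Toy adversarial RL: a Q-learning evacuation guide vs. a bandit-style fire-placing adversary.
import numpy as np

rng = np.random.default_rng(7)

ROWS, COLS = 2, 5
START, EXIT = (0, 0), (0, 4)
FIRE_OPTIONS = [(0, 2), (1, 2)]                 # cells the adversary may ignite
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

def sid(cell):                                   # flatten a cell into a state index
    return cell[0] * COLS + cell[1]

Q = np.zeros((ROWS * COLS, len(FIRE_OPTIONS), len(ACTIONS)))
adv_value = np.zeros(len(FIRE_OPTIONS))          # adversary's estimate of evacuation time
alpha, gamma, eps = 0.2, 0.95, 0.1

for _ in range(8000):
    f = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(adv_value))
    fire = FIRE_OPTIONS[f]
    cell, steps = START, 0
    while cell != EXIT and steps < 40:
        s = sid(cell)
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s, f]))
        dr, dc = ACTIONS[a]
        nxt = (min(ROWS - 1, max(0, cell[0] + dr)), min(COLS - 1, max(0, cell[1] + dc)))
        r = -1.0 - (8.0 if nxt == fire else 0.0) + (20.0 if nxt == EXIT else 0.0)
        Q[s, f, a] += alpha * (r + gamma * Q[sid(nxt), f].max() - Q[s, f, a])
        cell, steps = nxt, steps + 1
    adv_value[f] += 0.05 * (steps - adv_value[f])   # adversary prefers slow evacuations

best_fire = int(np.argmax(adv_value))
route, cell = [START], START
for _ in range(10):                              # greedy rollout of the learned guide policy
    if cell == EXIT:
        break
    dr, dc = ACTIONS[int(np.argmax(Q[sid(cell), best_fire]))]
    cell = (min(ROWS - 1, max(0, cell[0] + dr)), min(COLS - 1, max(0, cell[1] + dc)))
    route.append(cell)
print("adversary's preferred fire cell:", FIRE_OPTIONS[best_fire])
print("guide's evacuation route under that fire:", route)
```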
{"title":"Adversarial Reinforcement Learning for Enhanced Decision-Making of Evacuation Guidance Robots in Intelligent Fire Scenarios","authors":"Hantao Zhao;Zhihao Liang;Tianxing Ma;Xiaomeng Shi;Mubbasir Kapadia;Tyler Thrash;Christoph Hoelscher;Jinyuan Jia;Bo Liu;Jiuxin Cao","doi":"10.1109/TCSS.2024.3502420","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3502420","url":null,"abstract":"In the context of rapid urbanization, traditional manual guidance and static evacuation signs are increasingly inadequate for addressing complex and dynamic emergencies. This study proposes an innovative emergency evacuation framework that optimizes the crowd evacuation by integrating multiagent reinforcement learning (MARL) with adversarial reinforcement learning (ARL). The developed simulation environment models realistic human behavior in complex buildings and incorporates robotic navigation and intelligent path planning. A novel simulated human behavior model was integrated, capable of complex human–robot interaction, independent escape route searching, and exhibiting herd mentality and memory mechanisms. We also proposed a multiagent framework that combines MARL and ARL to enhance overall evacuation efficiency and robustness. Additionally, we developed a new ARL evaluation framework that provides a novel method for quantifying agents’ performance. Various experiments of differing difficulty levels were conducted, and the results demonstrate that the proposed framework exhibits advantages in emergency evacuation scenarios. Specifically, our ARLR approach increased survival rates by 1.8% points in low-difficulty evacuation tasks compared to the RLR approach using only MARL algorithms. In high-difficulty evacuation tasks, the ARLR approach raised survival rates from 46.7% without robots to 64.4%, exceeding the RLR approach by 1.7% points. This study aims to enhance the efficiency and safety of human–robot collaborative fire evacuations and provides theoretical support for evaluating and improving the performance and robustness of ARL agents.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"2030-2046"},"PeriodicalIF":4.5,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TDG-Mamba: Advanced Spatiotemporal Embedding for Temporal Dynamic Graph Learning via Bidirectional Information Propagation
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-12. DOI: 10.1109/TCSS.2024.3509399
Mengran Li;Junzhou Chen;Bo Li;Yong Zhang;Ronghui Zhang;Siyuan Gong;Xiaolei Ma;Zhihong Tian
Temporal dynamic graphs (TDGs), representing the dynamic evolution of entities and their relationships over time with intricate temporal features, are widely used in various real-world domains. Existing methods typically rely on mainstream techniques such as transformers and graph neural networks (GNNs) to capture the spatiotemporal information of TDGs. However, despite their advanced capabilities, these methods often struggle with significant computational complexity and limited ability to capture temporal dynamic contextual relationships. Recently, a new model architecture called mamba has emerged, noted for its capability to capture complex dependencies in sequences while significantly reducing computational complexity. Building on this, we propose a novel method, TDG-mamba, which integrates mamba for TDG learning. TDG-mamba introduces deep semantic spatiotemporal embeddings into the mamba architecture through a specially designed spatiotemporal prior tokenization module (SPTM). Furthermore, to better leverage temporal information differences and enhance the modeling of dynamic changes in graph structures, we separately design a bidirectional mamba and a directed GNN for improved spatiotemporal embedding learning. Link prediction experiments on multiple public datasets demonstrate that our method delivers superior performance, with an average improvement of 5.11% over baseline methods across various settings.
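As a stand-in for the selective state-space scan, the sketch below runs a simple gated linear recurrence over graph snapshots in both temporal directions and concatenates the two states into node embeddings for a dot-product link score. The snapshot model, decay factor, and decoder are assumptions; this illustrates bidirectional propagation over a temporal dynamic graph, not the Mamba architecture.

```python
# Bidirectional recurrent propagation over graph snapshots with a dot-product link decoder.
import numpy as np

rng = np.random.default_rng(8)

n, d, T = 12, 6, 5
X = rng.normal(size=(n, d))                          # static node attributes
snapshots = [(rng.random((n, n)) < 0.15).astype(float) for _ in range(T)]
for A in snapshots:
    np.fill_diagonal(A, 0.0)

decay = 0.7                                          # how much accumulated state persists

def scan(order):
    """Recurrent propagation over the snapshots in the given temporal order."""
    h = np.zeros((n, d))
    for t in order:
        A = snapshots[t]
        deg = A.sum(1, keepdims=True) + 1e-9
        msg = (A @ h) / deg + X                      # neighbors' state plus own features
        h = decay * h + (1 - decay) * np.tanh(msg)   # gated-style state update
    return h

h_fwd = scan(range(T))                               # past -> present
h_bwd = scan(reversed(range(T)))                     # future -> present (bidirectional pass)
emb = np.concatenate([h_fwd, h_bwd], axis=1)

def link_score(i, j):
    """Dot-product decoder for link prediction between nodes i and j."""
    return float(emb[i] @ emb[j])

print("score(0, 1) =", round(link_score(0, 1), 3), " score(0, 7) =", round(link_score(0, 7), 3))
```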
{"title":"TDG-Mamba: Advanced Spatiotemporal Embedding for Temporal Dynamic Graph Learning via Bidirectional Information Propagation","authors":"Mengran Li;Junzhou Chen;Bo Li;Yong Zhang;Ronghui Zhang;Siyuan Gong;Xiaolei Ma;Zhihong Tian","doi":"10.1109/TCSS.2024.3509399","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3509399","url":null,"abstract":"Temporal dynamic graphs (TDGs), representing the dynamic evolution of entities and their relationships over time with intricate temporal features, are widely used in various real-world domains. Existing methods typically rely on mainstream techniques such as transformers and graph neural networks (GNNs) to capture the spatiotemporal information of TDGs. However, despite their advanced capabilities, these methods often struggle with significant computational complexity and limited ability to capture temporal dynamic contextual relationships. Recently, a new model architecture called mamba has emerged, noted for its capability to capture complex dependencies in sequences while significantly reducing computational complexity. Building on this, we propose a novel method, TDG-mamba, which integrates mamba for TDG learning. TDG-mamba introduces deep semantic spatiotemporal embeddings into the mamba architecture through a specially designed spatiotemporal prior tokenization module (SPTM). Furthermore, to better leverage temporal information differences and enhance the modeling of dynamic changes in graph structures, we separately design a bidirectional mamba and a directed GNN for improved spatiotemporal embedding learning. Link prediction experiments on multiple public datasets demonstrate that our method delivers superior performance, with an average improvement of 5.11% over baseline methods across various settings.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"2014-2029"},"PeriodicalIF":4.5,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unveiling Agents’ Confidence in Opinion Dynamics Models via Graph Neural Networks
IF 4.5, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, CYBERNETICS. Pub Date: 2024-12-11. DOI: 10.1109/TCSS.2024.3508452
Víctor A. Vargas-Pérez;Jesús Giráldez-Cru;Pablo Mesejo;Oscar Cordón
Opinion Dynamics models in social networks are a valuable tool to study how opinions evolve within a population. However, these models often rely on agent-level parameters that are difficult to measure in a real population. This is the case of the confidence threshold in opinion dynamics models based on bounded confidence, where agents are only influenced by other agents having a similar opinion (given by this confidence threshold). Consequently, a common practice is to apply a universal threshold to the entire population and calibrate its value to match observed real-world data, despite being an unrealistic assumption. In this work, we propose an alternative approach using graph neural networks to infer agent-level confidence thresholds in the opinion dynamics of the Hegselmann-Krause model of bounded confidence. This eliminates the need for additional simulations when faced with new case studies. To this end, we construct a comprehensive synthetic training dataset that includes different network topologies and configurations of thresholds and opinions. Through multiple training runs utilizing different architectures, we identify GraphSAGE as the most effective solution, achieving a coefficient of determination $R^{2}$ above 0.7 in test datasets derived from real-world topologies. Remarkably, this performance holds even when the test topologies differ in size from those considered during training.
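The forward model that generates such training data is the Hegselmann-Krause update with agent-level thresholds, which can be written down directly; the sketch below simulates it on a random interaction graph (graph density, threshold range, and step count are assumptions) and only notes where the GraphSAGE regressor would come in.

```python
# Hegselmann-Krause bounded-confidence dynamics with per-agent thresholds on a fixed graph.
import numpy as np

rng = np.random.default_rng(9)

n, steps = 30, 50
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)                        # each agent always counts itself

opinions = rng.random(n)                         # initial opinions in [0, 1]
thresholds = rng.uniform(0.05, 0.4, size=n)      # agent-level confidence thresholds
history = [opinions.copy()]

for _ in range(steps):
    new = np.empty(n)
    for i in range(n):
        # Graph neighbors whose opinion lies within agent i's confidence threshold.
        close = (A[i] > 0) & (np.abs(opinions - opinions[i]) <= thresholds[i])
        new[i] = opinions[close].mean()          # bounded-confidence averaging
    opinions = new
    history.append(opinions.copy())

history = np.stack(history)                      # (steps + 1, n) opinion trajectories
print("number of final opinion clusters:", len(np.unique(np.round(opinions, 3))))
# A GNN regressor (e.g., GraphSAGE) would take (A, history) as input and be trained
# to recover the per-agent `thresholds` that generated these trajectories.
```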
{"title":"Unveiling Agents’ Confidence in Opinion Dynamics Models via Graph Neural Networks","authors":"Víctor A. Vargas-Pérez;Jesús Giráldez-Cru;Pablo Mesejo;Oscar Cordón","doi":"10.1109/TCSS.2024.3508452","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3508452","url":null,"abstract":"Opinion Dynamics models in social networks are a valuable tool to study how opinions evolve within a population. However, these models often rely on agent-level parameters that are difficult to measure in a real population. This is the case of the confidence threshold in opinion dynamics models based on bounded confidence, where agents are only influenced by other agents having a similar opinion (given by this confidence threshold). Consequently, a common practice is to apply a universal threshold to the entire population and calibrate its value to match observed real-world data, despite being an unrealistic assumption. In this work, we propose an alternative approach using graph neural networks to infer agent-level confidence thresholds in the opinion dynamics of the Hegselmann-Krause model of bounded confidence. This eliminates the need for additional simulations when faced with new case studies. To this end, we construct a comprehensive synthetic training dataset that includes different network topologies and configurations of thresholds and opinions. Through multiple training runs utilizing different architectures, we identify GraphSAGE as the most effective solution, achieving a coefficient of determination <inline-formula><tex-math>$R^{2}$</tex-math></inline-formula> above 0.7 in test datasets derived from real-world topologies. Remarkably, this performance holds even when the test topologies differ in size from those considered during training.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 2","pages":"725-737"},"PeriodicalIF":4.5,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10792931","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143783288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0