
Frontiers of Computer Science — Latest Publications

Learning group interaction for sports video understanding from a perspective of athlete
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-18. DOI: 10.1007/s11704-023-2525-y
Rui He, Zehua Fu, Qingjie Liu, Yunhong Wang, Xunxun Chen

Learning the activity interactions between small groups is a key step in understanding team sports videos. Recent research on team sports videos has, strictly speaking, taken the perspective of the audience rather than that of the athlete. Team sports videos, such as volleyball and basketball videos, contain plenty of intra-team and inter-team relations. In this paper, a new task named Group Scene Graph Generation is introduced to better understand intra-team and inter-team relations in sports videos. To tackle this problem, a novel Hierarchical Relation Network is proposed. After all players in a video are finely divided into two teams, the features of the two teams' activities and interactions are enhanced by Graph Convolutional Networks and finally recognized to generate the Group Scene Graph. For evaluation, a Volleyball+ dataset is proposed, built on the Volleyball dataset with 9660 additional team-activity labels. A baseline is set for better comparison, and our experimental results demonstrate the effectiveness of our method. Moreover, the idea behind our method can be directly applied to another video-based task, Group Activity Recognition. Experiments show the superiority of our method and reveal the link between the two tasks. Finally, from the athlete's view, we present an interpretation that shows how to utilize the Group Scene Graph to analyze teams' activities and provide professional gaming suggestions.
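The relation-enhancement step can be illustrated with one graph-convolution pass over a toy player graph. This is a minimal pure-Python sketch, not the authors' Hierarchical Relation Network: the adjacency matrix, features, and weights below are all illustrative.

```python
# One toy GCN step: each player's feature is averaged over its relation
# neighbourhood (intra-team and inter-team edges), then projected and
# activated. All numbers here are made up for illustration.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def relu(M):
    return [[max(0.0, x) for x in row] for row in M]

def gcn_layer(adj, feats, weight):
    # Row-normalise the adjacency (self-loops included) so aggregation
    # is a mean over each player's neighbourhood.
    norm = [[a / max(1, sum(row)) for a in row] for row in adj]
    return relu(matmul(matmul(norm, feats), weight))

# 4 players: 0,1 on team A; 2,3 on team B; self-loops plus intra-/inter-team edges.
adj = [[1, 1, 1, 0],
       [1, 1, 0, 1],
       [1, 0, 1, 1],
       [0, 1, 1, 1]]
feats = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [0.2, 0.8]]
weight = [[1.0, -1.0], [0.5, 0.5]]

enhanced = gcn_layer(adj, feats, weight)  # 4 enhanced player features
```

The enhanced per-player features would then feed a relation classifier that emits the scene-graph edges.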

Citations: 0
$$\mathcal{Y}$$-Tuning: an efficient tuning paradigm for large-scale pre-trained models via label representation learning
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-18. DOI: 10.1007/s11704-023-3131-8
Yitao Liu, Chenxin An, Xipeng Qiu

With the current success of large-scale pre-trained models (PTMs), how to efficiently adapt PTMs to downstream tasks has attracted tremendous attention, especially for PTMs with billions of parameters. Previous work focuses on designing parameter-efficient tuning paradigms but still needs to save and compute the gradient of the whole computational graph. In this paper, we propose $\mathcal{Y}$-Tuning, an efficient yet effective paradigm to adapt frozen large-scale PTMs to specific downstream tasks. $\mathcal{Y}$-Tuning learns dense representations for the labels $\mathcal{Y}$ defined in a given task and aligns them to the fixed feature representations. Without computing the gradients of the text encoder at the training phase, $\mathcal{Y}$-Tuning is not only parameter-efficient but also training-efficient. Experimental results show that for DeBERTa-XXL with 1.6 billion parameters, $\mathcal{Y}$-Tuning achieves more than 96% of the performance of full fine-tuning on the GLUE Benchmark with only 2% tunable parameters and much lower training costs.
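The core idea — keep the encoder frozen and learn only dense label representations matched against the fixed feature — can be sketched in a few lines. The "encoder" below is a deterministic stand-in, and all names and dimensions are illustrative assumptions, not the paper's architecture.

```python
# Toy sketch of the Y-Tuning idea: no encoder gradients; only the label
# vectors would be trained. The frozen encoder is a hash-seeded dummy.

import random

DIM = 8
LABELS = ["positive", "negative"]

def frozen_encoder(text):
    """Stand-in for a frozen PTM: a fixed, non-trainable feature vector."""
    rng = random.Random(hash(text) % (2**32))
    return [rng.uniform(-1, 1) for _ in range(DIM)]

# In Y-Tuning, only these dense label representations receive updates.
_rng = random.Random(0)
label_reprs = {y: [_rng.uniform(-1, 1) for _ in range(DIM)] for y in LABELS}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def predict(text):
    feat = frozen_encoder(text)  # computed once, no backward pass needed
    return max(LABELS, key=lambda y: dot(feat, label_reprs[y]))

pred = predict("great movie")
```

Because the encoder output is fixed per input, it can be cached, which is what makes the scheme training-efficient as well as parameter-efficient.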

Citations: 0
A prompt-based approach to adversarial example generation and robustness enhancement
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-18. DOI: 10.1007/s11704-023-2639-2
Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin, Feifei Ma

Recent years have seen the wide application of natural language processing (NLP) models in crucial areas such as finance, medical treatment, and news media, raising concerns about model robustness and vulnerabilities. We find that the prompt paradigm can probe special robustness defects of pre-trained language models. Malicious prompt texts are first constructed for the inputs, and a pre-trained language model can then generate adversarial examples for victim models via mask-filling. Experimental results show that the prompt paradigm can efficiently generate more diverse adversarial examples beyond synonym substitution. We then propose a novel robust training approach based on the prompt paradigm, which incorporates prompt texts as alternatives to adversarial examples and enhances robustness under a lightweight minimax-style optimization framework. Experiments on three real-world tasks and two deep neural models show that our approach can significantly improve the robustness of models against adversarial attacks.
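The generation loop — wrap the input in a malicious prompt, let a masked LM fill the blank, keep fills that flip the victim's prediction — can be sketched schematically. The mask-filler and victim below are toy stand-ins, not a real pre-trained LM or the paper's models.

```python
# Schematic prompt-based adversarial generation via mask-filling.
# dummy_mask_fill stands in for a pre-trained masked language model.

def build_prompt(text):
    # Malicious prompt template with a mask slot for the LM to fill.
    return f"{text} In other words, it is [MASK]."

def dummy_mask_fill(prompt, candidates=("terrible", "boring", "fine")):
    # Stand-in for LM mask-filling: one filled text per candidate token.
    return [prompt.replace("[MASK]", c) for c in candidates]

def generate_adversarial(text, victim):
    # Keep only fills that flip the victim model's prediction.
    original = victim(text)
    return [adv for adv in dummy_mask_fill(build_prompt(text))
            if victim(adv) != original]

# Toy victim classifier: "positive" unless a negative keyword appears.
victim = lambda s: "terrible" not in s and "boring" not in s
advs = generate_adversarial("The movie was good.", victim)
```

Unlike synonym substitution, the mask-filler is free to append fluent new material, which is why the prompt paradigm yields more diverse candidates.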

Citations: 0
Quantum speedup and limitations on matroid property problems
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-18. DOI: 10.1007/s11704-023-3130-9
Xiaowei Huang, Jingquan Luo, Lvzhou Li

Matroid theory has developed into a mature branch of mathematics and has extensive applications in combinatorial optimization, algorithm design, and so on. On the other hand, quantum computing has attracted much attention and has been shown to surpass classical computing in solving some computational problems. Surprisingly, crossover studies of the two fields seem to be missing in the literature. This paper initiates the study of quantum algorithms for matroid property problems. It is shown that quadratic quantum speedup is possible for the calculation problem of finding the girth or the number of circuits (bases, flats, hyperplanes) of a matroid, and for the decision problem of deciding whether a matroid is uniform or Eulerian, by giving a uniform lower bound $\Omega\left(\sqrt{\binom{n}{\lfloor n/2\rfloor}}\right)$ on the query complexity of all these problems. On the other hand, for the uniform matroid decision problem, an asymptotically optimal quantum algorithm is proposed which achieves this lower bound, and for the girth problem, an almost optimal quantum algorithm is given with query complexity $O\left(\log n\sqrt{\binom{n}{\lfloor n/2\rfloor}}\right)$. In addition, for the paving matroid decision problem, a lower bound $\Omega\left(\sqrt{\binom{n}{\lfloor n/2\rfloor}/n}\right)$ on the query complexity is obtained, and an $O\left(\sqrt{\binom{n}{\lfloor n/2\rfloor}}\right)$ quantum algorithm is presented.
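For readability, the query-complexity bounds stated in the abstract can be restated side by side (these are the abstract's own bounds, not new results):

```latex
\begin{align*}
\text{uniform / Eulerian / girth / counting (lower bound):}\quad
  &\Omega\!\left(\sqrt{\tbinom{n}{\lfloor n/2\rfloor}}\right)\\
\text{girth (upper bound):}\quad
  &O\!\left(\log n\,\sqrt{\tbinom{n}{\lfloor n/2\rfloor}}\right)\\
\text{paving (lower bound / upper bound):}\quad
  &\Omega\!\left(\sqrt{\tbinom{n}{\lfloor n/2\rfloor}/n}\right),\;
   O\!\left(\sqrt{\tbinom{n}{\lfloor n/2\rfloor}}\right)
\end{align*}
```

Since $\binom{n}{\lfloor n/2\rfloor} = \Theta(2^n/\sqrt{n})$, these square-root bounds are the quadratic (Grover-type) speedup over the classical $\Theta\!\left(\binom{n}{\lfloor n/2\rfloor}\right)$ query regime.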

Citations: 0
LMR-CBT: learning modality-fused representations with CB-Transformer for multimodal emotion recognition from unaligned multimodal sequences
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-16. DOI: 10.1007/s11704-023-2444-y
Ziwang Fu, Feng Liu, Qing Xu, Xiangling Fu, Jiayin Qi

Learning modality-fused representations and processing unaligned multimodal sequences are meaningful and challenging in multimodal emotion recognition. Existing approaches use directional pairwise attention or a message hub to fuse the language, visual, and audio modalities. However, these fusion methods are often quadratic in complexity with respect to the modal sequence length, introduce redundant information, and are inefficient. In this paper, we propose an efficient neural network for learning modality-fused representations with the CB-Transformer (LMR-CBT) for multimodal emotion recognition from unaligned multimodal sequences. Specifically, we first perform feature extraction for the three modalities separately to obtain the local structure of the sequences. Then, we design an innovative asymmetric transformer with cross-modal blocks (CB-Transformer) that enables complementary learning across modalities, mainly divided into local temporal learning, cross-modal feature fusion, and global self-attention representations. In addition, we splice the fused features with the original features to classify the emotions of the sequences. Finally, we conduct word-aligned and unaligned experiments on three challenging datasets: IEMOCAP, CMU-MOSI, and CMU-MOSEI. The experimental results demonstrate the superiority and efficiency of our proposed method in both settings. Compared with the mainstream methods, our approach reaches the state-of-the-art with a minimal number of parameters.
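The key property that lets such fusion work on unaligned sequences is that cross-modal attention never requires the two modalities to have equal length. A minimal pure-Python sketch of one cross-modal attention pass follows; it illustrates the mechanism only, not the CB-Transformer's actual blocks, and all vectors are toy values.

```python
# Scaled dot-product cross-modal attention: each query step of one modality
# attends over key/value steps of another, so a 3-step language sequence can
# fuse with a 2-step audio sequence without any pre-alignment.

import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_modal_attention(queries, keys, values):
    d = len(queries[0])
    fused = []
    for q in queries:
        scores = softmax([sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
                          for k in keys])
        # Each fused vector is a convex combination of the value vectors.
        fused.append([sum(w * v[i] for w, v in zip(scores, values))
                      for i in range(len(values[0]))])
    return fused

lang = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # 3 language steps (toy)
audio = [[0.2, 0.8], [0.9, 0.1]]              # 2 audio steps, unaligned
fused = cross_modal_attention(lang, audio, audio)  # 3 fused vectors
```

The output keeps the query modality's length, which is what allows the fused features to be spliced back onto the original language features downstream.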

Citations: 0
Communication-robust multi-agent learning by adaptable auxiliary multi-agent adversary generation
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-16. DOI: 10.1007/s11704-023-2733-5
Lei Yuan, Feng Chen, Zongzhang Zhang, Yang Yu

Communication can promote coordination in cooperative Multi-Agent Reinforcement Learning (MARL). Existing works mainly focus on improving the communication efficiency of agents, neglecting that real-world communication is much more challenging, as there may be noise or potential attackers. The robustness of communication-based policies thus becomes an emerging and serious issue that needs more exploration. In this paper, we posit that an ego system1) trained with auxiliary adversaries can handle this limitation, and we propose an adaptable method of Multi-Agent Auxiliary Adversaries Generation for robust Communication, dubbed MA3C, to obtain a robust communication-based policy. Specifically, we introduce a novel message-attacking approach that models the learning of the auxiliary attackers as a cooperative problem under the shared goal of minimizing the coordination ability of the ego system, under which every information channel may suffer distinct message attacks. Furthermore, as naive adversarial training may impede the generalization ability of the ego system, we design an attacker population generation approach based on evolutionary learning. Finally, the ego system is paired with an attacker population and alternately trained against the continuously evolving attackers to improve its robustness, meaning that both the ego system and the attackers are adaptable. Extensive experiments on multiple benchmarks indicate that our proposed MA3C provides comparable or better robustness and generalization ability than other baselines.
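The attacker-population idea can be illustrated with a toy evolutionary loop: score message attackers by how much they degrade a stand-in coordination score, keep the most damaging half, and refill by mutation. The environment, reward, and all numbers below are illustrative assumptions, not the paper's benchmarks.

```python
# Toy evolutionary attacker population in the spirit of MA3C: attackers are
# additive noise vectors on a communication message; lower ego score means
# a more damaging attacker. Everything here is a stand-in.

import random

random.seed(1)

def ego_coordination(message, noise):
    # Stand-in ego system: coordination degrades with the attack magnitude.
    attacked = [m + n for m, n in zip(message, noise)]
    return -sum(abs(a - m) for a, m in zip(attacked, message))

def evolve(population, message, generations=5):
    for _ in range(generations):
        # Most damaging attackers (lowest ego score) survive.
        population.sort(key=lambda noise: ego_coordination(message, noise))
        survivors = population[: len(population) // 2]
        # Refill the population by mutating survivors.
        population = survivors + [
            [n + random.uniform(-0.1, 0.1) for n in noise] for noise in survivors
        ]
    return population

message = [0.5, -0.2, 0.1]
attackers = [[random.uniform(-0.3, 0.3) for _ in message] for _ in range(8)]
attackers = evolve(attackers, message)
best = min(attackers, key=lambda n: ego_coordination(message, n))
```

In the full method the ego policy is trained in alternation against this evolving population, so robustness is learned against a distribution of attacks rather than a single fixed adversary.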

Citations: 0
A survey on dynamic graph processing on GPUs: concepts, terminologies and systems
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-16. DOI: 10.1007/s11704-023-2656-1
Hongru Gao, Xiaofei Liao, Zhiyuan Shao, Kexin Li, Jiajie Chen, Hai Jin

Graphs, which model real-world entities as vertices and the relationships among entities as edges, have proven to be a powerful tool for describing real-world problems in applications. In most real-world scenarios, entities and their relationships are subject to constant change. Graphs that record such changes are called dynamic graphs. In recent years, the widespread application scenarios of dynamic graphs have stimulated extensive research on dynamic graph processing systems that continuously ingest graph updates and produce up-to-date graph analytics results. As the scale of dynamic graphs grows, higher performance requirements are demanded of dynamic graph processing systems. With their massive parallel processing power and high memory bandwidth, GPUs have become mainstream vehicles for accelerating dynamic graph processing tasks. GPU-based dynamic graph processing systems mainly address two challenges: maintaining the graph data when updates occur (i.e., graph updating) and producing analytics results in time (i.e., graph computing). In this paper, we survey GPU-based dynamic graph processing systems and review their methods for addressing both graph updating and graph computing. To comprehensively discuss existing dynamic graph processing systems on GPUs, we first introduce the terminology of dynamic graph processing and then develop a taxonomy to describe the methods employed for graph updating and graph computing. In addition, we discuss the challenges and future research directions of dynamic graph processing on GPUs.
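The two tasks the survey distinguishes — graph updating and graph computing — can be made concrete with a minimal CPU-only sketch: a batch of edge insertions/deletions is applied, after which a query is answered on the up-to-date structure. Real systems batch these operations on GPUs over specialized layouts; this toy adjacency-set version only shows the interface.

```python
# Minimal dynamic-graph sketch: apply_batch is the "graph updating" side,
# degree is a stand-in for the "graph computing" side.

from collections import defaultdict

class DynamicGraph:
    def __init__(self):
        self.adj = defaultdict(set)  # undirected adjacency sets

    def apply_batch(self, updates):
        """updates: iterable of ("add" | "del", u, v) edge operations."""
        for op, u, v in updates:
            if op == "add":
                self.adj[u].add(v)
                self.adj[v].add(u)
            else:
                self.adj[u].discard(v)
                self.adj[v].discard(u)

    def degree(self, u):
        return len(self.adj[u])

g = DynamicGraph()
g.apply_batch([("add", 0, 1), ("add", 0, 2), ("add", 1, 2)])
g.apply_batch([("del", 0, 2)])
deg0 = g.degree(0)
```

The design tension the survey covers is exactly here: a layout that makes `apply_batch` cheap (e.g., per-vertex edge blocks) can make the analytics pass slower, and vice versa.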

Citations: 0
A MLP-Mixer and mixture of expert model for remaining useful life prediction of lithium-ion batteries
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-16. DOI: 10.1007/s11704-023-3277-4

Abstract

Accurately predicting the Remaining Useful Life (RUL) of lithium-ion batteries is crucial for battery management systems. Deep learning-based methods have been shown to be effective in predicting RUL by leveraging battery capacity time series data. However, the representation learning of features such as long-distance sequence dependencies and mutations in capacity time series still needs to be improved. To address this challenge, this paper proposes a novel deep learning model, the MLP-Mixer and Mixture of Experts (MMMe) model, for RUL prediction. The MMMe model leverages the Gated Recurrent Unit and the Multi-Head Attention mechanism to encode the sequential battery capacity data and capture its temporal features, and a re-zero MLP-Mixer model to capture the high-level features. Additionally, we devise an ensemble predictor based on a Mixture-of-Experts (MoE) architecture to generate reliable RUL predictions. The experimental results on public datasets demonstrate that our proposed model significantly outperforms other existing methods, providing more reliable and precise RUL predictions while also accurately tracking the capacity degradation process. Our code and dataset are available on GitHub.
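The MoE ensemble predictor reduces, at inference time, to a gate that softmax-weights several expert estimates. The sketch below uses toy linear experts and a toy gate over a made-up capacity feature vector; it shows the combination rule only, not the paper's trained networks.

```python
# Schematic Mixture-of-Experts RUL predictor: a softmax gate weights the
# per-expert estimates. Experts, gate, and features are illustrative toys.

import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Toy experts: each maps a capacity feature vector to an RUL estimate (cycles).
experts = [
    lambda f: 100.0 - 50.0 * f[0],         # pessimistic expert
    lambda f: 120.0 - 40.0 * f[0] + f[1],  # optimistic expert
]

def gate(features):
    # Toy gating scores; a real gate is a learned network over the features.
    return softmax([features[0], features[1]])

def moe_predict(features):
    weights = gate(features)
    return sum(w * e(features) for w, e in zip(weights, experts))

rul = moe_predict([0.6, 0.4])  # toy degraded-capacity feature vector
```

Because the gate outputs a convex combination, the ensemble prediction always lies between the most pessimistic and most optimistic expert, which is one source of the claimed reliability.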

Citations: 0
Representation learning: serial-autoencoder for personalized recommendation
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-16. DOI: 10.1007/s11704-023-2441-1
Yi Zhu, Yishuai Geng, Yun Li, Jipeng Qiang, Xindong Wu

Nowadays, personalized recommendation has become a research hotspot for addressing information overload. Despite this, generating effective recommendations from sparse data remains a challenge. Recently, auxiliary information has been widely used to address data sparsity, but most models using auxiliary information are linear and have limited expressiveness. Due to the advantages of feature extraction and the absence of label requirements, autoencoder-based methods have become quite popular. However, most existing autoencoder-based methods discard the reconstruction of auxiliary information, which poses huge challenges for better representation learning and model scalability. To address these problems, we propose the Serial-Autoencoder for Personalized Recommendation (SAPR), which aims to reduce the loss of critical information and enhance the learning of feature representations. Specifically, we first combine the original rating matrix and item attribute features and feed them into the first autoencoder to generate a higher-level representation of the input. Second, we use a second autoencoder to enhance the reconstruction of the data representation of the prediction rating matrix. The output rating information is used for recommendation prediction. Extensive experiments on the MovieTweetings and MovieLens datasets have verified the effectiveness of SAPR compared to state-of-the-art models.
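As a sketch of the serial data flow the abstract describes — ratings concatenated with item attribute features pass through a first autoencoder, whose latent code feeds a second autoencoder that produces the predicted rating matrix — the toy forward pass below uses random, untrained weights and invented dimensions. It illustrates the two-stage wiring and tensor shapes only, not SAPR's architecture details or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def autoencoder_forward(x, w_enc, w_dec):
    """One linear autoencoder stage: tanh encoder, linear decoder."""
    h = np.tanh(x @ w_enc)   # latent representation
    return h, h @ w_dec      # (latent, reconstruction)

# Invented sizes: 4 users, 6 items, 3 attribute features, latent dims 5/5.
n_users, n_items, n_attr, d1, d2 = 4, 6, 3, 5, 5
ratings = rng.random((n_users, n_items))
item_attrs = rng.random((n_users, n_attr))  # per-user aggregated attributes

# Stage 1: ratings + auxiliary attributes -> first autoencoder.
x1 = np.concatenate([ratings, item_attrs], axis=1)  # (4, 9)
w1_enc = rng.normal(size=(n_items + n_attr, d1))
w1_dec = rng.normal(size=(d1, n_items + n_attr))
h1, _ = autoencoder_forward(x1, w1_enc, w1_dec)     # h1: (4, 5)

# Stage 2: first latent code -> second autoencoder -> predicted ratings.
w2_enc = rng.normal(size=(d1, d2))
w2_dec = rng.normal(size=(d2, n_items))
_, pred_ratings = autoencoder_forward(h1, w2_enc, w2_dec)
print(pred_ratings.shape)  # (4, 6)
```

Training would fit both stages with reconstruction losses (including the auxiliary features, per the abstract) before using the second stage's output for rating prediction.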

Citations: 0
Robust AUC maximization for classification with pairwise confidence comparisons
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-12-16. DOI: 10.1007/s11704-023-2709-5
Haochen Shi, Mingkun Xie, Shengjun Huang

Supervised learning often requires a large number of labeled examples, which has become a critical bottleneck when manually annotating class labels is costly. To mitigate this issue, a new framework called pairwise comparison (Pcomp) classification is proposed to allow training examples to be only weakly annotated with pairwise comparisons, i.e., which one of two examples is more likely to be positive. The previous study solves Pcomp problems by minimizing the classification error, which may lead to a less robust model due to its sensitivity to class distribution. In this paper, we propose a robust learning framework for Pcomp data along with a pairwise surrogate loss called Pcomp-AUC. It provides an unbiased estimator that equivalently maximizes AUC without accessing the precise class labels. Theoretically, we prove the consistency with respect to AUC and further provide the estimation error bound for the proposed method. Empirical studies on multiple datasets validate the effectiveness of the proposed method.
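The Pcomp-AUC loss itself is defined in the paper; purely as an illustration of the general idea of a pairwise AUC surrogate — penalizing score orderings that contradict the pairwise annotation — here is a minimal logistic-surrogate sketch. It is not the authors' unbiased estimator, and all inputs are invented toy values.

```python
import numpy as np

def pairwise_auc_surrogate(scores_pref, scores_other):
    """Logistic surrogate for AUC on comparison pairs.

    For each pair, the first example was annotated as more likely
    positive, so the loss penalizes the model when its score does
    not exceed the second example's score.
    """
    margin = scores_pref - scores_other
    return np.mean(np.log1p(np.exp(-margin)))

# Toy model scores: in `good` the preferred example of each pair
# scores higher; in `bad` the ordering is reversed.
good = pairwise_auc_surrogate(np.array([2.0, 1.5]), np.array([0.0, -1.0]))
bad = pairwise_auc_surrogate(np.array([0.0, -1.0]), np.array([2.0, 1.5]))
print(good < bad)  # True: correctly ordered pairs yield lower loss
```

Minimizing such a surrogate over comparison pairs pushes the scores of more-likely-positive examples above the others, which is exactly the ranking property that AUC measures.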

Citations: 0