
The Journal of Supercomputing: latest publications

CCeACF: content and complementarity enhanced attentional collaborative filtering for cloud API recommendation
Pub Date : 2024-08-21 DOI: 10.1007/s11227-024-06445-7
Zhen Chen, Wenhui Chen, Xiaowei Liu, Jing Zhao

A cloud application programming interface (API) is a software intermediary that enables applications to communicate and exchange information with one another in the cloud. As the number of cloud APIs continues to grow, developers are inundated with choices, so researchers have proposed many cloud API recommendation methods. Existing methods fall into two types: content-based (CB) recommendation and collaborative filtering-based (CF) recommendation. CF methods mainly exploit the historical records of cloud APIs invoked by mashups. In general, CF methods perform better on head cloud APIs, which have many interaction records, and poorly on tail cloud APIs. CB methods can improve recommendation performance on tail cloud APIs by leveraging the content information of cloud APIs and mashups, but their overall performance lags behind that of CF methods. Moreover, traditional cloud API recommendation methods ignore the complementarity relationship between mashups and cloud APIs. To address these issues, this paper first proposes a complementary function vector (CV) based on tag co-occurrence and graph convolutional networks to characterize the complementarity relationship between cloud APIs and mashups. We then use an attention mechanism to systematically integrate the CF, CB, and CV signals and propose a model named Content and Complementarity enhanced Attentional Collaborative Filtering (CCeACF). Experimental results show that the proposed approach outperforms state-of-the-art cloud API recommendation methods, effectively alleviates the long-tail problem in the cloud API ecosystem, and is interpretable.
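As a rough, hypothetical sketch of the kind of attentional fusion the abstract describes (not the authors' implementation; the function names, logits, and scores below are illustrative), the attention mechanism can be viewed as learning softmax weights over the CF, CB, and CV signals for each mashup-API pair:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(cf_score, cb_score, cv_score, logits):
    """Blend the collaborative-filtering (CF), content-based (CB), and
    complementarity (CV) scores for one mashup-API pair using softmax
    attention weights (the logits would be learned in the real model)."""
    w_cf, w_cb, w_cv = softmax(logits)
    return w_cf * cf_score + w_cb * cb_score + w_cv * cv_score

# For a head API with many interaction records, attention can lean on CF.
score = attention_fuse(0.9, 0.4, 0.5, logits=[2.0, 0.5, 0.5])
```

With these made-up numbers the fused score is about 0.761, dominated by the CF component; for a tail API the learned logits would instead shift weight toward the CB and CV signals.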

Citations: 0
State ordering and classification for analyzing non-sparse large Markov models
Pub Date : 2024-08-21 DOI: 10.1007/s11227-024-06446-6
Mohammadsadegh Mohagheghi

Markov chains and Markov decision processes are widely used to model the behavior of computer systems with probabilistic aspects, and numerical, iterative methods are commonly used to analyze these models. Many efforts have been made in recent decades to improve the efficiency of these numerical methods. In this paper, focusing on Markov models with a non-sparse structure, a new set of heuristics is proposed for prioritizing model states with the aim of reducing the total computation time. In these heuristics, a set of simulation runs is used to statistically analyze the effect of each state on the required values of the other states, and this criterion determines the priority with which each state's values are updated. The proposed heuristics yield a state ordering that improves value propagation among the states. The methods are also extended to very large models, where disk-based techniques are required for analysis. Experimental results show that the proposed methods reduce the running times of the iterative methods for most non-sparse models.
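A minimal sketch of why state ordering matters (illustrative only; the paper derives its ordering from simulation statistics rather than by hand): in an in-place (Gauss-Seidel) sweep of value iteration, visiting states in a good order lets freshly updated values propagate within a single sweep.

```python
def gauss_seidel_sweep(P, rewards, values, order, gamma=0.9):
    """One in-place value-iteration sweep over a Markov chain.
    P[s] maps successor state -> transition probability; states are
    visited in the given priority order, so updates made early in the
    sweep are immediately reused by the states visited later."""
    for s in order:
        values[s] = rewards[s] + gamma * sum(
            p * values[t] for t, p in P[s].items())
    return values

# Tiny 3-state chain 0 -> 1 -> 2, with state 2 absorbing and rewarded.
P = {0: {1: 1.0}, 1: {2: 1.0}, 2: {2: 1.0}}
rewards = [0.0, 0.0, 1.0]

# Visiting state 2 first propagates its value back to 1 and 0 in one sweep.
values = gauss_seidel_sweep(P, rewards, [0.0, 0.0, 0.0], order=[2, 1, 0])
```

After one sweep in reverse order the values are [0.81, 0.9, 1.0], whereas the forward order [0, 1, 2] would leave states 0 and 1 at zero after the same single sweep.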

Citations: 0
A novel reinforcement learning-based hybrid intrusion detection system on fog-to-cloud computing
Pub Date : 2024-08-20 DOI: 10.1007/s11227-024-06417-x
Sepide Najafli, Abolfazl Toroghi Haghighat, Babak Karasfi

The rapid growth of the Internet of Things (IoT) and its open, shared character have led to an exponential rise in new attacks. Consequently, quick and adaptive detection of attacks in IoT environments is essential. An Intrusion Detection System (IDS) is responsible for protecting the system and detecting the type of attack, and building an IDS that works in real time and adapts to environmental changes is critical. In this paper, we propose a Deep Reinforcement Learning-based (DRL) self-learning IDS that addresses these challenges. The DRL-based IDS builds a decision agent, which controls the interaction with the uncertain environment and performs binary detection (normal/intrusion) in the fog, while an ensemble method classifies multi-class attacks in the cloud. The proposed approach was evaluated on the CIC-IDS2018 dataset. The results demonstrate that the proposed model outperforms other machine learning techniques and state-of-the-art approaches in detecting intrusions and identifying attacks. For example, our method detects botnet attacks with an accuracy of 0.9999 and reaches an F-measure of 0.9959 in binary detection, and it reduces the prediction time to 0.52 s. Overall, we show that combining multiple methods is an effective strategy for building an IDS.
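A toy illustration of the reinforcement-learning side (a tabular stand-in, not the paper's deep network; the feature buckets, rewards, and hyperparameters are made up): a detection agent that earns +1 for a correct normal/intrusion call and -1 otherwise learns a binary policy from interaction alone.

```python
import random

def train_q_agent(samples, episodes=200, alpha=0.5, epsilon=0.2, seed=0):
    """Tabular Q-learning sketch of a binary IDS agent: states are
    discretized traffic features, actions are 0 (normal) / 1 (intrusion),
    and the environment rewards correct calls with +1, wrong ones with -1."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state, label = rng.choice(samples)
        if rng.random() < epsilon:  # epsilon-greedy exploration
            action = rng.choice([0, 1])
        else:
            action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
        reward = 1.0 if action == label else -1.0
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward - old)  # one-step update
    return q

# Toy feature bucket "high_rate" is labeled intrusion, "low_rate" normal.
samples = [("high_rate", 1), ("low_rate", 0)]
q = train_q_agent(samples)
policy = {s: max((0, 1), key=lambda a: q.get((s, a), 0.0))
          for s, _ in samples}
```

After training, the greedy policy flags the high-rate bucket as an intrusion and the low-rate bucket as normal; the paper replaces this lookup table with a deep network over real traffic features.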

Citations: 0
Secure incentive mechanism for energy trading in computing force networks enabled internet of vehicles: a contract theory approach
Pub Date : 2024-08-20 DOI: 10.1007/s11227-024-06369-2
Wen Wen, Lu Lu, Renchao Xie, Qinqin Tang, Yuexia Fu, Tao Huang

The integration of generative artificial intelligence (GAI) and the internet of vehicles (IoV) will transform vehicular intelligence from conventional analytical intelligence to service-specific generative intelligence, enhancing vehicular services. In this context, computing force networks (CFNs), which can flexibly schedule widespread, multi-domain, multi-layer, and distributed resources, can meet the IoV's demands for ultra-high-density computing power and ultra-low latency. In CFNs, the integration of GAI and IoV consumes enormous energy, and GAI servers need to purchase energy from energy suppliers (ESs). However, the information asymmetry between GAI servers and ESs makes it difficult to price energy fairly, and the distributed ESs and GAI servers form a complex trading environment in which malicious ESs may intentionally provide low-quality service. In this paper, to facilitate efficient and secure energy trading in support of ubiquitous AIGC services, we first introduce an innovative CFN-based GAI energy trading system architecture; we then present an energy consumption model for AIGC services, a cost model for ESs, and a reputation evaluation model for ESs, and derive utility functions of GAI servers and ESs based on contract theory. Finally, we propose a secure incentive mechanism for the IoV, comprising an optimal contract scheme designed under contract feasibility conditions and a blockchain-based safety guarantee mechanism. Simulation results demonstrate the feasibility and superiority of our energy trading mechanism.
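As a hedged sketch of the contract-theoretic idea (the utility shape and the two-item menu below are hypothetical, not the paper's models): the buyer designs a menu of (energy, reward) items so that, despite information asymmetry about each supplier's cost, every ES type maximizes its own utility by picking the item intended for it (incentive compatibility).

```python
def es_utility(reward, energy, unit_cost):
    """Energy supplier (ES) utility: payment received minus generation cost."""
    return reward - unit_cost * energy

# Hypothetical menu: unit-cost type -> (energy to deliver, reward paid).
menu = {0.5: (4.0, 3.0), 1.0: (1.0, 1.5)}

def incentive_compatible(menu):
    """Check that every ES type weakly prefers its own contract item
    over every item designed for the other types."""
    for cost, (energy, reward) in menu.items():
        own = es_utility(reward, energy, cost)
        for other_cost, (e2, r2) in menu.items():
            if other_cost != cost and es_utility(r2, e2, cost) > own:
                return False
    return True
```

With these numbers the low-cost type earns 3 - 0.5*4 = 1.0 from its own item and only 1.5 - 0.5*1 = 1.0 from the other, so no type gains by misreporting; swapping the two items breaks the property.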

Citations: 0
Deep learning-based demand response for short-term operation of renewable-based microgrids
Pub Date : 2024-08-18 DOI: 10.1007/s11227-024-06407-z
Sina Samadi Gharehveran, Kimia Shirini, Selma Cheshmeh Khavar, Seyyed Hadi Mousavi, Arya Abdolahi

This paper introduces a cutting-edge deep learning-based model aimed at enhancing the short-term performance of microgrids by simultaneously minimizing operational costs and emissions in the presence of distributed energy resources. The primary focus of this research is to harness the potential of demand response programs (DRPs), which actively engage a diverse range of consumers to mitigate the uncertainties associated with renewable energy sources (RES). To facilitate an effective demand response, the study presents a novel incentive-based payment strategy packaged as a pricing offer, which motivates consumers to participate actively in DRPs and thereby contributes to overall microgrid optimization. A comprehensive comparative analysis evaluates operational costs and emissions in scenarios with and without DRP integration. The problem is formulated as a challenging mixed-integer nonlinear program that demands a robust optimization technique; the multi-objective particle swarm optimization algorithm is employed to solve it efficiently. To showcase the efficacy of the proposed methodology, a real-world smart microgrid case study serves as a representative example. The results demonstrate that integrating deep learning-based demand response with the incentive-based pricing offer leads to significant improvements in microgrid performance, emphasizing its potential for sustainable and cost-effective energy management in modern power systems. Key numerical results confirm this: in the case study, the demand response strategy reduces costs by 12.5% and carbon emissions by 14.3% compared to baseline scenarios without DR integration, and the optimization model increases RES utilization by a notable 22.7%, significantly reducing reliance on fossil fuel-based generation.
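A minimal numerical sketch of the cost/emission trade-off (all figures are invented for illustration and unrelated to the paper's case study): demand response shifts load from an expensive, grid-heavy period into a renewable-rich one, cutting both objectives at once.

```python
def dispatch_cost_emissions(load, renewable, grid_price, grid_co2):
    """Grid import covers whatever renewable generation cannot;
    returns (cost, emissions) summed over the horizon."""
    imported = [max(l - r, 0.0) for l, r in zip(load, renewable)]
    cost = sum(g * p for g, p in zip(imported, grid_price))
    co2 = sum(g * grid_co2 for g in imported)
    return cost, co2

load      = [4.0, 6.0, 8.0]    # MWh per period (toy values)
renewable = [6.0, 4.0, 2.0]    # available RES output per period
price     = [20.0, 30.0, 50.0] # $/MWh for grid imports
base = dispatch_cost_emissions(load, renewable, price, grid_co2=0.5)

# DR shifts 2 MWh from the expensive peak to the renewable-rich period.
dr_load = [6.0, 6.0, 6.0]
with_dr = dispatch_cost_emissions(dr_load, renewable, price, grid_co2=0.5)
```

In this toy horizon the shift lowers cost from 360 to 260 and emissions from 4.0 to 3.0, mirroring (in miniature) the joint cost/emission reduction the paper reports.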

Citations: 0
Boosted regression for predicting CPU utilization in the cloud with periodicity
Pub Date : 2024-08-18 DOI: 10.1007/s11227-024-06451-9
Khanh Nguyen Quoc, Van Tong, Cuong Dao, Tuyen Ngoc Le, Duc Tran

Predicting CPU usage is crucial to cloud resource management, but precise CPU prediction is a tough challenge due to the variable and dynamic nature of CPU workloads. In this paper, we introduce TrAdaBoost.WLP, a novel regression transfer boosting method that employs Long Short-Term Memory (LSTM) networks for CPU consumption prediction. Concretely, a dedicated Periodicity-aware LSTM (PA-LSTM) model is developed to exploit periodically repeated patterns in the time series data when making predictions. To adjust for variations in CPU demand, multiple PA-LSTMs are trained and combined in TrAdaBoost.WLP using a boosting mechanism. TrAdaBoost.WLP and the benchmarks have been thoroughly evaluated on two datasets: 160 Microsoft Azure VMs and 8 Google cluster traces. The experimental results show that TrAdaBoost.WLP delivers promising performance, improving mean squared error by 32.4% and 59.3% over the standard probabilistic LSTM and ARIMA, respectively.
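A self-contained sketch of the boosting side (an AdaBoost.R2-style reweighting step; the weak learners in the paper are PA-LSTMs, which are omitted here, and the error values are illustrative): samples the current learner predicts badly gain weight for the next round.

```python
def boost_weights(weights, errors):
    """One AdaBoost.R2-style reweighting step for regression boosting:
    relative losses in [0, 1] down-weight well-fitted samples so the
    next weak learner focuses on the hard ones."""
    max_err = max(errors)
    losses = [e / max_err for e in errors]      # relative loss in [0, 1]
    avg_loss = sum(w * l for w, l in zip(weights, losses))
    beta = avg_loss / (1.0 - avg_loss)          # learner confidence (< 1 is good)
    raw = [w * beta ** (1.0 - l) for w, l in zip(weights, losses)]
    z = sum(raw)
    return [r / z for r in raw], beta

weights = [0.25, 0.25, 0.25, 0.25]
errors = [0.1, 0.2, 0.1, 0.8]   # absolute prediction errors per sample
new_w, beta = boost_weights(weights, errors)
```

The worst-predicted sample (error 0.8) ends up with the largest weight, so the next PA-LSTM in the ensemble concentrates on it; beta also serves as the learner's vote when the ensemble's predictions are combined.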

Citations: 0
Multistage strategy for ground point filtering on large-scale datasets
Pub Date : 2024-08-17 DOI: 10.1007/s11227-024-06406-0
Diego Teijeiro Paredes, Margarita Amor López, Sandra Buján, Rico Richter, Jürgen Döllner

Ground point filtering on national-level datasets is challenging because such datasets contain multiple types of landscapes. This limitation does not simply affect individual users; it is particularly relevant for the national institutions in charge of providing national-level Light Detection and Ranging (LiDAR) point clouds. Each type of landscape is typically best filtered by a different filtering algorithm or parameter set; therefore, to obtain the best classification quality, the LiDAR point cloud should be segmented by landscape before the filtering algorithms are run. Although manual segmentation and identification of landscapes can be very time-intensive, few studies have addressed this issue. In this work, we present a multistage approach that automates the identification of the landscape type using several metrics extracted from the LiDAR point cloud and matches the best filtering algorithm to each type of landscape. As an additional contribution, we present a parallel implementation for distributed-memory systems using Apache Spark that achieves a speedup of up to 34× on 12 compute nodes.
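A hedged sketch of the dispatch idea (the metrics, thresholds, landscape classes, and filter assignments below are illustrative placeholders, not the paper's actual choices): per-tile point-cloud metrics route each tile to the ground-filtering algorithm family that tends to handle its landscape best.

```python
def classify_landscape(slope_std, point_density, canopy_ratio):
    """Toy landscape classifier over per-tile point-cloud metrics
    (thresholds are invented for illustration)."""
    if slope_std > 15.0:
        return "mountainous"
    if canopy_ratio > 0.6:
        return "forest"
    if point_density > 10.0:
        return "urban"
    return "flat_rural"

# Hypothetical mapping from landscape type to a well-known filter family.
FILTER_FOR = {
    "mountainous": "cloth_simulation",
    "forest": "progressive_tin",
    "urban": "morphological",
    "flat_rural": "slope_based",
}

def pick_filter(tile_metrics):
    """Dispatch a tile (slope_std, point_density, canopy_ratio) to a filter."""
    return FILTER_FOR[classify_landscape(*tile_metrics)]
```

In a Spark implementation, each tile's metrics would be computed in a first distributed pass and this dispatch applied per partition in a second, so different regions of a national dataset run different filters in parallel.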

Citations: 0
Community detection in attributed social networks using deep learning
Pub Date : 2024-08-16 DOI: 10.1007/s11227-024-06436-8
Omid Rashnodi, Maryam Rastegarpour, Parham Moradi, Azadeh Zamanifar

Existing methods for detecting communities in attributed social networks often rely solely on network topology, which leads to suboptimal accuracy, inefficient use of the available data, and increased time for identifying groups. This paper introduces the Dual Embedding-based Graph Convolution Network (DEGCN) to address these challenges. The method applies graph embedding techniques within a new deep learning framework, combining node content with network topology to improve the accuracy and speed of community detection. We first compute the modularity and Markov matrices of the input graph. Each matrix is then processed by a graph embedding network with at least two layers to produce a condensed graph representation, and a multilayer perceptron classifies each node's community based on the resulting embeddings. We tested the method on three standard datasets: Cora, CiteSeer, and PubMed, and compared the outcomes with many basic and advanced approaches using four metrics: F1-score, adjusted Rand index (ARI), normalized mutual information (NMI), and accuracy. The findings demonstrate that DEGCN accurately captures community structure, achieves superior precision, and attains higher ARI, NMI, and F1 scores, significantly outperforming existing algorithms for identifying community structures in medium-scale networks.
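To make the two inputs concrete (a small pure-Python sketch; DEGCN's embedding layers themselves are not reproduced here): the modularity matrix B = A - dd^T/(2m) and the Markov (random-walk) matrix are both derived directly from the adjacency matrix.

```python
def modularity_matrix(adj):
    """Modularity matrix B[i][j] = A[i][j] - deg(i) * deg(j) / (2m),
    where 2m is twice the edge count of the undirected graph."""
    deg = [sum(row) for row in adj]
    two_m = sum(deg)
    n = len(adj)
    return [[adj[i][j] - deg[i] * deg[j] / two_m for j in range(n)]
            for i in range(n)]

def markov_matrix(adj):
    """Row-normalized adjacency: random-walk transition probabilities."""
    return [[a / sum(row) for a in row] for row in adj]

# A triangle graph: every node has degree 2, so 2m = 6.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
B = modularity_matrix(A)
M = markov_matrix(A)
```

Each row of B sums to zero and each row of M sums to one; feeding these two matrices through separate embedding networks gives the complementary structural views the paper combines before classification.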

A hexadecimal scrambling image encryption scheme based on improved four-dimensional chaotic system
Pub Date : 2024-08-16 DOI: 10.1007/s11227-024-06400-6
Shengtao Geng, Heng Zhang, Xuncai Zhang

This paper proposes an image encryption scheme based on an improved four-dimensional chaotic system. First, a 4D chaotic system is constructed by introducing new state variables based on the Chen chaotic system, and its chaotic behavior is verified by phase diagrams, bifurcation diagrams, Lyapunov exponents, NIST tests, etc. Second, the initial chaotic key is generated using the hash function SHA-512 and plain-image information. Parity scrambling is performed on the plain image using the chaotic sequence generated by the chaotic system. The image is then converted into a hexadecimal character matrix, divided into two planes according to the high and low bits of the characters, and scrambled by generating two position index matrices from chaotic sequences. The two planes are then restored to a hexadecimal character matrix, which is converted back into an image matrix. Finally, different combined-operation diffusion formulas are selected according to the chaotic sequence to perform diffusion and obtain the encrypted image. Based on simulation experiments and security evaluations, the scheme effectively encrypts gray images and shows strong security against various types of attacks.
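The nibble-plane scrambling idea — splitting each byte into its high and low hexadecimal digit and permuting the two planes with chaos-derived index vectors — can be sketched as follows. This is a structural illustration only: it substitutes a simple 1D logistic map for the paper's improved 4D chaotic system, and omits the parity scrambling and diffusion stages.

```python
import numpy as np

def logistic_sequence(x0: float, r: float, n: int) -> np.ndarray:
    # 1D logistic map, used here as a stand-in chaotic source.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def scramble_nibble_planes(img: np.ndarray, key: float = 0.3456):
    # Split each byte into its high and low hex digit (nibble),
    # permute the two planes independently, then recombine.
    flat = img.flatten()
    high, low = flat >> 4, flat & 0x0F
    perm_h = np.argsort(logistic_sequence(key, 3.99, flat.size))
    perm_l = np.argsort(logistic_sequence(key + 1e-3, 3.99, flat.size))
    enc = (high[perm_h] << 4) | low[perm_l]
    return enc.reshape(img.shape), (perm_h, perm_l)

def unscramble_nibble_planes(enc: np.ndarray, perms) -> np.ndarray:
    # Invert each permutation via argsort and recombine the planes.
    flat = enc.flatten()
    high, low = flat >> 4, flat & 0x0F
    inv_h, inv_l = np.argsort(perms[0]), np.argsort(perms[1])
    return ((high[inv_h] << 4) | low[inv_l]).reshape(enc.shape)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
enc, perms = scramble_nibble_planes(img)
dec = unscramble_nibble_planes(enc, perms)
```

The scheme is lossless by construction: applying the inverse permutations recovers the original image exactly.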

SiamMGT: robust RGBT tracking via graph attention and reliable modality weight learning
Pub Date : 2024-08-16 DOI: 10.1007/s11227-024-06443-9
Lizhi Geng, Dongming Zhou, Kerui Wang, Yisong Liu, Kaixiang Yan

In recent years, RGBT trackers based on the Siamese network have gained significant attention due to their balanced accuracy and efficiency. However, these trackers often rely on similarity matching between the features of a fixed-size target template and the search region, which can yield unsatisfactory tracking performance when the target undergoes dramatic changes in scale or shape, or when occlusion occurs. Additionally, while these trackers often employ feature-level fusion for the different modalities, they frequently overlook the benefits of decision-level fusion, which diminishes their flexibility and independence. In this paper, a novel Siamese tracker based on graph attention and reliable modality weighting is proposed for robust RGBT tracking. Specifically, a modality feature interaction learning network performs bidirectional learning of the local features of each modality while extracting their respective characteristics. Subsequently, a multimodality graph attention network matches the local features of the template and the search region, generating more accurate and robust similarity responses. Finally, a modality fusion prediction network performs decision-level adaptive fusion of the two modality responses, leveraging their complementary nature for prediction. Extensive experiments on three large-scale RGBT benchmarks demonstrate tracking performance superior to other state-of-the-art trackers. Code will be shared at https://github.com/genglizhi/SiamMGT.
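The template–search matching step can be illustrated with a generic scaled dot-product cross-attention over node features, where every search-region node aggregates information from all template nodes. This is an assumed simplification for illustration, not the actual architecture of the paper's multimodality graph attention network:

```python
import numpy as np

def cross_graph_attention(search_feats: np.ndarray,
                          template_feats: np.ndarray):
    # search_feats: (Ns, d) search-region node features
    # template_feats: (Nt, d) template node features
    d = search_feats.shape[1]
    # Scaled dot-product scores between every search/template node pair.
    scores = search_feats @ template_feats.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    # Each search node receives a convex combination of template features.
    return weights @ template_feats, weights

rng = np.random.default_rng(1)
template = rng.standard_normal((5, 16))   # Nt = 5 template nodes
search = rng.standard_normal((9, 16))     # Ns = 9 search-region nodes
out, w = cross_graph_attention(search, template)
```

The softmax rows sum to one, so the aggregated output for each search node stays within the convex hull of the template features — a common way to produce similarity responses robust to scale and shape changes.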
