
Computational Intelligence: Latest Publications

Cost-sensitive tree SHAP for explaining cost-sensitive tree-based models
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-09 | DOI: 10.1111/coin.12651
Marija Kopanja, Stefan Hačko, Sanja Brdar, Miloš Savić

Cost-sensitive ensemble learning, a combination of ensemble learning and cost-sensitive learning, enables the generation of cost-sensitive tree-based ensemble models using the cost-sensitive decision tree (CSDT) learning algorithm. In general, tree-based models offer a clear graphical representation that can explain a model's decision-making process. However, the depth of the tree and the number of base models in the ensemble can limit comprehension of the model's decision for each sample. CSDT models are widely used in finance (e.g., credit scoring and fraud detection) but lack effective explanation methods. We previously addressed this gap with the cost-sensitive tree Shapley Additive Explanation method (CSTreeSHAP), a cost-sensitive tree explanation method for the single-tree CSDT model. Here, we extend that methodology to cost-sensitive ensemble models, particularly cost-sensitive random forest models. The paper details the theoretical foundation and implementation of CSTreeSHAP for both single CSDT and ensemble models. The usefulness of the proposed method is demonstrated by providing explanations for single and ensemble CSDT models trained on well-known benchmark credit scoring datasets. Finally, we apply our methodology and analyze the stability of the explanations for those models compared with cost-insensitive tree-based models. Our analysis reveals statistically significant differences between SHAP values despite seemingly similar global feature importance plots, highlighting the value of our methodology as a comprehensive tool for explaining CSDT models.
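CSTreeSHAP itself is not reproduced here; as a rough illustration of the Shapley additive-attribution principle that all tree SHAP methods build on, the brute-force sketch below computes exact Shapley values for a toy stump-like scoring function (the `shapley_values` helper and `score` function are hypothetical names, not from the paper):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at point x against a baseline.
    Exponential in the number of features, so illustration only."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# toy stand-in for a tree score: a stump-like split plus a linear term
score = lambda z: (2.0 if z[0] > 0.5 else 0.0) + 0.5 * z[1]
phi = shapley_values(score, x=[1.0, 1.0], baseline=[0.0, 0.0])
# local accuracy: attributions sum to score(x) - score(baseline)
```

The "local accuracy" property checked at the end is what makes SHAP values comparable across cost-sensitive and cost-insensitive models, which is the comparison the abstract describes.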

Citations: 0
Utilizing passage-level relevance and kernel pooling for enhancing BERT-based document reranking
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-07 | DOI: 10.1111/coin.12656
Min Pan, Shuting Zhou, Teng Li, Yu Liu, Quanli Pei, Angela J. Huang, Jimmy X. Huang

The pre-trained language model (PLM) based on the Transformer encoder, namely BERT, has achieved state-of-the-art results in the field of Information Retrieval. Existing BERT-based ranking models divide documents into passages and aggregate passage-level relevance to rank the document list. However, these common score aggregation strategies cannot capture important semantic information such as document structure, and they have not been extensively studied. In this article, we propose a novel kernel-based score pooling system that captures document-level relevance by aggregating passage-level relevance. In particular, we propose and study several representative kernel pooling functions and several document ranking strategies based on passage-level relevance. Our proposed framework, KnBERT, naturally incorporates kernel functions at the passage level into the BERT-based re-ranking method, providing a promising avenue for building universal retrieve-then-rerank information retrieval systems. Experiments conducted on the two widely used TREC Robust04 and GOV2 test datasets show that KnBERT achieves significant improvements over other BERT-based ranking approaches in terms of MAP, P@20, and NDCG@20, with no additional, and in some cases less, computation.
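The paper's specific kernel pooling functions are not given in the abstract; a minimal sketch of the general idea, in the style of RBF kernel pooling from K-NRM-like rankers (the kernel centers `mus` and `kernel_pool` name are illustrative assumptions), soft-bins passage-level scores into a fixed-size document-level feature vector:

```python
import math

def kernel_pool(passage_scores, mus=(-0.9, -0.3, 0.3, 0.9), sigma=0.3):
    """Soft-count passage-level relevance scores into RBF kernel bins,
    then log-scale each bin, yielding one document-level feature vector."""
    feats = []
    for mu in mus:
        soft_count = sum(math.exp(-(s - mu) ** 2 / (2 * sigma ** 2))
                         for s in passage_scores)
        feats.append(math.log1p(soft_count))
    return feats

# three passages of one document, scored by a passage-level ranker
doc_feats = kernel_pool([0.85, 0.2, -0.4])
```

A learned linear layer over `doc_feats` would then produce the final document score; unlike max- or mean-pooling, the kernel bins preserve the *distribution* of passage relevance.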

Citations: 0
Low overhead vector codes with combination property and zigzag decoding for edge-aided computing in UAV network
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-06 | DOI: 10.1111/coin.12642
Mingjun Dai, Ronghao Huang, Jinjin Wang, Bingchun Li

Codes that simultaneously possess the combination property (CP) and support zigzag decoding (ZD), called CP-ZD codes, have broad applications in edge-aided distributed systems, including distributed storage, coded distributed computing (CDC), and CDC-structured distributed training. Existing CP-ZD code designs are based on scalar codes, where one node stores exactly one encoded packet; the drawback is that the induced overhead is high. To significantly reduce this overhead, vector CP-ZD codes are designed, where "vector" means that the number of encoded packets stored at one node is extended from one to multiple. More specifically, in the detailed code construction, cyclic shifts are proposed, and the shifts are carefully designed for the cases where each node stores two, three, and four packets, respectively. Comparisons show that the overhead is reduced significantly.
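The paper's vector construction is not spelled out in the abstract; the toy sketch below illustrates only the basic zigzag-decoding idea it builds on, for two source packets combined once unshifted and once with a one-symbol shift (the encoding convention and `zigzag_decode` helper are assumptions for illustration, not the paper's code):

```python
def zigzag_decode(y1, y2, k):
    """Recover source packets a, b (length k) from
      y1[i] = a[i] + b[i]      (unshifted sum, length k)
      y2    = a + (b shifted right by one symbol), length k + 1
    by peeling one symbol at a time in zigzag order."""
    a, b = [0] * k, [0] * k
    a[0] = y2[0]                 # no b symbol overlaps the first position of y2
    for i in range(k):
        b[i] = y1[i] - a[i]      # peel b[i] from the unshifted sum
        if i + 1 < k:
            a[i + 1] = y2[i + 1] - b[i]   # peel the next a symbol from the shifted sum
    return a, b

# encode two toy packets
a, b = [3, 1, 4], [1, 5, 9]
y1 = [a[i] + b[i] for i in range(3)]
y2 = [a[0]] + [a[i] + b[i - 1] for i in range(1, 3)] + [b[2]]
decoded = zigzag_decode(y1, y2, 3)
```

The one extra symbol in `y2` is the per-packet overhead of scalar ZD codes; the vector designs in the paper amortize such overhead across multiple packets stored at one node.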

Citations: 0
Detection of multi-class lung diseases based on customized neural network
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-23 | DOI: 10.1111/coin.12649
Azmat Ali, Yulin Wang, Xiaochuan Shi

In the medical image processing domain, deep learning methodologies achieve outstanding performance in disease classification using digital images such as X-rays, magnetic resonance imaging (MRI), and computerized tomography (CT). However, accurate diagnosis by medical personnel can be challenging in certain cases, owing to the complexity of interpretation, the unavailability of expert personnel, the difficulty of pixel-level analysis, and so on. Computer-aided diagnostic (CAD) systems with proper training have shown the potential to enhance diagnostic accuracy and efficiency. With the exponential growth of medical data, CAD systems can analyze and extract valuable information, assisting medical personnel during the disease diagnostic process. To overcome these challenges, this research introduces CX-RaysNet, a novel deep-learning framework designed for the automatic identification of various lung disease classes in digital chest X-ray images. The core novelty of the CX-RaysNet framework lies in the integration of both convolutional and group convolutional layers, along with the use of small filter sizes and dropout regularization; this design helps the model distinguish the minute features that reveal different lung diseases. Additionally, data augmentation techniques are applied to the training and testing datasets, which enhances the model's robustness and generalizability. The performance evaluation of CX-RaysNet reveals promising results, with the proposed model achieving a multi-class classification accuracy of 97.25%. Notably, this study represents the first attempt to optimize a model specifically for low-power embedded devices, aiming to improve the accuracy of disease detection while minimizing computational resources.
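The CX-RaysNet architecture itself is not given in the abstract; the sketch below only illustrates what a *group* convolution does, since that (together with small filters) is the novelty the abstract names. Each output filter sees only one partition of the input channels, cutting parameters and compute versus a full convolution (shapes and the `grouped_conv2d` helper are illustrative assumptions):

```python
import numpy as np

def grouped_conv2d(x, w, groups):
    """Minimal grouped 2D convolution (valid padding, stride 1).
    x: (C_in, H, W); w: (C_out, C_in // groups, kH, kW)."""
    c_in, H, W = x.shape
    c_out, cg, kH, kW = w.shape
    assert c_in % groups == 0 and c_out % groups == 0 and cg == c_in // groups
    out = np.zeros((c_out, H - kH + 1, W - kW + 1))
    for o in range(c_out):
        g = o // (c_out // groups)        # which input-channel group this filter sees
        xs = x[g * cg:(g + 1) * cg]
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(xs[:, i:i + kH, j:j + kW] * w[o])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))        # 4 input channels
w = rng.standard_normal((4, 2, 3, 3))     # 3x3 "small filters", 2 groups
y = grouped_conv2d(x, w, groups=2)        # shape (4, 6, 6)
```

With `groups=2`, each filter holds 2x3x3 weights instead of 4x3x3, the kind of parameter saving that matters for the low-power embedded deployment the abstract targets.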

Citations: 0
Contour wavelet diffusion: A fast and high-quality image generation model
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-23 | DOI: 10.1111/coin.12644
Yaoyao Ding, Xiaoxi Zhu, Yuntao Zou

Diffusion models can generate high-quality images and have attracted increasing attention. However, diffusion models adopt a progressive optimization process and often have long training and inference times, which limits their application in realistic scenarios. Recently, some latent-space diffusion models have partially accelerated training by operating on parameters in the feature space, but their additional network structures still require a large amount of unnecessary computation. We therefore propose the Contour Wavelet Diffusion method to accelerate training and inference. First, we introduce the contour wavelet transform to extract anisotropic low-frequency and high-frequency components from the input image, and achieve acceleration by processing these down-sampled components; meanwhile, thanks to the good reconstructive properties of wavelet transforms, the quality of the generated images is maintained. Second, we propose a batch-normalized stochastic attention module that enables the model to focus effectively on important high-frequency information, further improving the quality of image generation. Finally, we propose a balanced loss function to further improve the convergence speed of the model. Experimental results on several public datasets show that our method significantly accelerates the training and inference of the diffusion model while ensuring the quality of the generated images.
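The contour wavelet transform is more elaborate than can be sketched here; as a minimal stand-in, the one-level 1-D Haar transform below shows the two properties the abstract relies on: the signal splits into half-length low/high-frequency sub-bands (the down-sampling that gives the speed-up), and the split is exactly invertible (the "good reconstructive property" that preserves image quality):

```python
import numpy as np

def haar_split(x):
    """One-level 1-D Haar analysis: orthonormal low- and high-pass halves."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # local averages: low frequency
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # local differences: high frequency
    return lo, hi

def haar_merge(lo, hi):
    """Inverse transform: exact reconstruction from the two sub-bands."""
    x = np.empty(lo.size * 2)
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

x = np.arange(8.0)
lo, hi = haar_split(x)          # each sub-band has half the samples of x
recon = haar_merge(lo, hi)      # reconstructs x exactly
```

A diffusion model operating on `lo` and `hi` processes half-resolution data per band, which is where the training and inference acceleration comes from.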

Citations: 0
Novel mixture allocation models for topic learning
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-11 | DOI: 10.1111/coin.12641
Kamal Maanicshah, Manar Amayri, Nizar Bouguila

Latent Dirichlet allocation (LDA) is one of the major models used for topic modelling, and a number of models have been proposed extending the basic LDA model. There has also been interesting research on replacing the Dirichlet prior of LDA with other flexible distributions such as the generalized Dirichlet and Beta-Liouville distributions. Owing to the proven efficiency of generalized Dirichlet (GD) and Beta-Liouville (BL) priors in topic models, we use these versions of topic models in our paper. Furthermore, to enhance the support of the respective topics, we integrate mixture components, which gives rise to generalized Dirichlet mixture allocation and Beta-Liouville mixture allocation models, respectively. To improve the modelling capabilities, we use a variational inference method to estimate the parameters, and we additionally introduce an online variational approach to cater to applications involving streaming data. We evaluate our models on applications related to text classification, image categorization, and genome sequence classification, using a supervised approach in which the labels are treated as an observed variable within the model.
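The variational machinery of the paper's GD/BL models is beyond a short sketch; for orientation, the toy generator below only replays the *basic* LDA generative process that all these allocation models extend (the `sample_document` helper and the tiny topic table are illustrative assumptions):

```python
import random

def sample_document(alpha, topics, n_words, rng):
    """Generate one document from the basic LDA generative process:
    draw theta ~ Dirichlet(alpha); for each word, draw a topic z ~ theta
    and then a word w ~ topics[z]."""
    # Dirichlet sample via normalized Gamma draws
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    theta = [v / sum(g) for v in g]
    doc = []
    for _ in range(n_words):
        z = rng.choices(range(len(theta)), weights=theta)[0]
        w = rng.choices(range(len(topics[z])), weights=topics[z])[0]
        doc.append((z, w))
    return theta, doc

rng = random.Random(0)
topics = [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]   # two topics over a 3-word vocabulary
theta, doc = sample_document([0.5, 0.5], topics, n_words=20, rng=rng)
```

The GD and BL variants replace the `Dirichlet(alpha)` draw with more flexible priors, and the mixture allocation models of the paper further mix several such priors per topic.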

Citations: 0
Graph embedded low-light image enhancement transformer based on federated learning for Internet of Vehicle under tunnel environment
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-11 | DOI: 10.1111/coin.12648
Yuan Shu, Fuxi Zhu, Zhongqiu Zhang, Min Zhang, Jie Yang, Yi Wang, Jun Wang

Internet of Vehicles (IoV) autonomous driving technology based on deep learning has achieved great success. However, in a tunnel environment, computer-vision-based IoV may fail due to low illumination. To handle this issue, this paper deploys an image enhancement module at the IoV terminal to alleviate the influence of low illumination; the enhanced images can then be submitted through the IoT to a cloud server for further processing. The core image enhancement algorithm is implemented by a dynamic graph embedded Transformer network based on federated learning, which can fully utilize the data resources of multiple devices in the IoV and improve generalization. Extensive comparative experiments are conducted on a publicly available dataset and on a self-built dataset collected in a tunnel environment. Compared with other deep models, all results confirm that the proposed graph embedded Transformer model can effectively enhance the detail information of low-light images, which can greatly improve downstream tasks in the IoV.
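The graph embedded Transformer itself is not sketched here; the snippet below only illustrates the federated aggregation step that lets multiple vehicles contribute to one model without sharing raw tunnel images, in the style of FedAvg (the flat parameter lists and `fed_avg` helper are simplifying assumptions):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg-style aggregation: combine client model parameters as a
    data-size-weighted mean, so clients never ship raw images to the server."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# two vehicles with different amounts of local tunnel data
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# the vehicle with 3x more data pulls the global model toward its parameters
```

Each communication round repeats this: the server broadcasts `global_w`, clients fine-tune locally, and the server re-aggregates.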

Citations: 0
Privacy preserving support vector machine based on federated learning for distributed IoT-enabled data analysis
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-03 | DOI: 10.1111/coin.12636
Yu-Chi Chen, Song-Yi Hsu, Xin Xie, Saru Kumari, Sachin Kumar, Joel Rodrigues, Bander A. Alzahrani

In a smart city, IoT devices are required to support the monitoring of normal operations such as traffic, infrastructure, and crowds of people. IoT-enabled systems offered by many IoT devices are expected to achieve sustainable development from the information collected by the smart city. Indeed, artificial intelligence (AI) and machine learning (ML) are well-known methods for achieving this goal, provided the system framework and problem statement are well prepared. However, to make the best use of AI/ML, the training data should be as global as possible, which prevents the model from working only on local data. Such data can be obtained from different sources, but this raises a privacy issue when at least one party collects all the data in plaintext. The main focus of this article is on support vector machines (SVM). We aim to present a solution to this privacy issue and provide confidentiality to protect the data. We build a privacy-preserving scheme for SVM (SecretSVM) based on the framework of federated learning and distributed consensus, in which data providers self-organize and obtain the training parameters of the SVM without revealing their own models. Finally, experiments with real data analysis show the feasibility of potential applications in smart cities. This article is the extended version of Hsu et al. (Proceedings of the 15th ACM Asia Conference on Computer and Communications Security. ACM; 2020:904-906).
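SecretSVM's protocol is not detailed in the abstract; as a toy illustration of the distributed-consensus ingredient it names, the gossip-style iteration below lets nodes agree on the average of their local values while each node talks only to its neighbors, so no single party ever collects all raw values (the ring topology and `consensus_average` helper are illustrative assumptions):

```python
def consensus_average(values, neighbors, steps=50, eps=0.2):
    """Decentralized average consensus: each node repeatedly moves toward
    its neighbors' values, converging to the global mean without any node
    gathering all data in one place."""
    v = list(values)
    for _ in range(steps):
        v = [vi + eps * sum(v[j] - vi for j in neighbors[i])
             for i, vi in enumerate(v)]
    return v

# 4 nodes in a ring, each holding one local SVM parameter estimate
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
v = consensus_average([1.0, 2.0, 3.0, 6.0], ring)
# every node converges to the global mean 3.0
```

In a scheme like SecretSVM, such consensus rounds would run over SVM training parameters rather than raw data, combined with cryptographic protections for the exchanged messages.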

Citations: 0
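The federated training loop the SecretSVM abstract describes — data providers jointly fitting an SVM while only model updates leave each party — can be illustrated with a plain gradient-averaging round for a linear hinge-loss SVM. This is a minimal sketch, not the SecretSVM protocol itself: the paper's distributed-consensus and masking steps, which are what actually provide confidentiality, are omitted, and the function names here (`local_subgradient`, `federated_svm`) are illustrative.

```python
import numpy as np

def local_subgradient(w, X, y, C=1.0):
    """One party's subgradient of the L2-regularized hinge loss on its private shard."""
    margins = y * (X @ w)
    active = margins < 1                      # samples violating the margin
    grad = w - C * (y[active, None] * X[active]).sum(axis=0) / len(X)
    return grad

def federated_svm(shards, dim, rounds=200, lr=0.1):
    """Each round, every party computes a gradient locally and only the
    gradients are averaged -- raw data never leaves a party. (SecretSVM's
    consensus and masking steps are omitted from this sketch.)"""
    w = np.zeros(dim)
    for _ in range(rounds):
        grads = [local_subgradient(w, X, y) for X, y in shards]
        w -= lr * np.mean(grads, axis=0)
    return w

# Toy example: two parties, each holding one linearly separable cluster.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + 2.0
X[:100] -= 4.0                                # negative cluster centered at (-2, -2)
y = np.array([-1.0] * 100 + [1.0] * 100)
shards = [(X[:100], y[:100]), (X[100:], y[100:])]
w = federated_svm(shards, dim=2)
acc = float(np.mean(np.sign(X @ w) == y))
```

Averaging gradients rather than pooling data is the key structural point: the coordinator sees only aggregate updates, which is the starting assumption SecretSVM then hardens with consensus and masking.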
Novel algorithm machine translation for language translation tool
IF 2.8 | CAS Quartile 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-03 | DOI: 10.1111/coin.12643
K. Jayasakthi Velmurugan, G. Sumathy, K. V. Pradeep

Fuzzy matching techniques are the methods presently used for translating words; neural machine translation and statistical machine translation are the methods used in MT. In a machine translator tool, the translation strategy needs to handle large amounts of data, so the performance in retrieving the correct matching output can be affected. To improve the matching score of MT, advanced techniques can be introduced by modifying the existing fuzzy-based translator and neural machine translator. The conventional process of modifying architectures and encoding schemes is tedious. Similarly, preprocessing the datasets consumes additional time and memory. In this article, a new spider-web-based search-enhanced translation is presented for use with the neural machine translator. The proposed scheme enables deep searching of the available dataset to detect the accurate matching result. In addition, translation quality is improved by an optimal selection scheme for using sentence matches in source augmentation. The matches retrieved under various matching scores are fed to an optimization algorithm, and source augmentation with the optimally retrieved matches increases translation quality. Further, selecting the optimal match combination reduces the time requirement, since it is not necessary to test all retrieved matches when finding the target sentence. Translation performance is validated by measuring translation quality with BLEU and METEOR scores, which reach about 92% and 86%, respectively, for the TA-EN language pair across different configurations. The results are evaluated and compared with other available NMT methods to validate the work.

Velmurugan KJ, Sumathy G, Pradeep KV. Novel algorithm machine translation for language translation tool. Computational Intelligence. 2024;40(2). DOI: 10.1111/coin.12643. Published 2024-04-03.
Citations: 0
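The BLEU score this abstract uses for validation is the geometric mean of clipped n-gram precisions multiplied by a brevity penalty. A minimal single-reference, sentence-level sketch — without the smoothing that standard toolkits apply — might look like:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference: geometric mean of
    clipped n-gram precisions times a brevity penalty, no smoothing."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        # Clip each n-gram's count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if overlap == 0:
            return 0.0           # one zero precision zeroes the geometric mean
        log_prec += math.log(overlap / total)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec / max_n)

perfect = bleu("the cat sat on the mat", "the cat sat on the mat")
partial = bleu("the cat sat on the mat", "the cat sat on a mat")
```

An identical candidate scores 1.0, while a single substituted word lowers every n-gram precision; the hard zero on any missing n-gram order is why production implementations add smoothing for short sentences.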
Robust fine-grained visual recognition with images based on internet of things
IF 2.8 | CAS Quartile 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-19 | DOI: 10.1111/coin.12638
Zhenhuang Cai, Shuai Yan, Dan Huang

Labeling fine-grained objects manually is extremely challenging, as it is not only label-intensive but also requires professional knowledge. Accordingly, robust learning methods for fine-grained recognition with web images collected from the Internet of Things have drawn significant attention. However, training deep fine-grained models directly on untrusted web images is confronted by two primary obstacles: (1) label noise in web images and (2) domain variance between the online sources and test datasets. To this end, in this study, we mainly focus on addressing these two pivotal problems associated with untrusted web images. Specifically, we introduce an end-to-end network that collaboratively addresses these concerns while separating trusted data from untrusted web images. To validate the efficacy of the proposed model, untrusted web images are first collected by utilizing the text category labels found within fine-grained datasets. Subsequently, we employ the designed deep model to eliminate label noise and ameliorate domain mismatch, and the chosen trusted web data are utilized for model training. Comprehensive experiments and ablation studies validate that our method consistently surpasses other state-of-the-art approaches for fine-grained recognition tasks in real-world scenarios, demonstrating a significant improvement margin (2.51% on CUB200-2011 and 2.92% on Stanford Dogs). The source code and models can be accessed at: https://github.com/Codeczh/FGVC-IoT.

Cai Z, Yan S, Huang D. Robust fine-grained visual recognition with images based on internet of things. Computational Intelligence. 2024;40(2). DOI: 10.1111/coin.12638. Published 2024-03-19.
Citations: 0
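The core step in this abstract — separating trusted from untrusted web data before training — is commonly realized with a small-loss criterion: samples the current model fits with low loss are treated as probably clean, and high-loss samples as likely mislabeled. The paper's end-to-end network is more elaborate than this; the following is a hypothetical sketch of the selection idea only, with `select_trusted` and the fixed noise rate as illustrative assumptions.

```python
import numpy as np

def select_trusted(losses, noise_rate=0.3):
    """Small-loss criterion: keep the fraction of samples the model currently
    fits best, treating the rest as likely label noise. A common heuristic for
    filtering noisy web data, not the paper's exact network."""
    keep = int(len(losses) * (1.0 - noise_rate))
    order = np.argsort(losses)          # indices sorted by ascending per-sample loss
    return np.sort(order[:keep])        # indices of presumed-clean samples

# Toy example: clean samples incur low loss, mislabeled ones high loss.
losses = np.array([0.1, 2.3, 0.2, 0.15, 1.9, 0.05, 0.3, 2.8, 0.12, 0.25])
trusted = select_trusted(losses, noise_rate=0.3)
```

In practice the retained set is recomputed each epoch as the model improves, so the trusted/untrusted split sharpens over training rather than being fixed once.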
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1