
Multimedia Systems: Latest Publications

Low-parameter GAN inversion framework based on hypernetwork
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-17 · DOI: 10.1007/s00530-024-01379-9
Hongyang Wang, Ting Wang, Dong Xiang, Wenjie Yang, Jia Li

In response to the significant parameter overhead incurred by current Generative Adversarial Network (GAN) inversion methods when balancing high fidelity and editability, we propose a novel lightweight inversion framework based on an optimized generator. We aim to balance fidelity and editability within the StyleGAN latent space. To achieve this, the study begins by mapping raw data to the $W^{+}$ latent space, enhancing the quality of the resulting inverted images. Following this mapping step, we introduce a carefully designed lightweight hypernetwork. This hypernetwork selectively modifies primary detail features, leading to a notable reduction in the parameter count needed for model training. By learning parameter variations, the precision of subsequent image editing is improved. Lastly, our approach integrates a multi-channel parallel optimization computing module into the above structure to decrease the time needed for model image processing. Extensive experiments were conducted in the facial and automotive imagery domains to validate our lightweight inversion framework. Results demonstrate that our method achieves equivalent or superior inversion and editing quality while using fewer parameters.
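As a rough illustration of the core idea, the PyTorch sketch below shows a lightweight hypernetwork that predicts small per-channel offsets for a handful of generator layers, so only the hypernetwork's few parameters are trained; the layer count, feature width, and tanh-scaled offsets are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class LightweightHypernetwork(nn.Module):
    def __init__(self, feat_dim: int = 512, num_target_layers: int = 4):
        super().__init__()
        # Shared encoder over an inversion residual (e.g., the feature
        # difference between the input image and its current reconstruction).
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 128), nn.LeakyReLU(0.2),
        )
        # One small head per modulated generator layer, predicting
        # a per-channel weight offset for that layer.
        self.heads = nn.ModuleList(
            nn.Linear(128, feat_dim) for _ in range(num_target_layers)
        )

    def forward(self, residual_feat: torch.Tensor) -> list[torch.Tensor]:
        h = self.encoder(residual_feat)
        # Offsets stay small and start near zero, so the pretrained generator
        # is almost unchanged at initialization: w' = w * (1 + delta).
        return [0.1 * torch.tanh(head(h)) for head in self.heads]

hyper = LightweightHypernetwork()
deltas = hyper(torch.randn(2, 512))      # batch of 2 inversion residuals
print([d.shape for d in deltas])         # four [2, 512] per-layer offsets
```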

Citations: 0
SenseMLP: a parallel MLP architecture for sensor-based human activity recognition
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-17 · DOI: 10.1007/s00530-024-01384-y
Weilin Li, Jiaming Guo, Hong Wu

Human activity recognition (HAR) with wearable inertial sensors is a burgeoning field, propelled by advances in sensor technology. Deep learning methods for HAR have notably enhanced recognition accuracy in recent years. Nonetheless, the complexity of previous models often impedes their use in real-life scenarios, particularly in online applications. Addressing this gap, we introduce SenseMLP, a novel approach employing a multi-layer perceptron (MLP) neural network architecture. SenseMLP features three parallel MLP branches that independently process and then integrate features across the time, channel, and frequency dimensions. This structure not only simplifies the model but also significantly reduces the number of required parameters compared to previous deep learning HAR frameworks. We conducted comprehensive evaluations of SenseMLP against benchmark HAR datasets, including PAMAP2, OPPORTUNITY, USC-HAD, and SKODA. Our findings demonstrate that SenseMLP not only achieves state-of-the-art accuracy but also requires fewer parameters and fewer floating-point operations. For further research and application in the field, the source code of SenseMLP is available at https://github.com/forfrees/SenseMLP.
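A minimal sketch of the three-branch idea, assuming a raw input window of shape (batch, time, channels) and a frequency view taken as the rFFT magnitude along time; the hidden sizes, mean-pooling, and concatenation head are guesses for illustration, not SenseMLP's published layout.

```python
import torch
import torch.nn as nn

class ParallelMLPBranches(nn.Module):
    def __init__(self, t: int = 128, c: int = 9, hidden: int = 64, n_classes: int = 12):
        super().__init__()
        f = t // 2 + 1  # rFFT bins along the time axis
        self.time_mlp = nn.Sequential(nn.Linear(t, hidden), nn.GELU(), nn.Linear(hidden, hidden))
        self.chan_mlp = nn.Sequential(nn.Linear(c, hidden), nn.GELU(), nn.Linear(hidden, hidden))
        self.freq_mlp = nn.Sequential(nn.Linear(f, hidden), nn.GELU(), nn.Linear(hidden, hidden))
        self.head = nn.Linear(3 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) raw inertial sensor window
        t_feat = self.time_mlp(x.transpose(1, 2)).mean(dim=1)     # mix along time
        c_feat = self.chan_mlp(x).mean(dim=1)                     # mix along channels
        spec = torch.fft.rfft(x, dim=1).abs()                     # (batch, f, channels)
        f_feat = self.freq_mlp(spec.transpose(1, 2)).mean(dim=1)  # mix along frequency
        return self.head(torch.cat([t_feat, c_feat, f_feat], dim=1))

model = ParallelMLPBranches()
logits = model(torch.randn(4, 128, 9))  # 4 windows of 128 samples x 9 sensor axes
print(logits.shape)                     # torch.Size([4, 12])
```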

Citations: 0
LMFE-RDD: a road damage detector with a lightweight multi-feature extraction network
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-14 · DOI: 10.1007/s00530-024-01367-z
Qihan He, Zhongxu Li, Wenyuan Yang

Road damage detection, which uses computer vision and deep learning to automatically identify all kinds of road damage, is an effective application of object detection: it can significantly improve the efficiency of road maintenance planning and repair work and help ensure road safety. However, due to the complexity of target recognition, existing road damage detection models usually carry a large number of parameters and a heavy computational load, resulting in slow inference, which limits their practical deployment on equipment with limited computing resources. In this study, we propose a road damage detector named LMFE-RDD that balances speed and accuracy, constructing a Lightweight Multi-Feature Extraction Network (LMFE-Net) as the backbone and an Efficient Semantic Fusion Network (ESF-Net) for multi-scale feature fusion. First, as the backbone feature extraction network, LMFE-Net takes road damage images as input and produces three feature maps at different scales. Second, ESF-Net fuses these three feature maps and outputs three fused features. Finally, the detection head performs target identification and localization to obtain the final result. In addition, we use WDB loss, a multi-task loss function with a non-monotonic dynamic focusing mechanism, to pay more attention to bounding-box regression losses. The experimental results show that the proposed LMFE-RDD model has competitive accuracy while ensuring speed. On the Multi-Perspective Road Damage Dataset, combining the data from all perspectives, LMFE-RDD achieves a detection speed of 51.0 FPS and 64.2% mAP@0.5 with only 13.5 M parameters.
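ESF-Net's internals are not specified in the abstract, so the sketch below substitutes a plain FPN-style top-down fusion to illustrate only the stated data flow: three backbone scales in, three fused features out. Channel widths and spatial sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleThreeScaleFusion(nn.Module):
    """FPN-style stand-in for a three-scale fusion network such as ESF-Net;
    the real ESF-Net may differ substantially, this shows only the data flow."""
    def __init__(self, chans=(128, 256, 512), out_ch: int = 128):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in chans)
        self.smooth = nn.ModuleList(nn.Conv2d(out_ch, out_ch, 3, padding=1) for _ in chans)

    def forward(self, c3, c4, c5):
        p5 = self.lateral[2](c5)
        p4 = self.lateral[1](c4) + F.interpolate(p5, scale_factor=2, mode="nearest")
        p3 = self.lateral[0](c3) + F.interpolate(p4, scale_factor=2, mode="nearest")
        # Three fused maps, one per input scale, ready for a detection head.
        return [s(p) for s, p in zip(self.smooth, (p3, p4, p5))]

fuse = SimpleThreeScaleFusion()
feats = fuse(torch.randn(1, 128, 80, 80),
             torch.randn(1, 256, 40, 40),
             torch.randn(1, 512, 20, 20))
print([f.shape for f in feats])
```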

Citations: 0
MLTU: mixup long-tail unsupervised zero-shot image classification on vision-language models
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-05 · DOI: 10.1007/s00530-024-01373-1
Yunpeng Jia, Xiufen Ye, Xinkui Mei, Yusong Liu, Shuxiang Guo

Vision-language models (VLMs), such as Contrastive Language-Image Pretraining (CLIP), have demonstrated powerful capabilities in image classification under zero-shot settings. However, current zero-shot learning (ZSL) relies on manually tagged samples of known classes obtained through supervised learning, which wastes labeling effort and restricts real-world applications to foreseeable classes. To address these challenges, we propose the mixup long-tail unsupervised (MLTU) approach for open-world ZSL problems. The proposed approach employs a novel long-tail mixup loss that integrates class-based re-weighting with the given mixup factor for each mixed visual embedding. To mitigate adverse effects over time, we adopt a noisy-learning strategy to filter out samples that generated incorrect labels. We reproduce the unsupervised experiments of existing state-of-the-art long-tail and noisy-learning approaches. Experimental results demonstrate that MLTU achieves significant classification improvements over these proven existing approaches on public datasets. Moreover, it serves as a plug-and-play solution for amending previous assignments and enhancing unsupervised performance. MLTU enables the automatic classification and correction of incorrect predictions caused by the projection bias of CLIP.
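A hedged sketch of what a long-tail mixup loss of this shape can look like: each of the two mixed labels contributes its cross-entropy, scaled by both the mixup factor and a per-class re-weight (here inverse class frequency, an assumption). MLTU's exact weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def longtail_mixup_loss(logits, y_a, y_b, lam, class_weights):
    """Mixup loss with class-based re-weighting: rare (tail) classes receive
    larger weights, and the mixup factor `lam` splits credit between the two
    labels mixed into each sample."""
    ce_a = F.cross_entropy(logits, y_a, reduction="none")
    ce_b = F.cross_entropy(logits, y_b, reduction="none")
    return (lam * class_weights[y_a] * ce_a
            + (1.0 - lam) * class_weights[y_b] * ce_b).mean()

# Example: 10 classes whose weights grow for rarer (tail) classes.
counts = torch.tensor([500, 400, 300, 200, 100, 50, 40, 30, 20, 10.0])
weights = (1.0 / counts) / (1.0 / counts).mean()
logits = torch.randn(8, 10)
y_a, y_b = torch.randint(0, 10, (8,)), torch.randint(0, 10, (8,))
print(longtail_mixup_loss(logits, y_a, y_b, lam=0.7, class_weights=weights))
```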

Citations: 0
A visually meaningful secure image encryption algorithm based on conservative hyperchaotic system and optimized compressed sensing
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-04 · DOI: 10.1007/s00530-024-01370-4
Xiaojun Tong, Xilin Liu, Tao Pan, Miao Zhang, Zhu Wang

Traditional schemes for encrypting and transmitting images can be arbitrarily corrupted by attackers, making it difficult for algorithms with poor robustness to recover the original image. To address this, this paper proposes a new visually meaningful image encryption algorithm that embeds the compressed and encrypted image into a carrier image to achieve visual security, thereby avoiding destruction and attacks. First, a new conservative hyperchaotic system without attractors is constructed that can resist reconstruction attacks. Second, a two-dimensional (2D) compressed sensing technique is adopted: the pseudo-random sequences of the proposed chaotic system generate the measurement matrix for compressed sensing, and this matrix is optimized to improve the visual quality of image reconstruction. Finally, by combining discrete wavelet transform (DWT) and singular value decomposition (SVD) methods, the encrypted image is embedded into the carrier image, achieving image compression, encryption, and hiding. Experimental results and comparative analysis demonstrate that the algorithm offers high security, good image reconstruction quality, and strong imperceptibility after embedding. Under limited bandwidth conditions, the algorithm achieves excellent visual security.
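To make the hiding step concrete, here is a toy version of the DWT+SVD embedding alone (compression, encryption, and the chaotic measurement matrix are omitted): the payload perturbs the singular values of the carrier's LL subband. The wavelet choice, the strength alpha, and the payload layout are assumptions, not the paper's parameters.

```python
import numpy as np
import pywt  # PyWavelets

def embed_dwt_svd(carrier: np.ndarray, secret: np.ndarray, alpha: float = 0.05):
    """Hide a payload in the singular values of the carrier's LL subband."""
    LL, (LH, HL, HH) = pywt.dwt2(carrier, "haar")
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    S_marked = S + alpha * secret[: S.size]       # perturb singular values
    LL_marked = (U * S_marked) @ Vt               # rebuild the marked subband
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

carrier = np.random.rand(256, 256)
secret = np.random.rand(128)                      # flattened payload slice
stego = embed_dwt_svd(carrier, secret)
print(np.abs(stego - carrier).max())              # small, visually imperceptible change
```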

Citations: 0
Social bot detection on Twitter: robustness evaluation and improvement
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-04 · DOI: 10.1007/s00530-024-01364-2
Anan Liu, Yanwei Xie, Lanjun Wang, Guoqing Jin, Junbo Guo, Jun Li

Online social networks are easily exploited by social bots. Although current models for detecting social bots show promising results, they mainly rely on Graph Neural Networks (GNNs), which have proven vulnerabilities in robustness, and these detection models likely inherit similar weaknesses. It is therefore crucial to evaluate and improve their robustness. This paper proposes a robustness evaluation method, the Attribute Random Iteration-Fast Gradient Sign Method (ARI-FGSM), and uses simplified adversarial training to improve the robustness of social bot detection. Specifically, this study evaluates the robustness of five bot-detection models on two datasets under both black-box and white-box scenarios. The white-box experiments achieve a minimum attack success rate of 86.23%, while the black-box experiments achieve a minimum attack success rate of 45.86%. This shows that social bot detection models are vulnerable to adversarial attacks. Moreover, after applying our robustness improvement method, the robustness of the detection model increased by up to 86.98%.
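ARI-FGSM itself cannot be reproduced from the abstract, but its FGSM base is standard; the sketch below shows the plain single-step attack such a robustness evaluation would start from, with a stand-in linear detector over account-attribute features (the attribute random iteration part is not reproduced here).

```python
import torch

def fgsm_attack(model, x, y, epsilon: float = 0.05):
    """Plain FGSM: perturb inputs along the sign of the loss gradient to
    probe a differentiable detector's robustness."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Stand-in detector over 20 hypothetical account attributes, 2 classes.
model = torch.nn.Sequential(torch.nn.Linear(20, 2))
x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # every feature moved by exactly epsilon
```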

Citations: 0
A irregular text detection via dilated recombination and efficient reorganization on natural scene
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-02 · DOI: 10.1007/s00530-024-01360-6
Liwen Huang, Wenyuan Yang

In recent years, scene text detection has gained broader prospects through growing application opportunities. Nevertheless, balancing detection capability against suitable real-time performance remains an essential consideration in irregular text detection. With this trade-off in mind, we propose an efficient scene text detector, named DENet, that unites a Dilated Recombined Unit (DRU) and an Efficient Reorganized Unit (ERU). First, input feature information is fed into a DR-VanillaNet backbone. The dilated recombined unit is inserted into every block of DR-VanillaNet to strengthen connections between distant pixels. Next, an FPN with the efficient reorganized unit exploits feature redundancy and partially permutes channels. Together, DRU and ERU improve precision with only a limited loss of speed. Moreover, a progressive scale expansion is applied, which preserves the ability to separate adjacent text instances. Multiple experiments on the CTW1500 and Total-Text benchmark datasets show that the designed model improves precision with a limited drop in speed. Specifically, precision on these two datasets reaches 84.29% and 85.30%, with 8.6 and 10.9 FPS, respectively.
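The DRU's internals are not given in the abstract, so the following is an illustrative guess at a dilated-recombination block: parallel 3x3 convolutions with growing dilation reach increasingly distant pixels, and a 1x1 convolution recombines the branches into a residual update. Channel count and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class DilatedRecombinedUnit(nn.Module):
    """Illustrative dilated-recombination block: parallel dilated convolutions
    widen the receptive field toward distant pixels, then a 1x1 convolution
    recombines the branches; details are assumptions, not DENet's design."""
    def __init__(self, ch: int = 64, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations
        )
        self.recombine = nn.Conv2d(ch * len(dilations), ch, 1)

    def forward(self, x):
        # Residual update keeps the unit easy to drop into an existing block.
        return x + self.recombine(torch.cat([b(x) for b in self.branches], dim=1))

dru = DilatedRecombinedUnit()
print(dru(torch.randn(1, 64, 64, 64)).shape)  # same shape as the input
```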

Citations: 0
A novel spatial and spectral transformer network for hyperspectral image super-resolution
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-01 · DOI: 10.1007/s00530-024-01363-3
Huapeng Wu, Hui Xu, Tianming Zhan

Recently, transformer networks for hyperspectral image super-resolution have achieved significant performance gains over most convolutional neural networks. However, how to efficiently design a lightweight transformer structure that extracts long-range spatial and spectral information from hyperspectral images remains an open problem. This paper proposes a novel spatial and spectral transformer network (SSTN) for hyperspectral image super-resolution. Specifically, the proposed transformer framework mainly consists of multiple consecutive, alternating global attention layers and regional attention layers. In the global attention layer, a spatial and spectral self-attention module with reduced complexity is introduced to learn spatial and spectral global interactions, which enhances the representation ability of the network. In addition, the proposed regional attention layer extracts regional feature information using window self-attention with a zero-padding strategy. This alternating architecture can adaptively learn regional and global feature information of hyperspectral images. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art hyperspectral image super-resolution methods.
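As one plausible reading of the spectral half of such a global attention layer, the sketch below treats each band as a token summarized by a pooled spatial descriptor, so attention mixes information across bands at a cost independent of full spatial resolution. All dimensions and the pooling choice are assumptions, not SSTN's actual module.

```python
import torch
import torch.nn as nn

class SpectralSelfAttention(nn.Module):
    """Minimal stand-in for spectral self-attention: each band becomes one
    token whose descriptor is a learned projection of its downsampled spatial
    content, so attention runs across the band dimension."""
    def __init__(self, spatial: int = 16, dim: int = 64, heads: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(spatial)       # cheap spatial summary
        self.proj = nn.Linear(spatial * spatial, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cube: torch.Tensor) -> torch.Tensor:
        # cube: (batch, bands, height, width) hyperspectral feature volume
        tokens = self.proj(self.pool(cube).flatten(2))  # (batch, bands, dim)
        out, _ = self.attn(tokens, tokens, tokens)      # band-to-band mixing
        return out                                      # per-band global features

ssa = SpectralSelfAttention()
print(ssa(torch.randn(2, 31, 64, 64)).shape)            # torch.Size([2, 31, 64])
```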

Citations: 0
SD-Pose: facilitating space-decoupled human pose estimation via adaptive pose perception guidance
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-31 · DOI: 10.1007/s00530-024-01368-y
Zhi Liu, Shengzhao Hao, Yunhua Lu, Lei Liu, Cong Chen, Ruohuang Wang

Human pose estimation is a popular and challenging task in computer vision. Currently, mainstream methods for pose estimation are based on Gaussian heatmaps or coordinate regression. However, the intensive computational overhead and quantization error introduced by heatmaps place many limitations on their application, while coordinate regression struggles to learn mappings for crossed and misaligned keypoints, resulting in poor robustness. Recently, pose estimation based on coordinate classification has encoded global spatial information into one-dimensional representations along the X and Y directions, turning keypoint localization into a classification problem; this simplifies the model while effectively improving pose estimation accuracy. Motivated by this, we propose SD-Pose, a spatially decoupled human pose estimation model guided by adaptive pose perception. Specifically, the model first employs a Pyramid Adaptive Feature Extractor (PAFE) to obtain multi-scale feature maps and generate adaptive keypoint weights that help the model extract distinctive features for keypoints at different locations. Then, the Spatial Decoupling and Coordinated Analysis Module (SDCAM) simplifies the localization problem while considering both global and fine-grained features. Experimental results on the MPII Human Pose and COCO keypoint detection datasets validate the effectiveness of the SD-Pose model and show satisfactory performance in recovering detailed information for keypoints such as the elbow, hip, and ankle.
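The coordinate-classification idea the abstract builds on (as in SimCC) can be sketched as below: each keypoint's x and y are predicted as independent 1-D classifications over discretized horizontal and vertical bins. Bin counts, feature sizes, and the projection layer are illustrative assumptions, not SD-Pose's actual head.

```python
import torch
import torch.nn as nn

class CoordClassificationHead(nn.Module):
    """SimCC-style head: per-keypoint logits over x-bins and y-bins, turning
    localization into two 1-D classification problems per keypoint."""
    def __init__(self, feat_dim: int = 256, num_kpts: int = 17,
                 bins_x: int = 192, bins_y: int = 256):
        super().__init__()
        self.num_kpts = num_kpts
        self.kpt_proj = nn.Linear(feat_dim, num_kpts * 128)  # per-keypoint features
        self.to_x = nn.Linear(128, bins_x)
        self.to_y = nn.Linear(128, bins_y)

    def forward(self, feat: torch.Tensor):
        h = self.kpt_proj(feat).view(-1, self.num_kpts, 128)
        return self.to_x(h), self.to_y(h)  # logits over x-bins and y-bins

head = CoordClassificationHead()
px, py = head(torch.randn(4, 256))
x = px.argmax(-1).float() / 192            # decoded normalized x per keypoint
print(px.shape, py.shape, x.shape)         # [4,17,192] [4,17,256] [4,17]
```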

Citations: 0
Multiscale geometric window transformer for orthodontic teeth point cloud registration
IF 3.9 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-05-31 · DOI: 10.1007/s00530-024-01369-x
Hao Wang, Yan Tian, Yongchuan Xu, Jiahui Xu, Tao Yang, Yan Lu, Hong Chen

Digital orthodontic treatment monitoring has been gaining increasing attention in the past decade. However, current methods based on deep learning still face difficult challenges. The transformer, with its excellent ability to model long-range dependencies, can be applied to the task of tooth point cloud registration. Nonetheless, most transformer-based point cloud registration networks suffer from two problems. First, they lack embeddings of credible geometric information, so the learned features are not geometrically discriminative and blur the boundary between inliers and outliers. Second, the attention mechanism lacks continuous downsampling during geometric-transformation-invariant feature extraction at the superpoint level, limiting the field of view and potentially the model's perception of local and global information. In this paper, we propose GeoSwin, which uses a novel geometric window transformer to achieve accurate registration of tooth point clouds at different stages of orthodontic treatment. The method uses point distances, normal vector angles, and bidirectional spatial angular distances as the transformer's input geometric embedding, and then uses a proposed variable multiscale attention mechanism to achieve geometric information perception from local to global perspectives. Experiments on the Shing3D Dental Dataset demonstrate the effectiveness of our approach, which outperforms other state-of-the-art approaches across multiple metrics. Our code and models are available at GeoSwin.
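Two of the rigid-motion-invariant quantities named above are easy to compute directly; the sketch below derives pairwise point distances and pairwise normal-vector angles for a set of points, omitting the bidirectional spatial angular term and any learned projection into embedding space.

```python
import torch

def pairwise_geometric_embedding(points: torch.Tensor, normals: torch.Tensor):
    """Pairwise distances and normal angles; both are invariant to rigid
    motion, which is what makes them credible geometric embedding inputs."""
    diff = points[:, None, :] - points[None, :, :]           # (n, n, 3)
    dist = diff.norm(dim=-1)                                 # pairwise distances
    n = torch.nn.functional.normalize(normals, dim=-1)
    cos = (n[:, None, :] * n[None, :, :]).sum(-1).clamp(-1.0, 1.0)
    angle = torch.acos(cos)                                  # pairwise normal angles
    return dist, angle

pts, nrm = torch.randn(64, 3), torch.randn(64, 3)
d, a = pairwise_geometric_embedding(pts, nrm)
print(d.shape, a.shape)                                      # (64, 64) each
```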

Citations: 0