
Latest Publications: Comput. Vis. Image Underst.

Learning language to symbol and language to vision mapping for visual grounding
Pub Date: 2022-04-01  DOI: 10.2139/ssrn.3989572
Su He, Xiaofeng Yang, Guosheng Lin
Citations: 1
MMSNet: Multi-modal scene recognition using multi-scale encoded features
Pub Date: 2022-04-01  DOI: 10.2139/ssrn.4032570
Ali Caglayan, Nevrez Imamoglu, Ryosuke Nakamura
Citations: 2
Periocular Biometrics and its Relevance to Partially Masked Faces: A Survey
Pub Date: 2022-03-29  DOI: 10.2139/ssrn.4029455
Renu Sharma, A. Ross
The performance of face recognition systems can be negatively impacted in the presence of masks and other types of facial coverings that have become prevalent due to the COVID-19 pandemic. In such cases, the periocular region of the human face becomes an important biometric cue. In this article, we present a detailed review of periocular biometrics. We first examine the various face and periocular techniques specially designed to recognize humans wearing a face mask. Then, we review different aspects of periocular biometrics: (a) the anatomical cues present in the periocular region useful for recognition, (b) the various feature extraction and matching techniques developed, (c) recognition across different spectra, (d) fusion with other biometric modalities (face or iris), (e) recognition on mobile devices, (f) its usefulness in other applications, (g) periocular datasets, and (h) competitions organized for evaluating the efficacy of this biometric modality. Finally, we discuss various challenges and future directions in the field of periocular biometrics.
Citations: 11
Meta conditional variational auto-encoder for domain generalization
Pub Date: 2022-01-01  DOI: 10.2139/ssrn.3988579
Zhiqiang Ge, Zhihuan Song, Xin Li, Lei Zhang
Citations: 0
Handcrafted localized phase features for human action recognition
Pub Date: 2022-01-01  DOI: 10.2139/ssrn.4022915
S. Hejazi, G. Abhayaratne
Citations: 7
Controlling strokes in fast neural style transfer using content transforms
Pub Date: 2022-01-01  DOI: 10.1007/s00371-022-02518-x
M. Reimann, Benito Buchheim, Amir Semmo, J. Döllner, Matthias Trapp
Citations: 9
Conditional generative data-free knowledge distillation
Pub Date: 2021-12-31  DOI: 10.2139/ssrn.4039886
Xinyi Yu, Ling Yan, Yang Yang, Libo Zhou, Linlin Ou
Knowledge distillation has made remarkable achievements in model compression. However, most existing methods require the original training data, which is usually unavailable due to privacy and security issues. In this paper, we propose a conditional generative data-free knowledge distillation (CGDD) framework for training lightweight networks without any training data. This method realizes efficient knowledge distillation based on conditional image generation. Specifically, we treat the preset labels as ground truth to train a conditional generator in a semi-supervised manner. The trained generator can produce specified classes of training images. For training the student network, we force it to extract the knowledge hidden in teacher feature maps, which provide crucial cues for the learning process. Moreover, an adversarial training framework for promoting distillation performance is constructed by designing several loss functions. This framework helps the student model to explore larger data space. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on different datasets. Compared with other data-free works, our work obtains state-of-the-art results on CIFAR100, Caltech101, and different versions of ImageNet datasets. The codes will be released.
Citations: 1
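The core idea in the abstract, distilling a student from a teacher using only label-conditioned synthetic inputs, can be caricatured in a few lines. This is a toy linear sketch, not the paper's method: the conditional generator is frozen and randomly initialized (in CGDD it is trained semi-supervised from preset labels), the student matches teacher logits rather than feature maps, and the adversarial losses are omitted. All names and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, Z = 16, 4, 8  # feature dim, number of classes, noise dim

# Frozen "teacher": stands in for a pretrained network we cannot retrain.
W_t = rng.normal(size=(C, D))

# Toy conditional generator x = A z + B e_y: noise plus a class embedding.
# In CGDD this generator is itself trained; it is frozen here for brevity.
A = rng.normal(size=(D, Z))
B = rng.normal(size=(D, C))

def generate(label):
    """Synthesize one input conditioned on a preset class label."""
    z = rng.normal(size=Z)
    return A @ z + B @ np.eye(C)[label]

# Student: a linear map trained to match teacher outputs on synthetic data.
W_s = rng.normal(size=(C, D)) * 0.1

lr = 1e-3
losses = []
for step in range(2000):
    y = rng.integers(C)                      # preset label conditions the generator
    x = generate(y)                          # no real training data anywhere
    err = W_s @ x - W_t @ x                  # student vs. teacher logits
    W_s -= lr * (2.0 / C) * np.outer(err, x) # SGD step on the mean-squared error
    losses.append(float(np.mean(err ** 2)))

print(f"distillation loss: {losses[0]:.1f} -> {losses[-1]:.2e}")
```

Even in this degenerate setting the student recovers the teacher from purely generated samples, which is the property the data-free framing relies on.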
A formal approach to good practices in Pseudo-Labeling for Unsupervised Domain Adaptive Re-Identification
Pub Date: 2021-12-24  DOI: 10.2139/ssrn.3994206
Fabian Dubourvieux, Romaric Audigier, Angélique Loesch, Samia Ainouz, S. Canu
The use of pseudo-labels prevails in order to tackle Unsupervised Domain Adaptive (UDA) Re-Identification (re-ID) with the best performance. Indeed, this family of approaches has given rise to several UDA re-ID specific frameworks, which are effective. In these works, research directions to improve Pseudo-Labeling UDA re-ID performance are varied and mostly based on intuition and experiments: refining pseudo-labels, reducing the impact of errors in pseudo-labels... It can be hard to deduce from them general good practices, which can be implemented in any Pseudo-Labeling method, to consistently improve its performance. To address this key question, a new theoretical view on Pseudo-Labeling UDA re-ID is proposed. The contributions are threefold: (i) A novel theoretical framework for Pseudo-Labeling UDA re-ID, formalized through a new general learning upper-bound on the UDA re-ID performance. (ii) General good practices for Pseudo-Labeling, directly deduced from the interpretation of the proposed theoretical framework, in order to improve the target re-ID performance. (iii) Extensive experiments on challenging person and vehicle cross-dataset re-ID tasks, showing consistent performance improvements for various state-of-the-art methods and various proposed implementations of good practices.
Citations: 4
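For readers unfamiliar with the family of methods the abstract formalizes, here is a minimal sketch of the generic pseudo-labeling loop it builds on, not the paper's framework: predict labels for unlabeled target-domain data, treat those predictions as ground truth, refit the model, and iterate. The nearest-centroid classifier, 2-D synthetic "target domain", and badly initialized centroids (standing in for a source-pretrained model) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled "target domain": two well-separated clusters in 2-D.
X = np.vstack([rng.normal(loc=[-3.0, 0.0], scale=0.5, size=(100, 2)),
               rng.normal(loc=[+3.0, 0.0], scale=0.5, size=(100, 2))])
true = np.array([0] * 100 + [1] * 100)  # hidden labels, used only for evaluation

# Class centroids of a hypothetical source-pretrained model: badly placed.
centroids = np.array([[-0.5, 2.0], [0.5, 2.0]])

for it in range(10):
    # 1) Pseudo-label: assign each target sample to its nearest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    pseudo = d.argmin(axis=1)
    # 2) "Train" on the pseudo-labels: refit each class centroid.
    #    (Both clusters are guaranteed non-empty under this toy geometry.)
    centroids = np.array([X[pseudo == k].mean(axis=0) for k in range(2)])

# Clustering accuracy, allowing for a label permutation.
acc = max(np.mean(pseudo == true), np.mean(pseudo != true))
print(f"pseudo-label accuracy after adaptation: {acc:.2f}")
```

The loop converges because each refit reduces within-cluster error; the paper's contribution is bounding when and why such self-training helps, and which refinements (e.g. reducing the impact of pseudo-label errors) follow from that bound.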
A novel privacy-preserving outsourcing computation scheme for Canny edge detection
Pub Date: 2021-10-30  DOI: 10.1007/s00371-021-02307-y
Bowen Li, Fazhi He, Xiantao Zeng
Citations: 4
A Dynamic Keypoints Selection Network for 6DoF Pose Estimation
Pub Date: 2021-10-24  DOI: 10.1016/j.imavis.2022.104372
Haowen Sun, Taiyong Wang
Citations: 5