
Latest Publications from the 2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)

Hardware Accelerated Inverse Kinematics for Low Power Surgical Manipulators
Pub Date: 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205769
Oleksii M. Tkachenko, K. Song
Robotic minimally invasive surgery (MIS) is performed via small incisions and so lessens wound healing time, associated pain and risk of infection. We refactor the control pipeline and accelerate its most time-consuming stage, the inverse kinematics (IK) calculation, for robot-assisted MIS. Field programmable gate array (FPGA) technology is used to develop a low-power hardware IK accelerator. A set of optimization techniques reduces the design's size so that it fits onto the target hardware. The accelerator executes IK in approximately 30 microseconds. The system architecture runs on a heterogeneous CPU-FPGA platform. Single-point and multi-point architectures are developed, where the multi-point architecture overcomes the communication overhead between platforms and achieves a higher output rate. The implementation is tested for 16, 24 and 32-bit fixed-point numbers, with an average computation error of 0.07 millimeters for the 32-bit architecture. Experimental results validate and verify the proposed solution.
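To illustrate the fixed-point precision trade-off the abstract reports (a minimal sketch, not the authors' FPGA design), the closed-form IK of a hypothetical two-link planar arm can be solved while quantizing intermediate values to different fractional bit widths and comparing the resulting end-effector error; the link lengths, target point, and quantization scheme below are assumptions for illustration only.

```python
import math

# Illustrative sketch only: a 2-link planar arm stands in for the surgical
# manipulator, and simple rounding to N fractional bits stands in for the
# paper's 16/24/32-bit fixed-point datapaths. All parameters are assumptions.
L1, L2 = 0.30, 0.25  # hypothetical link lengths in meters

def make_quantizer(frac_bits):
    scale = 1 << frac_bits
    return lambda v: round(v * scale) / scale   # emulate fixed-point rounding

def ik_2link(x, y, quant=None):
    """Closed-form elbow-down IK; optionally quantize every intermediate value."""
    q = quant if quant is not None else (lambda v: v)
    x, y = q(x), q(y)
    r2 = q(x * x + y * y)
    c2 = q((r2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2))
    c2 = max(-1.0, min(1.0, c2))                # clamp numerical noise
    th2 = q(math.acos(c2))
    th1 = q(math.atan2(y, x) - math.atan2(L2 * math.sin(th2), L1 + L2 * math.cos(th2)))
    return th1, th2

def fk_2link(th1, th2):
    """Forward kinematics, used to measure the position error of quantized IK."""
    return (L1 * math.cos(th1) + L2 * math.cos(th1 + th2),
            L1 * math.sin(th1) + L2 * math.sin(th1 + th2))

target = (0.35, 0.20)                            # assumed reachable target (m)
for bits in (8, 16, 24):
    th = ik_2link(*target, quant=make_quantizer(bits))
    px, py = fk_2link(*th)
    err_mm = math.hypot(px - target[0], py - target[1]) * 1000.0
    print(f"{bits} fractional bits -> position error {err_mm:.4f} mm")
```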
{"title":"Hardware Accelerated Inverse Kinematics for Low Power Surgical Manipulators","authors":"Oleksii M. Tkachenko, K. Song","doi":"10.1109/ARIS50834.2020.9205769","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205769","url":null,"abstract":"Robotic minimally invasive surgery (MIS) is performed via small incisions and so lessens wound healing time, associated pain and risk of infection. We refactor the control pipeline and accelerate the most time-consuming stage- inverse kinematics (IK) calculation for robot assisted MIS. Field programmable gate array (FPGA) technology is used to develop a low power hardware IK accelerator. The set of optimization techniques reduces the design’s size so it can fit onto the real hardware. Accelerator executes IK in approximately 30 microseconds. System architecture runs on a heterogeneous CPUFPGA platform. Single and multi-point architectures are developed, where multi-point architecture overcomes communication overhead between platforms and allows achieving a higher output rate. Implementation is tested for 16, 24 and 32-bit fixed-point numbers, with an average computation error of 0.07 millimeters for 32-bit architecture. Experimental results validate and verify the proposed solution.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123063420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SLAM Configuration from Video Images for Remote Omni-direction Vehicle Platform
Pub Date: 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205779
P. I. Chang, Y. Shi, S. C. Fan-Chiang, C. Lan
This paper attempts to fully reconstruct a local map for robotic vehicle platforms using a commercial 3D camera. The reconstructed SLAM output is verified against global positioning of the surroundings with a-priori knowledge. The omni-directional vehicle itself is designed and built in-house to maximize the utility of all the signals available from the system. The 2D mapping error for localization is estimated at 5%, showing promise for this approach.
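As a minimal sketch of the kind of verification described above (the paper does not state its exact metric; the surveyed landmarks and the normalization below are assumptions), estimated 2D landmark positions can be compared against a-priori ground-truth positions to obtain a relative mapping error:

```python
import numpy as np

# Hypothetical a-priori surveyed landmark positions (meters) and the positions
# a SLAM run estimated for them; values are made up for illustration.
ground_truth = np.array([[0.0, 0.0], [4.0, 0.5], [3.5, 3.0], [0.5, 4.0]])
estimated    = np.array([[0.1, -0.1], [3.8, 0.6], [3.6, 3.2], [0.4, 3.8]])

# Per-landmark Euclidean error, then a relative figure normalized by the
# diagonal extent of the surveyed area (one plausible way to get a percentage).
errors = np.linalg.norm(estimated - ground_truth, axis=1)
extent = np.linalg.norm(ground_truth.max(axis=0) - ground_truth.min(axis=0))
relative_error = errors.mean() / extent * 100.0

print(f"mean landmark error: {errors.mean():.3f} m")
print(f"relative 2D mapping error: {relative_error:.1f} % of map extent")
```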
{"title":"SLAM Configuration from Video Images for Remote Omni-direction Vehicle Platform","authors":"P. I. Chang, Y. Shi, S. C. Fan-Chiang, C. Lan","doi":"10.1109/ARIS50834.2020.9205779","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205779","url":null,"abstract":"This paper attempts to fully reconstruct a local mapping for robotic vehicle platforms, by use of 3D commercial camera. The reconstructed SLAM is verified by the global positioning of the surrounding with a-priori knowledge. While the whole omni-directional vehicle is designed and built in-house to maximize utility of all the signals available from the system. The mapping error for 2D for localization is estimated at 5% showing promise for this approach.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129837824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Artificial Intelligence and Internet of Things for Robotic Disaster Response
Pub Date: 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205794
Min-Fan Ricky Lee, Tzu-Wei Chien
After the Fukushima nuclear disaster and the Wenchuan earthquake, the relevant government agencies recognized the urgency of disaster-response robots. Taiwan experiences many natural and man-made disasters, and it is usually impossible to dispatch personnel to search or explore the site immediately. This project proposes an Intelligent Internet of Things (AIoT, Artificial Intelligence + Internet of Things) architecture to coordinate ground, surface, aerial and underwater swarm robots and apply them to disaster response. The swarm robots collect environmental big data from the disaster site, which is sent through the Internet of Things from the field workstation to the cloud for training and verifying a deep learning model. The trained model is transmitted back to the field workstation via the Internet of Things and then to the ground, surface, aerial and underwater swarm robots for continuing on-site object classification, so that identification is continuously verified against the environment and the best response decisions can be made. The related tasks include monitoring, search and rescue of the target.
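The edge-to-cloud loop described above can be sketched in outline only: a field-workstation process uploads sensor batches for cloud training, fetches the latest verified model, and pushes it to the swarm robots. Every function name, robot identifier, and data structure below is a hypothetical placeholder, not the project's actual interface.

```python
import json
import time

# Hypothetical in-memory stand-ins for the IoT links; in a real deployment these
# would be network calls between robots, the field workstation, and the cloud.
cloud_model_store = {"version": 0, "weights": None}

def upload_batch_to_cloud(batch):
    """Pretend to send a batch of disaster-site sensor data for cloud training."""
    cloud_model_store["version"] += 1
    cloud_model_store["weights"] = f"weights_v{cloud_model_store['version']}"

def fetch_latest_model():
    """Pretend to download the most recently trained and verified model."""
    return dict(cloud_model_store)

def push_model_to_robots(model, robots):
    """Pretend to distribute the model to the swarm for on-site classification."""
    for robot_id in robots:
        print(f"robot {robot_id}: loaded {model['weights']}")

robots = ["uav-1", "ugv-1", "usv-1", "uuv-1"]   # aerial, ground, surface, underwater
for cycle in range(2):                          # two illustrative update cycles
    batch = {"cycle": cycle, "readings": [0.1, 0.7, 0.3]}   # fake sensor data
    upload_batch_to_cloud(json.dumps(batch))
    model = fetch_latest_model()
    push_model_to_robots(model, robots)
    time.sleep(0.1)                             # stand-in for the field duty cycle
```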
{"title":"Artificial Intelligence and Internet of Things for Robotic Disaster Response","authors":"Min-Fan Ricky Lee, Tzu-Wei Chien","doi":"10.1109/ARIS50834.2020.9205794","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205794","url":null,"abstract":"After the Fukushima nuclear disaster and the Wenchuan earthquake, the relevant government agencies recognized the urgency of disaster-straining robots. There are many natural or man-made disasters in Taiwan, and it is usually impossible to dispatch relevant personnel to search or explore immediately. The project proposes to use the architecture of Intelligent Internet of Things (AIoT) (Artificial Intelligence + Internet of Things) to coordinate with ground, surface and aerial and underwater robots, and apply them to disaster response, ground, surface and aerial and underwater swarm robots to collect environmental big data from the disaster site, and then through the Internet of Things. From the field workstation to the cloud for “training” deep learning model and “model verification”, the trained deep learning model is transmitted to the field workstation via the Internet of Things, and then transmitted to the ground, surface and aerial and underwater swarm robots for on-site continuing objects classification. Continuously verify the “identification” with the environment and make the best decisions for the response. The related tasks include monitoring, search and rescue of the target.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133341130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
A Hybrid Network for Facial Age Progression and Regression Learning
Pub Date: 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205788
Rui-Cang Xie, G. Hsu
Facial age transformation is an attractive application for an entertainment or amusement robot. With this application, the robot can transform an input face into the same face at different ages. We propose a new algorithm for age transformation. Owing to recent progress made by state-of-the-art deep learning approaches, facial age progression and regression has become an attractive research topic in the field of computer vision. Many existing approaches require paired data, i.e., face images of the same person at different ages. As the cost of collecting such paired datasets is high, some emerging approaches learn the facial age manifold from unpaired data. However, the images generated by these approaches are weak at reproducing some age traits, for example wrinkles and creases. We propose a hybrid network composed of a generator and two discriminators. The generator is trained to disentangle age from the identity of the face so that it can generate a face with the same identity as the input face but at a different age. One of the discriminators handles multiple tasks, including the identification of real vs. fake (generated) faces and the classification of the identities and ages of the faces. The other discriminator constrains the latent space so that the generated images can be made more realistic. Experiments show that the proposed network generates better facial age images with more age traits than other state-of-the-art approaches.
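A heavily simplified PyTorch sketch of a generator-plus-two-discriminators layout (an age-conditioned generator, a multi-task real/fake, identity and age discriminator, and a latent-space discriminator) is given below; the flattened-image input, layer sizes, and conditioning scheme are assumptions, and this is not the paper's architecture.

```python
import torch
import torch.nn as nn

IMG_DIM, N_IDS, N_AGES, LATENT = 64 * 64, 100, 10, 128  # assumed sizes

class Generator(nn.Module):
    """Maps a flattened face plus a target-age one-hot to a new face."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(IMG_DIM, LATENT), nn.ReLU())
        self.decode = nn.Sequential(nn.Linear(LATENT + N_AGES, IMG_DIM), nn.Tanh())

    def forward(self, x, age_onehot):
        z = self.encode(x)                       # identity-bearing latent code
        return self.decode(torch.cat([z, age_onehot], dim=1)), z

class MultiTaskDiscriminator(nn.Module):
    """One head for real/fake, one for identity, one for age class."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2))
        self.real_fake = nn.Linear(256, 1)
        self.identity = nn.Linear(256, N_IDS)
        self.age = nn.Linear(256, N_AGES)

    def forward(self, x):
        h = self.trunk(x)
        return self.real_fake(h), self.identity(h), self.age(h)

class LatentDiscriminator(nn.Module):
    """Pushes the latent code toward an assumed prior distribution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT, 64), nn.LeakyReLU(0.2),
                                 nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z)

# Shape check with random tensors standing in for face images.
x = torch.randn(4, IMG_DIM)
age = torch.eye(N_AGES)[torch.randint(0, N_AGES, (4,))]
g, d_img, d_z = Generator(), MultiTaskDiscriminator(), LatentDiscriminator()
fake, z = g(x, age)
print(fake.shape, [t.shape for t in d_img(fake)], d_z(z).shape)
```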
{"title":"A Hybrid Network for Facial Age Progression and Regression Learning","authors":"Rui-Cang Xie, G. Hsu","doi":"10.1109/ARIS50834.2020.9205788","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205788","url":null,"abstract":"Facial age transformation is an attractive application on an entertainment or amusement robot. With this application, the robot can transform an input face to the same face but in different ages. We propose a new algorithm for age transformation. Due to recent progresses made by state-of-theart deep learning approaches, the facial age progression and regression has become an attractive research topic in the fields of computer vision. Many existing approaches require paired data which refer to the face images of the same person at different ages. As the cost of collecting such paired datasets is expensive, some emerging approaches have been proposed to learn the facial age manifold from unpaired data. However, the images generated by these approaches suffer from the weakness or loss in generating some age traits, for example wrinkles and creases. We propose a hybrid network that is composed of a generator and two discriminators. The generator is trained to disentangle the age from the identity of the face so that it can generate a face of the same identity as of the input face but at a different age. One of the discriminator is designed for handling multitasks, including the identification of real vs. fake (generated) faces and the classification of the identities and ages of the faces. The other discriminator is designed to make the latent space satisfy the requirement so that the generated image can be made more realistic. Experiments show that the proposed network can generates better facial age images with more age traits compared with other state-of-the-art approaches.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131248250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Skeleton-based Hand Gesture Recognition for Assembly Line Operation
Pub Date: 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205781
Chao-Lung Yang, Wen-Ting Li, Shang-Che Hsu
This research aims to develop a hand gesture recognition (HGR) method that combines OpenPose and the Spatial Temporal Graph Convolution Network (ST-GCN) to classify operators' assembly motions. By defining hand gestures as five types of therbligs, the network model was trained to recognize human hand gestures. Although preliminary experimental results show a recognition accuracy of 78.3%, leaving room for improvement, the structure of the proposed network establishes a foundation for further improvement in future work.
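The spatial graph-convolution step at the core of ST-GCN can be illustrated with a small numpy sketch: build a normalized adjacency matrix over an assumed keypoint graph and apply one ReLU(A X W) step to an OpenPose-style keypoint sequence. The keypoint count, edges, and feature sizes below are assumptions for illustration.

```python
import numpy as np

# Assumed toy skeleton: 5 keypoints (e.g., wrist plus four finger points) and
# the edges connecting them; a real OpenPose hand model has 21 keypoints.
N_JOINTS, EDGES = 5, [(0, 1), (0, 2), (0, 3), (0, 4)]

# Adjacency with self-loops, symmetrically normalized: D^-1/2 (A + I) D^-1/2.
A = np.eye(N_JOINTS)
for i, j in EDGES:
    A[i, j] = A[j, i] = 1.0
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_norm = d_inv_sqrt @ A @ d_inv_sqrt

# Fake input: T frames x N_JOINTS keypoints x C channels (x, y, confidence),
# standing in for an OpenPose sequence of one assembly gesture.
T, C_IN, C_OUT = 30, 3, 16
X = np.random.rand(T, N_JOINTS, C_IN)
W = np.random.randn(C_IN, C_OUT) * 0.1       # learnable weights in a real model

# One spatial graph-convolution step per frame: aggregate neighbors, project.
H = np.maximum(0.0, np.einsum("ij,tjc,ck->tik", A_norm, X, W))  # ReLU(A X W)
print(H.shape)  # (30, 5, 16): per-frame, per-joint features for the classifier
```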
{"title":"Skeleton-based Hand Gesture Recognition for Assembly Line Operation","authors":"Chao-Lung Yang, Wen-Ting Li, Shang-Che Hsu","doi":"10.1109/ARIS50834.2020.9205781","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205781","url":null,"abstract":"This research aims to develop a hand gesture recognition (HGR) by combining the OpenPose and Spatial Temporal Graph Convolution Network (ST-GCN) to classify the operator’s assembly motion. By defining the hand gestures with five types of therbligs, the network model was trained to recognize the human hand gesture. Although the accuracy of recognition is 78.3% with room for improvement based on preliminary experiment results, the structure of the proposed network establishes a foundation for further improvement in future work.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115674214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Automated Biometric Identification System Using CNN-Based Palm Vein Recognition
Pub Date: 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205778
Sin-Ye Jhong, Po-Yen Tseng, Natnuntnita Siriphockpirom, Chih-Hsien Hsia, Ming-Shih Huang, K. Hua, Yung-Yao Chen
Recently, automated biometric identification systems (ABIS) have found wide application in automatic identification and data capture (AIDC), including automatic security checking and verification of personal identity to prevent information disclosure or identity fraud. With the advancement of biotechnology, identification systems based on biometrics have emerged on the market. These systems require high accuracy and ease of use. Palm vein identification is a type of biometric that identifies palm vein features. Compared with other features, palm vein recognition provides accurate results and has received considerable attention. We developed a novel high-performance, noncontact palm vein recognition system that uses high-performance adaptive background filtering to obtain palm vein images of the region of interest. We then used a modified convolutional neural network to determine the best recognition model through training and testing. Finally, the developed system was implemented on a low-level embedded Raspberry Pi platform with cloud computing technology. The results showed that the system achieves an accuracy of 96.54%.
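One plausible reading of the ROI-extraction step (adaptive background filtering of a near-infrared palm image to isolate the hand) can be sketched as below; the synthetic image, block size, and offset are assumptions rather than the paper's actual filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Synthetic stand-in for a near-infrared palm capture: a bright hand-shaped
# blob on a darker, unevenly lit background. Real input would come from the
# system's NIR camera.
h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
background = 40 + 20 * (xx / w)                        # uneven illumination
hand = 120 * np.exp(-(((yy - 60) / 35) ** 2 + ((xx - 80) / 45) ** 2))
img = (background + hand + np.random.normal(0, 3, (h, w))).clip(0, 255)

# Adaptive background filtering (assumed variant): compare each pixel with the
# local mean over a block and keep pixels that exceed it by a margin.
BLOCK, OFFSET = 31, 10                                 # assumed parameters
local_mean = uniform_filter(img, size=BLOCK)
mask = img > local_mean + OFFSET

# Region of interest: tight bounding box around the foreground (palm) pixels.
ys, xs = np.nonzero(mask)
roi = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
print("ROI shape passed to the CNN:", roi.shape)
```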
{"title":"An Automated Biometric Identification System Using CNN-Based Palm Vein Recognition","authors":"Sin-Ye Jhong, Po-Yen Tseng, Natnuntnita Siriphockpirom, Chih-Hsien Hsia, Ming-Shih Huang, K. Hua, Yung-Yao Chen","doi":"10.1109/ARIS50834.2020.9205778","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205778","url":null,"abstract":"Recently, automated biometric identification system (ABIS) has wide applications involving automatic identification and data capture (AIDC), which includes automatic security checking, verifying personal identity to prevent information disclosure or identity fraud, and so on. With the advancement of biotechnology, identification systems based on biometrics have emerged in the market. These systems require high accuracy and ease of use. Palm vein identification is a type of biometric that identifies palm vein features. Compared with other features, palm vein recognition provides accurate results and has received considerable attention. We developed a novel high-performance and noncontact palm vein recognition system by using high-performance adaptive background filtering to obtain palm vein images of the region of interest. We then used a modified convolutional neural network to determine the best recognition model through training and testing. Finally, the developed system was implemented on the low-level embedded Raspberry Pi platform with cloud computing technology. The results showed that the system can achieve an accuracy of 96.54%.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129671468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10