
Latest publications in IEEE Transactions on Human-Machine Systems

Predicting Human Postures for Manual Material Handling Tasks Using a Conditional Diffusion Model
IF 3.5 | CAS Tier 3 (Computer Science) | JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-21 | DOI: 10.1109/THMS.2024.3472548
Liwei Qing;Bingyi Su;Sehee Jung;Lu Lu;Hanwen Wang;Xu Xu
Predicting workers' body postures is crucial for effective ergonomic interventions to reduce musculoskeletal disorders (MSDs). In this study, we employ a novel generative approach to predict human postures during manual material handling tasks. Specifically, we implement two distinct network architectures, U-Net and multilayer perceptron (MLP), to build the diffusion model. The model training and testing utilizes a dataset featuring 35 full-body anatomical landmarks collected from 25 participants engaged in a variety of lifting tasks. In addition, we compare our models with two conventional generative networks (conditional generative adversarial network and conditional variational autoencoder) for comprehensive analysis. Our results show that the U-Net model performs well in predicting posture similarity [root-mean-square error (RMSE) of key-point coordinates = 5.86 cm; RMSE of joint angle coordinates = 13.67$^{\circ}$], while the MLP model leads to higher posture variability (e.g., standard deviation of joint angles = 4.49$^{\circ}$/4.18$^{\circ}$ for upper arm flexion/extension joints). Moreover, both generative models demonstrate reasonable prediction validity (RMSE of segment lengths is within 4.83 cm). Overall, our proposed diffusion models demonstrate good similarity and validity in predicting lifting postures, while also providing insights into the inherent variability of constrained lifting postures. This novel use of diffusion models shows potential for tailored posture prediction in common occupational environments, representing an advancement in motion synthesis and contributing to workplace design and MSD risk mitigation.
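The posture-similarity metric quoted in the abstract, RMSE over key-point coordinates, can be sketched as follows; the array layout (frames x landmarks x 3, coordinates in cm) and function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def keypoint_rmse(pred, true):
    """Root-mean-square error between predicted and ground-truth landmarks.

    pred, true: arrays of shape (n_frames, n_landmarks, 3), coordinates in cm.
    Returns the RMSE taken over every coordinate of every landmark.
    """
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.sqrt(np.mean((pred - true) ** 2)))
```

With 35 landmarks per frame, as in the paper's dataset, a prediction offset from the ground truth by 1 cm in every coordinate yields an RMSE of exactly 1 cm.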
IEEE Transactions on Human-Machine Systems, vol. 54, no. 6, pp. 723-732.
Citations: 0
The Augmented Intelligence Perspective on Human-in-the-Loop Reinforcement Learning: Review, Concept Designs, and Future Directions
IF 3.5 | CAS Tier 3 (Computer Science) | JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1109/THMS.2024.3467370
Kok-Lim Alvin Yau;Yasir Saleem;Yung-Wey Chong;Xiumei Fan;Jer Min Eyu;David Chieng
Augmented intelligence (AuI) is a concept that combines human intelligence (HI) and artificial intelligence (AI) to leverage their respective strengths. While AI typically aims to replace humans, AuI integrates humans into machines, recognizing their irreplaceable role. Meanwhile, human-in-the-loop reinforcement learning (HITL-RL) is a semisupervised algorithm that integrates humans into the traditional reinforcement learning (RL) algorithm, enabling autonomous agents to gather inputs from both humans and environments, learn, and select optimal actions across various environments. Both AuI and HITL-RL are still in their infancy. Based on AuI, we propose and investigate three separate concept designs for HITL-RL: HI-AI, AI-HI, and parallel-HI-and-AI approaches, each differing in the order of HI and AI involvement in decision making. The literature on AuI and HITL-RL offers insights into integrating HI into existing concept designs. A preliminary study in an Atari game offers insights for future research directions. Simulation results show that human involvement maintains RL convergence and improves system stability, while achieving approximately similar average scores to traditional $Q$-learning in the game. Future research directions are proposed to encourage further investigation in this area.
IEEE Transactions on Human-Machine Systems, vol. 54, no. 6, pp. 762-777.
Citations: 0
Cross-Model Cross-Stream Learning for Self-Supervised Human Action Recognition
IF 3.5 | CAS Tier 3 (Computer Science) | JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-17 | DOI: 10.1109/THMS.2024.3467334
Mengyuan Liu;Hong Liu;Tianyu Guo
Considering the instance-level discriminative ability, contrastive learning methods, including MoCo and SimCLR, have been adapted from the original image representation learning task to solve the self-supervised skeleton-based action recognition task. These methods usually use multiple data streams (i.e., joint, motion, and bone) for ensemble learning; meanwhile, how to construct a discriminative feature space within a single stream and effectively aggregate the information from multiple streams remains an open problem. To this end, this article first applies a new contrastive learning method called Bootstrap Your Own Latent (BYOL) to learn from skeleton data, and then formulates SkeletonBYOL as a simple yet effective baseline for self-supervised skeleton-based action recognition. Inspired by SkeletonBYOL, this article further presents a cross-model and cross-stream (CMCS) framework. This framework combines cross-model adversarial learning (CMAL) and cross-stream collaborative learning (CSCL). Specifically, CMAL learns single-stream representation by cross-model adversarial loss to obtain more discriminative features. To aggregate and interact with multistream information, CSCL is designed by generating similarity pseudolabels of ensemble learning as supervision and guiding feature generation for individual streams. Extensive experiments on three datasets verify the complementary properties between CMAL and CSCL and also verify that the proposed method can achieve better results than state-of-the-art methods using various evaluation protocols.
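BYOL's core mechanism, matching an online network's predictions to a slowly moving target network without any negative pairs, reduces to an exponential-moving-average parameter update plus a normalized regression loss. A minimal sketch with illustrative names (not the paper's code):

```python
import numpy as np

def ema_update(online, target, tau=0.99):
    """BYOL target-network update: target <- tau * target + (1 - tau) * online."""
    return [tau * t + (1.0 - tau) * o for o, t in zip(online, target)]

def byol_loss(p, z):
    """Normalized MSE between the online prediction p and the target
    projection z; equals 2 - 2 * cosine_similarity(p, z)."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return float(2.0 - 2.0 * np.dot(p, z))
```

Because the loss only pulls representations together, the slow-moving target is what prevents the trivial collapsed solution; gradients are taken through the online network only.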
IEEE Transactions on Human-Machine Systems, vol. 54, no. 6, pp. 743-752.
Citations: 0
Optical See-Through Head-Mounted Display With Mitigated Parallax-Related Registration Errors: A User Study Validation
IF 3.5 | CAS Tier 3 (Computer Science) | JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1109/THMS.2024.3468019
Nadia Cattari;Fabrizio Cutolo;Vincenzo Ferrari
For an optical see-through (OST) augmented reality (AR) head-mounted display (HMD) to assist in performing high-precision activities in the peripersonal space, a fundamental requirement is the correct spatial registration between the virtual information and the real environment. This registration can be achieved through a calibration procedure involving the parameterization of the virtual rendering camera via an eye-replacement camera that observes a calibration pattern rendered onto the OST display. In a previous feasibility study, we demonstrated and proved, with the same eye-replacement camera used for the calibration, that, in the case of an OST display with a focal plane close to the user's working distance, there is no need for prior-to-use viewpoint-specific calibration refinements obtained through eye-tracking cameras or additional alignment-based calibration steps. The viewpoint parallax-related AR registration error is indeed submillimetric within a reasonable range of depths around the display focal plane. This article confirms, through a user study based on a monocular virtual-to-real alignment task, that this finding is accurate and usable. In addition, we found that by performing the alignment-free calibration procedure via a high-resolution camera, the AR registration accuracy is substantially improved compared with that of other state-of-the-art approaches, with an error lower than 1 mm over a notable range of distances. These results demonstrate the safe usability of OST HMDs for high-precision task guidance in the peripersonal space.
IEEE Transactions on Human-Machine Systems, vol. 54, no. 6, pp. 668-677. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10718696
Citations: 0
Machine Learning for Human–Machine Systems With Advanced Persistent Threats
IF 3.5 | CAS Tier 3 (Computer Science) | JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-09 | DOI: 10.1109/THMS.2024.3439625
Long Chen;Wei Zhang;Yanqing Song;Jianguo Chen
This article conducts a thorough exploration of the implications of machine learning (ML) in conjunction with human–machine systems within the military domain. It scrutinizes the strategic development efforts of ML by pertinent institutions, particularly in the context of military applications and the domain of advanced persistent threats. Prominent nations have delineated a technical trajectory for the integration of ML into their military frameworks. To bolster the structure and efficacy of their various military branches and units, there has been a concentrated deployment of numerous ML research endeavors. These initiatives encompass the study of sophisticated ML algorithms and the acceleration of artificial intelligence technology adaptation for intelligence processing, autonomous platforms, command and control infrastructures, and weapons systems. Forces across the globe are actively embedding ML technologies into a range of platforms: terrestrial, naval, aerial, space-faring, and cybernetic. This integration spans weaponry, networks, cognitive operations, and additional systems. Furthermore, this article reviews the incorporation within the sphere of military human–machine interaction in the Russia–Ukraine conflict. In this war, cyber human–machine interaction has become a pivotal arena of contention between Russia and Ukraine, with key levers that influence the conflict's course. In addition, the article examines the adoption of ML in prospective military functions such as operations, intelligence gathering, networking, logistics, identification protocols, healthcare, data analysis trends, and other critical areas marked by current developments and trajectories. It also proffers a series of recommendations for the future integration of ML to inform strategic direction and research.
IEEE Transactions on Human-Machine Systems, vol. 54, no. 6, pp. 753-761.
Citations: 0
A Bioinspired Virtual Reality Toolkit for Robot-Assisted Medical Application: BioVRbot
IF 3.5 | CAS Tier 3 (Computer Science) | JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-09 | DOI: 10.1109/THMS.2024.3462416
Hang Su;Francesco Jamal Sheiban;Wen Qi;Salih Ertug Ovur;Samer Alfayad
The increasingly pervasive usage of robotic surgery not only calls for advances in clinical application but also implies high availability for preliminary medical education using virtual reality. Virtual reality is currently upgrading medical education by presenting complicated medical information in an immersive and interactive way. A system that allows multiple users to observe and operate via simulated surgical platforms using wearable devices has become an efficient solution for teaching where a real surgical platform is not available. This article developed a bioinspired virtual reality toolkit (BioVRbot) for education and training in robot-assisted minimally invasive surgery. It allows multiple users to manipulate the robots working on cooperative virtual surgery using bioinspired control. The virtual reality scenario is implemented using Unity and can be observed with independent virtual reality headsets. A MATLAB server is designed to manage robot motion planning for incremental teleoperation in compliance with remote-center-of-motion constraints. Wearable sensorized gloves are adopted for continuous control of the tool tip and the gripper. Finally, the practical use of the developed surgical virtual system is demonstrated with cooperative operation tasks. It could be further spread into the classroom for preliminary education of robot-assisted surgery for early-stage medical students.
IEEE Transactions on Human-Machine Systems, vol. 54, no. 6, pp. 688-697.
Citations: 0
Speech-Driven Gesture Generation Using Transformer-Based Denoising Diffusion Probabilistic Models
IF 3.5 | CAS Tier 3 (Computer Science) | JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-09 | DOI: 10.1109/THMS.2024.3456085
Bowen Wu;Chaoran Liu;Carlos Toshinori Ishi;Hiroshi Ishiguro
While it is crucial for human-like avatars to perform co-speech gestures, existing approaches struggle to generate natural and realistic movements. In the present study, a novel transformer-based denoising diffusion model is proposed to generate co-speech gestures. Moreover, we introduce a practical sampling trick for diffusion models to maintain the continuity between the generated motion segments while improving the within-segment motion likelihood and naturalness. Our model can be used for online generation since it generates gestures for a short segment of speech, e.g., 2 s. We evaluate our model on two large-scale speech-gesture datasets with finger movements using objective measurements and a user study, showing that our model outperforms all other baselines. Our user study is based on the Metahuman platform in the Unreal Engine, a popular tool for creating human-like avatars and motions.
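A single reverse (denoising) step of the underlying DDPM, applied here to short motion segments, follows the standard Ho et al. parameterization; `eps_pred` stands in for the transformer's noise prediction, and the function names are our own.

```python
import numpy as np

def ddpm_reverse_step(x_t, eps_pred, t, betas, rng=None):
    """One denoising step x_t -> x_{t-1} of a DDPM.

    x_t: current noisy sample (e.g., a 2-s motion segment);
    eps_pred: the model's predicted noise at step t;
    betas: the noise schedule. Fresh noise is only added for t > 0.
    """
    rng = rng or np.random.default_rng(0)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(alphas[t])
    if t > 0:
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean
```

Sampling iterates this step from t = T - 1 down to 0; a continuity trick like the one the abstract mentions would constrain the initial noise or early frames of each segment using the tail of the previously generated one.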
IEEE Transactions on Human-Machine Systems, vol. 54, no. 6, pp. 733-742. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10712170
Cited by: 0
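The segment-continuity idea in the abstract above — keeping each newly sampled gesture segment consistent with the tail of the previous one — can be sketched with a minimal DDPM reverse loop. The overlap-inpainting scheme, noise schedule, and tensor shapes below are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ddpm_sample_segment(denoise_fn, speech_cond, seed_frames, T=50,
                        n_frames=60, n_joints=15, rng=None):
    """Reverse-diffusion sampling of one gesture segment.

    Continuity trick (our assumption, not necessarily the paper's exact
    scheme): at every denoising step, the first `len(seed_frames)` frames
    are overwritten with a noised copy of the previous segment's tail, so
    the new segment is inpainted to match the old one at the seam.
    """
    rng = rng or np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    x = rng.standard_normal((n_frames, n_joints))   # start from pure noise
    k = seed_frames.shape[0]                        # overlap length
    for t in reversed(range(T)):
        eps_hat = denoise_fn(x, t, speech_cond)     # predicted noise
        # standard DDPM posterior mean
        x = (x - betas[t] / np.sqrt(1 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
            # re-impose the known overlap frames at the matching noise level
            noised_seed = (np.sqrt(alpha_bar[t - 1]) * seed_frames
                           + np.sqrt(1 - alpha_bar[t - 1])
                           * rng.standard_normal(seed_frames.shape))
            x[:k] = noised_seed
        else:
            x[:k] = seed_frames                     # clean overlap at t = 0
    return x
```

Chaining calls — feeding the last frames of each returned segment in as `seed_frames` of the next — yields an arbitrarily long, seam-free motion stream, which is what makes this family of models usable for online generation.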
Analyzing Surgeon–Robot Cooperative Performance in Robot-Assisted Intravascular Catheterization
IF 3.5 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-10-04 DOI: 10.1109/THMS.2024.3452975
Wenjing Du;Guanlin Yi;Olatunji Mumini Omisore;Wenke Duan;Toluwanimi Oluwadra Akinyemi;Xingyu Chen;Jiang Liu;Boon-Giin Lee;Lei Wang
Robot-assisted catheterization offers a promising technique for cardiovascular interventions, addressing the limitations of manual interventional surgery, where precise tool manipulation is critical. In remote-control robotic systems, the lack of force feedback and imprecise navigation challenge cooperation between the surgeon and robot. This study proposes a manipulation-based evaluation framework to assess the cooperative performance between different operators and the robot using kinesthetic, kinematic, and haptic data from multi-sensor technologies. The proposed evaluation framework achieves a recognition accuracy of 99.99% in assessing the cooperation between operator and robot. Additionally, the study investigates the impact of delay factors, considering no delay, constant delay, and variable delay, on cooperation characteristics. The findings suggest that variable delay contributes to improved cooperation performance between operator and robot in a primary-secondary isomorphic robotic system, compared to a constant delay factor. Furthermore, operators with experience in manual percutaneous coronary interventions exhibit significantly better cooperative manipulation with the robot system than those without such experience, with respective synergy ratios of 89.66%, 90.28%, and 91.12% based on the three aspects of delay consideration. Moreover, the study explores interaction information, including distal force of tools-tissue and contact force of hand-control-ring, to understand how operators with different technical skills adjust their control strategy to prevent damage to the vascular vessel caused by excessive force while ensuring enough tension to navigate complex paths. The findings highlight the potential of variable delay to enhance cooperative control strategies in robotic catheterization systems, providing a basis for optimizing surgeon-robot collaboration in cardiovascular interventions.
{"title":"Analyzing Surgeon–Robot Cooperative Performance in Robot-Assisted Intravascular Catheterization","authors":"Wenjing Du;Guanlin Yi;Olatunji Mumini Omisore;Wenke Duan;Toluwanimi Oluwadra Akinyemi;Xingyu Chen;Jiang Liu;Boon-Giin Lee;Lei Wang","doi":"10.1109/THMS.2024.3452975","DOIUrl":"https://doi.org/10.1109/THMS.2024.3452975","url":null,"abstract":"Robot-assisted catheterization offers a promising technique for cardiovascular interventions, addressing the limitations of manual interventional surgery, where precise tool manipulation is critical. In remote-control robotic systems, the lack of force feedback and imprecise navigation challenge cooperation between the surgeon and robot. This study proposes a manipulation-based evaluation framework to assess the cooperative performance between different operators and robot using kinesthetic, kinematic, and haptic data from multi-sensor technologies. The proposed evaluation framework achieves a recognition accuracy of 99.99% in assessing the cooperation between operator and robot. Additionally, the study investigates the impact of delay factors, considering no delay, constant delay, and variable delay, on cooperation characteristics. The findings suggest that variable delay contributes to improved cooperation performance between operator and robot in a primary-secondary isomorphic robotic system, compared to a constant delay factor. Furthermore, operators with experience in manual percutaneous coronary interventions exhibit significantly better cooperative manipulation with the robot system than those without such experience, with respective synergy ratios of 89.66%, 90.28%, and 91.12% based on the three aspects of delay consideration. Moreover, the study explores interaction information, including distal force of tools-tissue and contact force of hand-control-ring, to understand how operators with different technical skills adjust their control strategy to prevent damage to the vascular vessel caused by excessive force while ensuring enough tension to navigate complex paths. The findings highlight the potential of variable delay to enhance cooperative control strategies in robotic catheterization systems, providing a basis for optimizing surgeon-robot collaboration in cardiovascular interventions.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 6","pages":"698-710"},"PeriodicalIF":3.5,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10705684","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142691687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
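The three delay regimes the study compares (none, constant, variable) can be illustrated with a toy helper that replays a teleoperation command stream under a per-step transmission delay; the function and its parameters are our own illustration, not part of the paper's evaluation framework:

```python
import numpy as np

def apply_delay(commands, delays):
    """Return the command the robot executes at each step, given
    per-step transmission delays (in steps). Passing a scalar models
    a constant delay; passing an array models a variable delay; zero
    models the no-delay case."""
    commands = np.asarray(commands, dtype=float)
    delays = np.broadcast_to(np.asarray(delays), (len(commands),))
    executed = np.empty_like(commands)
    for i, d in enumerate(delays):
        j = max(0, i - int(d))      # the command was issued d steps earlier
        executed[i] = commands[j]
    return executed
```

Comparing the executed stream against the commanded one under each regime gives a simple stand-in for the cooperation characteristics the paper measures with kinesthetic, kinematic, and haptic data.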
A Modified Dynamic Movement Primitive Algorithm for Adaptive Gait Control of a Lower Limb Exoskeleton
IF 3.5 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-27 DOI: 10.1109/THMS.2024.3458905
Lingzhou Yu;Shaoping Bai
A major challenge in the lower limb exoskeleton for walking assistance is the adaptive gait control. In this article, a modified dynamic movement primitive (DMP) control, termed MDMP, is proposed to achieve gait adjustment with different assistance levels. This is achieved by inclusion of interaction forces in the formulation of the DMP, which enables learning from physical human–robot interaction. A threshold force is introduced accounting for different levels of walking assistance from the exoskeleton. The MDMP is, thus, capable of generating adjustable gait and reshaping trajectories with data from the interaction force sensors. The experiments on five subjects show that the average differences between the human body and the exoskeleton are 4.13° and 1.92° on the hip and knee, respectively, with average interaction forces of 42.54 N and 26.36 N exerted on the subjects' thigh and shank. The results demonstrated that the MDMP method can effectively provide adjustable gait for walking assistance.
{"title":"A Modified Dynamic Movement Primitive Algorithm for Adaptive Gait Control of a Lower Limb Exoskeleton","authors":"Lingzhou Yu;Shaoping Bai","doi":"10.1109/THMS.2024.3458905","DOIUrl":"https://doi.org/10.1109/THMS.2024.3458905","url":null,"abstract":"A major challenge in the lower limb exoskeleton for walking assistance is the adaptive gait control. In this article, a modified dynamic movement primitive (DMP) (MDMP) control is proposed to achieve gait adjustment with different assistance levels. This is achieved by inclusion of interaction forces in the formulation of DMP, which enables learning from physical human–robot interaction. A threshold force is introduced accounting for different levels of walking assistance from the exoskeleton. The MDMP is, thus, capable of generating adjustable gait and reshaping trajectories with data from the interaction force sensors. The experiments on five subjects show that the average differences between the human body and the exoskeleton are 4.13° and 1.92° on the hip and knee, respectively, with average interaction forces of 42.54 N and 26.36 N exerted on the subjects' thigh and shank. The results demonstrated that the MDMP method can effectively provide adjustable gait for walking assistance.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 6","pages":"778-787"},"PeriodicalIF":3.5,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142691689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
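The standard DMP transformation system, augmented with a threshold-gated interaction-force coupling term, can be sketched in one dimension. This is our reading of the MDMP idea described above; the gains, basis-function placement, and deadband form are illustrative assumptions, not the authors' published formulation:

```python
import numpy as np

def dmp_rollout(y0, goal, forcing_w, f_int, threshold=5.0, c_gain=0.02,
                tau=1.0, dt=0.005, alpha=25.0, beta=6.25, alpha_x=3.0):
    """Discrete 1-D rollout of a dynamic movement primitive whose
    transformation system is augmented with an interaction-force term.

    Forces below `threshold` are ignored (the exoskeleton leads the
    motion); larger forces reshape the trajectory (the wearer leads).
    All gains here are illustrative assumptions.
    """
    n_basis = len(forcing_w)
    centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
    widths = n_basis ** 1.5 / centers
    y, dy, x = float(y0), 0.0, 1.0
    traj = []
    for i in range(int(1.0 / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        # learned forcing term, scaled by phase and movement amplitude
        f_learn = (psi @ forcing_w) / (psi.sum() + 1e-10) * x * (goal - y0)
        f = f_int[i] if i < len(f_int) else 0.0
        f_couple = c_gain * (abs(f) > threshold) * f   # deadband on small forces
        ddy = (alpha * (beta * (goal - y) - dy) + f_learn + f_couple) / tau
        dy += ddy * dt
        y += dy * dt
        x += (-alpha_x * x / tau) * dt                  # canonical system decay
        traj.append(y)
    return np.array(traj)
```

With zero forcing weights and no interaction force the rollout converges to the goal like a critically damped spring; a sustained force above the threshold shifts the trajectory, which is the reshaping behavior the abstract attributes to the interaction force sensors.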
Present a World of Opportunity
IF 3.5 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-19 DOI: 10.1109/THMS.2024.3458771
{"title":"Present a World of Opportunity","authors":"","doi":"10.1109/THMS.2024.3458771","DOIUrl":"https://doi.org/10.1109/THMS.2024.3458771","url":null,"abstract":"","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":"54 5","pages":"631-631"},"PeriodicalIF":3.5,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10684407","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142246401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0