With the advancement of robot technologies, robot-assisted tele-homecare systems are entering human living environments to provide homecare services. In this context, physical human-robot contact becomes inevitable, and ensuring safety during human-robot interaction in home environments is a critical challenge. Conventional methods rely mainly on creating a collision-free environment, which is insufficient when contact is inevitable or even desirable. This article proposes a proactive safety architecture that switches smoothly between collision avoidance and contact reaction. To enable proactive sensing for homecare robots, a proximity sensor is customized with both approach and contact awareness. Based on the proximity sensing data, a proactive safety architecture is proposed to ensure continuous task execution while avoiding potential collisions. For inevitable contact, a pretouch and contact reaction strategy is designed to enable a seamless transition from proximity-sensing-based collision avoidance to contact reaction. Comparative experiments on a telerobotic system prototype validate the proactive safety architecture. Compared with three state-of-the-art approaches, the proposed strategy reduces contact force by at least 35.21% along the primary collision direction while maximizing motion-tracking performance. A user study was conducted to investigate the user experience. Feedback from 10 participants highlights positive evaluations of the system's usability, indicating the feasibility of the proposed strategy for enhancing safety during human-robot interaction in tele-homecare.
Ruohan Wang, Ying Yang, Zhengjie Zhu, Honghao Lyu, Chen Li, Xiaoyan Huang, Xiao Yang, Lipeng Chen, Dashun Zhang, Haiteng Wu, and Geng Yang, "A Proactive Safety Architecture Based on Proximity Sensing for Enhanced Human-Robot Interaction in Tele-Homecare," IEEE Transactions on Human-Machine Systems, vol. 56, no. 1, pp. 135-146, published 2025-12-08, doi: 10.1109/THMS.2025.3627542.
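The switching behavior this abstract describes — full-speed tracking far from obstacles, slowing as proximity readings shrink, and force-bounded reaction once contact occurs — can be sketched as a velocity-scaling rule. This is an illustrative reconstruction, not the paper's actual controller; the thresholds `d_slow`, `d_stop`, and `f_max` are assumed values.

```python
import math

def safety_velocity_scale(distance_m, contact_force_n,
                          d_slow=0.30, d_stop=0.05, f_max=10.0):
    """Blend collision avoidance and contact reaction from proximity data.

    Hypothetical parameters: d_slow/d_stop are illustrative proximity
    bounds and f_max an illustrative force limit. Returns a velocity
    scaling factor in [0, 1] applied to the commanded motion.
    """
    if contact_force_n > 0.0:
        # Contact reaction: scale down linearly with measured force.
        return max(0.0, 1.0 - contact_force_n / f_max)
    if distance_m >= d_slow:
        return 1.0  # free space: full motion-tracking speed
    if distance_m <= d_stop:
        return 0.0  # pretouch zone: stop before contact occurs
    # Smooth (cosine) transition between slow-down and stop distances.
    t = (distance_m - d_stop) / (d_slow - d_stop)
    return 0.5 * (1.0 - math.cos(math.pi * t))
```

The cosine blend is one common choice for avoiding velocity discontinuities at the zone boundaries; any monotone smooth ramp would serve the same purpose.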
This study proposes a novel Transformer-based framework for identifying terrain transition states and recognizing steady-state terrains using data from inertial measurement units. Compared to traditional time series classification methods for transition states, our approach reframes the problem as a time series fitting and terrain change-point detection task, capturing the dynamic nature of human locomotion across varying terrains. Outdoor experiments demonstrate the model’s superior performance in both steady-state and transition detection, with enhanced interpretability. Specifically, steady-state identification achieves accuracies of 99.63% on normal terrain and 98.06% on complex terrain. Compared to traditional convolutional neural network-based approaches, our method improves terrain classification accuracy by 12.30%–37.67% under normal conditions and 12.34%–39.90% under complex conditions. Moreover, the normalized root mean square error for transition curve fitting is significantly reduced to 0.016 and 0.032 for normal and complex terrains, outperforming other models.
Hui Chen, Zhuo Wang, Fangliang Yang, Xiangyang Wang, Chunjie Chen, and Xinyu Wu, "Enhancing Terrain Recognition With a Transformer-Based Model: Integrating IMUs for Motion Intent Detection," IEEE Transactions on Human-Machine Systems, vol. 56, no. 1, pp. 22-31, published 2025-12-08, doi: 10.1109/THMS.2025.3631855.
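The normalized root mean square error reported for transition-curve fitting can be computed as below. Normalizing by the range of the reference signal is an assumption; the abstract does not state which normalizer the authors use.

```python
from math import sqrt

def nrmse(y_true, y_pred):
    """Normalized root mean square error of a fitted transition curve.

    Assumption: normalized by the range (max - min) of the reference
    signal; other conventions divide by its mean or standard deviation.
    """
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    rmse = sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
    return rmse / (max(y_true) - min(y_true))
```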
Controlling foot placement is a key challenge in the use of assistive lower limb exoskeletons designed for those with motor impairments. Due to the mechanical flexibility of exoskeletons, users can intentionally manipulate the resulting step length without any alteration of the exoskeleton’s reference trajectory. This is generally achieved by manually applying a wrench (forces and torques) to the exoskeleton and the ground through crutches. This work investigated this mechanism as a deliberate means of controlling foot placement. Ten nondisabled participants were asked to pilot a user-balanced exoskeleton to target step lengths of 0.1 to 0.4 m, with the exoskeleton trajectory unchanged throughout the experiment. Performance was evaluated by the mean absolute error (MAE) and standard deviation (SD) of the resulting step lengths. To explore the degree to which these results might apply to users with impairments, participants were asked to minimize leg muscle activations during the experiment. Simultaneously, surface electromyography (sEMG) data were collected and normalized between resting (0.0) and unassisted walking (1.0). Activations ranged between 0.014 and 2.853 and were used to categorize participants into high muscle activation (HMA) and low muscle activation (LMA) groups. The LMA group (median MAE 0.026 m, SD 0.028 m) performed differently from the HMA group (median MAE 0.021 m, SD 0.021 m); however, most participants achieved acceptable performance across all target step lengths relative to a 0.05 m guideline. The results confirm that step length can be controlled through exoskeleton users’ manual efforts.
Xiruo Cheng, Justin Fong, Liuhua Peng, Ying Tan, and Denny Oetomo, "Quantifying Manual Adjustment of Foot Placement Under a Fixed Robotic Trajectory in Lower Limb Exoskeletons," IEEE Transactions on Human-Machine Systems, vol. 56, no. 1, pp. 2-11, published 2025-12-08, doi: 10.1109/THMS.2025.3634377.
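The sEMG normalization this abstract describes (rest mapped to 0.0, unassisted walking to 1.0, so the reported 2.853 means activation above the walking reference) and the MAE/SD performance metrics can be sketched as follows. Function names and the sample data are illustrative, not taken from the study.

```python
from statistics import mean, stdev

def normalize_emg(raw, rest_level, walking_level):
    """Map a raw sEMG amplitude so rest -> 0.0, unassisted walking -> 1.0.

    rest_level/walking_level are per-muscle calibration amplitudes
    (hypothetical names). Values above 1.0 indicate activation that
    exceeds the unassisted-walking reference.
    """
    return (raw - rest_level) / (walking_level - rest_level)

def step_length_stats(measured_m, target_m):
    """Mean absolute error against a target step length, plus the sample
    standard deviation of the measured step lengths."""
    mae = mean(abs(m - target_m) for m in measured_m)
    return mae, stdev(measured_m)
```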
Pub Date: 2025-12-05, DOI: 10.1109/THMS.2025.3640886
"2025 Index IEEE Transactions on Human-Machine Systems," IEEE Transactions on Human-Machine Systems, vol. 55, no. 6, pp. 1065-1092, doi: 10.1109/THMS.2025.3640886. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11281498
Pub Date: 2025-12-03, DOI: 10.1109/THMS.2025.3628064
Muhammad Hamza Zafar; Syed Kumayl Raza Moosavi; Filippo Sanfilippo
The evolution of industrial robotics has advanced from isolated, caged systems through basic human–robot interaction (HRI) to sophisticated human–robot collaboration (HRC). However, conventional vision systems based on red, green, blue (RGB) cameras remain a significant limiting factor in realizing the full potential of collaborative automation. This comprehensive review examines the transformative role of event cameras in advancing HRC capabilities and addressing current limitations in industrial settings. Event cameras, with their microsecond-level temporal resolution and robust performance under challenging lighting conditions, offer substantial advantages over traditional RGB cameras, which are constrained by fixed frame rates and ambient lighting dependencies. We present a systematic framework for leveraging event cameras to enhance human state understanding in collaborative robotics, encompassing real-time detection of poses, gestures, facial expressions, and emotional states. This framework addresses fundamental challenges in workplace safety and collaborative efficiency while enabling more sophisticated and responsive HRC systems. Our review synthesizes recent research developments in event camera applications specific to HRC, providing a detailed comparative analysis of their advantages over conventional vision systems. We identify emerging opportunities and potential research directions for advancing event-based vision in industrial robotics. In addition, we examine integration challenges and propose strategies for implementing event camera technology in existing industrial infrastructure. This work contributes valuable insights into the future trajectory of adaptive and intuitive HRC systems, offering a roadmap for researchers and practitioners in the field of industrial automation.
"Applications of Neuromorphic/Event Camera in Robotics With Human in Loop: A Systematic Review, Datasets, and Challenges," IEEE Transactions on Human-Machine Systems, vol. 56, no. 1, pp. 32-47, doi: 10.1109/THMS.2025.3628064.
Although a growing number of exoskeletons have been developed for occupational applications, wrist exoskeletons remain relatively rare. In the meat processing industry, however, elbow and hand-wrist musculoskeletal disorders are particularly common. The aim of this article was to assess the potential effectiveness and risks of a 670 g wrist exoskeleton prototype designed to assist operators during meat-cutting tasks. Six professional butchers performed three standardized tasks reproducing meat-cutting gestures in foam, under three randomized experimental conditions: 1) without the exoskeleton, 2) wearing the exoskeleton passively, with brakes off, and 3) using it with brakes activated, locked in a static position. Cutting forces were recorded using an instrumented table, joint angles using an optoelectronic motion capture system, and muscle activity using surface electromyography (EMG), and user experience was assessed using questionnaires. Cutting inaccuracy was defined as the area between the prescribed task and the actual cut on the foam surface. Joint torques were estimated by inverse dynamics with and without the exoskeleton’s mass taken into account, to isolate its effect. Linear mixed-effects statistical models were fitted. With the exoskeleton active during tasks, EMG activity decreased by up to 18.7% (p < 0.01 to p < 0.001) in the wrist flexors and increased by up to 61.7% (not significant to p < 0.05) in the upper trapezius. Shoulder elevation joint torques increased by up to 39.7% (p < 0.001), mainly due to the exoskeleton mass. The proposed multicriteria exoskeleton evaluation provides guidance for the subsequent prototyping stages. Wrist exoskeletons that are too heavy could increase the risk of shoulder tendinitis in such tasks.
Aurélie Tomezzoli, Mathieu Gréau, and Charles Pontonnier, "Operational and Biomechanical Evaluation of a Wrist Exoskeleton Prototype for Assisting Meat-Cutting Tasks," IEEE Transactions on Human-Machine Systems, vol. 56, no. 1, pp. 58-67, published 2025-12-03, doi: 10.1109/THMS.2025.3632876.
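A rough statics check illustrates why a 670 g device worn at the wrist raises shoulder elevation torque: the added gravitational torque grows with the horizontal lever arm of the arm posture. This is a simplification of the inverse-dynamics analysis in the study above; the 0.6 m lever arm used in the example is an assumed value, not a figure from the paper.

```python
import math

def added_shoulder_torque(mass_kg, arm_length_m, elevation_deg, g=9.81):
    """Static gravitational torque (N*m) that a wrist-worn mass adds
    at the shoulder.

    Simplified statics only: torque = m * g * horizontal lever arm,
    where the lever arm is arm_length * sin(elevation). The full
    analysis uses inverse dynamics over the recorded motion.
    """
    return mass_kg * g * arm_length_m * math.sin(math.radians(elevation_deg))
```

With an assumed 0.6 m arm held at 90° elevation, the 0.670 kg prototype adds roughly 3.9 N·m of static shoulder torque, consistent in direction with the reported torque increase.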
Pub Date: 2025-12-02, DOI: 10.1109/THMS.2025.3630230
"IEEE Transactions on Human-Machine Systems Information for Authors," IEEE Transactions on Human-Machine Systems, vol. 55, no. 6, p. C4, doi: 10.1109/THMS.2025.3630230. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11272142
Pub Date: 2025-12-02, DOI: 10.1109/THMS.2025.3630213
"Call for Papers: IEEE Transactions on Human-Machine Systems," IEEE Transactions on Human-Machine Systems, vol. 55, no. 6, p. 1064, doi: 10.1109/THMS.2025.3630213. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11272140
Pub Date: 2025-12-02, DOI: 10.1109/THMS.2025.3630228
"IEEE Systems, Man, and Cybernetics Society Information," IEEE Transactions on Human-Machine Systems, vol. 55, no. 6, p. C3, doi: 10.1109/THMS.2025.3630228. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11272143
Pub Date: 2025-12-02, DOI: 10.1109/THMS.2025.3630226
"IEEE Systems, Man, and Cybernetics Society Information," IEEE Transactions on Human-Machine Systems, vol. 55, no. 6, p. C2, doi: 10.1109/THMS.2025.3630226. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11272141