Ergodic Imitation With Corrections: Learning From Implicit Information in Human Feedback
Junru Pang; Quentin Anderson-Watson; Kathleen Fitzsimons
Pub Date: 2025-09-12; DOI: 10.1109/THMS.2025.3603434
IEEE Transactions on Human-Machine Systems, vol. 55, no. 6, pp. 920–929
As the prevalence of collaborative robots increases, physical interactions between humans and robots are inevitable, presenting an opportunity for robots to not only maintain safe working parameters with humans but also learn from these interactions. To develop adaptive robots, we first analyze human responses to different errors through a study in which users are asked to correct any errors the robot makes across various tasks. With this characterization of corrections, we can treat physical human–robot interactions as informative rather than ignoring them or letting the robot return to its originally planned behavior when the interaction ends. We incorporate physical corrections into existing learning from demonstration (LfD) frameworks, which allow robots to learn new skills by observing human demonstrations, and we demonstrate that learning from physical interactions can improve task-specific performance metrics. The results reveal that including information about the behavior being corrected in the update improves task performance significantly compared to adding corrected trajectories alone. In a user study with an optimal control-based LfD framework, we also find that users provide less feedback after each interaction-driven update to the robot's behavior. Utilizing corrections could enable advanced LfD techniques to reach commercial applications for collaborative robots by letting end-users customize a robot's behavior through intuitive interactions rather than by modifying it in software.

SHA-SCP: A UI Element Spatial Hierarchy Aware Smartphone User Click Behavior Prediction Method
Ling Chen; Qian Chen; Yiyi Peng; Kai Qian; Hongyu Shi; Xiaofan Zhang
Pub Date: 2025-09-05; DOI: 10.1109/THMS.2025.3601578
IEEE Transactions on Human-Machine Systems, vol. 55, no. 6, pp. 1033–1042
Predicting user click behavior and making relevant recommendations based on the user's historical clicks are critical to simplifying operations and improving user experience. Modeling user interface (UI) elements is essential to click behavior prediction, but the complexity and variety of UIs make it difficult to adequately capture information at different scales. The lack of relevant datasets also hinders such studies. In response to these challenges, we construct a fine-grained smartphone usage behavior dataset containing 3,664,325 clicks from 100 users and propose a UI element Spatial Hierarchy Aware Smartphone user Click behavior Prediction method (SHA-SCP). SHA-SCP builds element groups by clustering elements according to their spatial positions and uses attention mechanisms to perceive the UI at both the element level and the element group level, fully capturing information at different scales. Experiments on the dataset show that our method outperforms the best baseline by an average of 18.35%, 13.86%, and 11.97% in Top-1, Top-3, and Top-5 accuracy, respectively.

Balancing Exploration and Cybersickness: Investigating Curiosity-Driven Behavior in Virtual Environments
Tangyao Li; Yuyang Wang
Pub Date: 2025-09-04; DOI: 10.1109/THMS.2025.3602125
IEEE Transactions on Human-Machine Systems, vol. 55, no. 6, pp. 1043–1052
Virtual reality offers the opportunity for immersive exploration, yet it is often undermined by cybersickness, and how individuals strike a balance between exploration and discomfort remains unclear. Existing methods, such as reinforcement learning (RL), often fail to fully capture the complexity of navigation and decision-making patterns. This study investigates how curiosity influences users' navigation behavior, particularly how users trade off exploration against discomfort. We propose curiosity as a key factor driving irrational decision-making and apply the free energy principle to quantitatively model the relationship between curiosity and user behavior. Our findings indicate that users generally adopt conservative navigation strategies and that curiosity levels tend to rise when the virtual environment changes. These results illustrate the dynamic interplay between exploration and discomfort, offer a new perspective on how curiosity drives behavior in immersive environments, and provide a foundation for designing adaptive VR environments. Future research will refine this model by incorporating additional psychological and environmental factors to improve prediction accuracy.

Also in vol. 55, no. 3 (published 2025-06-26):
TechRxiv: Share Your Preprint Research with the World! DOI: 10.1109/THMS.2025.3583351; p. 478.
IEEE Transactions on Human-Machine Systems Information for Authors. DOI: 10.1109/THMS.2025.3581253; p. C4.
IEEE Systems, Man, and Cybernetics Society Information. DOI: 10.1109/THMS.2025.3581251; p. C2.
IEEE Systems, Man, and Cybernetics Society Information. DOI: 10.1109/THMS.2025.3581249; p. C3.
Present a World of Opportunity. DOI: 10.1109/THMS.2025.3583398; p. 477.
Call for Papers: IEEE Transactions on Human-Machine Systems. DOI: 10.1109/THMS.2025.3581300; p. 476.

Watch Out for Explanations: Information Type and Error Type Affect Trust and Situational Awareness in Automated Vehicles
Yaohan Ding; Lesong Jia; Na Du
Pub Date: 2025-04-30; DOI: 10.1109/THMS.2025.3558437
IEEE Transactions on Human-Machine Systems, vol. 55, no. 3, pp. 450–459
Trust and situational awareness (SA) are critical for the acceptance and safety of automated vehicles (AVs). While AV explanations with different information types have been studied to enhance drivers' trust and SA, their effectiveness remains unclear when AVs make errors that do not trigger takeover requests. This study investigated the effects of information type, error type, and their interaction on drivers' trust in AVs, SA, and their relationships. We recruited 300 participants in an online video study with a 3 (information type: why, how, why + how) × 3 (error type: false alarm, miss, correct [no error]) mixed design. How information describes the vehicle's action, while why information refers to the reason for the vehicle's action. Linear mixed models showed that false alarms and misses were associated with lower SA compared with correct scenarios, but possibly due to different reasons. Compared with correct scenarios, both false alarms and misses were associated with lower trust, with misses even lower than false alarms, possibly due to the varying severity of potential consequences. Compared with why and why + how information, how information was generally associated with lower SA and a higher potential of overtrust in false alarms. Trust and SA had a negative linear relationship in misses and false alarms, while no correlations were found in correct scenarios. To mitigate potential overtrust and misinterpretation of situations when AVs make errors, it is crucial to maintain higher SA. We recommend including why information in AV explanations and deploying AV decision systems that are less miss-prone.