Pub Date: 2025-08-01 | Epub Date: 2025-02-03 | DOI: 10.1177/00187208251317470
Yossef Saad, Joachim Meyer
Objective: The impact of the context in which automation is introduced into a decision-making system was analyzed theoretically and empirically.
Background: Previous work dealt with causality and responsibility in human-automation systems without considering the effects of how the automation's role is presented to users.
Methods: An existing analytical model for predicting the human contribution to outcomes was adapted to accommodate the context of automation. An aided signal detection experiment with 400 participants was conducted to assess the correspondence of observed behavior to model predictions.
Results: The context in which the automation's role is presented affected users' tendency to follow its advice. When automation made decisions and users only supervised it, users tended to contribute less to the outcome than in systems where the automation had an advisory capacity. The adapted theoretical model for human contribution was generally aligned with participants' behavior.
Conclusion: The specific way automation is integrated into a system affects its use and the perceptions of user involvement, possibly altering overall system performance.
Application: The research can help design systems with automation-assisted decision-making and provide information on regulatory requirements and operational processes for such systems.
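The aided signal detection paradigm underlying the experiment can be illustrated with the standard sensitivity and criterion computations. This is a generic sketch of those quantities, not the authors' adapted responsibility model; the 85%/20% rates in the example are invented for illustration.

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate, floor=1e-3):
    """Sensitivity d' and response criterion c from hit and false-alarm rates.

    Rates are clipped away from 0 and 1 so the z-transform stays finite.
    """
    z = NormalDist().inv_cdf
    h = min(max(hit_rate, floor), 1 - floor)
    f = min(max(fa_rate, floor), 1 - floor)
    d = z(h) - z(f)            # separation of signal and noise distributions
    c = -0.5 * (z(h) + z(f))   # negative c = liberal, positive c = conservative
    return d, c

# Example: an operator with 85% hits and 20% false alarms
d, c = dprime_criterion(0.85, 0.20)
```

A participant who simply follows the automation's advice inherits the automation's d' and c; deviations between the participant's and the aid's operating points are one way to quantify the human contribution.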
Title: Context-Based Human Influence and Causal Responsibility for Assisted Decision-Making.
Human Factors, pp. 795-808. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12231881/pdf/
Pub Date: 2025-08-01 | Epub Date: 2025-02-03 | DOI: 10.1177/00187208251318465
Isabella Gegoff, Monica Tatasciore, Vanessa K Bowden, Shayne Loft
Objective: To better understand automation transparency, we experimentally isolated the effects of additional information and decision recommendations on decision accuracy, decision time, perceived workload, trust, and system usability.
Background: The benefits of automation transparency are well documented. Previously, however, transparency (in the form of additional information) has been coupled with the provision of decision recommendations, potentially decreasing decision-maker agency and promoting automation bias. It may instead be more beneficial to provide additional information without decision recommendations to inform operators' unaided decision making.
Methods: Participants selected the optimal uninhabited vehicle (UV) to complete missions. Additional display information and decision recommendations were provided but were not always accurate. The level of additional information (no, medium, high) was manipulated between subjects, and the provision of recommendations (absent, present) within subjects.
Results: When decision recommendations were provided, participants made more accurate and faster decisions and rated the UV system as more usable. However, recommendation provision reduced participants' ability to discriminate UV system information accuracy. Increased additional information led to faster decisions, lower perceived workload, and higher trust and usability ratings but only significantly improved decision (UV selection) accuracy when recommendations were provided.
Conclusion: Individuals scrutinized additional information more when not provided decision recommendations, potentially indicating a higher expected value of processing that information. However, additional information only improved performance when accompanied by recommendations to support decisions.
Application: It is critical to understand the potential differential impact of, and interaction between, additional display information and decision recommendations to design effective transparent automated systems in the modern workplace.
Title: Deciphering Automation Transparency: Do the Benefits of Transparency Differ Based on Whether Decision Recommendations Are Provided?
Human Factors, pp. 776-794. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12231875/pdf/
Pub Date: 2025-08-01 | Epub Date: 2024-12-25 | DOI: 10.1177/00187208241311808
Shuo Wang, Yu Liu, Xuan Wang, Zechen Liu, Xuqun You, Yuan Li
Objective: This study investigated the effect of reliability on the function allocation (FA) boundary by examining the interaction effect of degree of automation (DOA) and reliability on routine performance, failure performance, and attention allocation.
Background: According to the lumberjack effect, an increase in DOA will typically improve routine performance, while failure performance may remain undeteriorated until a specific, high DOA threshold is reached. This threshold can be regarded as the FA boundary. Considering that both DOA and reliability can influence failure performance through attention allocation, it is crucial to investigate how reliability affects the FA boundary.
Method: Participants performed three MATB tasks, one of which, the system monitoring task, was supported by four types of automation: information acquisition (IAc), information analysis (IAn), action selection (AS), and action implementation (AI). From IAc to AI, the DOA incrementally increased. Additionally, automation reliability was set to three levels: 87.50%, 68.75%, and 56.25%.
Results: For routine performance, participants assisted by AS reacted more rapidly to gauge malfunctions than those supported by IAc or IAn. For failure performance, participants aided by AI corrected gauge malfunctions less frequently than other participants. Correspondingly, participants supported by AI exhibited fewer fixation counts on the system monitoring task than did others.
Conclusion: It appears that the FA boundary lies between AS and AI. However, there is insufficient evidence to support the effect of reliability on the FA boundary.
Application: These findings can provide useful insights for improving the design of automated systems in complex working environments.
Title: Where Is the Function Allocation Boundary? The Effect of Degree of Automation on Attention Allocation and Human Performance Under Different Reliabilities.
Human Factors, pp. 757-775.
Pub Date: 2025-08-01 | Epub Date: 2025-02-19 | DOI: 10.1177/00187208251320179
Xin Xin, Xinyuan Chen, Wei Liu
Objective: This study aims to investigate whether auditory cues that predict upcoming motion, and the lead time at which they are given, can mitigate visually induced motion sickness (VIMS).
Background: Vehicle information systems predominantly use visual displays, which can introduce conflicts between visual and vestibular motion cues, potentially resulting in VIMS. In these scenarios, auditory cues may provide a viable solution, especially when visual cues are diminished by fatigue or distraction.
Methods: Across two distinct studies, a total of 180 participants were involved in investigating the impact of auditory cues on VIMS. In Study 1, participants were categorized based on the type of auditory cue they received (speech, nonspeech, or no cue). Study 2 examined the effects of three lead times (1 s, 2 s, and 3 s) between the onset of the auditory cue and the occurrence of car braking or turning in nonspeech conditions. VIMS severity was assessed with the Simulator Sickness Questionnaire (SSQ) before and after the simulation phase.
Results: Nonspeech cues significantly reduced VIMS compared to speech cues or no cue. VIMS was notably lower with a 2 s lead time than with 1 s or 3 s lead times, and females reported higher levels of VIMS than males.
Conclusion: Results across the two studies support using nonspeech cues with a 2 s lead time to reduce VIMS. Further work should investigate the effects of cue duration, tone, and voice frequency, as well as lead-time settings for scenarios such as driving fatigue, hillside roads, and traffic congestion.
Application: These findings can inform the design of auditory cues to reduce VIMS in autonomous driving, simulators, VR games, and films.
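The SSQ scoring step can be sketched with the commonly cited Kennedy et al. (1993) weights. This is a generic illustration under the assumption that the inputs are the raw sums of the 0-3 item ratings loading on each subscale; the item-to-subscale mapping itself is omitted.

```python
def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    """Weighted SSQ subscale and total scores.

    Uses the conventional Kennedy et al. (1993) weights; inputs are
    assumed to be raw sums of item ratings (each 0-3) per subscale.
    """
    return {
        "N": nausea_raw * 9.54,           # Nausea subscale
        "O": oculomotor_raw * 7.58,       # Oculomotor subscale
        "D": disorientation_raw * 13.92,  # Disorientation subscale
        "TS": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }
```

Pre/post-simulation comparisons like those in the study would be made on these weighted scores, typically on the total score (TS).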
Title: Effects of Auditory Anticipatory Cues and Lead Time on Visually Induced Motion Sickness.
Human Factors, pp. 809-822.
Pub Date: 2025-07-01 | Epub Date: 2024-12-03 | DOI: 10.1177/00187208241305591
Hélio Silva, Pedro G F Ramos, Sabrina C Teno, Pedro B Júdice
Objective: To gather the existing evidence on the impact of sit-stand desk-based interventions on working-time and full-day sedentary behavior and compare their impact across different intervention lengths.
Background: Reducing sedentary behavior is vital for improving office workers' health. Sit-stand desks promote alternation between sitting and standing, but understanding their effects outside the workplace is essential for success.
Methods: Studies published between January 2008 and January 2024 were searched through electronic databases (PubMed, Google Scholar, and Cochrane Library). Study quality was assessed using the Quality Assessment Tool for Quantitative Studies of the Effective Public Health Practice Project.
Results: Twelve included studies showed that the intervention group experienced average reductions in full-day sedentary behavior of 68.7 min/day at 3 months, 77.7 min/day at 6 months, and 62.1 min/day at 12 months compared to the control group. For working-hours sedentary behavior, reductions were observed in the intervention group at 9 weeks (73.0 min/day), 3 months (88.0 min/day), 6 months (80.8 min/day), and 12 months (48.0 min/day) relative to the control group.
Conclusions: Sit-stand desk interventions can be effective in helping office workers reduce sedentary behavior in the short, medium, and long term, both at work and throughout the full day.
Application: Active workstation interventions, including sit-stand desks, educational sessions, and alert software, aim to reduce sedentary behavior among office workers. While sit-stand desks show promise in decreasing sitting time during working hours, their long-term effectiveness and impact beyond the workplace remain uncertain. This review evaluates their effectiveness across different durations, addressing both workplace and full-day impact.
Title: The Impact of Sit-Stand Desks on Full-Day and Work-Based Sedentary Behavior of Office Workers: A Systematic Review.
Human Factors, pp. 695-713.
Pub Date: 2025-07-01 | Epub Date: 2024-12-26 | DOI: 10.1177/00187208241309748
SeHee Jung, Bingyi Su, Lu Lu, Liwei Qing, Xu Xu
Objective: To identify lifting actions and count the number of lifts performed in videos, based on robust class prediction and a streamlined process, for reliable real-time monitoring of lifting tasks.
Background: Traditional methods for recognizing lifting actions often rely on deep learning classifiers applied to human motion data collected from wearable sensors. Despite their high performance, these methods can be difficult to implement on systems with limited hardware resources.
Method: The proposed method follows a five-stage process: (1) BlazePose, a real-time pose estimation model, detects key joints of the human body. (2) These joints are preprocessed by smoothing, centering, and scaling techniques. (3) Kinematic features are extracted from the preprocessed joints. (4) Video frames are classified as lifting or nonlifting using rank-altered kinematic feature pairs. (5) A lifting counting algorithm counts the number of lifts based on the class predictions.
Results: Nine rank-altered kinematic feature pairs were identified as key pairs. These pairs were used to construct an ensemble classifier, which achieved 0.89 or above on classification metrics including accuracy, precision, recall, and F1 score. The classifier showed an accuracy of 0.90 in lifting counting and a latency of 0.06 ms, at least 12.5 times faster than baseline classifiers.
Conclusion: This study demonstrates that computer vision-based kinematic features can be adopted to effectively and efficiently recognize lifting actions.
Application: The proposed method could be deployed on various platforms, including mobile devices and embedded systems, to monitor lifting tasks in real time for the proactive prevention of work-related low-back injuries.
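Stages (4)-(5) of the pipeline can be sketched as rank-pair voting over per-frame features followed by rising-edge counting. This is a toy sketch: the feature names and pair choices below are invented for illustration and are not the nine pairs identified in the study.

```python
def rank_pair_votes(frame, pairs):
    """Number of pairs (a, b) whose rank order (feature a > feature b) votes 'lifting'."""
    return sum(frame[a] > frame[b] for a, b in pairs)

def classify_frames(frames, pairs):
    """Label a frame as lifting when a majority of rank pairs vote for it."""
    return [rank_pair_votes(f, pairs) > len(pairs) / 2 for f in frames]

def count_lifts(labels):
    """Count nonlifting -> lifting transitions, one per detected lift."""
    return sum(cur and not prev for prev, cur in zip([False] + labels, labels))

# Hypothetical kinematic feature pairs (illustrative names only)
PAIRS = [("wrist_speed", "hip_speed"), ("trunk_flexion_vel", "knee_vel")]
```

Because the decision depends only on the ordering of feature values, not their magnitudes, this kind of classifier needs no floating-point model inference at run time, which is consistent with the low latency the study targets.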
Title: Video-Based Lifting Action Recognition Using Rank-Altered Kinematic Feature Pairs.
Human Factors, pp. 656-672.
Pub Date: 2025-07-01 | Epub Date: 2024-12-10 | DOI: 10.1177/00187208241306966
Yanlu Cao, Maosong Jiang, Zhuxi Yao, Shufeng Xia, Wenlong Liu
Objective: To explore and validate effective eye movement features related to motion sickness (MS) through closed-track experiments and to provide valuable insights for practical applications.
Background: With the development of autonomous vehicles (AVs), MS has attracted increasing attention. Eye movements have great potential for evaluating the severity of MS as an objective, quantitative indicator of vestibular function. Eye movement signals can be collected easily and noninvasively with a camera, without causing discomfort or disturbance to passengers, making the approach highly applicable.
Method: Eye movement data were collected from 72 participants susceptible to MS in closed-track driving environments. We extracted features including blink rate (BR), total number of fixations (TNF), total duration of fixations (TDF), mean duration of fixations (MDF), saccade amplitude (SA), saccade duration (SD), and number of nystagmus events (NN). Statistical methods and a multivariate long short-term memory fully convolutional network (MLSTM-FCN) were used to validate the effectiveness of the eye movement features.
Results: Statistical analysis showed significant differences in the extracted eye movement features across different levels of MS. The MLSTM-FCN model achieved an accuracy of 91.37% for MS detection and 88.51% for prediction in binary classification. For ternary classification, it achieved an accuracy of 80.54% for detection and 80.11% for prediction.
Conclusion: Evaluating MS through eye movements is effective. The eye-movement-based MLSTM-FCN model can efficiently detect and predict MS.
Application: This work can provide a possible indication and early warning of MS.
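Several of the listed features reduce to simple event statistics once blinks and fixations have been segmented from the gaze stream. A minimal sketch, assuming blink timestamps and fixation durations are already available (this is not the authors' extraction pipeline, which starts from raw camera signals):

```python
def eye_movement_features(blink_times, fixation_durations, window_s):
    """Compute BR, TNF, TDF, and MDF over an analysis window.

    blink_times:        timestamps (s) of blink events in the window
    fixation_durations: durations (s) of detected fixations in the window
    window_s:           window length in seconds
    """
    br = 60.0 * len(blink_times) / window_s          # blinks per minute
    tnf = len(fixation_durations)                    # total number of fixations
    tdf = sum(fixation_durations)                    # total fixation duration (s)
    mdf = tdf / tnf if tnf else 0.0                  # mean fixation duration (s)
    return {"BR": br, "TNF": tnf, "TDF": tdf, "MDF": mdf}
```

Windowed sequences of such feature vectors are the natural input for a sequence model like the MLSTM-FCN used in the study.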
Title: Exploring Eye Movement Features of Motion Sickness Using Closed-Track Driving Experiments.
Human Factors, pp. 714-730.
Objective: To investigate the biodynamics of human-exoskeleton interactions during patient handling tasks using a subject-specific modeling approach.
Background: Exoskeleton technology holds promise for mitigating musculoskeletal disorders caused by manual handling and, most alarmingly, by patient handling jobs. A deeper, more unified understanding of the biomechanical effects of exoskeleton use calls for advanced subject-specific models of complex, dynamic human-exoskeleton interactions.
Methods: Twelve sex-balanced healthy participants performed three simulated patient handling tasks along with a reference load-lifting task, with and without wearing the exoskeleton, while their full-body motion and ground reaction forces were measured. Subject-specific models were constructed using the motion and force data. Biodynamic response variables derived from the models were analyzed to examine the effects of the exoskeleton. Model validation used load-lifting trials with known hand forces.
Results: Use of the exoskeleton significantly reduced the peak lumbar flexion moment (by 19.7%-27.2%) but increased the peak lumbar flexion motion (by 26.4%-47.8%), with greater percent reduction in moment for the more symmetric handling tasks. It similarly affected the shoulder joint moments and motions, but only during the two more symmetric handling tasks, and significantly reduced the peak motions for the remaining body joints.
Conclusion: Subject-specific biodynamic models simulating exoskeleton-assisted patient handling were constructed and validated, demonstrating that the exoskeleton effectively lessened the peak loading on the lumbar and shoulder joints as prime movers while redistributing more motion to these joints and less to the remaining joints.
Application: The findings offer new insights into biodynamic responses during exoskeleton-assisted patient handling, benefiting the development of more effective, possibly task- and individual-customized, exoskeletons.
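The reported outcome measures rest on two elementary computations: a joint moment as the cross product of a moment arm and a force vector, and a percent reduction relative to the unassisted baseline. A rough sketch with illustrative values (not the subject-specific inverse-dynamics model used in the study):

```python
def cross3(r, f):
    """Moment (N*m) of force f (N) applied at moment arm r (m) from the
    joint centre: the 3-D cross product r x f."""
    return (r[1] * f[2] - r[2] * f[1],
            r[2] * f[0] - r[0] * f[2],
            r[0] * f[1] - r[1] * f[0])

def percent_reduction(baseline, assisted):
    """Percent reduction of a peak value relative to the unassisted baseline."""
    return 100.0 * (baseline - assisted) / baseline

# Illustrative only: a 0.1 m anterior moment arm and a 200 N vertical force
moment = cross3((0.1, 0.0, 0.0), (0.0, 200.0, 0.0))
```

A 19.7%-27.2% moment reduction, as reported for the lumbar joint, corresponds to `percent_reduction` values in that range between the no-exoskeleton and exoskeleton conditions.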
{"title":"Biodynamic Modeling and Analysis of Human-Exoskeleton Interactions in Simulated Patient Handling Tasks.","authors":"Yinong Chen, Wei Yin, Liying Zheng, Ranjana Mehta, Xudong Zhang","doi":"10.1177/00187208241311271","DOIUrl":"10.1177/00187208241311271","url":null,"abstract":"<p><p>ObjectiveTo investigate the biodynamics of human-exoskeleton interactions during patient handling tasks using a subject-specific modeling approach.BackgroundExoskeleton technology holds promise for mitigating musculoskeletal disorders caused by manual handling and most alarmingly by patient handling jobs. A deeper, more unified understanding of the biomechanical effects of exoskeleton use calls for advanced subject-specific models of complex, dynamic human-exoskeleton interactions.MethodsTwelve sex-balanced healthy participants performed three simulated patient handling tasks along with a reference load-lifting task, with and without wearing the exoskeleton, while their full-body motion and ground reaction forces were measured. Subject-specific models were constructed using motion and force data. Biodynamic response variables derived from the models were analyzed to examine the effects of the exoskeleton. 
Model validation used load-lifting trials with known hand forces.ResultsThe use of exoskeleton significantly reduced (19.7%-27.2%) the peak lumbar flexion moment but increased (26.4%-47.8%) the peak lumbar flexion motion, with greater moment percent reduction in more symmetric handling tasks; similarly affected the shoulder joint moments and motions but only during two more symmetric handling tasks; and significantly reduced the peak motions for the rest of the body joints.ConclusionSubject-specific biodynamic models simulating exoskeleton-assisted patient handling were constructed and validated, demonstrating that the exoskeleton effectively lessened the peak loading to the lumbar and shoulder joints as prime movers while redistributing more motions to these joints and less to the remaining joints.ApplicationThe findings offer new insights into biodynamic responses during exoskeleton-assisted patient handling, benefiting the development of more effective, possibly task- and individual-customized, exoskeletons.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"641-655"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127603/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142928181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-01 | Epub Date: 2024-11-28 | DOI: 10.1177/00187208241302475
Shiyu Deng, Chaitanya Kulkarni, Jinwoo Oh, Sarah Henrickson Parker, Nathan Lau
Objective: This study compares the sensitivity of scene-independent and scene-dependent eye metrics in assessing trainees' performance in simulated psychomotor tasks.
Background: Eye metrics have been studied extensively for skill assessment and training in psychomotor tasks, including aviation, driving, and surgery. These metrics can be categorized as scene-independent or scene-dependent, based on whether predefined areas of interest are considered. Direct comparisons between the two metric types are scarce, particularly regarding their ability to assess performance during early training.
Method: Thirteen medical students practiced the peg transfer task from the Fundamentals of Laparoscopic Surgery. Scene-independent and scene-dependent eye metrics, completion time, and tool motion metrics were derived from eye-tracking data and task videos. K-means clustering of nine eye metrics identified three groups of practice trials with similar gaze behaviors, corresponding to three performance levels verified by the completion time and tool motion metrics. A random forest model using the eye metrics estimated classification accuracy and determined the feature importance of each metric.
Results: Scene-dependent eye metrics showed a clearer linear trend with performance levels than scene-independent metrics. The random forest model achieved 88.59% accuracy, with the top four predictors of performance being scene-dependent metrics and the two least effective predictors being scene-independent metrics.
Conclusion: Scene-dependent eye metrics are overall more sensitive than scene-independent ones for assessing trainee performance in simulated psychomotor tasks.
Application: The findings advance the use of eye metrics in psychomotor skill assessment and training, enhancing operator competency and promoting safe operations.
{"title":"Comparison Between Scene-Independent and Scene-Dependent Eye Metrics in Assessing Psychomotor Skills.","authors":"Shiyu Deng, Chaitanya Kulkarni, Jinwoo Oh, Sarah Henrickson Parker, Nathan Lau","doi":"10.1177/00187208241302475","DOIUrl":"10.1177/00187208241302475","url":null,"abstract":"<p><p>ObjectiveThis study aims to compare the relative sensitivity between scene-independent and scene-dependent eye metrics in assessing trainees' performance in simulated psychomotor tasks.BackgroundEye metrics have been extensively studied for skill assessment and training in psychomotor tasks, including aviation, driving, and surgery. These metrics can be categorized as scene-independent or scene-dependent, based on whether predefined areas of interest are considered. There is a paucity of direct comparisons between these metric types, particularly in their ability to assess performance during early training.MethodThirteen medical students practiced the peg transfer task in the Fundamentals of Laparoscopic Surgery. Scene-independent and scene-dependent eye metrics, completion time, and tool motion metrics were derived from eye-tracking data and task videos. K-means clustering of nine eye metrics identified three groups of practice trials with similar gaze behaviors, corresponding to three performance levels verified by completion time and tool motion metrics. A random forest model using eye metrics estimated classification accuracy and determined the feature importance of the eye metrics.ResultsScene-dependent eye metrics demonstrated a clearer linear trend with performance levels than scene-independent metrics. 
The random forest model achieved 88.59% accuracy, identifying the top four predictors of performance as scene-dependent metrics, whereas the two least effective predictors were scene-independent metrics.ConclusionScene-dependent eye metrics are overall more sensitive than scene-independent ones for assessing trainee performance in simulated psychomotor tasks.ApplicationThe study's findings are significant for advancing eye metrics in psychomotor skill assessment and training, enhancing operator competency, and promoting safe operations.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"673-694"},"PeriodicalIF":3.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12306202/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142752541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
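The clustering-then-classification analysis described in the abstract above (k-means to derive performance levels from eye metrics, then a random forest to estimate accuracy and rank metric importance) can be sketched roughly as follows. This is a minimal illustration on synthetic data: the actual nine eye metrics, trial counts, and model settings are not given in the abstract and are placeholders here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are practice trials, columns are nine
# eye metrics. Three offset groups make distinct gaze-behavior clusters
# recoverable; none of this reflects the study's real data.
X = rng.normal(size=(120, 9))
X[40:80] += 2.0
X[80:] += 4.0

# Step 1: k-means (k = 3) groups trials with similar gaze behaviors;
# the cluster labels serve as performance levels.
levels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: a random forest trained on the same eye metrics estimates how
# accurately they predict the cluster-derived performance level.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
accuracy = cross_val_score(rf, X, levels, cv=5).mean()

# Step 3: feature importances rank the metrics as performance predictors.
importances = rf.fit(X, levels).feature_importances_
ranking = np.argsort(importances)[::-1]
print(f"cross-validated accuracy: {accuracy:.2f}")
print("metrics ranked by importance:", ranking)
```

In the study, the cluster-derived levels were additionally verified against completion time and tool motion metrics before the classification step; that external check is omitted from this sketch.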
Pub Date: 2025-07-01 | Epub Date: 2025-01-22 | DOI: 10.1177/00187208251314248
Xiaolu Bai, Jing Feng
Objective: This study explores the effectiveness of conversational prompts for enhancing driver monitoring behavior and takeover performance in partially automated driving under two non-driving-related task (NDRT) scenarios with varying workloads.
Background: Driver disengagement in partially automated driving is a serious safety concern, and intermittent conversational prompts that require responses may be a solution. However, the existing literature is limited and its findings are inconsistent. NDRTs have received little consideration as an important context, despite their ubiquitous involvement, and a method is lacking to measure driver engagement at the cognitive level, beyond manual and visual engagement.
Methods: Participants operated a partially automated vehicle in a simulator across six predefined drives. In each drive, participants received driving-related prompts, daily-conversation prompts, or no prompts, with or without a takeover notification. The first experiment instructed participants to engage in NDRTs of their choice; the second incentivized solving demanding anagrams with monetary rewards.
Results: When participants engaged in NDRTs voluntarily, answering driving-related prompts and receiving takeover notifications improved their monitoring behavior and takeover performance. However, when participants were involved in the more demanding and incentivized NDRT, answering prompts had little effect.
Conclusion: The study supports the importance of both maintaining appropriate workload and processing driving-related information during partially automated driving. Driving-related prompts improve driver engagement and takeover performance, but they are not robust enough to compete with NDRTs that have high motivational appeal and cognitive demands.
Application: The design of driver engagement tools should consider workload and information-processing mechanisms.
{"title":"Awakening the Disengaged: Can Driving-Related Prompts Engage Drivers in Partial Automation?","authors":"Xiaolu Bai, Jing Feng","doi":"10.1177/00187208251314248","DOIUrl":"10.1177/00187208251314248","url":null,"abstract":"<p><p>ObjectiveThis study explores the effectiveness of conversational prompts on enhancing driver monitoring behavior and takeover performance in partially automated driving under two non-driving-related task (NDRT) scenarios with varying workloads.BackgroundDriver disengagement in partially automated driving is a serious safety concern. Intermittent conversational prompts that require responses may be a solution. However, existing literature is limited with inconsistent findings. There is little consideration of NDRTs as an important context, despite their ubiquitous involvement. A method is also lacking to measure driver engagement at the cognitive level, beyond manual and visual engagements.MethodsParticipants operated a partially automated vehicle in a simulator across six predefined drives. In each drive, participants either received driving-related prompts, daily-conversation prompts, or no prompts, with or without a takeover notification. The first experiment instructed participants to engage in NDRTs at their choice and the second experiment incentivized solving demanding anagrams with monetary rewards.ResultsWhen participants were voluntarily engaged in NDRTs, answering driving-related prompts and receiving takeover notifications improved their monitoring behavior and takeover performance. However, when participants were involved in the more demanding and incentivized NDRT, answering prompts had little effect.ConclusionThe study supports the importance of both maintaining appropriate workload and processing driving-related information during partially automated driving. 
Driving-related prompts improve driver engagement and takeover performance, but they are not robust enough to compete with NDRTs that have high motivational appeals and cognitive demands.ApplicationThe design of driver engagement tools should consider the workload and information processing mechanisms.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"731-752"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143025980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}