Effects of Auditory Anticipatory Cues and Lead Time on Visually Induced Motion Sickness.
Pub Date: 2025-08-01 | Epub Date: 2025-02-19 | DOI: 10.1177/00187208251320179 | Human Factors, pp. 809-822
Xin Xin, Xinyuan Chen, Wei Liu
Objective: This study investigates the ability of auditory cues that predict motion, and of their lead times, to mitigate visually induced motion sickness (VIMS).
Background: Vehicle information systems predominantly rely on visual displays, which can introduce conflicts between visual and vestibular motion cues, potentially resulting in VIMS. In these scenarios, auditory cues may provide a viable solution, especially when visual cues are diminished by fatigue or distraction.
Methods: Across two studies, a total of 180 participants were involved in investigating the impact of auditory cues on VIMS. In Study 1, participants were grouped by the type of auditory cue they received (speech, nonspeech, or no cue). Study 2 examined the effects of three lead times (1 s, 2 s, and 3 s) between the onset of a nonspeech auditory cue and the occurrence of car braking or turning. VIMS severity was assessed with the Simulator Sickness Questionnaire (SSQ) before and after the simulation phase.
Results: Nonspeech cues significantly reduced VIMS compared with speech cues or no cue. VIMS was notably lower with a 2 s lead time than with 1 s or 3 s lead times, and females reported higher levels of VIMS than males.
Conclusion: Results across the two studies suggest using nonspeech cues with a 2 s lead time to reduce VIMS. The study also recommends investigating the effects of cue duration, tone, and voice frequency, and proposes further research into lead-time settings for scenarios such as driving fatigue, hillside roads, and traffic congestion.
Application: These findings offer potential value in designing auditory cues to reduce VIMS in autonomous driving, simulators, VR games, and films.
{"title":"Effects of Auditory Anticipatory Cues and Lead Time on Visually Induced Motion Sickness.","authors":"Xin Xin, Xinyuan Chen, Wei Liu","doi":"10.1177/00187208251320179","DOIUrl":"10.1177/00187208251320179","url":null,"abstract":"<p><p>ObjectiveThis study aims to investigate the ability of auditory cues for predicting motion and lead times to mitigate visually induced motion sickness (VIMS).BackgroundThe vehicle information systems predominantly utilize visual displays, which can introduce conflicts between visual and vestibular motion cues, potentially resulting in VIMS. In these scenarios, auditory cues may provide a viable solution, especially when visual cues are diminished by fatigue or distractions.MethodsIn two distinct studies, a total of 180 participants were involved in investigating the impact of auditory cues on VIMS. In Study 1, participants were categorized based on the type of auditory cue they received (speech, nonspeech, or no-cue). Study 2 examined the effects of three different lead times (1 s, 2 s, and 3 s) between the activation of the auditory cue and the occurrence of car braking or turning in nonspeech conditions. VIMS severity was assessed with the Simulator Sickness Questionnaire (SSQ) before and after the simulation phase.ResultsNonspeech cues significantly reduced VIMS compared to speech or no-cue. VIMS was notably lower with a 2 s lead time than with 1 s or 3 s lead times, and females reported higher levels of VIMS than males.ConclusionResults across two studies suggest using nonspeech cues with a 2-second lead time to reduce VIMS. It also recommends investigating the effects of duration, tone, and voice frequency. Furthermore, the study proposes extensive research into lead time settings for various scenarios such as driving fatigue, hillside roads, and traffic congestion.ApplicationThese findings offer potential value in designing auditory cues to reduce VIMS in autonomous driving, simulators, VR games, and films.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"809-822"},"PeriodicalIF":2.9,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143451140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of Sit-Stand Desks on Full-Day and Work-Based Sedentary Behavior of Office Workers: A Systematic Review.
Pub Date: 2025-07-01 | Epub Date: 2024-12-03 | DOI: 10.1177/00187208241305591 | Human Factors, pp. 695-713
Hélio Silva, Pedro G F Ramos, Sabrina C Teno, Pedro B Júdice
Objective: To gather the existing evidence on the impact of sit-stand desk-based interventions on working-time and full-day sedentary behavior and to compare their impact across different intervention lengths.
Background: Reducing sedentary behavior is vital for improving office workers' health. Sit-stand desks promote alternation between sitting and standing, but understanding their effects outside the workplace is essential for success.
Methods: Studies published between January 2008 and January 2024 were searched through electronic databases (PubMed, Google Scholar, and Cochrane Library). Study quality was assessed using the Quality Assessment Tool for Quantitative Studies of the Effective Public Health Practice Project.
Results: Twelve included studies showed that the intervention group experienced average reductions in full-day sedentary behavior of 68.7 min/day at 3 months, 77.7 min/day at 6 months, and 62.1 min/day at 12 months compared with the control group. For working-hours sedentary behavior, reductions were observed in the intervention group at 9 weeks (73.0 min/day), 3 months (88.0 min/day), 6 months (80.8 min/day), and 12 months (48.0 min/day) relative to the control group.
Conclusions: Sit-stand desk interventions can be effective in helping office workers reduce sedentary behavior in the short, medium, and long term, both at work and throughout the full day.
Application: Active workstation interventions, including sit-stand desks, educational sessions, and alert software, aim to reduce sedentary behavior among office workers. While sit-stand desks show promise in decreasing sitting time during working hours, their long-term effectiveness and impact beyond the workplace remain uncertain. This review evaluates their effectiveness across different durations, addressing both workplace and full-day impact.
{"title":"The Impact of Sit-Stand Desks on Full-Day and Work-Based Sedentary Behavior of Office Workers: A Systematic Review.","authors":"Hélio Silva, Pedro G F Ramos, Sabrina C Teno, Pedro B Júdice","doi":"10.1177/00187208241305591","DOIUrl":"10.1177/00187208241305591","url":null,"abstract":"<p><p>ObjectiveTo gather the existing evidence on the impact of sit-stand desk-based interventions on working-time and full-day sedentary behavior and compare their impact across different intervention lengths.BackgroundReducing sedentary behavior is vital for improving office workers' health. Sit-stand desks promote sitting and standing alternation, but understanding their effects outside the workplace is essential for success.MethodsStudies published between January 2008 and January 2024 were searched through electronic databases (PubMed, Google Scholar, and Cochrane Library). The quality of the studies was assessed using the Quality Assessment Tool for Quantitative Studies of the Effective Public Health Practice Project.ResultsTwelve included studies showed that the intervention group experienced average reductions in full-day sedentary behavior of 68.7 min/day at 3 months, 77.7 min/day at 6 months, and 62.1 min/day at 12 months compared to the control group. For working hours sedentary behavior, reductions were observed in the intervention group at 9 weeks (73.0 min/day), 3 months (88.0 min/day), 6 months (80.8 min/day), and 12 months (48.0 min/day) relative to the control group.ConclusionsSit-stand desk interventions can be effective in helping office workers reduce sedentary behavior in the short, medium, and long-term both at work and throughout the full-day.ApplicationActive workstation interventions, including sit-stand desks, educational sessions, and alert software, aim to reduce sedentary behavior among office workers. While sit-stand desks show promise in decreasing sitting time during working hours, their long-term effectiveness and impact beyond the workplace remain uncertain. This review evaluates their effectiveness across different durations, addressing both workplace and full-day impact.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"695-713"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video-Based Lifting Action Recognition Using Rank-Altered Kinematic Feature Pairs.
Pub Date: 2025-07-01 | Epub Date: 2024-12-26 | DOI: 10.1177/00187208241309748 | Human Factors, pp. 656-672
SeHee Jung, Bingyi Su, Lu Lu, Liwei Qing, Xu Xu
Objective: To identify lifting actions and count the number of lifts performed in videos, based on robust class prediction and a streamlined process, for reliable real-time monitoring of lifting tasks.
Background: Traditional methods for recognizing lifting actions often rely on deep learning classifiers applied to human motion data collected from wearable sensors. Despite their high performance, these methods can be difficult to implement on systems with limited hardware resources.
Method: The proposed method follows a five-stage process (see the sketch after this abstract):
(1) BlazePose, a real-time pose estimation model, detects key joints of the human body.
(2) These joints are preprocessed by smoothing, centering, and scaling.
(3) Kinematic features are extracted from the preprocessed joints.
(4) Video frames are classified as lifting or nonlifting using rank-altered kinematic feature pairs.
(5) A lift-counting algorithm counts the number of lifts based on the class predictions.
Results: Nine rank-altered kinematic feature pairs were identified as key pairs. These pairs were used to construct an ensemble classifier, which achieved 0.89 or above on classification metrics including accuracy, precision, recall, and F1 score. The classifier showed an accuracy of 0.90 in lift counting and a latency of 0.06 ms, at least 12.5 times faster than baseline classifiers.
Conclusion: This study demonstrates that computer-vision-based kinematic features can be adopted to recognize lifting actions effectively and efficiently.
Application: The proposed method could be deployed on various platforms, including mobile devices and embedded systems, to monitor lifting tasks in real time for the proactive prevention of work-related low-back injuries.
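As a rough illustration of stages 4 and 5, the Python sketch below (not the authors' implementation) labels each video frame by majority vote over rank comparisons of kinematic feature pairs, then counts lifts as sufficiently long runs of lifting-labeled frames. The feature-pair indices, vote threshold, and minimum run length are hypothetical placeholders.

```python
import numpy as np

def pairwise_rank_votes(features: np.ndarray,
                        pairs: list[tuple[int, int]]) -> np.ndarray:
    """Label each frame as lifting when, for most key pairs (i, j),
    feature i outranks feature j -- the rank order assumed to flip
    between lifting and nonlifting frames."""
    votes = np.stack([features[:, i] > features[:, j] for i, j in pairs], axis=1)
    return votes.mean(axis=1) >= 0.5  # simple majority vote over the pairs

def count_lifts(frame_labels: np.ndarray, min_run: int = 5) -> int:
    """Count lifts as runs of consecutive lifting frames of length >= min_run."""
    padded = np.concatenate(([0], frame_labels.astype(int), [0]))
    starts = np.flatnonzero(np.diff(padded) == 1)
    ends = np.flatnonzero(np.diff(padded) == -1)
    return int(np.sum((ends - starts) >= min_run))

# Hypothetical usage: 300 frames x 12 kinematic features, 3 assumed key pairs.
rng = np.random.default_rng(0)
feats = rng.normal(size=(300, 12))
labels = pairwise_rank_votes(feats, pairs=[(0, 3), (5, 1), (7, 2)])
print("lifts counted:", count_lifts(labels))
```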
{"title":"Video-Based Lifting Action Recognition Using Rank-Altered Kinematic Feature Pairs.","authors":"SeHee Jung, Bingyi Su, Lu Lu, Liwei Qing, Xu Xu","doi":"10.1177/00187208241309748","DOIUrl":"10.1177/00187208241309748","url":null,"abstract":"<p><p>ObjectiveTo identify lifting actions and count the number of lifts performed in videos based on robust class prediction and a streamlined process for reliable real-time monitoring of lifting tasks.BackgroundTraditional methods for recognizing lifting actions often rely on deep learning classifiers applied to human motion data collected from wearable sensors. Despite their high performance, these methods can be difficult to implement on systems with limited hardware resources.MethodThe proposed method follows a five-stage process: (1) BlazePose, a real-time pose estimation model, detects key joints of the human body. (2) These joints are preprocessed by smoothing, centering, and scaling techniques. (3) Kinematic features are extracted from the preprocessed joints. (4) Video frames are classified as lifting or nonlifting using rank-altered kinematic feature pairs. (5) A lifting counting algorithm counts the number of lifts based on the class predictions.ResultsNine rank-altered kinematic feature pairs are identified as key pairs. These pairs were used to construct an ensemble classifier, which achieved 0.89 or above in classification metrics, including accuracy, precision, recall, and F1 score. This classifier showed an accuracy of 0.90 in lifting counting and a latency of 0.06 ms, which is at least 12.5 times faster than baseline classifiers.ConclusionThis study demonstrates that computer vision-based kinematic features could be adopted to effectively and efficiently recognize lifting actions.ApplicationThe proposed method could be deployed on various platforms, including mobile devices and embedded systems, to monitor lifting tasks in real-time for the proactive prevention of work-related low-back injuries.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"656-672"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142900935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Eye Movement Features of Motion Sickness Using Closed-Track Driving Experiments.
Pub Date: 2025-07-01 | Epub Date: 2024-12-10 | DOI: 10.1177/00187208241306966 | Human Factors, pp. 714-730
Yanlu Cao, Maosong Jiang, Zhuxi Yao, Shufeng Xia, Wenlong Liu
Objective: To explore and validate effective eye movement features related to motion sickness (MS) through closed-track experiments and to provide insights for practical applications.
Background: With the development of autonomous vehicles (AVs), MS has attracted increasing attention. Eye movements have great potential for evaluating the severity of MS as an objective, quantitative indicator of vestibular function. Eye movement signals can be collected easily and noninvasively using a camera, without causing discomfort or disturbance to passengers, making the approach highly applicable.
Method: Eye movement data were collected from 72 participants susceptible to MS in closed-track driving environments. Extracted features included blink rate (BR), total number of fixations (TNF), total duration of fixations (TDF), mean duration of fixations (MDF), saccade amplitude (SA), saccade duration (SD), and number of nystagmus events (NN). Statistical methods and a multivariate long short-term memory fully convolutional network (MLSTM-FCN) were used to validate the effectiveness of the eye movement features.
Results: Statistical analysis showed significant differences in the extracted eye movement features across different levels of MS. The MLSTM-FCN model achieved an accuracy of 91.37% for MS detection and 88.51% for prediction in binary classification. For ternary classification, it achieved an accuracy of 80.54% for detection and 80.11% for prediction.
Conclusion: Evaluating MS through eye movements is effective. The MLSTM-FCN model based on eye movements can efficiently detect and predict MS.
Application: This work can provide a possible indication and early warning of MS.
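MLSTM-FCN classifiers pair a recurrent branch with a fully convolutional branch and concatenate the two before the output layer. Below is a simplified Keras sketch of that architecture applied to the seven eye movement features; it omits the squeeze-and-excite blocks and dimension shuffle of the published MLSTM-FCN, and the layer sizes and training settings are assumptions, not the configuration used in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_mlstm_fcn(n_timesteps: int, n_features: int, n_classes: int) -> tf.keras.Model:
    inp = layers.Input(shape=(n_timesteps, n_features))
    # Recurrent branch: LSTM summarizes the eye movement time series.
    lstm = layers.LSTM(64)(inp)
    lstm = layers.Dropout(0.5)(lstm)
    # Fully convolutional branch: stacked Conv1D blocks with global pooling.
    conv = layers.Conv1D(128, 8, padding="same")(inp)
    conv = layers.BatchNormalization()(conv)
    conv = layers.ReLU()(conv)
    conv = layers.Conv1D(256, 5, padding="same")(conv)
    conv = layers.BatchNormalization()(conv)
    conv = layers.ReLU()(conv)
    conv = layers.Conv1D(128, 3, padding="same")(conv)
    conv = layers.BatchNormalization()(conv)
    conv = layers.ReLU()(conv)
    conv = layers.GlobalAveragePooling1D()(conv)
    # Concatenate both branches, then classify MS level (2 or 3 classes).
    merged = layers.concatenate([lstm, conv])
    out = layers.Dense(n_classes, activation="softmax")(merged)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage: 60-step windows of the 7 features, binary MS detection.
model = build_mlstm_fcn(n_timesteps=60, n_features=7, n_classes=2)
model.summary()
```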
Biodynamic Modeling and Analysis of Human-Exoskeleton Interactions in Simulated Patient Handling Tasks.
Pub Date: 2025-07-01 | DOI: 10.1177/00187208241311271 | Human Factors, pp. 641-655
Yinong Chen, Wei Yin, Liying Zheng, Ranjana Mehta, Xudong Zhang
Objective: To investigate the biodynamics of human-exoskeleton interactions during patient handling tasks using a subject-specific modeling approach.
Background: Exoskeleton technology holds promise for mitigating musculoskeletal disorders caused by manual handling and, most alarmingly, by patient handling jobs. A deeper, more unified understanding of the biomechanical effects of exoskeleton use calls for advanced subject-specific models of complex, dynamic human-exoskeleton interactions.
Methods: Twelve sex-balanced healthy participants performed three simulated patient handling tasks along with a reference load-lifting task, with and without wearing the exoskeleton, while their full-body motion and ground reaction forces were measured. Subject-specific models were constructed using the motion and force data. Biodynamic response variables derived from the models were analyzed to examine the effects of the exoskeleton. Model validation used load-lifting trials with known hand forces.
Results: Use of the exoskeleton significantly reduced (19.7%-27.2%) the peak lumbar flexion moment but increased (26.4%-47.8%) the peak lumbar flexion motion, with greater percent reduction in moment for more symmetric handling tasks; it similarly affected the shoulder joint moments and motions, but only during the two more symmetric handling tasks; and it significantly reduced the peak motions of the remaining body joints.
Conclusion: Subject-specific biodynamic models simulating exoskeleton-assisted patient handling were constructed and validated, demonstrating that the exoskeleton effectively lessened the peak loading on the lumbar and shoulder joints as prime movers while redistributing more motion to these joints and less to the remaining joints.
Application: The findings offer new insights into biodynamic responses during exoskeleton-assisted patient handling, benefiting the development of more effective, possibly task- and individual-customized, exoskeletons.
Comparison Between Scene-Independent and Scene-Dependent Eye Metrics in Assessing Psychomotor Skills.
Pub Date: 2025-07-01 | Epub Date: 2024-11-28 | DOI: 10.1177/00187208241302475 | Human Factors, pp. 673-694
Shiyu Deng, Chaitanya Kulkarni, Jinwoo Oh, Sarah Henrickson Parker, Nathan Lau
Objective: This study compares the relative sensitivity of scene-independent and scene-dependent eye metrics in assessing trainees' performance in simulated psychomotor tasks.
Background: Eye metrics have been extensively studied for skill assessment and training in psychomotor tasks, including aviation, driving, and surgery. These metrics can be categorized as scene-independent or scene-dependent, based on whether predefined areas of interest are considered. There is a paucity of direct comparisons between these metric types, particularly in their ability to assess performance during early training.
Method: Thirteen medical students practiced the peg transfer task from the Fundamentals of Laparoscopic Surgery. Scene-independent and scene-dependent eye metrics, completion time, and tool motion metrics were derived from eye-tracking data and task videos. K-means clustering of nine eye metrics identified three groups of practice trials with similar gaze behaviors, corresponding to three performance levels verified by completion time and tool motion metrics. A random forest model using the eye metrics estimated classification accuracy and determined the feature importance of each metric.
Results: Scene-dependent eye metrics demonstrated a clearer linear trend with performance levels than scene-independent metrics. The random forest model achieved 88.59% accuracy, identifying the top four predictors of performance as scene-dependent metrics, whereas the two least effective predictors were scene-independent metrics.
Conclusion: Scene-dependent eye metrics are overall more sensitive than scene-independent ones for assessing trainee performance in simulated psychomotor tasks.
Application: The findings are significant for advancing eye metrics in psychomotor skill assessment and training, enhancing operator competency, and promoting safe operations.
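The analysis pipeline, unsupervised grouping of trials by gaze behavior followed by a supervised model that classifies those groups and ranks metric importance, can be sketched with scikit-learn as below. The data here are random placeholders and the hyperparameters are assumptions; the paper's exact settings are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

# X: (n_trials, 9) matrix of eye metrics per practice trial (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 9))

# Step 1: cluster trials into three gaze-behavior groups (performance levels).
X_std = StandardScaler().fit_transform(X)
levels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)

# Step 2: a random forest classifies trials into the three levels;
# impurity-based importances then rank the nine eye metrics.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(rf, X_std, levels, cv=5).mean())
rf.fit(X_std, levels)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("metric importance ranking (indices):", ranking)
```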
{"title":"Comparison Between Scene-Independent and Scene-Dependent Eye Metrics in Assessing Psychomotor Skills.","authors":"Shiyu Deng, Chaitanya Kulkarni, Jinwoo Oh, Sarah Henrickson Parker, Nathan Lau","doi":"10.1177/00187208241302475","DOIUrl":"10.1177/00187208241302475","url":null,"abstract":"<p><p>ObjectiveThis study aims to compare the relative sensitivity between scene-independent and scene-dependent eye metrics in assessing trainees' performance in simulated psychomotor tasks.BackgroundEye metrics have been extensively studied for skill assessment and training in psychomotor tasks, including aviation, driving, and surgery. These metrics can be categorized as scene-independent or scene-dependent, based on whether predefined areas of interest are considered. There is a paucity of direct comparisons between these metric types, particularly in their ability to assess performance during early training.MethodThirteen medical students practiced the peg transfer task in the Fundamentals of Laparoscopic Surgery. Scene-independent and scene-dependent eye metrics, completion time, and tool motion metrics were derived from eye-tracking data and task videos. K-means clustering of nine eye metrics identified three groups of practice trials with similar gaze behaviors, corresponding to three performance levels verified by completion time and tool motion metrics. A random forest model using eye metrics estimated classification accuracy and determined the feature importance of the eye metrics.ResultsScene-dependent eye metrics demonstrated a clearer linear trend with performance levels than scene-independent metrics. The random forest model achieved 88.59% accuracy, identifying the top four predictors of performance as scene-dependent metrics, whereas the two least effective predictors were scene-independent metrics.ConclusionScene-dependent eye metrics are overall more sensitive than scene-independent ones for assessing trainee performance in simulated psychomotor tasks.ApplicationThe study's findings are significant for advancing eye metrics in psychomotor skill assessment and training, enhancing operator competency, and promoting safe operations.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"673-694"},"PeriodicalIF":3.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12306202/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142752541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Awakening the Disengaged: Can Driving-Related Prompts Engage Drivers in Partial Automation?
Pub Date: 2025-07-01 | Epub Date: 2025-01-22 | DOI: 10.1177/00187208251314248 | Human Factors, pp. 731-752
Xiaolu Bai, Jing Feng
Objective: This study explores the effectiveness of conversational prompts in enhancing driver monitoring behavior and takeover performance in partially automated driving under two non-driving-related task (NDRT) scenarios with varying workloads.
Background: Driver disengagement in partially automated driving is a serious safety concern. Intermittent conversational prompts that require responses may be a solution. However, the existing literature is limited and its findings are inconsistent. There is little consideration of NDRTs as an important context, despite their ubiquitous involvement. A method is also lacking to measure driver engagement at the cognitive level, beyond manual and visual engagement.
Methods: Participants operated a partially automated vehicle in a simulator across six predefined drives. In each drive, participants received driving-related prompts, daily-conversation prompts, or no prompts, with or without a takeover notification. The first experiment instructed participants to engage in NDRTs of their choice; the second incentivized solving demanding anagrams with monetary rewards.
Results: When participants were voluntarily engaged in NDRTs, answering driving-related prompts and receiving takeover notifications improved their monitoring behavior and takeover performance. However, when participants were involved in the more demanding, incentivized NDRT, answering prompts had little effect.
Conclusion: The study supports the importance of both maintaining appropriate workload and processing driving-related information during partially automated driving. Driving-related prompts improve driver engagement and takeover performance, but they are not robust enough to compete with NDRTs that have high motivational appeal and cognitive demands.
Application: The design of driver engagement tools should consider workload and information processing mechanisms.
{"title":"Awakening the Disengaged: Can Driving-Related Prompts Engage Drivers in Partial Automation?","authors":"Xiaolu Bai, Jing Feng","doi":"10.1177/00187208251314248","DOIUrl":"10.1177/00187208251314248","url":null,"abstract":"<p><p>ObjectiveThis study explores the effectiveness of conversational prompts on enhancing driver monitoring behavior and takeover performance in partially automated driving under two non-driving-related task (NDRT) scenarios with varying workloads.BackgroundDriver disengagement in partially automated driving is a serious safety concern. Intermittent conversational prompts that require responses may be a solution. However, existing literature is limited with inconsistent findings. There is little consideration of NDRTs as an important context, despite their ubiquitous involvement. A method is also lacking to measure driver engagement at the cognitive level, beyond manual and visual engagements.MethodsParticipants operated a partially automated vehicle in a simulator across six predefined drives. In each drive, participants either received driving-related prompts, daily-conversation prompts, or no prompts, with or without a takeover notification. The first experiment instructed participants to engage in NDRTs at their choice and the second experiment incentivized solving demanding anagrams with monetary rewards.ResultsWhen participants were voluntarily engaged in NDRTs, answering driving-related prompts and receiving takeover notifications improved their monitoring behavior and takeover performance. However, when participants were involved in the more demanding and incentivized NDRT, answering prompts had little effect.ConclusionThe study supports the importance of both maintaining appropriate workload and processing driving-related information during partially automated driving. Driving-related prompts improve driver engagement and takeover performance, but they are not robust enough to compete with NDRTs that have high motivational appeals and cognitive demands.ApplicationThe design of driver engagement tools should consider the workload and information processing mechanisms.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"731-752"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143025980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the Independent and Interactive Carryover Effects of Cognitive and Physical Exertions on Physical Performance.
Pub Date: 2025-06-01 | Epub Date: 2024-10-21 | DOI: 10.1177/00187208241293720 | Human Factors, pp. 560-577
Rahul K Pabla, Jeffrey D Graham, Michael W B Watterworth, Nicholas J La Delfa
Objective: This study compared the effects of prior cognitive, physical, and concurrent exertion on physical performance.
Background: Fatiguing cognitive and physical exertions have been shown to negatively affect subsequent task performance. However, it is not clearly understood whether concurrent physical and cognitive effort exaggerates the negative carryover effects on physical task performance compared with cognitive or physical exertion alone.
Method: Twenty-five participants completed four isometric handgrip endurance trials on different days. The endurance trials were preceded by four 15-minute experimental manipulations (cognitive, physical, concurrent, control). Electromyography (EMG) and force-tracing performance were monitored, with handgrip strength measured pre and post. Subjective ratings of mental and physical fatigue, as well as affect, motivation, and task self-efficacy, were also assessed.
Results: Handgrip strength decreased following both the physical (-14.4% MVC) and concurrent (-12.3% MVC) exertion manipulations, with no changes observed for the cognitive and control conditions. No differences were observed across conditions for endurance time, EMG, or tracing performance. Compared with the control condition, perceptions of mental and physical fatigue were higher following the experimental manipulations. Endurance-trial self-efficacy was lower for the mental, physical, and concurrent conditions than for control.
Conclusion: The concurrent condition resulted in decreases in strength similar to the physical fatigue condition, but otherwise produced carryover effects on endurance performance similar to all other conditions. Further study is required at higher exposure levels, or for longer exposure durations, to further probe the influence of concurrent physical and cognitive effort on task performance.
Application: Concurrent cognitive and physical effort resulted in physical performance decrements similar to physical effort alone.
{"title":"Examining the Independent and Interactive Carryover Effects of Cognitive and Physical Exertions on Physical Performance.","authors":"Rahul K Pabla, Jeffrey D Graham, Michael W B Watterworth, Nicholas J La Delfa","doi":"10.1177/00187208241293720","DOIUrl":"10.1177/00187208241293720","url":null,"abstract":"<p><p>ObjectiveThis study compared the effects of prior cognitive, physical, and concurrent exertion on physical performance.BackgroundFatiguing cognitive and physical exertions have been shown to negatively affect subsequent task performance. However, it is not clearly understood if concurrent physical and cognitive effort may exaggerate the negative carryover effects on physical task performance when compared to cognitive or physical exertion alone.MethodTwenty-five participants completed four isometric handgrip endurance trials on different days. The endurance trials were preceded by four, 15-minute experimental manipulations (cognitive, physical, concurrent, control). Electromyography (EMG) and force tracing performance were monitored, with handgrip strength measured pre and post. Subjective ratings of mental and physical fatigue, as well as affect, motivation, and task self-efficacy, were also assessed.ResultsHandgrip strength decreased following both physical (-14.4% MVC) and concurrent (-12.3% MVC) exertion manipulations, with no changes being observed for the cognitive and control conditions. No differences were observed across conditions for endurance time, EMG, nor tracing performance. When compared to the control conditions, perceptions of mental and physical fatigue were higher following the experimental manipulation. Endurance trial self-efficacy was lower for the mental, physical and concurrent conditions compared to control.ConclusionThe concurrent condition resulted in similar decreases in strength as the physical fatigue condition, but otherwise resulted in similar carryover effects on endurance performance across all conditions. Further study is required at higher exposure levels, or for longer exposure durations, to further probe the influence of concurrent physical and cognitive effort on task performance.ApplicationConcurrent cognitive and physical effort resulted in similar physical performance decrements to physical effort alone.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"560-577"},"PeriodicalIF":2.9,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12049582/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physiological Predictors of Operator Performance: The Role of Mental Effort and Its Link to Task Performance.
Pub Date: 2025-06-01 | Epub Date: 2024-10-30 | DOI: 10.1177/00187208241296830 | Human Factors, pp. 595-615
Sebastian Pütz, Alexander Mertens, Lewis L Chuang, Verena Nitsch
Objective: The present study investigated how pupil size and heart rate variability (HRV) can contribute to the prediction of operator performance. We illustrate how focusing on mental effort as the conceptual link between physiological measures and task performance can align relevant empirical findings across research domains.
Background: Physiological measures are often treated as indicators of operators' mental state. They could thereby enable a continuous and unobtrusive assessment of operators' current ability to perform the task.
Method: Fifty participants performed a process monitoring task consisting of ten 9-minute task blocks. Blocks alternated between low and high task demands, and the last two blocks introduced a task reward manipulation. We measured response times as the primary performance indicator, pupil size and HRV as physiological measures, and mental fatigue, task engagement, and perceived effort as subjective ratings.
Results: Both increased pupil size and increased HRV significantly predicted better task performance. However, the underlying associations between physiological measures and performance were influenced by task demands and time on task. Pupil size results, but not HRV results, were consistent with the subjective ratings.
Conclusion: The empirical findings suggest that, by capturing variance in operators' mental effort, physiological measures, specifically pupil size, can contribute to the prediction of task performance. Their predictive value is limited by confounding effects that alter the amount of effort required to achieve a given level of performance.
Application: The outlined conceptual approach and empirical results can guide study designs and performance prediction models that examine physiological measures as the basis for dynamic operator assistance.
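A common way to test whether physiological measures predict performance across repeated task blocks is a mixed-effects regression with participant as a random effect. The statsmodels sketch below uses synthetic data and hypothetical column names; it illustrates the general form of such an analysis, not the authors' actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the real dataset: one row per participant x block.
rng = np.random.default_rng(1)
n_participants, n_blocks = 50, 10
n = n_participants * n_blocks
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_blocks),
    "demand": np.tile([0, 1], n // 2),            # low/high demand blocks
    "pupil_size": rng.normal(4.0, 0.5, n),         # mm (placeholder units)
    "hrv_rmssd": rng.normal(40.0, 10.0, n),        # ms (placeholder units)
})
df["response_time"] = (
    2.0 - 0.1 * df["pupil_size"] - 0.005 * df["hrv_rmssd"]
    + 0.2 * df["demand"] + rng.normal(0, 0.1, n)
)

# Random-intercept model: performance predicted by effort-related physiology,
# with task demand as a covariate and participant as the grouping factor.
model = smf.mixedlm(
    "response_time ~ pupil_size + hrv_rmssd + demand",
    data=df, groups=df["participant"],
).fit()
print(model.summary())
```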
{"title":"Physiological Predictors of Operator Performance: The Role of Mental Effort and Its Link to Task Performance.","authors":"Sebastian Pütz, Alexander Mertens, Lewis L Chuang, Verena Nitsch","doi":"10.1177/00187208241296830","DOIUrl":"10.1177/00187208241296830","url":null,"abstract":"<p><p>ObjectiveThe present study investigated how pupil size and heart rate variability (HRV) can contribute to the prediction of operator performance. We illustrate how focusing on mental effort as the conceptual link between physiological measures and task performance can align relevant empirical findings across research domains.BackgroundPhysiological measures are often treated as indicators of operators' mental state. Thereby, they could enable a continuous and unobtrusive assessment of operators' current ability to perform the task.MethodFifty participants performed a process monitoring task consisting of ten 9-minute task blocks. Blocks alternated between low and high task demands, and the last two blocks introduced a task reward manipulation. We measured response times as primary performance indicator, pupil size and HRV as physiological measures, and mental fatigue, task engagement, and perceived effort as subjective ratings.ResultsBoth increased pupil size and increased HRV significantly predicted better task performance. However, the underlying associations between physiological measures and performance were influenced by task demands and time on task. Pupil size, but not HRV, results were consistent with subjective ratings.ConclusionThe empirical findings suggest that, by capturing variance in operators' mental effort, physiological measures, specifically pupil size, can contribute to the prediction of task performance. Their predictive value is limited by confounding effects that alter the amount of effort required to achieve a given level of performance.ApplicationThe outlined conceptual approach and empirical results can guide study designs and performance prediction models that examine physiological measures as the basis for dynamic operator assistance.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"595-615"},"PeriodicalIF":2.9,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12049591/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142549216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Glare at Night-Time Driving: Effect of Correlated Color Temperature of LED Lamps.
Pub Date: 2025-06-01 | Epub Date: 2024-12-06 | DOI: 10.1177/00187208241305568 | Human Factors, pp. 578-594
Beatriz M Matesanz, Eduardo G Vicente, Luis Issolio, Fernando Rodríguez Merino, M Teresa G Arteaga, Isabel Arranz
Objective: This study analyzes the effect of the correlated color temperature of LED glare sources on driving performance. The evaluation includes assessing the effect of disability glare on visual reaction time and rating discomfort glare on a standardized scale.
Background: LED technology is widely incorporated into various lighting systems; however, the impact of glare from oncoming car headlamps on night-time driving performance is crucial for road safety.
Method: Twenty-three healthy young subjects participated in a laboratory experiment simulating night driving using a two-channel Maxwellian-view optical system. Two LED lamps with correlated color temperatures of 2800 K and 6500 K were used to generate a glare of 52 lx. Disability glare was quantified in terms of foveal reaction time, and discomfort glare was rated on the de Boer scale.
Results: The results show that the glare-induced effect is mitigated by an increase in background luminance. The correlated color temperature of the LED lamp affects neither reaction time nor the discomfort glare rating.
Conclusion: The greater short-wavelength emission of the 6500 K lamp does not intensify disability or discomfort glare, probably due to macular pigment absorption in foveal vision and the transparency of the ocular media, coupled with other contributing factors. The correlated color temperature of a lamp is not the best descriptive parameter for identifying its effect on glare.
Application: It is important to consider the impact of LED technology on visual performance to enhance road safety in critical glare situations during night driving.
{"title":"Glare at Night-Time Driving: Effect of Correlated Color Temperature of Led Lamps.","authors":"Beatriz M Matesanz, Eduardo G Vicente, Luis Issolio, Fernando Rodríguez Merino, M Teresa G Arteaga, Isabel Arranz","doi":"10.1177/00187208241305568","DOIUrl":"10.1177/00187208241305568","url":null,"abstract":"<p><p>ObjectiveThis study aims to analyze the effect of correlated color temperature from LED glare sources on driving performance. The evaluation includes assessing the effect of disability glare on visual reaction time and rating discomfort glare using a standardized scale.BackgroundLED technology is widely incorporated into various lighting systems; however, the impact of glare from oncoming car headlamps on driving performance at night-time is crucial for road safety.MethodTwenty-three healthy young subjects participated in a laboratory-based experiment simulating night driving using a two-channel Maxwellian view optical system. Two LED lamps with correlated color temperature of 2800 K and 6500 K were used to generate a glare of 52 lx. Disability glare was quantified in terms of foveal reaction time and discomfort glare was rated using the de Boer scale.ResultsThe results show that glare-induced effect is mitigated by an increase in background luminance. The correlated color temperature of the LED lamp does not affect either reaction time or discomfort glare rating.ConclusionThe greater short-wavelength emission of 6500 K lamp does not intensify the effect of disability or discomfort glare, probably due to the macular pigment absorption on foveal vision and the transparency of ocular media, coupled with the involvement of other contributing factors. The correlated color temperature of the lamp is not the best descriptive parameter to identify its effect on glare.ApplicationIt is important to consider the impact of LED technology on visual performance to enhance road safety in critical glare situations during night driving.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"578-594"},"PeriodicalIF":2.9,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142789710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}