Pub Date: 2025-12-30 | DOI: 10.1080/00140139.2025.2606779
Niosh Basnet, Maryam Zahabi
Real-time cognitive workload (CWL) assessment has been used to enhance human performance and safety across various operational domains. This review synthesises findings from 50 peer-reviewed studies to examine current practices, methodological trends, and technological advances in physiological and behavioural CWL monitoring. Studies utilised electrocardiography (ECG), photoplethysmography (PPG), electrodermal activity (EDA), eye-tracking, electroencephalography (EEG), and skin temperature (SKT). Wearable devices were predominant (∼74%). Task categorisation into cognitive, perceptual, motor, and physical domains revealed alignment between physiological measures and task demands. Computational approaches favour traditional machine learning (32%) and statistical models (∼21%) over advanced deep learning models (∼14%). The use of hybrid approaches (∼22%), in which multiple models are combined or applied in parallel rather than a single model being used, suggests an evolution towards adaptive frameworks for real-world implementation. The review offers guidelines on measurement and model selection based on task requirements and outlines future directions for real-world CWL system deployment.
Title: Real-time cognitive workload assessment using non-intrusive methods: a systematic review. Ergonomics, pp. 1-26.
Pub Date: 2025-12-30 | DOI: 10.1080/00140139.2025.2608280
Hao Su, Yuxi Zeng, Xin Qing, Siping Fan, Jian Wang, Lifei Xu, Lu Liu
Situation awareness is crucial for the safety of oil and gas drilling operations, as deficits can increase human error rates. Current research on multimodal physiological signal modelling for drillers remains limited, and single-modality approaches struggle to comprehensively reflect the entire cognitive process from perception to decision-making. To overcome this limitation, this study combines eye-tracking metrics reflecting external gaze behaviour with electroencephalography features reflecting internal neural activity for recognising situation awareness in drillers. Through a simulated drilling monitoring experiment and machine learning model construction, results revealed significant differences in multiple eye-tracking and electroencephalography features across different situation awareness levels. The multimodal fusion model achieved a recognition accuracy of 89.33%, outperforming single-modality models. This study systematically validates the effectiveness of eye-tracking-electroencephalography fusion for situation awareness recognition among drillers. It provides scientific support for personnel training in the drilling industry and demonstrates its potential for enhancing safety in high-risk industrial environments.
Title: Recognising situation awareness of drillers based on eye-tracking and EEG features. Ergonomics, pp. 1-15.
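As a toy illustration of the feature-concatenation fusion described in the abstract above: eye-tracking and EEG feature vectors can be joined before classification. The nearest-centroid rule below is a stand-in for the study's trained machine learning model, which is not specified here.

```python
# Toy sketch of multimodal fusion for situation-awareness recognition:
# eye-tracking and EEG feature vectors are concatenated, then classified.
# The nearest-centroid rule is a stand-in, not the study's actual model.

def fuse(eye_feats, eeg_feats):
    """Feature-level fusion by simple concatenation."""
    return list(eye_feats) + list(eeg_feats)

def nearest_centroid(x, centroids):
    """Assign x to the SA level whose centroid is closest in squared
    Euclidean distance. centroids: {level_name: vector of len(x)}."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda level: sq_dist(x, centroids[level]))
```

In practice the fused vector would feed any standard classifier; concatenation is the simplest fusion scheme and a common baseline for comparing against single-modality models.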
Pub Date: 2025-12-30 | DOI: 10.1080/00140139.2025.2605050
Haiyue Liu, Chuanyun Fu, Yue Zhou, Peter Shi, Chaozhe Jiang
This study develops a Convolutional Neural Network (CNN)-based two-level fusion model to identify cognitive distractions of metro drivers using their electrocardiography (ECG) features and three types of functional near-infra-red spectroscopy (fNIRS) features (ΔOxyHb, ΔDeoxyHb, and ΔTotalHb). The model incorporates feature-level and decision-level fusion. Feature-level fusion combines ECG and fNIRS features to create a unified feature set, while decision-level fusion applies independent classifiers to ECG, fNIRS, and combined data to make the final identification. For comparison, several alternative models are developed. Results indicate that the proposed two-level fusion model outperforms the non-fusion, feature-level fusion, and decision-level fusion models. Among the alternative models, those incorporating feature-level fusion outperform decision-level or non-fusion models. The feature-level fusion model combining all three types of fNIRS features demonstrates superior performance. Furthermore, all ECG features and 54.1% of fNIRS features show significant differences across the distraction levels. Drivers' prefrontal cortex is more active during cognitive distractions.
Title: A two-level fusion CNN model for classifying metro drivers' distractions with functional near-infra-red spectroscopy and electrocardiography signals. Ergonomics, pp. 1-25.
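The distinction between the two fusion levels described above can be sketched as follows. The classifiers here are placeholder callables (feature vector in, label out), not the paper's CNNs, and the majority vote is one common decision-level rule rather than the paper's exact scheme.

```python
# Sketch of feature-level vs decision-level fusion. Classifiers are
# placeholder callables (vector -> label), not the paper's CNN models.

def feature_level_fusion(ecg_feats, fnirs_feats, classifier):
    """Concatenate ECG and fNIRS features into one set, classify once."""
    return classifier(list(ecg_feats) + list(fnirs_feats))

def decision_level_fusion(ecg_feats, fnirs_feats, clf_ecg, clf_fnirs, clf_joint):
    """Independent classifiers for ECG, fNIRS, and combined data each
    vote; the majority label is the final identification."""
    votes = [clf_ecg(ecg_feats),
             clf_fnirs(fnirs_feats),
             clf_joint(list(ecg_feats) + list(fnirs_feats))]
    return max(set(votes), key=votes.count)
```

A two-level model in the paper's sense would apply both: fuse features into a unified set, and also combine per-modality decisions into the final output.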
Pub Date: 2025-12-28 | DOI: 10.1080/00140139.2025.2608273
Yunmei Liu, David B Kaber
The human systems literature is littered with conceptual models of human-automation interaction presented as a basis for understanding and explaining human performance effects. Unfortunately, the discrete and ordinal nature of existing models of levels of automation limits reliable prediction of operator performance, workload and situation awareness (SA). This paper presents enhanced quantitative models for determination of system automation proportion (AP) and SA in human-in-the-loop systems, building on an earlier preliminary model. We refine the AP concept as a continuous measure of level of system automation and introduce a generalised SA function that accounts for operator characteristics. The AP is calculated using hierarchical task analysis according to information processing stages. An overall proportion is then calculated for the system. The practicality and feasibility of this model are verified through a case study. We further propose a relationship between the AP and operator SA responses, based on existing empirical research findings.
Title: Models of automation proportion in human-in-the-loop systems and operator situation awareness responses. Ergonomics, pp. 1-14.
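A hypothetical sketch of how a continuous automation proportion might be computed from a hierarchical task analysis: each information-processing stage receives the share of its subtasks that are automated, and the system-level AP aggregates across stages. The equal weighting of stages and the specific aggregation are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical AP calculation from a hierarchical task analysis.
# Equal stage weighting is an assumption for illustration only.

def stage_ap(subtask_automated):
    """Proportion of a stage's subtasks that are automated.
    subtask_automated: list of booleans (True = automated)."""
    return sum(subtask_automated) / len(subtask_automated)

def system_ap(stages):
    """Overall AP as the mean of per-stage proportions.
    stages: {stage_name: list of subtask automation flags},
    e.g. keyed by information-processing stages such as
    perception, cognition, decision, and action."""
    return sum(stage_ap(flags) for flags in stages.values()) / len(stages)
```

The appeal of a continuous AP over discrete levels of automation is that it supports ordinary regression against operator performance, workload, and SA measures.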
Pub Date: 2025-12-27 | DOI: 10.1080/00140139.2025.2608276
Kung-Jeng Wang, Kim-Tien Truong
Manual lifting tasks are commonplace in industries such as logistics, healthcare, and manufacturing, and can lead to significant ergonomic risks, including musculoskeletal disorders. This study proposes a video-based ergonomic risk assessment tool that integrates the MediaPipe Pose Landmarker for joint coordinate tracking, classification and regression tree models for lifting stage classification, and a back-propagation neural network for parameter compensation. Utilising the revised NIOSH lifting equation, the proposed tool calculates the recommended weight limit and lifting index to quantify ergonomic risks from smartphone videos. Validated on laboratory and field datasets, the tool demonstrates adaptability, scalability, and the potential to provide cost-effective ergonomic assessments.
Title: A video-based assessment tool using machine learning for ergonomic risk prediction in manual lifting tasks. Ergonomics, pp. 1-17.
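The revised NIOSH lifting equation at the core of such a tool combines a load constant with task multipliers: RWL = LC × HM × VM × DM × AM × FM × CM, and LI = load / RWL. The sketch below uses the standard published multiplier formulas (metric units); the frequency and coupling multipliers come from tables in the NIOSH manual, so they are passed in as precomputed values.

```python
# Revised NIOSH lifting equation (metric form). FM and CM are table
# lookups in the NIOSH manual and are supplied by the caller here.

def rwl(h_cm, v_cm, d_cm, a_deg, fm, cm, lc=23.0):
    """Recommended Weight Limit in kg:
    RWL = LC * HM * VM * DM * AM * FM * CM."""
    hm = 25.0 / h_cm                       # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)    # vertical multiplier
    dm = 0.82 + 4.5 / d_cm                 # distance multiplier
    am = 1.0 - 0.0032 * a_deg              # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

def lifting_index(load_kg, rwl_kg):
    """LI = load / RWL; LI > 1 indicates elevated ergonomic risk."""
    return load_kg / rwl_kg
```

At the ideal posture (H = 25 cm, V = 75 cm, D = 25 cm, no asymmetry, FM = CM = 1) every multiplier equals 1 and the RWL is the 23 kg load constant; video-derived joint coordinates supply H, V, D, and A per lifting stage.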
Pub Date: 2025-12-26 | DOI: 10.1080/00140139.2025.2605058
Jiahao Chen, Fu Guo, Zeyu Zhang, Mingming Li, Jaap Ham
Measuring users' emotional reactions and subsequent neural activity towards chatbots' design cues could significantly enhance our understanding of chatbots and potentially optimise the design of human-chatbot interaction. This study conducted a mixed experimental design with two within-subject factors (chatbot identity disclosure and emotional support) and one between-subject factor (interaction modality) to explore users' subjective evaluation and neural activity. Results revealed that not disclosing the chatbot's identity or providing emotional support during the interaction could enhance users' positive emotional reactions. Notably, results showed a three-way interaction effect on users' HbO2 concentration changes in the right DLPFC. Specifically, in the text-based interaction condition where the chatbot's identity was disclosed, providing emotional support from the chatbot elicited a higher concentration increase of HbO2. Similarly, when the chatbot provided emotional support, disclosing the chatbot's identity induced a higher concentration increase of HbO2. These findings offer theoretical contributions and practical implications for the design of human-chatbot interaction.
Title: Text or talk? The impact of chatbot identity disclosure, emotional support, and interaction modality on users' emotional reactions and neural activity. Ergonomics, pp. 1-17.
Pub Date: 2025-12-25 | DOI: 10.1080/00140139.2025.2606778
Fangda Zhang, Lin Jie, Le Zou, Tingru Zhang
Drivers' takeover performance is critical to the safety of Level 3 automated driving, making the design of takeover requests (TORs) a central topic in recent ergonomic research. This study proposed a novel approach to delivering tactile TORs through wearable devices. A driving simulator experiment involving 36 participants was conducted to examine how the location of tactile warnings (ear, wrist, abdomen, and tibia) affects takeover performance and user acceptance under various non-driving-related task (NDRT) conditions. Results showed that ear-based warnings elicited the shortest reaction times, while tibia-based cues triggered the strongest braking responses. Wrist-based warnings, despite slightly slower reactions, received the highest acceptance ratings from participants. These findings emphasise the importance of signal placement and reveal a trade-off between objective performance and user preference. We recommend ear-based cues for urgent scenarios and wrist-based cues for routine takeovers. This work offers valuable guidance for designing user-centered haptic interfaces in automated vehicles.
Title: Informing drivers via wearable devices: Impacts of tactile warning locations on takeover performance in automated driving. Ergonomics, pp. 1-13.
Pub Date: 2025-12-19 | DOI: 10.1080/00140139.2025.2592985
Ifeoma Michael, Aditya Subramani Murugan, Eunsik Kim
As travel becomes integral to modern life, many workers complete office tasks in non-traditional settings using small screens, which may hinder productivity and well-being. Virtual reality (VR) offers a potential solution. This study (n = 20) examined the effects of VR work environments on cognitive performance, physiological responses, and subjective ratings. Participants completed cognitive tasks in both VR and physical environments. While most performance measures showed no significant differences, reaction tasks were slightly better in the physical setup. However, VR yielded better electroencephalogram and subjective outcomes. To enhance ecological validity, a subset of participants repeated the VR condition in a campus cafeteria. Their performance, physiological, and subjective responses remained consistent with lab results. These findings suggest VR can effectively support office tasks in non-traditional environments, maintaining cognitive and physiological performance while improving user experience.
Title: Evaluating virtual reality work environments: cognitive and physiological impacts on office workers. Ergonomics, pp. 1-16.
Pub Date: 2025-12-19 | DOI: 10.1080/00140139.2025.2588167
James Szalma, Grace Teo, Tarah Schmidt-Daly, P A Hancock
The present experiment investigated the effects of the type of knowledge of results (KR) on performance, workload, and stress in a dynamic, first-person perspective vigilance task. There were five KR conditions: Complete KR, which included feedback regarding hits, misses, and false alarms; three conditions in which participants received only one of these types; and finally a No KR control. Half of the participants in each of these conditions also received summary feedback at the end of their training phase. Those receiving Complete KR were the only group for which detection performance exceeded that of the control group. Summary KR reduced false alarm frequency during the subsequent (transfer) vigil but also resulted in higher levels of perceived workload and emotion-focused coping. These findings suggest that while KR can benefit vigilance training, such benefits are moderated by KR type and can affect concomitant subjective experience during task performance.
Title: Training for vigilance on the move using knowledge of results: The effects of feedback type on performance and subjective response. Ergonomics, pp. 1-19.
Pub Date: 2025-12-15 | DOI: 10.1080/00140139.2025.2598050
Luwei Chen, Jie Zhang, Ruoyue Tang, Sina Sareh, Yan Luximon
Product perception is a core dimension of ergonomics, encompassing how individuals cognitively and emotionally interpret product attributes. Eyeglasses are a salient and prevalent facial appearance-related product for children, whose self- and peer-perceptions are both complex and influential. Yet how these perceptions shape children's preferences remains underexplored. Moreover, colour is one of the most immediate product attributes and is associated with gender. Accordingly, this study investigates the mechanisms linking children's emotional perceptions to their eyeglasses colour preferences for both boys and girls. A quantified hierarchical perception model was developed through a psychological experiment with 32 children (17 boys, 15 girls) and subsequent statistical and regression analyses. Using eyeglasses as a representative case, the model offers practical and quantitative guidance for colour design in facial appearance-related products. Overall, the study contributes to advancing knowledge of children's perception and decision-making in the domains of colour, ergonomics, and product design.
Title: Children's preference for eyeglasses colour: towards a quantified hierarchical perception model. Ergonomics, pp. 1-20.