Safety signs are crucial for accident prevention, yet their effectiveness hinges on individuals' accurate hazard perception. While electroencephalogram (EEG) studies have described the neuropsychological mechanisms underlying safety sign processing, they have been limited to descriptive associations: they cannot quantitatively predict specific levels of hazard perception from complex, high-dimensional EEG data, nor quantify the relative contributions of concurrent cognitive processes reflected by different EEG indicators. To address this gap, this study developed an interpretable machine learning framework to classify hazard perception levels from EEG signals evoked by safety signs. To better approximate real-world safety sign processing, we employed a temporally dissociated paradigm: EEG was recorded during implicit viewing of safety signs, and explicit subjective hazard ratings were subsequently collected as ground-truth labels. From the pre-processed EEG, 9 time-domain and 14 frequency-domain features were extracted and tested across five classifiers (Logistic Regression, Naive Bayes, Support Vector Machine, Random Forest, and Back Propagation Neural Network). The Random Forest model integrating both feature types achieved the highest accuracy (83.5%) in predicting three hazard levels (low, medium, high). Feature importance analysis further identified the occipital beta band and the parieto-occipital N100 component as the most contributive features, highlighting the roles of early attentional engagement and emotional valence evaluation in hazard perception. By advancing from descriptive mechanisms to quantitative, predictive classification, this study establishes a neuro-cognitive framework for decoding hazard perception of abstract, symbolic warnings. It also offers a practical, brain-based assessment tool to guide the design and evaluation of more effective safety signs.
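The pipeline described above (EEG-derived time- and frequency-domain features fed to a Random Forest with feature-importance analysis) can be sketched as follows. This is a minimal illustration using synthetic feature vectors, not the authors' data or code: the trial counts, the injected class-dependent signal, and all variable names are assumptions for demonstration only.

```python
# Hypothetical sketch of the abstract's classification pipeline:
# per-trial EEG feature vectors (9 time-domain + 14 frequency-domain)
# -> Random Forest -> three hazard levels, plus feature importances.
# All data below are synthetic stand-ins, not real EEG recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials = 300
n_features = 23                          # 9 time-domain + 14 frequency-domain
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 3, size=n_trials)    # 0 = low, 1 = medium, 2 = high hazard

# Inject a weak class-dependent shift into one feature so the classifier has
# signal to learn, loosely mimicking an EEG indicator (e.g. occipital beta
# power) that scales with perceived hazard.
X[:, 0] += 0.8 * y

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.3f}")

# Feature importances indicate which (synthetic) indicator drove predictions.
clf.fit(X, y)
top_feature = int(np.argmax(clf.feature_importances_))
print(f"most important feature index: {top_feature}")
```

In the paper's actual analysis the analogous importance ranking is what singled out the occipital beta band and the parieto-occipital N100 component; here the top feature is simply the one we seeded with class-dependent signal.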