Background: Brain-Machine Interfaces (BMIs) increasingly mediate human interaction with assistive systems, yet they remain sensitive to internal cognitive divergence. Subtle shifts in user intention, whether from fatigue, overload, or schema conflict, can degrade system reliability. While decoding accuracy has improved, most systems still lack mechanisms to communicate internal uncertainty or reasoning dynamics in real time.
Objective: We present NECAP-Interaction, a neuro-symbolic architecture designed to explore how symbolic feedback can support real-time human-AI alignment. The framework aims to improve neuroergonomic transparency by integrating symbolic trace generation into the BMI control pipeline.
Methods: All evaluations were conducted using high-fidelity synthetic agents across three simulation tasks (motor control, visual attention, cognitive inhibition). NECAP-Interaction generates symbolic descriptors of epistemic shifts, supporting co-adaptive human-system communication. We report trace clarity, response latency, and symbolic coverage using structured replay analysis and interpretability metrics.
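To make the trace mechanism concrete, the sketch below shows one plausible form of symbolic descriptor generation: mapping a decoder uncertainty stream onto coarse tiers and emitting a trace symbol at each tier transition. All names and thresholds (EpistemicEvent, trace_symbols, the tier boundaries) are illustrative assumptions, not the published NECAP-Interaction API.

```python
# Hypothetical sketch: symbolic descriptors of epistemic shifts derived
# from a decoder uncertainty stream. Names and tier thresholds are
# illustrative assumptions, not the NECAP-Interaction implementation.
from dataclasses import dataclass

@dataclass
class EpistemicEvent:
    t: float          # time of the shift (s)
    tier: str         # coarse uncertainty tier
    descriptor: str   # human-readable symbolic trace entry

# Assumed tier boundaries on decoder uncertainty, normalized to [0, 1].
TIERS = [(0.3, "stable"), (0.6, "drifting"), (1.01, "diverging")]

def tier_of(u: float) -> str:
    """Map a normalized uncertainty value to its coarse tier label."""
    for bound, label in TIERS:
        if u < bound:
            return label
    return "diverging"

def trace_symbols(uncertainty, dt=0.1):
    """Emit a symbolic descriptor whenever the uncertainty tier changes."""
    events, prev = [], None
    for i, u in enumerate(uncertainty):
        tier = tier_of(u)
        if tier != prev:
            events.append(EpistemicEvent(
                t=i * dt, tier=tier,
                descriptor=f"intent confidence {prev or 'start'} -> {tier}"))
            prev = tier
    return events

# Toy replay: rising uncertainty crosses two tier boundaries.
if __name__ == "__main__":
    stream = [0.1, 0.15, 0.2, 0.4, 0.45, 0.7, 0.8]
    for e in trace_symbols(stream):
        print(f"t={e.t:.1f}s [{e.tier}] {e.descriptor}")
```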
Results: NECAP-Interaction anticipated behavioral divergence up to 2.3 ± 0.4 s before error onset and maintained over 90% symbolic trace interpretability across uncertainty tiers. In simulated overlays, symbolic feedback improved user comprehension of system states and shortened the latency to trust collapse relative to baseline CNN and RNN architectures.
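The anticipation figure can be read as a per-trial lead time: the interval between the first divergence-tier trace symbol and behavioral error onset, aggregated across trials. The toy computation below illustrates this reading with invented trial times; it does not use the study's data.

```python
# Hypothetical sketch of the lead-time metric behind the "2.3 ± 0.4 s"
# figure: time from the first divergence flag to behavioral error onset,
# averaged over simulated trials. Trial values below are invented.
import statistics

# (first_divergence_flag_s, error_onset_s) per simulated trial.
trials = [(4.1, 6.5), (3.0, 5.2), (7.4, 9.4)]

leads = [onset - flag for flag, onset in trials]   # positive = anticipatory
mean, sd = statistics.mean(leads), statistics.stdev(leads)
print(f"anticipation lead time: {mean:.1f} ± {sd:.1f} s")
```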
Conclusion: Cognitive interpretability is not merely a technical concern; it is a design priority. By embedding symbolic introspection into BMI workflows, NECAP-Interaction supports user transparency and co-regulated interaction in cognitively demanding contexts. These findings contribute to the development of human-centered neurotechnologies in which explainability is experienced in real time.