Introduction: Understanding the cognitive process of thinking as a neural phenomenon remains a central challenge in neuroscience and computational modeling. This study addresses this challenge by presenting a biologically grounded framework that simulates adaptive decision making across cognitive states.
Methods: The model integrates neuronal synchronization, metabolic energy consumption, and reinforcement learning. Neural synchronization is simulated using Kuramoto oscillators, while energy dynamics are constrained by multimodal activity profiles. Reinforcement learning agents (Q-learning and a Deep Q-Network, DQN) modulate external inputs to maintain optimal synchrony at minimal energy cost. The model is validated against real EEG and fMRI data by comparing simulated and empirical outputs across spectral power, phase synchrony, and BOLD activity.
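As an illustration of the synchronization component, the Kuramoto dynamics named above can be sketched in a few lines. The oscillator count, coupling strength K, time step, and frequency distribution below are illustrative placeholders, not the parameters used in this study:

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    N = len(theta)
    coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Global synchrony r in [0, 1]: r = |mean(exp(i * theta))|."""
    return np.abs(np.mean(np.exp(1j * theta)))

rng = np.random.default_rng(0)
N, K = 50, 2.0                      # illustrative population size and coupling
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases
omega = rng.normal(1.0, 0.1, N)        # narrowly spread natural frequencies
for _ in range(5000):
    theta = kuramoto_step(theta, omega, K)
r = order_parameter(theta)          # close to 1 in this strongly coupled regime
```

In the framework described above, a learning agent would adjust external inputs (e.g., perturbations to `omega` or the coupling) to steer `r` toward a target synchrony level while penalizing energy expenditure.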
Results: The DQN agent achieved rapid convergence, stabilizing cumulative rewards within 200 episodes and reducing mean synchronization error by over 40%, outperforming Q-learning in both speed and generalization. The model successfully reproduced canonical brain states (focused attention, multitasking, and rest). Simulated EEG showed dominant alpha-band power (3.2 × 10⁻⁴ a.u.), while real EEG exhibited beta dominance (3.2 × 10⁻⁴ a.u.), indicating accurate modeling of resting states and tunability for active tasks. Phase Locking Value (PLV) ranged from 0.9806 to 0.9926, with the focused condition yielding the lowest circular variance (0.0456) and a near-significant phase shift relative to rest (t = -2.15, p = 0.075). Cross-modal validation revealed a moderate correlation between simulated and real BOLD signals (r = 0.30, resting condition), with delayed inputs improving temporal alignment. General Linear Model (GLM) analysis of simulated BOLD data showed high region-specific prediction accuracy (R² = 0.973-0.993, p < 0.001), particularly in the prefrontal, parietal, and anterior cingulate cortices. Voxel-wise correlation and ICA decomposition confirmed structured network dynamics.
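The phase-synchrony metrics reported above (PLV and circular variance) follow standard circular-statistics definitions; a minimal sketch on synthetic phase traces, with illustrative frequencies and noise levels not drawn from the study's data:

```python
import numpy as np

def plv(phi1, phi2):
    """Phase Locking Value: |<exp(i * (phi1 - phi2))>| over time, in [0, 1]."""
    return np.abs(np.mean(np.exp(1j * (phi1 - phi2))))

def circular_variance(phi):
    """Circular variance 1 - R, where R is the mean resultant length;
    0 means the phases are perfectly concentrated."""
    return 1.0 - np.abs(np.mean(np.exp(1j * phi)))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
base = 2 * np.pi * 10 * t                        # illustrative 10 Hz phase ramp
phi1 = base + 0.05 * rng.standard_normal(t.size)   # two nearly locked channels
phi2 = base + 0.05 * rng.standard_normal(t.size)
locking = plv(phi1, phi2)   # close to 1 for tightly locked phases
```

Values near 1, as in the focused-attention condition reported above, indicate that the phase difference between channels is almost constant across the recording.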
Discussion: These findings demonstrate that the framework captures both electrophysiological and spatial aspects of brain activity, respects neuroenergetic constraints, and adaptively regulates brain-like states through reinforcement learning. The model offers a scalable platform for simulating cognition and developing biologically inspired neuroadaptive systems.
Conclusion: This work provides a novel and testable approach to modeling thinking as a biologically constrained control problem and lays the groundwork for future applications in cognitive modeling and brain-computer interfaces.