Background
Spatiotemporal mapping of neural activity during continuous speech production has traditionally been approached with correlation coefficient (CC) analysis between cortical signals and speech recordings. A prior study employed this approach using electrocorticography (ECoG) data from participants who underwent invasive intracranial monitoring for epilepsy. However, CC cannot detect nonlinear relationships and is dominated by the correspondence between periods of silence and periods of non-silence.
New Method
We introduce the mutual information (MI) measure, which can capture both linear and nonlinear dependencies. We validated CC and MI on the sub-second spatiotemporal brain activity recorded during continuous speech tasks. To refine the results, we also implemented a novel “masked analysis”, which excludes periods of silence, and compared it with the standard (unmasked) analysis.
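The masked comparison described above can be sketched as follows. This is a minimal illustration only: the histogram-based MI estimator, the synthetic signals, the silence threshold, and all variable names are assumptions for exposition, not the study's actual implementation.

```python
import numpy as np

def correlation(x, y):
    """Pearson correlation coefficient between two 1-D signals."""
    return float(np.corrcoef(x, y)[0, 1])

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate (in bits)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Toy data: a speech envelope that is silent for its first half, and a
# cortical signal that linearly tracks it plus noise (illustrative only).
rng = np.random.default_rng(0)
speech = np.abs(rng.normal(size=5000))
speech[:2500] = 0.0                           # silent period
neural = 0.5 * speech + 0.1 * rng.normal(size=5000)

# Standard (unmasked) analysis uses all samples; the masked analysis
# excludes silent periods before computing the dependency measure.
mask = speech > 0
cc_unmasked = correlation(neural, speech)
mi_masked = mutual_information(neural[mask], speech[mask])
```

Because the unmasked measures operate on the full recording, the silence/non-silence contrast dominates them; restricting both signals to the mask isolates the dependency present during speech itself.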
Results
Our findings show that previous results, obtained through more complex statistical methods, can be reproduced using CC with an appropriate threshold cutoff. Moreover, both standard MI and CC are influenced by broad transitions between silence and speech, but masking allows the detection of intrinsic correspondences between the two signals, revealing more localized activity.
Comparison with existing methods
Compared to the standard CC, masked MI highlights early prefrontal and premotor activations emerging ∼440 ms before speech onset. It also identifies sharper, anatomically coherent activations in key speech-related areas, demonstrating improved sensitivity to the fine-grained spatiotemporal dynamics of continuous speech production.
Conclusion
These findings deepen our understanding of the neural pathways underlying speech and underscore the potential of masked MI for advancing neural decoding in future speech-based brain-computer interface applications.
