Recent advances in neuroscience and artificial intelligence have pushed the state of the art from decoding the meaning of individual words from non-invasive brain recordings to reconstructing the meaning of continuous language. Beyond the game-changing practical implications of such "mind-reading" mapping models, e.g., brain-computer interfaces that restore a lost ability to speak, they also hold the promise of being instrumental in addressing a fundamental question in the cognitive sciences: how does the human brain represent the meaning of concepts, phrases, and sentences? To fulfil this promise, however, important methodological and theoretical challenges need to be overcome: (1) extant mapping results are inconsistent and difficult to reconcile with neurocognitive theory, (2) extant neural meaning representations do not model the compositional semantics that capture the meaning of multi-word utterances, and (3) extant mapping models fail to take into account the spatiotemporal dynamics of lexical and compositional semantic representation and computation. I argue that to overcome these challenges, we should ground mapping models in linguistic and neurocognitive theory, and develop neurocomputational models that explicate the spatiotemporal dynamics of meaning in the brain's language network.
