The human body is perhaps the most ubiquitous and salient visual stimulus that we encounter in our daily lives. Given the prevalence of images of human bodies in natural scene statistics, it is no surprise that our mental representations of the body are thought to originate largely from visual experience. Yet little is known about high-level cognitive representations of the body. Here, we retrieved a body map from natural language, taking this as a window into high-level cognitive processes. We first extracted a matrix of distances between body parts from natural language data and used this matrix to extrapolate a body map. To test the effectiveness of this high-level body map, we then conducted a series of experiments in which participants were asked to classify the distance between pairs of body parts, presented either as words or as images. We found that the high-level body map was systematically activated when participants made these distance judgments. Crucially, the linguistic map explained participants' performance over and above the visual body map, indicating that the former cannot simply be conceived as a by-product of perceptual experience. These findings therefore establish the existence of a behaviorally relevant, high-level representation of the human body.
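As an illustration of the pipeline described above, the minimal sketch below derives pairwise distances between body-part terms and projects them into a two-dimensional map with multidimensional scaling. The toy random vectors, the use of cosine distance, and the choice of metric MDS are all assumptions made for illustration; the study's actual corpus, distance measure, and extrapolation method may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

body_parts = ["head", "shoulder", "elbow", "hand", "hip", "knee", "foot"]

# Hypothetical embeddings: in an actual analysis these would come from a
# distributional model trained on a natural-language corpus, not random vectors.
rng = np.random.default_rng(0)
vectors = np.stack([rng.normal(size=50) for _ in body_parts])

# Pairwise distance matrix between body-part terms
# (cosine distance is an assumption; other measures could serve the same role).
D = squareform(pdist(vectors, metric="cosine"))

# Extrapolate a 2D "body map" from the distance matrix via multidimensional scaling.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)

for part, (x, y) in zip(body_parts, coords):
    print(f"{part:>9s}: ({x:+.2f}, {y:+.2f})")
```

With corpus-derived embeddings in place of the toy vectors, the resulting coordinates would constitute the kind of language-based body map that can then be compared against participants' distance judgments.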
Grainger et al. (2006) were the first to use ERP masked priming to explore the differing contributions of phonological and orthographic representations to visual word processing. Here, we adapted their paradigm to examine word processing in deaf readers. We investigated whether reading-matched deaf and hearing readers (n = 36) exhibit different ERP effects associated with the activation of orthographic and phonological codes during word processing. In a visual masked priming paradigm, participants performed a go/no-go categorization task (detect an occasional animal word). Critical target words preceded by orthographically related (transposed-letter, TL) or phonologically related (pseudohomophone, PH) masked non-word primes were contrasted with the same target words preceded by letter-substitution (control) non-word primes. Hearing readers exhibited typical N250 and N400 priming effects (greater negativity for control than for TL- or PH-primed targets), and the TL and PH priming effects did not differ. For deaf readers, the N250 PH priming effect emerged later (250-350 ms), and they showed a reversed N250 priming effect for TL primes in this time window. The N400 TL and PH priming effects did not differ between groups. For hearing readers, those with better phonological and spelling skills showed larger early N250 PH and TL priming effects (150-250 ms). For deaf readers, those with better phonological skills showed a larger reversed TL priming effect in the late N250 window. We speculate that phonological knowledge modulates how strongly deaf readers rely on whole-word orthographic representations and/or on the mapping from sublexical to lexical representations.
Prior research has shown that a sentence context can decrease the necessity for language control relative to single-word processing. In particular, measures of language control such as language switch costs are reduced or even absent in a sentence context. Yet this evidence is mainly based on bilingual language production and is far from straightforward. To further investigate this issue in the comprehension modality, we relied on the lexical flanker task, which is known to introduce sentence-like processing. More specifically, Dutch-English bilinguals (n = 68) performed a classification task in mixed-language blocks on target words that were either presented alone or flanked by unrelated words in the same language. Whereas no L1 switch costs were observed overall, L2 switch costs emerged only in the no-flanker condition. This pattern of results indicates that the presence of flankers can reduce or even abolish switch costs, suggesting that language control can benefit from sentence(-like) processing compared to single-word processing.
Recent research has shown that readers may to fail notice word transpositions during reading (e.g., the transposition of "fail" and "to" in this sentence). Although this transposed-word (TW) phenomenon was initially taken as evidence that readers process multiple words in parallel, several studies now show that TW effects may also occur when words are presented one by one. Critically, however, in the majority of studies TW effects are weaker in serial presentation. Here we argue that while word position coding may to some extent proceed post-lexically (allowing TW effects to occur despite seeing words one by one), stronger TW effects in parallel presentation nonetheless evidence a degree of parallel word processing. We additionally report an experiment wherein a sample of Dutch participants (N = 34) made grammaticality judgments about 4-word TW sentences (e.g., 'the was man here', 'the went dog away') and ungrammatical control sentences ('the man dog here', 'the was went away'), with the four words presented either serially or in parallel. Ungrammaticality was decidedly more difficult to notice in the TW condition, but only when words were presented in parallel; no such effect was observed in the serial presentation. The present results bolster the notion that word order is encoded with a degree of flexibility, and further provide straightforward evidence for parallel word processing during reading.