Distributional semantic representations were used to investigate crossmodal correspondences within language, offering a comprehensive analysis of how sensory experiences interconnect in linguistic constructs. Semantic proximity was computed between words from different sensory modalities to construct a crossmodal semantic network, providing a general view of crossmodal correspondences in the English language. Community detection techniques were applied to unveil domains of experience where crossmodal correspondences were likely to manifest, while also considering the role of affective dimensions in shaping these domains. The study revealed an architecture of structured domains of experience in language, in which crossmodal correspondences are deeply embedded. The present research highlights the roles of emotion and statistical associations in the organization of sensory concepts across modalities in language. The domains identified, including food, the body, the physical world and emotions/values, underscored the intricate interplay between the senses, emotion and semantic patterns. These findings align with the embodied lexicon hypothesis and the semantic coding hypothesis, emphasizing the capacity of language to capture and reflect the emotional and perceptual subtleties of crossmodal correspondences in the form of networks, while also revealing opportunities for further perceptual research on crossmodal correspondences and multisensory integration.
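As a rough illustration of the kind of pipeline described above (not the authors' actual code), semantic proximity between sensory words can be computed from pretrained distributional vectors, thresholded into a network, and partitioned with a community detection algorithm. The word lists, embedding file, similarity threshold, and choice of greedy modularity communities below are placeholder assumptions for the sketch.

```python
# Illustrative sketch only: build a crossmodal semantic network from word
# embeddings and partition it into candidate "domains of experience".
# Word lists, the embedding file, the similarity threshold, and the
# community detection algorithm are assumptions, not the study's method.
import itertools

import networkx as nx
from gensim.models import KeyedVectors
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical sensory word lists, one per modality.
MODALITY_WORDS = {
    "vision": ["bright", "dim", "red"],
    "touch": ["rough", "smooth", "warm"],
    "taste": ["sweet", "bitter", "sour"],
}

# Pretrained distributional vectors (path is an assumption).
vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

graph = nx.Graph()
# Link word pairs across modalities whose cosine similarity exceeds a threshold.
for (mod_a, words_a), (mod_b, words_b) in itertools.combinations(MODALITY_WORDS.items(), 2):
    for w1, w2 in itertools.product(words_a, words_b):
        if w1 in vectors and w2 in vectors:
            sim = float(vectors.similarity(w1, w2))
            if sim > 0.3:  # arbitrary cut-off for illustration
                graph.add_edge(w1, w2, weight=sim)

# Communities in this network play the role of domains of experience.
for i, community in enumerate(greedy_modularity_communities(graph, weight="weight")):
    print(f"Domain {i}: {sorted(community)}")
```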
The embodied imagery hypothesis proposes the activation of perceptual-motor systems during language processing. Previous studies primarily used concrete visual stimuli to investigate mental imagery in language processing by native speakers (NSs) and second language (L2) learners, but few have employed schematic diagrams. The present study investigates mental imagery in the processing of prepositional phrases by English NSs and L2 learners. Using image-schematic diagrams as primes, we examine whether any mental imagery effect is modulated by target preposition (over, in), the abstractness of meaning (spatial, extended), and stimulus onset asynchrony (SOA; 1,040 ms, 2,040 ms). A total of 79 adult L2 learners and 100 NSs of English completed diagram–picture matching and semantic priming phrasal decision tasks. Results revealed interference effects on L2 processing of over phrases and in the 2,040 ms SOA condition, but no such effects were observed in the NS group. The selective interference effects in L2 suggest different mental imagery patterns between L1 and L2 processing, and that processing schematic diagram primes imposes high cognitive demands, potentially leading to difficulties in integrating visual and linguistic information and in making grammaticality judgments. The findings partially validate schematic diagrams as visual representations of concepts and suggest the need for further examination of schematic diagrams with varying degrees of complexity.
A commonly held assumption is that demonstration and pantomime differ from ordinary action in that the movements are slowed down and exaggerated to be better understood by intended receivers. This claim has, however, been based on meagre empirical support. This article provides direct evidence that the different functional demands of demonstration and pantomime result in motion characteristics that differ from those for praxic action. In the experiment, participants were dressed in motion capture suits and asked to (1) perform an action, (2) demonstrate this action so that somebody else could learn how to perform it, (3) pantomime this action without using the object so that somebody else could learn how to perform it, and (4) pantomime this action without using the object so that somebody else could distinguish it from another action. The results confirm that actors slow down and exaggerate their movements in demonstrations and pantomimes when compared to ordinary actions.
The literature on face emojis raises the central question of whether they should be treated as pictures or as conventionalized signals. Our experiment addresses this question by investigating semantic differences in visually similar face emojis. We test a prediction following from a pictorial approach: small visual features of emojis that do not correspond to human facial features should be semantically less relevant than features that represent aspects of facial expressions. We compare emoji pairs with a visual difference that either does or does not correspond to a difference in a human facial expression, according to an adaptation of the Facial Action Coding System. We created two contexts per pair, each fitted to correspond to a prominent meaning of one or the other emoji. Participants had to choose a suitable emoji for each context. The rate at which the context-matching emoji was chosen was significantly above chance for both types of emoji pairs, and it did not differ significantly between them. Our results show that the small differences are meaningful in all pairs, whether or not they correspond to differences in human facial features. This supports a lexicalist approach to emoji semantics, which treats face emojis as conventionalized signals rather than mere pictures of faces.
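A minimal sketch of the kind of chance-level comparison mentioned above, assuming a two-alternative forced choice between the two emojis of a pair (so chance = 0.5): an exact binomial test can check whether the context-matching emoji is chosen above chance. The counts below are made-up placeholders, not the study's data, and the test is a stand-in for whatever analysis the authors actually ran.

```python
# Minimal sketch: is the rate of context-matching emoji choices above chance?
# Assumes a two-alternative forced choice (chance = 0.5); counts are
# hypothetical placeholders, not the study's data.
from scipy.stats import binomtest

n_trials = 200     # hypothetical number of judgments for one pair type
n_matching = 150   # hypothetical count of context-matching choices

result = binomtest(n_matching, n_trials, p=0.5, alternative="greater")
print(f"choice rate = {n_matching / n_trials:.2f}, p = {result.pvalue:.4g}")
```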

