Biscriptuality is the ability to write in two different scripts. Achieving handwriting expertise in a single script demands years of intensive practice, and these demands are even greater when two scripts must be mastered. Biscriptuality could thus shape the cognitive and motor skills underlying graphomotor control. Here, we aimed to establish that biscriptuality enhances graphomotor control, and to test whether biscriptuals have better fine motor skills and working memory performance than Latin monoscriptuals.
We found that biscriptuals outperform monoscriptuals on graphomotor tasks and on three types of fine motor control tasks indexing dexterity, motor timing under spatial constraints, and spontaneous motor tempo; the two groups did not differ significantly in working memory performance. These results demonstrate that writing expertise broadly impacts the organization of the motor system.
Humans are endowed with a motor system that resonates to speech sounds, but whether concurrent visual information from lip movements can improve speech perception at a motor level through multisensory integration mechanisms remains unknown. The aim of this study was therefore to explore behavioral and neurophysiological correlates of multisensory influences on motor resonance in speech perception. Motor-evoked potentials (MEPs), elicited by single-pulse transcranial magnetic stimulation (TMS) applied over the left lip muscle (orbicularis oris) representation in the primary motor cortex, were recorded in healthy participants during the presentation of syllables in unimodal (visual or auditory) or multisensory (audio-visual) congruent or incongruent conditions. At the behavioral level, participants identified syllables better in the congruent audio-visual condition than in the unimodal conditions, showing a multisensory enhancement effect. Accordingly, at the neurophysiological level, MEP amplitudes were increased in the congruent audio-visual condition compared to the unimodal ones. Incongruent audio-visual syllables resulting in illusory percepts did not increase corticospinal excitability, which was instead comparable to that induced by real perception of the same syllable. In conclusion, seeing and hearing congruent bilabial syllables increases the excitability of the lip representation in the primary motor cortex, documenting that multisensory integration can facilitate speech processing by influencing motor resonance. These findings highlight the modulatory role of multisensory processing, showing that it can boost speech perception and that multisensory interactions occur not only within higher-order regions but also within primary motor areas, as indicated by corticospinal excitability changes.
Reward value and selective attention both enhance the representation of sensory stimuli at the earliest stages of processing. It is still debated whether and how reward-driven and attentional mechanisms interact to influence perception. Here we ask whether the interaction between reward value and selective attention depends on the sensory modality through which the reward information is conveyed. Human participants first learned the reward value of unimodal visual and auditory stimuli during a conditioning phase. Subsequently, they performed a target detection task on bimodal stimuli containing a previously rewarded stimulus in one, both, or neither of the modalities. Additionally, participants were required to focus their attention on one side and report only targets on the attended side. Our results showed a strong modulation of visual and auditory event-related potentials (ERPs) by spatial attention. We found no main effect of reward value but, importantly, an interaction: the strength of attentional modulation of the ERPs was significantly affected by reward value. When reward effects were examined separately for each modality, auditory value-driven modulation of attention dominated the ERP effects, whereas visual reward value on its own produced no effect, likely because it interfered with target processing. These results suggest a two-stage model in which the salience of a high-reward stimulus is first enhanced on a local priority map specific to each sensory modality, and reward value and top-down attentional mechanisms are then integrated across sensory modalities to affect perception.