Research on Parkinson's disease (PD) has documented significant deficits in verb production, with more robust results in single-word retrieval tasks than in connected speech, yet the underlying causes of these deficits are disputed, especially in connected speech production. We analyzed picture descriptions provided by 48 individuals with PD and 48 age-matched healthy controls, and examined the percentage of nouns and verbs among all words, the number of described events, verbs denoting activity, verbs in active morpho-syntactic patterns, and transitive verbs. Individuals with PD produced a lower percentage of verbs than did control participants, but the groups did not differ on any other variable. Scores on a cognitive screening task were associated with the percentage of verbs and the number of events. We suggest that verb retrieval in connected speech in PD reflects no specific difficulty with action semantics, but rather the spread of PD pathology into more diffuse verb-specific neural networks.
This study examines how children with developmental language disorder (DLD) discriminate voiced and voiceless consonants, and how quickly they process them. It also explores the contribution of factors such as age, nonverbal intelligence, vocabulary, morphosyntactic skills, and sentence repetition in explaining speech perception abilities. Fourteen Cypriot Greek children with DLD and 14 peers with typical development (TD) aged 7;10–10;4 were recruited. Children were divided into four groups based on age and condition: young-DLD, young-TD, old-DLD, and old-TD. All children participated in an AX discrimination task, which measured their ability to discriminate sounds and their processing speed. They also completed a nonverbal intelligence test and the DVIQ test, which provided measures of various language abilities. The results demonstrated that the young-DLD group performed worse at discriminating consonants than the young-TD group, while no such differences were observed between the old-DLD and old-TD groups. Furthermore, while no significant differences in processing time were found between the DLD and TD groups, both young groups (DLD and TD) displayed longer processing times than their older counterparts. Age was the factor contributing most to speech perception abilities in children with DLD, in contrast to morphosyntax and vocabulary for children with TD. These findings highlight the role of voicing discrimination, as opposed to reaction time, as a diagnostic marker of DLD. Moreover, they underscore the crucial role of age in detecting DLD. The language developmental trajectories of children with TD appear distinct from those of children with DLD, as evidenced by the differing contributing factors between the two groups. These disparities can be attributed to the heterogeneous nature of the DLD population, the therapies these children receive, the compensatory strategies they employ, and the potential impact of other contributing factors.
Both artificial and biological systems are faced with the challenge of noisy and uncertain estimation of the state of the world, in contexts where feedback is often delayed. This challenge also applies to the processes of language production and comprehension, both when they take place in isolation (e.g., in monologue or solo reading) and when they are combined as is the case in dialogue. Crucially, we argue, dialogue brings with it some unique challenges. In this paper, we describe three such challenges within the general framework of control theory, drawing analogies to mechanical and biological systems where possible: (1) the need to distinguish between self- and other-generated utterances; (2) the need to adjust the amount of advance planning (i.e., the degree to which planning precedes articulation) flexibly to achieve timely turn-taking; (3) the need to track changing conversational goals. We show that message-to-sound models of language production (i.e., those that cover the whole process from message generation to articulation) tend to implement fairly simple control architectures. However, we argue that more sophisticated control architectures are necessary to build language production models that can account for both monologue and dialogue.
Previous research has identified homogeneous language behavior among women speakers with progressive mild cognitive impairment (MCI). These speakers primarily use verbal and non-verbal pragmatic markers with interactive functions to maintain communication with the interlocutor, and this function increases significantly over time. However, variations have been observed among speakers, prompting the development of an individualized analysis of the participants' discursive productions informed by neurolinguistic models.
A multimodal and individualized analysis was conducted on five women over 75 diagnosed with progressive MCI, using longitudinal, natural-language corpora. The data were processed using transcription tools (for verbal discourse) and annotation tools (for gestures), then subjected to Principal Component Analyses, given the diversity of the data set and of the discursive modalities to be analyzed for each individual.
The results reveal variations, and even specialization, in verbal and gestural pragmatic markers depending on cognitive and empathic profiles, as well as certain resilience factors among study participants. Three behavioral patterns emerge, corresponding to amnestic MCI profiles with standard progression, multidomain MCI profiles, and MCI profiles occurring at a very advanced age in the context of good cognitive reserve. These findings encourage further research to characterize MCI as a diagnostic entity that is dynamic and variable from one individual to another. Additionally, corpus analysis could enable clinicians to assess the discourse of individuals with MCI for diagnostic purposes and to evaluate the effectiveness of treatments, especially speech therapy.
Although historically considered a motor disorder, cervical dystonia (CD) may present with subtle cognitive impairments. Basal ganglia dysfunction in other neurological conditions can lead to language impairments. Language in people with CD (pwCD) remains unexplored.
The study aimed to explore phonological, grammatical, and semantic language abilities in pwCD compared to healthy controls.
19 pwCD and 20 control participants completed the Object and Colour subtests of the Rapid Automatized Naming task (RAN), the Test for Reception of Grammar-2 (TROG-2), and a lexical decision task with a masked priming paradigm that compared reaction times to words varying according to two factors: hand relatedness (hand-related, non-hand-related) and word category (verb, noun).
Compared to controls, pwCD were less accurate at grammatical comprehension on the TROG-2 (p < 0.05, η² = 0.15). There were no significant differences between pwCD and controls in phonological retrieval, as measured by the RAN. PwCD demonstrated an overall reduced priming effect for all words; however, there was some evidence in our data that this reduction may be more pronounced for hand-related words.
Language deficits should be considered an area of future research in pwCD. These findings support the role of the motor system in language.
Although the ability to acquire a second language (L2) and attain fluency in that language is beneficial for a growing number of people, acquiring such skills in adulthood is significantly more difficult. While traditional in-person and computer training programs can aid in this process, learning is often slow and retention is quite poor. A method for driving long-lasting neural plasticity during language learning would be valuable for those who need or want to achieve fluency in a second language later in life. However, little is known about the effect of neuromodulation methods on language learning. In the current study, we investigated the effect of non-invasive transcutaneous auricular vagus nerve stimulation (taVNS) on vocabulary word-learning in healthy young adults. Importantly, we approached this research question by investigating two key parameters of taVNS: stimulation frequency (Experiment 1) and current intensity (Experiment 2). Typically developing young adults completed a 1-h training session in which they learned 30 concrete Palauan nouns while receiving real or sham stimulation to the left posterior tragus (Experiment 1) or stimulation at various intensities (Experiment 2). Participants completed a Palauan-to-English translation test immediately after training and seven days later to quantify learning and retention. Overall, the results revealed that high-frequency stimulation above the sensory threshold improved retention of learned words. These results suggest that taVNS may improve retention of L2 vocabulary words and that stimulation frequency may impact efficacy.
As a highly demanding mental activity with both cognitive and emotional factors, verbal-humor processing consumes more attentional resources than non-humor processing, as behavioral studies have demonstrated, but little has been examined on a real-time scale. Based on the three-stage model (incongruity detection, incongruity resolution, and mirth), the current study used event-related potentials (ERPs) and event-related oscillations (EROs) to explore the attentional resource consumption of verbal-humor processing, employing a dual-task paradigm in which sentence comprehension (humorous, positive, neutral) was the primary task and arithmetical calculation (simple, difficult) was the secondary task.
Participants’ (N = 38) behavioral performance and ERP/ERO measures in the two tasks were analyzed. ERP results for verbal-humor processing revealed significantly larger LAN, LLAN, and LPP activation, indexing the three stages. ERO results showed significant beta-power changes in the detection stage and theta changes in the resolution and mirth stages. The behavioral data indicated that reaction times (RTs) on the arithmetical task were longer following verbal-humor processing than following non-humorous positive and neutral sentences. The ERP results for arithmetical calculation showed that calculations following verbal-humor processing elicited significantly greater P2, P3b, and positive slow-wave amplitudes than those following the other two sentence types, reflecting greater resource allocation to the calculation to compensate for the resources preempted by verbal-humor processing. In addition, calculations following positive sentences exhibited greater ERP amplitudes in the relatively early P2 time window than those following neutral sentences. Collectively, the behavioral, ERP, and ERO results concurrently confirmed that verbal-humor processing consumed more attentional resources than its non-humorous counterparts; moreover, the comparison of ERPs following humorous and positive sentences suggested that processing the cognitive factor consumes more attentional resources than processing the emotional factor, although both factors play a role.