Early language experience and modality affect parietal cortex activation in different hemispheres: Insights from hearing bimodal bilinguals
Pub Date: 2024-08-14 | DOI: 10.1016/j.neuropsychologia.2024.108973
A. Banaszkiewicz, B. Costello, A. Marchewka
The goal of this study was to investigate the impact of age of acquisition (AoA) on functional brain representations of sign language in two exceptional groups of hearing bimodal bilinguals: native signers (simultaneous bilinguals since early childhood) and late signers (proficient sequential bilinguals, who learnt a sign language after puberty). We asked whether effects of AoA would be present across languages – signed and audiovisual spoken – and thus observed only in late signers, as they acquired each language at a different life stage, and whether effects of AoA would be present during sign language processing across groups. Moreover, we aimed to carefully control participants’ level of sign language proficiency by implementing a battery of language tests developed for the purpose of the project, which confirmed that participants had a high level of competence in sign language.
Between-group analyses revealed the hypothesized modulatory effect of AoA in the right inferior parietal lobule (IPL) in native signers compared to late signers. With respect to within-group differences across languages, we observed greater involvement of the left IPL in response to sign language than to spoken language in both native and late signers, indicating language modality effects. Overall, our results suggest that the neural underpinnings of language are molded by the linguistic characteristics of the language as well as by when in life the language is learnt.
{"title":"Early language experience and modality affect parietal cortex activation in different hemispheres: Insights from hearing bimodal bilinguals","authors":"A. Banaszkiewicz , B. Costello , A. Marchewka","doi":"10.1016/j.neuropsychologia.2024.108973","DOIUrl":"10.1016/j.neuropsychologia.2024.108973","url":null,"abstract":"<div><p>The goal of this study was to investigate the impact of the age of acquisition (AoA) on functional brain representations of sign language in two exceptional groups of hearing bimodal bilinguals: native signers (simultaneous bilinguals since early childhood) and late signers (proficient sequential bilinguals, who learnt a sign language after puberty). We asked whether effects of AoA would be present across languages – signed and audiovisual spoken – and thus observed only in late signers as they acquired each language at different life stages, and whether effects of AoA would be present during sign language processing across groups. Moreover, we aimed to carefully control participants’ level of sign language proficiency by implementing a battery of language tests developed for the purpose of the project, which confirmed that participants had high competences of sign language.</p><p>Between-group analyses revealed a hypothesized modulatory effect of AoA in the right inferior parietal lobule (IPL) in native signers, compared to late signers. With respect to within-group differences across languages we observed greater involvement of the left IPL in response to sign language in comparison to spoken language in both native and late signers, indicating language modality effects. Overall, our results suggest that the neural underpinnings of language are molded by the linguistic characteristics of the language as well as by when in life the language is learnt.</p></div>","PeriodicalId":19279,"journal":{"name":"Neuropsychologia","volume":"204 ","pages":"Article 108973"},"PeriodicalIF":2.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141996208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards an ecologically valid naturalistic cognitive neuroscience of memory and event cognition
Pub Date: 2024-08-13 | DOI: 10.1016/j.neuropsychologia.2024.108970
Raju Pooja, Pritha Ghosh, Vishnu Sreekumar
The landscape of human memory and event cognition research has witnessed a transformative journey toward the use of naturalistic contexts and tasks. In this review, we track this progression from abrupt, artificial stimuli used in extensively controlled laboratory experiments to more naturalistic tasks and stimuli that present a more faithful representation of the real world. We argue that in order to improve ecological validity, naturalistic study designs must consider the complexity of the cognitive phenomenon being studied. Then, we review the current state of “naturalistic” event segmentation studies and critically assess frequently employed movie stimuli. We evaluate recently developed tools like lifelogging and other extended reality technologies to help address the challenges we identified with existing naturalistic approaches. We conclude by offering some guidelines that can be used to design ecologically valid cognitive neuroscience studies of memory and event cognition.
{"title":"Towards an ecologically valid naturalistic cognitive neuroscience of memory and event cognition","authors":"Raju Pooja , Pritha Ghosh , Vishnu Sreekumar","doi":"10.1016/j.neuropsychologia.2024.108970","DOIUrl":"10.1016/j.neuropsychologia.2024.108970","url":null,"abstract":"<div><p>The landscape of human memory and event cognition research has witnessed a transformative journey toward the use of naturalistic contexts and tasks. In this review, we track this progression from abrupt, artificial stimuli used in extensively controlled laboratory experiments to more naturalistic tasks and stimuli that present a more faithful representation of the real world. We argue that in order to improve ecological validity, naturalistic study designs must consider the complexity of the cognitive phenomenon being studied. Then, we review the current state of “naturalistic” event segmentation studies and critically assess frequently employed movie stimuli. We evaluate recently developed tools like lifelogging and other extended reality technologies to help address the challenges we identified with existing naturalistic approaches. We conclude by offering some guidelines that can be used to design ecologically valid cognitive neuroscience studies of memory and event cognition.</p></div>","PeriodicalId":19279,"journal":{"name":"Neuropsychologia","volume":"203 ","pages":"Article 108970"},"PeriodicalIF":2.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141988416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cortical activation among young adults during mobility in an indoor real-world environment: A mobile EEG approach
Pub Date: 2024-08-10 | DOI: 10.1016/j.neuropsychologia.2024.108971
Samantha Marshall, Gianna Jeyarajan, Nicholas Hayhow, Raphael Gabiazon, Tia Seleem, Mathew R. Hammerstrom, Olav Krigolson, Lindsay S. Nagamatsu
Human mobility requires neurocognitive inputs to safely navigate the environment. Previous research has examined the neural processes that underlie walking using mobile neuroimaging technologies, yet few studies have incorporated true real-world methods without a specific task imposed on participants (e.g., dual-task, motor demands). The present study included 40 young adults (M = 22.60, SD = 2.63, 24 female) and utilized mobile electroencephalography (EEG) to examine and compare theta, alpha, and beta frequency band power (μV²) during sitting and walking in laboratory and real-world environments. EEG data were recorded using the Muse S brain sensing headband, a portable system equipped with four electrodes (two frontal, two temporal) and one reference sensor. Qualitative data detailing the thoughts of each participant were collected after each condition. For the quantitative data, a 2 × 2 repeated measures ANOVA with within-subject factors of environment and mobility was conducted on the subset of participants with complete datasets (n = 17, M = 22.59, SD = 2.97, 10 female). Thematic analysis was performed on the qualitative data (n = 40). Our findings support the idea that mobility and environment may modulate neural activity, as we observed increased brain activation for walking compared to sitting, and for real-world walking compared to laboratory walking. We identified five qualitative themes across the four conditions: 1) physical sensations and bodily awareness, 2) responsibilities and planning, 3) environmental awareness, 4) mobility, and 5) the spotlight effect. Our study highlights the importance of and potential for real-world methods to supplement standard research practices and increase the ecological validity of studies conducted in the fields of neuroscience and kinesiology.
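To make the quantitative design concrete, below is a minimal sketch of how a 2 × 2 repeated-measures ANOVA with within-subject factors of environment and mobility could be run on band power in Python with statsmodels. The data frame, column names, and simulated values are illustrative assumptions, not the study's data or analysis code.

```python
# Hypothetical sketch: 2 x 2 repeated-measures ANOVA (environment x mobility)
# on alpha-band power, mirroring the design described above. The data frame,
# column names, and values are assumptions, not the authors' data or code.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conditions = [("lab", "sit"), ("lab", "walk"), ("real", "sit"), ("real", "walk")]
rows = []
for subject in range(1, 18):                     # 17 illustrative participants
    for env, mob in conditions:
        power = (4.0 + (0.5 if mob == "walk" else 0.0)
                 + (0.3 if env == "real" else 0.0) + rng.normal(0, 0.2))
        rows.append({"subject": subject, "environment": env,
                     "mobility": mob, "alpha_power": power})
df = pd.DataFrame(rows)                          # long format, one row per cell

# both factors are within-subject, matching the repeated-measures design
aov = AnovaRM(df, depvar="alpha_power", subject="subject",
              within=["environment", "mobility"]).fit()
print(aov)
```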
{"title":"Cortical activation among young adults during mobility in an indoor real-world environment: A mobile EEG approach","authors":"Samantha Marshall , Gianna Jeyarajan , Nicholas Hayhow , Raphael Gabiazon , Tia Seleem , Mathew R. Hammerstrom , Olav Krigolson , Lindsay S. Nagamatsu","doi":"10.1016/j.neuropsychologia.2024.108971","DOIUrl":"10.1016/j.neuropsychologia.2024.108971","url":null,"abstract":"<div><p>Human mobility requires neurocognitive inputs to safely navigate the environment. Previous research has examined neural processes that underly walking using mobile neuroimaging technologies, yet few studies have incorporated true real-world methods without a specific task imposed on participants (e.g., dual-task, motor demands). The present study included 40 young adults (M = 22.60, SD = 2.63, 24 female) and utilized mobile electroencephalography (EEG) to examine and compare theta, alpha, and beta frequency band power (μV<sup>2</sup>) during sitting and walking in laboratory and real-world environments. EEG data was recorded using the Muse S brain sensing headband, a portable system equipped with four electrodes (two frontal, two temporal) and one reference sensor. Qualitative data detailing the thoughts of each participant were collected after each condition. For the quantitative data, a 2 × 2 repeated measures ANOVA with within subject factors of environment and mobility was conducted with full participant datasets (n = 17, M = 22.59, SD = 2.97, 10 female). Thematic analysis was performed on the qualitative data (n = 40). Our findings support that mobility and environment may modulate neural activity, as we observed increased brain activation for walking compared to sitting, and for real-world walking compared to laboratory walking. We identified five qualitative themes across the four conditions 1) physical sensations and bodily awareness, 2) responsibilities and planning, 3) environmental awareness, 4) mobility, and 5) spotlight effect. Our study highlights the importance and potential for real-world methods to supplement standard research practices to increase the ecological validity of studies conducted in the fields of neuroscience and kinesiology.</p></div>","PeriodicalId":19279,"journal":{"name":"Neuropsychologia","volume":"203 ","pages":"Article 108971"},"PeriodicalIF":2.0,"publicationDate":"2024-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0028393224001866/pdfft?md5=052c8c274ece1da0e2ed9f714016175e&pid=1-s2.0-S0028393224001866-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141917147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prioritization of social information processing: Eye gaze elicits earlier vMMN than arrows
Pub Date: 2024-08-08 | DOI: 10.1016/j.neuropsychologia.2024.108969
Yijie Huang, Wenyi Shen, Shimin Fu
Numerous studies have demonstrated that eye gaze and arrows act as cues that automatically guide spatial attention. However, it remains uncertain whether the attention shifts triggered by these two types of stimuli differ in terms of automatic processing mechanisms. In the current investigation, we employed an equal probability paradigm to explore the similarities and distinctions in the neural mechanisms of automatic processing for eye gaze and arrows under non-attentive conditions, using the visual mismatch negativity (vMMN) as an indicator of automatic processing. The sample comprised 17 participants. The results indicated a significant interaction between time duration, stimulus material, and stimulus type. The findings demonstrated that both eye gaze and arrows were processed automatically, triggering an early vMMN, although with temporal variations. The vMMN for eye gaze occurred between 180 and 220 ms, whereas for arrows it ranged from 235 to 275 ms. Moreover, arrow stimuli produced a more pronounced vMMN amplitude. The earlier vMMN response to eye gaze compared with arrows implies the specificity and precedence of social information processing associated with eye gaze over the processing of nonsocial information conveyed by arrows. However, arrows could potentially elicit a stronger vMMN because of their heightened salience against the background, and a broadened attentional focus might amplify the vMMN effect. This study offers insights into the similarities and differences in the attentional processing of social and non-social information under unattended conditions from the perspective of automatic processing.
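A minimal sketch of one conventional way a vMMN could be quantified, as the deviant-minus-standard difference wave averaged within the time windows reported above, follows. The sampling rate, epoch limits, and simulated waveforms are assumptions for illustration, not the authors' pipeline.

```python
# Hypothetical sketch: vMMN quantified as the deviant-minus-standard
# difference wave averaged within the time windows reported above.
# Sampling rate, epoch limits, and the simulated ERPs are assumptions.
import numpy as np

sfreq = 500.0                                   # Hz (assumed)
times = np.arange(-0.1, 0.5, 1.0 / sfreq)       # epoch from -100 to 500 ms

rng = np.random.default_rng(0)
erp_standard = rng.normal(0.0, 0.1, times.size)   # placeholder ERPs (microvolts)
erp_deviant = rng.normal(-0.3, 0.1, times.size)

def mean_amplitude(wave, times, tmin, tmax):
    """Mean amplitude of a waveform within the [tmin, tmax] window (seconds)."""
    mask = (times >= tmin) & (times <= tmax)
    return wave[mask].mean()

vmmn = erp_deviant - erp_standard               # deviant minus standard
gaze_window = mean_amplitude(vmmn, times, 0.180, 0.220)    # eye-gaze window
arrow_window = mean_amplitude(vmmn, times, 0.235, 0.275)   # arrow window
print(f"vMMN 180-220 ms: {gaze_window:.2f} uV; 235-275 ms: {arrow_window:.2f} uV")
```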
{"title":"Prioritization of social information processing: Eye gaze elicits earlier vMMN than arrows","authors":"Yijie Huang, Wenyi Shen, Shimin Fu","doi":"10.1016/j.neuropsychologia.2024.108969","DOIUrl":"10.1016/j.neuropsychologia.2024.108969","url":null,"abstract":"<div><p>Numerous research studies have demonstrated that eye gaze and arrows act as cues that automatically guide spatial attention. However, it remains uncertain whether the attention shifts triggered by these two types of stimuli vary in terms of automatic processing mechanisms. In our current investigation, we employed an equal probability paradigm to explore the likenesses and distinctions in the neural mechanisms of automatic processing for eye gaze and arrows in non-attentive conditions, using visual mismatch negative (vMMN) as an indicator of automatic processing. The sample size comprised 17 participants. The results indicated a significant interaction between time duration, stimulus material, and stimulus type. The findings demonstrated that both eye gaze and arrows were processed automatically, triggering an early vMMN, although with temporal variations. The vMMN for eye gaze occurred between 180 and 220 ms, whereas for arrows it ranged from 235 to 275 ms. Moreover, arrow stimuli produced a more pronounced vMMN amplitude. The earlier vMMN response to eye gaze compared with arrows implies the specificity and precedence of social information processing associated with eye gaze over the processing of nonsocial information with arrows. However, arrow could potentially elicit a stronger vMMN because of their heightened salience compared to the background, and the expansion of attention focusing might amplify the vMMN impact. This study offers insights into the similarities and differences in attention processing of social and non-social information under unattended conditions from the perspective of automatic processing.</p></div>","PeriodicalId":19279,"journal":{"name":"Neuropsychologia","volume":"203 ","pages":"Article 108969"},"PeriodicalIF":2.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141913412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clinically established early Parkinson's disease patients do not show impaired use of priors in conditions of perceptual uncertainty
Pub Date: 2024-08-06 | DOI: 10.1016/j.neuropsychologia.2024.108965
Matthieu Béreau, Axel Garnier-Allain, Mathieu Servant
The ability to use past learned experiences to guide decisions is an important component of adaptive behavior, especially when decision-making is performed under time pressure or when perceptual information is unreliable. Previous studies using visual discrimination tasks have shown that this prior-informed decision-making ability is impaired in Parkinson's disease (PD), but the mechanisms underlying this deficit and the precise impact of dopaminergic denervation within cortico-basal circuits remain unclear. To shed light on this problem, we evaluated prior-informed decision-making under various conditions of perceptual uncertainty in a sample of 13 clinically established early PD patients, and compared behavioral performance with healthy control (HC) subjects matched in age, sex and education. PD patients and HC subjects performed a random dot motion task in which they had to decide the net direction (leftward vs. rightward) of a field of moving dots and communicate their choices through manual button presses. We manipulated prior knowledge by modulating the probability of occurrence of leftward vs. rightward motion stimuli between blocks of trials, and by explicitly giving these probabilities to subjects at the beginning of each block. We further manipulated stimulus discriminability, by varying the proportion of dots moving coherently in the signal direction, as well as speed-accuracy instructions. PD patients used choice probabilities to guide perceptual decisions in both speed and accuracy conditions, and their performance did not significantly differ from that of HC subjects. An additional analysis of the data with the diffusion decision model confirmed this conclusion. These results suggest that the impaired use of priors during visual discrimination observed at more advanced stages of PD is independent of dopaminergic denervation, though additional studies with larger sample sizes are needed to more firmly establish this conclusion.
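The abstract does not spell out how prior probabilities enter the diffusion decision model; a common assumption in the literature is that priors bias the starting point toward the more probable response boundary. The sketch below simulates that assumption in Python. The parameter values, and the starting-point account itself, are illustrative assumptions rather than the authors' fitted model.

```python
# Hypothetical sketch: a diffusion decision model in which prior probability
# is assumed to bias the starting point toward the more likely boundary.
# Parameter values are illustrative, not the authors' fitted estimates.
import numpy as np

def simulate_ddm_trial(drift, boundary, start_frac, dt=0.001, noise=1.0, rng=None):
    """Simulate one trial; returns (choice, decision time in seconds)."""
    rng = rng or np.random.default_rng()
    x = start_frac * boundary                   # starting point between 0 and boundary
    t = 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return ("right" if x >= boundary else "left"), t

rng = np.random.default_rng(42)
# a block with a strong prior on rightward motion is assumed to shift the
# start point toward the upper ("right") boundary; 0.5 would be an unbiased start
trials = [simulate_ddm_trial(drift=1.0, boundary=1.5, start_frac=0.65, rng=rng)
          for _ in range(1000)]
p_right = np.mean([choice == "right" for choice, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
print(f"rightward choices: {p_right:.2f}, mean decision time: {mean_rt:.3f} s")
```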
{"title":"Clinically established early Parkinson's disease patients do not show impaired use of priors in conditions of perceptual uncertainty","authors":"Matthieu Béreau , Axel Garnier-Allain , Mathieu Servant","doi":"10.1016/j.neuropsychologia.2024.108965","DOIUrl":"10.1016/j.neuropsychologia.2024.108965","url":null,"abstract":"<div><p>The ability to use past learned experiences to guide decisions is an important component of adaptive behavior, especially when decision-making is performed under time pressure or when perceptual information is unreliable. Previous studies using visual discrimination tasks have shown that this prior-informed decision-making ability is impaired in Parkinson's disease (PD), but the mechanisms underlying this deficit and the precise impact of dopaminergic denervation within cortico-basal circuits remain unclear. To shed light on this problem, we evaluated prior-informed decision-making under various conditions of perceptual uncertainty in a sample of 13 clinically established early PD patients, and compared behavioral performance with healthy control (HC) subjects matched in age, sex and education. PD patients and HC subjects performed a random dot motion task in which they had to decide the net direction (leftward vs. rightward) of a field of moving dots and communicate their choices through manual button presses. We manipulated prior knowledge by modulating the probability of occurrence of leftward vs. rightward motion stimuli between blocks of trials, and by explicitly giving these probabilities to subjects at the beginning of each block. We further manipulated stimulus discriminability by varying the proportion of dots moving coherently in the signal direction and speed-accuracy instructions. PD patients used choice probabilities to guide perceptual decisions in both speed and accuracy conditions, and their performance did not significantly differ from that of HC subjects. An additional analysis of the data with the diffusion decision model confirmed this conclusion. These results suggest that the impaired use of priors during visual discrimination observed at more advanced stages of PD is independent of dopaminergic denervation, though additional studies with larger sample sizes are needed to more firmly establish this conclusion.</p></div>","PeriodicalId":19279,"journal":{"name":"Neuropsychologia","volume":"202 ","pages":"Article 108965"},"PeriodicalIF":2.0,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0028393224001805/pdfft?md5=68dc84091e29243b26160ade1c1f7380&pid=1-s2.0-S0028393224001805-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141889861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural correlates of listening to nonnative-accented speech in multi-talker background noise
Pub Date: 2024-08-06 | DOI: 10.1016/j.neuropsychologia.2024.108968
Yushuang Liu, Janet G. van Hell
We examined the neural correlates underlying the semantic processing of native- and nonnative-accented sentences, presented in quiet or embedded in multi-talker noise. Implementing a semantic violation paradigm, 36 English monolingual young adults listened to American-accented (native) and Chinese-accented (nonnative) English sentences with or without semantic anomalies, presented in quiet or embedded in multi-talker noise, while EEG was recorded. After hearing each sentence, participants verbally repeated the sentence, which was coded and scored as an offline comprehension accuracy measure. In line with earlier behavioral studies, the negative impact of background noise on sentence repetition accuracy was greater for nonnative-accented than for native-accented sentences. At the neural level, the N400 effect for semantic anomaly was larger for native-accented than for nonnative-accented sentences, and was also larger for sentences presented in quiet than in noise, indicating impaired lexical-semantic access when listening to nonnative-accented speech or to sentences embedded in noise. No semantic N400 effect was observed for nonnative-accented sentences presented in noise. Furthermore, oscillatory activity in the alpha frequency band (an index of online cognitive listening effort) was higher when listening to sentences in noise than in quiet, but no difference was observed across the accent conditions. Semantic anomalies presented in background noise also elicited higher theta activity, whereas processing nonnative-accented anomalies was associated with decreased theta activity. Taken together, we found that listening to nonnative accents or to background noise is associated with processing challenges during online semantic access, leading to decreased comprehension accuracy. However, the underlying cognitive mechanisms (e.g., the associated listening effort) might manifest differently in accented-speech processing and speech-in-noise processing.
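As a point of reference for the alpha-band measure, here is a minimal sketch of one common way alpha-band power could be estimated for a single EEG channel using Welch's method in Python. The band edges (8-12 Hz), sampling rate, and simulated data are assumptions, not the authors' time-frequency pipeline.

```python
# Hypothetical sketch: alpha-band (8-12 Hz, assumed edges) power for one EEG
# channel estimated with Welch's method; not the authors' actual
# time-frequency pipeline.
import numpy as np
from scipy.signal import welch

sfreq = 250.0                                   # Hz (assumed)
rng = np.random.default_rng(1)
epoch = rng.normal(size=int(sfreq * 2))         # 2 s of placeholder EEG

freqs, psd = welch(epoch, fs=sfreq, nperseg=int(sfreq))
alpha_mask = (freqs >= 8.0) & (freqs <= 12.0)
alpha_power = psd[alpha_mask].mean()            # mean PSD across the alpha band
print(f"alpha-band power: {alpha_power:.4f} (arbitrary units)")
```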
{"title":"Neural correlates of listening to nonnative-accented speech in multi-talker background noise","authors":"Yushuang Liu , Janet G. van Hell","doi":"10.1016/j.neuropsychologia.2024.108968","DOIUrl":"10.1016/j.neuropsychologia.2024.108968","url":null,"abstract":"<div><p>We examined the neural correlates underlying the semantic processing of native- and nonnative-accented sentences, presented in quiet or embedded in multi-talker noise. Implementing a semantic violation paradigm, 36 English monolingual young adults listened to American-accented (native) and Chinese-accented (nonnative) English sentences with or without semantic anomalies, presented in quiet or embedded in multi-talker noise, while EEG was recorded. After hearing each sentence, participants verbally repeated the sentence, which was coded and scored as an offline comprehension accuracy measure. In line with earlier behavioral studies, the negative impact of background noise on sentence repetition accuracy was higher for nonnative-accented than for native-accented sentences. At the neural level, the N400 effect for semantic anomaly was larger for native-accented than for nonnative-accented sentences, and was also larger for sentences presented in quiet than in noise, indicating impaired lexical-semantic access when listening to nonnative-accented speech or sentences embedded in noise. No semantic N400 effect was observed for nonnative-accented sentences presented in noise. Furthermore, the frequency of neural oscillations in the alpha frequency band (an index of online cognitive listening effort) was higher when listening to sentences in noise versus in quiet, but no difference was observed across the accent conditions. Semantic anomalies presented in background noise also elicited higher theta activity, whereas processing nonnative-accented anomalies was associated with decreased theta activity. Taken together, we found that listening to nonnative accents or background noise is associated with processing challenges during online semantic access, leading to decreased comprehension accuracy. However, the underlying cognitive mechanism (e.g., associated listening efforts) might manifest differently across accented speech processing and speech in noise processing.</p></div>","PeriodicalId":19279,"journal":{"name":"Neuropsychologia","volume":"203 ","pages":"Article 108968"},"PeriodicalIF":2.0,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141907281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Individualized prediction of online shopping addiction from whole-brain functional connectivity
Liang Shi, Zhiting Ren, Qiuyang Feng, Jiang Qiu
Pub Date: 2024-08-03 | DOI: 10.1016/j.neuropsychologia.2024.108967
Online shopping addiction (OSA) is defined as a behavioral addiction in which an individual exhibits an unhealthy and excessive attachment to shopping on the Internet. Since OSA has adverse impacts on individuals' daily life and social functioning, it is important to examine its neurobiological underpinnings, which could be used in clinical practice to identify individuals with OSA. The present study addressed this question by employing a connectome-based prediction model approach to predict the OSA tendency of healthy subjects from whole-brain resting-state functional connectivity. The OSA connectome, a set of connections across multiple brain networks that contributed to predicting individuals' OSA tendency, was identified; it included the functional connectivity between the frontal-parietal network (FPN) and the cingulo-opercular network (CON) (i.e., the positive network), as well as the functional connectivity within the default mode network (DMN) and between the FPN and the DMN (i.e., the negative network). Key nodes that contributed to the prediction model included the middle frontal gyrus, inferior frontal gyrus, anterior cingulate cortex, and inferior temporal gyrus, which have been associated with impulsivity and emotional processing. Notably, this connectome showed a specific role in predicting OSA when controlling for the influence of general Internet addiction. Moreover, the strength of the negative network mediated the relationship between OSA and impulsivity, highlighting that the negative network underlies the impulsivity characteristic of OSA. Together, these findings advance our understanding of the neural correlates of OSA and provide a promising framework for diagnosing OSA.
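The core of a connectome-based predictive model is typically a cross-validated loop that selects edges correlated with the behavioral score, sums positive- and negative-network strengths per subject, and fits a linear model. The sketch below illustrates that standard scheme in Python; the p-threshold, random data, and single summary feature are illustrative assumptions, not the study's exact pipeline.

```python
# Hypothetical sketch of a connectome-based predictive modeling loop: in each
# leave-one-out fold, select edges whose strength correlates with the behavioral
# score, sum positive- and negative-network strengths per subject, and fit a
# linear model. The p-threshold and random data are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_edges = 50, 400
fc = rng.normal(size=(n_subjects, n_edges))     # vectorized functional connectivity
osa = rng.normal(size=n_subjects)               # behavioral score (OSA tendency)

p_thresh = 0.05
predictions = np.zeros(n_subjects)

for test in range(n_subjects):                  # leave-one-out cross-validation
    train = np.setdiff1d(np.arange(n_subjects), [test])
    r_vals = np.empty(n_edges)
    p_vals = np.empty(n_edges)
    for e in range(n_edges):
        r_vals[e], p_vals[e] = pearsonr(fc[train, e], osa[train])
    pos = (r_vals > 0) & (p_vals < p_thresh)    # "positive network" edges
    neg = (r_vals < 0) & (p_vals < p_thresh)    # "negative network" edges
    # single summary feature: positive-network strength minus negative-network strength
    x_train = fc[train][:, pos].sum(axis=1) - fc[train][:, neg].sum(axis=1)
    slope, intercept = np.polyfit(x_train, osa[train], 1)
    x_test = fc[test, pos].sum() - fc[test, neg].sum()
    predictions[test] = slope * x_test + intercept

print("leave-one-out prediction r:", round(pearsonr(predictions, osa)[0], 3))
```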
{"title":"Individualized prediction of online shopping addiction from whole-brain functional connectivity","authors":"Liang Shi , Zhiting Ren , Qiuyang Feng , Jiang Qiu","doi":"10.1016/j.neuropsychologia.2024.108967","DOIUrl":"10.1016/j.neuropsychologia.2024.108967","url":null,"abstract":"<div><p>Online shopping addiction (OSA) is defined as a behavioral addiction where an individual exhibits an unhealthy and excessive attachment to shopping on the Internet. Since the OSA shown its adverse impacts on individuals' daily life and social functions, it is important to examine the neurobiological underpinnings of OSA that could be used in clinical practice to identify individuals with OSA. The present study addressed this question by employing a connectome-based prediction model approach to predict the OSA tendency of healthy subjects from whole-brain resting-state functional connectivity. The OSA connectome - a set of connections across multiple brain networks that contributed to predict individuals' OSA tendency was identified, including the functional connectivity between the frontal-parietal network (FPN) and cingulo-opercular network (CON) (i.e., positive network), as well as the functional connectivity within default mode network (DMN) and that between FPN and DMN (i.e., negative network). Key nodes that contributed to the prediction model included the middle frontal gyrus, inferior frontal gyrus, anterior cingulate cortex, and inferior temporal gyrus, which have been associated with impulsivity and emotional processing. Notably, this connectome has shown its specific role in predicting OSA by controlling for the influence of general Internet addiction. Moreover, the strength of the negative network mediated the relationship between OSA and impulsivity, highlighting that the negative network underlies the impulsivity characteristic of OSA. Together, these findings advanced our understanding of the neural correlates of OSA and provided a promising framework for diagnosing OSA.</p></div>","PeriodicalId":19279,"journal":{"name":"Neuropsychologia","volume":"202 ","pages":"Article 108967"},"PeriodicalIF":2.0,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141893933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An ERP investigation of perceptual vs motoric iconicity in sign production
Pub Date: 2024-08-03 | DOI: 10.1016/j.neuropsychologia.2024.108966
Meghan E. McGarry, Katherine J. Midgley, Phillip J. Holcomb, Karen Emmorey
The type of form-meaning mapping for iconic signs can vary. For perceptually-iconic signs there is a correspondence between visual features of a referent (e.g., the beak of a bird) and the form of the sign (e.g., extended thumb and index finger at the mouth for the American Sign Language (ASL) sign BIRD). For motorically-iconic signs there is a correspondence between how an object is held/manipulated and the form of the sign (e.g., the ASL sign FLUTE depicts how a flute is played). Previous studies have found that iconic signs are retrieved faster in picture-naming tasks, but type of iconicity has not been manipulated. We conducted an ERP study in which deaf signers and a control group of English speakers named pictures that targeted perceptually-iconic, motorically-iconic, or non-iconic ASL signs. For signers (unlike the control group), naming latencies varied by iconicity type: perceptually-iconic < motorically-iconic < non-iconic signs. A reduction in N400 amplitude was found only for the perceptually-iconic signs, compared to both non-iconic and motorically-iconic signs. No modulations of N400 amplitude were observed for the control group. We suggest that this pattern of results arises because pictures eliciting perceptually-iconic signs can more effectively prime lexical access due to greater alignment between features of the picture and the semantic and phonological features of the sign. We speculate that naming latencies are facilitated for motorically-iconic signs due to later processes (e.g., faster phonological encoding via cascading activation from semantic features). Overall, the results indicate that type of iconicity plays a role in sign production when signs are elicited by picture-naming tasks.
{"title":"An ERP investigation of perceptual vs motoric iconicity in sign production","authors":"Meghan E. McGarry , Katherine J. Midgley , Phillip J. Holcomb , Karen Emmorey","doi":"10.1016/j.neuropsychologia.2024.108966","DOIUrl":"10.1016/j.neuropsychologia.2024.108966","url":null,"abstract":"<div><p>The type of form-meaning mapping for iconic signs can vary. For perceptually-iconic signs there is a correspondence between visual features of a referent (e.g., the beak of a bird) and the form of the sign (e.g., extended thumb and index finger at the mouth for the American Sign Language (ASL) sign BIRD). For motorically-iconic signs there is a correspondence between how an object is held/manipulated and the form of the sign (e.g., the ASL sign FLUTE depicts how a flute is played). Previous studies have found that iconic signs are retrieved faster in picture-naming tasks, but type of iconicity has not been manipulated. We conducted an ERP study in which deaf signers and a control group of English speakers named pictures that targeted perceptually-iconic, motorically-iconic, or non-iconic ASL signs. For signers (unlike the control group), naming latencies varied by iconicity type: perceptually-iconic < motorically-iconic < non-iconic signs. A reduction in the N400 amplitude was only found for the perceptually-iconic signs, compared to both non-iconic and motorically-iconic signs. No modulations of N400 amplitudes were observed for the control group. We suggest that this pattern of results arises because pictures eliciting perceptually-iconic signs can more effectively prime lexical access due to greater alignment between features of the picture and the semantic and phonological features of the sign. We speculate that naming latencies are facilitated for motorically-iconic signs due to later processes (e.g., faster phonological encoding via cascading activation from semantic features). Overall, the results indicate that type of iconicity plays role in sign production when elicited by picture-naming tasks.</p></div>","PeriodicalId":19279,"journal":{"name":"Neuropsychologia","volume":"203 ","pages":"Article 108966"},"PeriodicalIF":2.0,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0028393224001817/pdfft?md5=f8d5db02c890d184a7b2ff4bbeecbf4c&pid=1-s2.0-S0028393224001817-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141889860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the impact of early deafness on learned action-effect contingency for action linked to peripheral sensory effects
Pub Date: 2024-07-29 | DOI: 10.1016/j.neuropsychologia.2024.108964
Tiziana Vercillo, Alexandra Scurry, Fang Jiang
Investigating peripheral visual processing in individuals with early auditory deprivation is a critical research area in neuroscience, since it helps in understanding sensory adaptation and brain plasticity after sensory loss. Prior research has demonstrated that the absence of auditory input, which is crucial for detecting events occurring outside the central egocentric visual space, leads to improved processing of visual and tactile stimuli occurring in peripheral regions of the sensory space. Nevertheless, no prior studies have explored whether such enhanced processing also takes place within the domain of action, particularly when individuals are required to perform actions that produce peripheral sensory outcomes. To test this hypothesis, we recruited 15 hearing (31 ± 3.3 years) and 15 early deaf adults (42 ± 2.6 years) for a neuro-behavioral experiment involving: 1) a behavioral task in which participants executed a simple motor action (i.e., a button press) and received visual feedback either in the center or in a peripheral region of the visual field, and 2) electrophysiological recording of brain electrical potentials (EEG). We measured and compared neural activity preceding the motor action (the readiness potentials) and visual evoked responses (the N1 and P2 ERP components) and found that deaf individuals did not exhibit more pronounced modulation of neural responses than their hearing counterparts when their motor actions resulted in peripheral visual stimuli. Instead, they showed reduced modulation when visual stimuli were presented in the center. Our results suggest a redistribution of attentional resources from center to periphery in deaf individuals during sensorimotor coupling.
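For readers unfamiliar with the readiness potential, a minimal sketch of response-locked averaging around button presses is shown below. The sampling rate, epoch window, baseline choice, and simulated recording are illustrative assumptions, not the authors' EEG processing pipeline.

```python
# Hypothetical sketch: response-locked averaging to estimate a readiness
# potential from a continuous single-channel recording. Sampling rate,
# epoch window, and the simulated data are assumptions for illustration.
import numpy as np

sfreq = 250.0                                   # Hz (assumed)
rng = np.random.default_rng(3)
eeg = rng.normal(size=int(sfreq * 600))         # 10 min of placeholder EEG
press_samples = rng.integers(int(sfreq * 2), eeg.size - int(sfreq), size=80)

pre, post = int(1.5 * sfreq), int(0.5 * sfreq)  # -1500 ms to +500 ms around press
epochs = np.stack([eeg[s - pre: s + post] for s in press_samples])

# baseline-correct each epoch to its earliest 200 ms, then average across presses
baseline = epochs[:, : int(0.2 * sfreq)].mean(axis=1, keepdims=True)
readiness_potential = (epochs - baseline).mean(axis=0)
print(f"amplitude at button press: {readiness_potential[pre]:.3f} (arbitrary units)")
```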
{"title":"Investigating the impact of early deafness on learned action-effect contingency for action linked to peripheral sensory effects","authors":"Tiziana Vercillo , Alexandra Scurry , Fang Jiang","doi":"10.1016/j.neuropsychologia.2024.108964","DOIUrl":"10.1016/j.neuropsychologia.2024.108964","url":null,"abstract":"<div><p>Investigating peripheral visual processing in individuals with early auditory deprivation is a critical research area in the field of neuroscience, since it helps understanding the phenomenon of sensory adaptation and brain plasticity after sensory loss. Prior research has already demonstrated that the absence of auditory input, which is crucial to detect events occurring out of the central egocentric visual space, leads to an improved processing of visual and tactile stimuli occurring in peripheral regions of the sensory space. Nevertheless, no prior studies have explored whether such enhanced processing also takes place within the domain of action, particularly when individuals are required to perform actions that produce peripheral sensory outcomes. To test this hypothesis, we recruited 15 hearing (31 ± 3.3 years) and 15 early deaf adults (42 ± 2.6 years) for a neuro-behavioral experiment involving: 1) a behavioral task where participants executed a simple motor action (i.e., a button press) and received a visual feedback either in the center or in a peripheral region of the visual field, and 2) the electrophysiological recording of brain electrical potentials (EEG). We measured and compared neural activity preceding the motor action (the readiness potentials) and visual evoked responses (the N1 and P2 ERP components) and found that deaf individuals did not exhibit more pronounced modulation of neural responses when their motor actions resulted in peripheral visual stimuli compared to their hearing counterparts. Instead they showed a reduced modulation when visual stimuli were presented in the center. Our results suggest a redistribution of attentional resources from center to periphery in deaf individuals during sensorimotor coupling.</p></div>","PeriodicalId":19279,"journal":{"name":"Neuropsychologia","volume":"202 ","pages":"Article 108964"},"PeriodicalIF":2.0,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141796614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}