The role of body–object interaction in children’s concept processing: insights from two Chinese communities
Pub Date: 2024-04-08 | DOI: 10.1007/s10339-024-01185-1
Zhengye Xu, Duo Liu
A rating of body–object interactions (BOIs) reflects the ease with which a human body can interact physically with a word’s referent. Studies with adults have demonstrated a facilitating BOI effect in language tasks, with faster and more accurate responses for high-BOI words (e.g., cup) than low-BOI words (e.g., coal). A few studies have explored the BOI effect in children, but they have all adopted adult-rated BOIs, which may differ from children’s own ratings. Using child-rated BOIs, the present study investigated the BOI effect in Chinese children and its relationship with age, as well as whether the effect differed between communities. Children (aged 7–8) from Mainland China (N = 100) and Hong Kong SAR (HK; N = 90) completed a lexical decision task that measured the BOI effect: they judged whether each item was a real Chinese word, and each real word carried a child-rated BOI score. After controlling for nonverbal intelligence, gender, working memory, and Chinese character reading, a significant BOI effect was observed in both response accuracy and speed. The accuracy and latency analyses revealed a community difference: the BOI effect was smaller in the HK children. This study suggests that BOI measures may be sensitive to ecological differences between tested communities. The findings support the need for further investigations into the BOI effect across Chinese communities, particularly those in Mainland China.
{"title":"The role of body–object interaction in children’s concept processing: insights from two Chinese communities","authors":"Zhengye Xu, Duo Liu","doi":"10.1007/s10339-024-01185-1","DOIUrl":"https://doi.org/10.1007/s10339-024-01185-1","url":null,"abstract":"<p>A rating of body–object interactions (BOIs) reflects the ease with which a human body can interact physically with a word’s referent. Studies with adults have demonstrated a facilitating BOI effect in language tasks, with faster and more accurate responses for high BOI words (e.g., cup) than low BOI words (e.g., coal). A few studies have explored the BOI effect in children. However, these studies have all adopted adult-rated BOIs, which may differ from children’s. Using child-rated BOIs, the present study investigated the BOI effect in Chinese children and its relationship with age, as well as whether there was a community difference in the BOI effect. Children (aged 7–8) from Mainland China (<i>N</i> = 100) and Hong Kong SAR (HK; <i>N</i> = 90) completed a lexical decision task used to measure the BOI effect. The children were asked to judge whether each item was a real Chinese word; each real word was assigned a child-rated BOI score. After controlling nonverbal intelligence, gender, working memory, and Chinese character reading, a significant BOI effect was observed at the response accuracy and speed levels. The accuracy and latency analyses illustrated a community difference; the BOI effect was smaller in the HK children. This study suggests that BOI measures may be sensitive to the ecological differences between tested communities. The findings support the need for further investigations into the BOI effect across Chinese communities, particularly those in Mainland China.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":"84 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140579906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchies of description enable understanding of cognitive phenomena in terms of neuron activity
Pub Date: 2024-03-14 | DOI: 10.1007/s10339-024-01181-5
Abstract
One objective of neuroscience is to understand a wide range of specific cognitive processes in terms of neuron activity. The huge amount of observational data about the brain makes achieving this objective challenging. Different models on different levels of detail provide some insight, but the relationship between models on different levels is not clear. Complex computing systems with trillions of components like transistors are fully understood in the sense that system features can be precisely related to transistor activity. Such understanding could not involve a designer simultaneously thinking about the ongoing activity of all the components active in the course of carrying out some system feature. Brain modeling approaches like dynamical systems are inadequate to support understanding of computing systems, because their use relies on approximations like treating all components as more or less identical. Understanding computing systems needs a much more sophisticated use of approximation, involving creation of hierarchies of description in which the higher levels are more approximate, with effective translation between different levels in the hierarchy made possible by using the same general types of information processes on every level. These types are instruction and data read/write. There are no direct resemblances between computers and brains, but natural selection pressures have resulted in brain resources being organized into modular hierarchies and in the existence of two general types of information processes called condition definition/detection and behavioral recommendation. As a result, it is possible to create hierarchies of description linking cognitive phenomena to neuron activity, analogous with but qualitatively different from the hierarchies of description used to understand computing systems. An intuitively satisfying understanding of cognitive processes in terms of more detailed brain activity is then possible.
{"title":"Hierarchies of description enable understanding of cognitive phenomena in terms of neuron activity","authors":"","doi":"10.1007/s10339-024-01181-5","DOIUrl":"https://doi.org/10.1007/s10339-024-01181-5","url":null,"abstract":"<h3>Abstract</h3> <p>One objective of neuroscience is to understand a wide range of specific cognitive processes in terms of neuron activity. The huge amount of observational data about the brain makes achieving this objective challenging. Different models on different levels of detail provide some insight, but the relationship between models on different levels is not clear. Complex computing systems with trillions of components like transistors are fully understood in the sense that system features can be precisely related to transistor activity. Such understanding could not involve a designer simultaneously thinking about the ongoing activity of all the components active in the course of carrying out some system feature. Brain modeling approaches like dynamical systems are inadequate to support understanding of computing systems, because their use relies on approximations like treating all components as more or less identical. Understanding computing systems needs a much more sophisticated use of approximation, involving creation of hierarchies of description in which the higher levels are more approximate, with effective translation between different levels in the hierarchy made possible by using the same general types of information processes on every level. These types are instruction and data read/write. There are no direct resemblances between computers and brains, but natural selection pressures have resulted in brain resources being organized into modular hierarchies and in the existence of two general types of information processes called condition definition/detection and behavioral recommendation. As a result, it is possible to create hierarchies of description linking cognitive phenomena to neuron activity, analogous with but qualitatively different from the hierarchies of description used to understand computing systems. An intuitively satisfying understanding of cognitive processes in terms of more detailed brain activity is then possible.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":"9 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140127318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory effects of visual and olfactory landmark information in human wayfinding
Pub Date: 2024-02-01 | Epub Date: 2023-11-30 | DOI: 10.1007/s10339-023-01169-7
Mira Schwarz, Kai Hamburger
Non-human animals are exceptionally good at using smell to find their way through the environment, yet the use of olfactory cues in human navigation is often underestimated. Although the sense of smell is well known for its distinct connection to memory and emotion, memory effects in human navigation using olfactory landmarks have not been studied yet. This article therefore compares wayfinding and recognition performance for visual and olfactory landmarks learned by 52 participants in a virtual maze. It is also one of the first empirical studies to investigate differences in memory effects on human navigation by using two separate test sessions 1 month apart. The experimental task was to find the way through a maze-like virtual environment with either olfactory or visual cues at the intersections, which served as decision points. Our descriptive results show that performance was above chance level in both conditions (visual and olfactory landmarks). Wayfinding performance did not decrease 1 month later when olfactory landmarks were used; in contrast, with visual landmarks wayfinding performance decreased significantly, although visual landmarks led to better recognition than olfactory landmarks at both times of testing. The results demonstrate the unique character of human odor memory and support the conclusion that olfactory cues may be used in human spatial orientation. The study also expands the research field of human wayfinding by investigating memory for landmark knowledge and route decisions in the visual and olfactory modalities. More studies are, however, required to advance this important research strand.
{"title":"Memory effects of visual and olfactory landmark information in human wayfinding.","authors":"Mira Schwarz, Kai Hamburger","doi":"10.1007/s10339-023-01169-7","DOIUrl":"10.1007/s10339-023-01169-7","url":null,"abstract":"<p><p>Non-human animals are exceptionally good at using smell to find their way through the environment. However, the use of olfactory cues for human navigation is often underestimated. Although the sense of smell is well-known for its distinct connection to memory and emotion, memory effects in human navigation using olfactory landmarks have not been studied yet. Therefore, this article compares wayfinding and recognition performance for visual and olfactory landmarks learned by 52 participants in a virtual maze. Furthermore, it is one of the first empirical studies investigating differences in memory effects on human navigation by using two separate test situations 1 month apart. The experimental task was to find the way through a maze-like virtual environment with either olfactory or visual cues at the intersections that served as decision points. Our descriptive results show that performance was above chance level for both conditions (visual and olfactory landmarks). Wayfinding performance did not decrease 1 month later when using olfactory landmarks. In contrast, when using visual landmarks wayfinding performance decreased significantly, while visual landmarks overall lead to better recognition than olfactory landmarks at both times of testing. The results demonstrate the unique character of human odor memory and support the conclusion that olfactory cues may be used in human spatial orientation. Furthermore, the present study expands the research field of human wayfinding by providing a study that investigates memory for landmark knowledge and route decisions for the visual and olfactory modality. However, more studies are required to put this important research strand forward.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"37-51"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10827900/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138463666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
'Should we laugh?' Acoustic features of (in)voluntary laughters in spontaneous conversations
Pub Date: 2024-02-01 | Epub Date: 2023-11-23 | DOI: 10.1007/s10339-023-01168-8
Valéria Krepsz, Viktória Horváth, Anna Huszár, Tilda Neuberger, Dorottya Gyarmathy
Laughter is one of the most common non-verbal vocalisations; contrary to earlier assumptions, it may also act as a signal of bonding, affection, emotional regulation, agreement, or empathy (Scott et al. Trends Cogn Sci 18:618-620, 2014). Although previous research agrees that laughter does not form a uniform category in many respects, individual studies have defined the types of laughter differently, and because of these varying definitions and methodologies, earlier results were often contradictory. Moreover, the analysed laughs were often recorded in controlled, artificial situations, so less is known about laughs from social conversations. The aim of the present study is therefore to examine the acoustic realisation and the automatic classification of laughter occurring in human interactions, according to whether listeners consider it voluntary or involuntary. The study consists of three parts using a multi-method approach. First, in the perception task, participants had to decide whether a given laugh seemed rather involuntary or voluntary. In the second part, we analysed the laughter samples that at least 66.6% of listeners judged to be voluntary or involuntary. In the third part, all sound samples were grouped into the two categories by an automatic classifier. The results showed that listeners were able to distinguish laughter extracted from spontaneous conversation into two different types, and that the distinction was also possible via automatic classification. In addition, there were significant differences in acoustic parameters between the two groups of laughter. Thus, although the voluntary/involuntary distinction emerges from everyday, spontaneous conversations in both perception and acoustics, the acoustic features of the two categories often overlap. The results will enrich our knowledge of laughter and help to describe and explore the diversity of non-verbal vocalisations.
{"title":"'Should we laugh?' Acoustic features of (in)voluntary laughters in spontaneous conversations.","authors":"Valéria Krepsz, Viktória Horváth, Anna Huszár, Tilda Neuberger, Dorottya Gyarmathy","doi":"10.1007/s10339-023-01168-8","DOIUrl":"10.1007/s10339-023-01168-8","url":null,"abstract":"<p><p>Laughter is one of the most common non-verbal features; however, contrary to the previous assumptions, it may also act as signals of bonding, affection, emotional regulation agreement or empathy (Scott et al. Trends Cogn Sci 18:618-620, 2014). Although previous research agrees that laughter does not form a uniform group in many respects, different types of laughter have been defined differently by individual research. Due to the various definitions of laughter, as well as their different methodologies, the results of the previous examinations were often contradictory. The analysed laughs were often recorded in controlled, artificial situations; however, less is known about laughs from social conversations. Thus, the aim of the present study is to examine the acoustic realisation, as well as the automatic classification of laughter that appear in human interactions according to whether listeners consider them to be voluntary or involuntary. The study consists of three parts using a multi-method approach. Firstly, in the perception task, participants had to decide whether the given laughter seemed to be rather involuntary or voluntary. In the second part of the experiment, those sound samples of laughter were analysed that were considered to be voluntary or involuntary by at least 66.6% of listeners. In the third part, all the sound samples were grouped into the two categories by an automatic classifier. The results showed that listeners were able to distinguish laughter extracted from spontaneous conversation into two different types, as well as the distinction was possible on the basis of the automatic classification. In addition, there were significant differences in acoustic parameters between the two groups of laughter. The results of the research showed that, although the distinction between voluntary and involuntary laughter categories appears based on the analysis of everyday, spontaneous conversations in terms of the perception and acoustic features, there is often an overlap in the acoustic features of voluntary and involuntary laughter. The results will enrich our previous knowledge of laughter and help to describe and explore the diversity of non-verbal vocalisations.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"89-106"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10828014/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138296254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ecological dynamics of trumpet improvisation
Pub Date: 2024-02-01 | Epub Date: 2023-09-22 | DOI: 10.1007/s10339-023-01159-9
Miles Rooney
The nature of music improvisation continues to provide an interesting showcase of the multifaceted and skilful ways we engage with and act within our environments. Improvising musicians are somehow able to generate musical material in real time that adaptively navigates musical situations. In this article I explore the broader aspects of improvised activity, such as our bodily interactions with the instrument and environment, as they relate to improvised music-making. I do so by drawing upon principles from the embodied cognitive sciences, namely ecological and dynamical systems approaches. First, I introduce the concept of affordances to illustrate the bidirectional relationship between improviser and environment. I then take a dynamical view, exploring the ways a trumpet player coordinates their body with their instrument and engages with trumpet affordances in order to navigate musical situations. Continuing this dynamical view, I take the improviser to be an adaptive system whose behaviours are self-organised responses to a set of constraints. To conclude, I situate my research within the wider 4E approach, advocating that 'E' approaches, which take seriously the role of the body-instrument-environment relationship, provide an insightful perspective on the nature of improvisation.
{"title":"The ecological dynamics of trumpet improvisation.","authors":"Miles Rooney","doi":"10.1007/s10339-023-01159-9","DOIUrl":"10.1007/s10339-023-01159-9","url":null,"abstract":"<p><p>The nature of music improvisation continues to provide an interesting showcase of the multifaceted and skilful ways we engage with and act within our environments. Improvising musicians are somehow able to generate musical material in real time that adaptively navigates musical situations. In this article I explore the broader aspects of improvised activity-such as our bodily interactions with the instrument and environment-as they relate to improvised music-making. I do so by drawing upon principles from the embodied cognitive sciences, namely ecological and dynamical systems approaches. Firstly, I introduce the concept of affordances to illustrate the bidirectional relationship between improvisor and environment. I then take a dynamical view, exploring the ways that a trumpet player coordinates their body with their instrument and engages with trumpet affordances in order to navigate musical situations. I continue this dynamical view, taking the improviser to be an adaptive system whose behaviours are self-organised responses to a set of constraints. To conclude, I situate my research within the wider 4E approach. I advocate that 'E' approaches, which take seriously the role of the body-instrument-environment relationship, provide an insightful perspective on the nature of improvisation.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"163-171"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10827878/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41153249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the role of singing, semantics, and amusia screening in speech-in-noise perception in musicians and non-musicians
Pub Date: 2024-02-01 | Epub Date: 2023-10-18 | DOI: 10.1007/s10339-023-01165-x
Ariadne Loutrari, Aseel Alqadi, Cunmei Jiang, Fang Liu
Sentence repetition has been the focus of extensive psycholinguistic research. The notion that music training can bolster speech perception in adverse auditory conditions has been met with mixed results. In this work, we sought to gauge the effect of babble noise on immediate repetition of spoken and sung phrases of varying semantic content (expository, narrative, and anomalous), initially in 100 English-speaking monolinguals with and without music training. The two cohorts also completed some non-musical cognitive tests and the Montreal Battery of Evaluation of Amusia (MBEA). When MBEA results were disregarded, musicians significantly outperformed non-musicians in overall repetition accuracy. Sung targets were recalled significantly better than spoken ones across groups in the presence of babble noise. Sung expository targets were recalled better than spoken expository ones, and semantically anomalous content was recalled more poorly in noise. Rerunning the analysis after excluding thirteen participants diagnosed with amusia showed no significant group differences, which suggests that the notion of enhanced speech perception (in noise or otherwise) in musicians needs to be evaluated with caution. Musicianship aside, this study showed for the first time that sung targets presented in babble noise seem to be recalled better than spoken ones. We discuss the present design and the methodological approach of screening for amusia as factors which may partially account for some of the mixed results in the field.
{"title":"Exploring the role of singing, semantics, and amusia screening in speech-in-noise perception in musicians and non-musicians.","authors":"Ariadne Loutrari, Aseel Alqadi, Cunmei Jiang, Fang Liu","doi":"10.1007/s10339-023-01165-x","DOIUrl":"10.1007/s10339-023-01165-x","url":null,"abstract":"<p><p>Sentence repetition has been the focus of extensive psycholinguistic research. The notion that music training can bolster speech perception in adverse auditory conditions has been met with mixed results. In this work, we sought to gauge the effect of babble noise on immediate repetition of spoken and sung phrases of varying semantic content (expository, narrative, and anomalous), initially in 100 English-speaking monolinguals with and without music training. The two cohorts also completed some non-musical cognitive tests and the Montreal Battery of Evaluation of Amusia (MBEA). When disregarding MBEA results, musicians were found to significantly outperform non-musicians in terms of overall repetition accuracy. Sung targets were recalled significantly better than spoken ones across groups in the presence of babble noise. Sung expository targets were recalled better than spoken expository ones, and semantically anomalous content was recalled more poorly in noise. Rerunning the analysis after eliminating thirteen participants who were diagnosed with amusia showed no significant group differences. This suggests that the notion of enhanced speech perception-in noise or otherwise-in musicians needs to be evaluated with caution. Musicianship aside, this study showed for the first time that sung targets presented in babble noise seem to be recalled better than spoken ones. We discuss the present design and the methodological approach of screening for amusia as factors which may partially account for some of the mixed results in the field.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"147-161"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10827916/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41239856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Describing and understanding the time course of the property listing task
Pub Date: 2024-02-01 | Epub Date: 2023-09-16 | DOI: 10.1007/s10339-023-01160-2
Enrique Canessa, Sergio E Chaigneau, Sebastián Moreno
To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list "is a pet", "has four legs", etc.). When PLT data are collected for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, only recently has a model been developed and validated that describes the listing course of a PLT. That original model describes the listing course using the order of production of properties. Here we go a step further and validate the model using response times (RT), i.e., the time from cue onset to property listing. Our results show that RT data exhibit the same regularities observed in the previous model, but now we can also analyze the time course, i.e., the dynamics of the PLT. As such, the RT-validated model may be applied to study several similar memory retrieval tasks, such as the Free Listing Task and the Verbal Fluency Task, and to research related cognitive processes. To illustrate such analyses, we present a brief example of the difference in the PLT's dynamics between listing properties for abstract versus concrete concepts, which shows that the model may be fruitfully applied to study concepts.
{"title":"Describing and understanding the time course of the property listing task.","authors":"Enrique Canessa, Sergio E Chaigneau, Sebastián Moreno","doi":"10.1007/s10339-023-01160-2","DOIUrl":"10.1007/s10339-023-01160-2","url":null,"abstract":"<p><p>To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list \"is a pet\", \"has four legs\", etc.). When PLT data is collected for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, only recently a model that describes the listing course of a PLT has been developed and validated. That original model describes the listing course using order of production of properties. Here we go a step beyond and validate the model using response times (RT), i.e., the time from cue onset to property listing. Our results show that RT data exhibits the same regularities observed in the previous model, but now we can also analyze the time course, i.e., dynamics of the PLT. As such, the RT validated model may be applied to study several similar memory retrieval tasks, such as the Free Listing Task, Verbal Fluidity Task, and to research related cognitive processes. To illustrate those kinds of analyses, we present a brief example of the difference in PLT's dynamics between listing properties for abstract versus concrete concepts, which shows that the model may be fruitfully applied to study concepts.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"61-74"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10268634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond peripersonal boundaries: insights from crossmodal interactions
Gianluca Finotti, Dario Menicagli, Daniele Migliorati, Marcello Costantini, Francesca Ferri
Pub Date: 2024-02-01 | DOI: 10.1007/s10339-023-01154-0
We experience our self as a body located in space. However, how information about self-location is integrated into the multisensory processes underlying the representation of peripersonal space (PPS) is still unclear. Prior studies showed that the presence of visual information related to oneself modulates the multisensory processes underlying PPS. Here, we used the crossmodal congruency effect (CCE) to test whether this top-down modulation depends on the spatial location of the body-related visual information. Participants responded to tactile events on their bodies while trying to ignore a visual distractor presented on the mirror reflection of their body (Self), either in the peripersonal space (Near) or in the extrapersonal space (Far). We found a larger CCE when visual events were presented on the mirror reflection in the peripersonal space than in the extrapersonal space. These results suggest that top-down modulation of the multisensory bodily self is only possible within the PPS.
{"title":"Beyond peripersonal boundaries: insights from crossmodal interactions.","authors":"Gianluca Finotti, Dario Menicagli, Daniele Migliorati, Marcello Costantini, Francesca Ferri","doi":"10.1007/s10339-023-01154-0","DOIUrl":"10.1007/s10339-023-01154-0","url":null,"abstract":"<p><p>We experience our self as a body located in space. However, how information about self-location is integrated into multisensory processes underlying the representation of the peripersonal space (PPS), is still unclear. Prior studies showed that the presence of visual information related to oneself modulates the multisensory processes underlying PPS. Here, we used the crossmodal congruency effect (CCE) to test whether this top-down modulation depends on the spatial location of the body-related visual information. Participants responded to tactile events on their bodies while trying to ignore a visual distractor presented on the mirror reflection of their body (Self) either in the peripersonal space (Near) or in the extrapersonal space (Far). We found larger CCE when visual events were presented on the mirror reflection in the peripersonal space, as compared to the extrapersonal space. These results suggest that top-down modulation of the multisensory bodily self is only possible within the PPS.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"121-132"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10827818/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10129649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Individual differences in absolute identification as a function of autistic trait levels
Pub Date: 2024-02-01 | Epub Date: 2023-11-02 | DOI: 10.1007/s10339-023-01166-w
Seyed Mohammad Mahdi Moshirian Farahi, Craig Leth-Steensen
The present study aimed to examine the links between a self-report measure known to discriminate autism (the AQ-10) and performance on the classic unidimensional absolute identification task with 10 line lengths. This task is of interest because discriminating absolutely between such items is perceptually quite challenging and not very amenable to generalization. Importantly, two currently available views of perceptual learning in autism suggest that those higher on the autism spectrum might have an advantage on this task. Results showed, however, that for N = 291 typically developing individuals, higher scores on the AQ-10 (and also on a measure of the degree to which individuals self-report having a more spontaneous, activist-type learning style) tended to relate to lower accuracy on this task, contrary to expectations. One explanation offered for this result is that those with higher AQ-10 scores may have had more difficulty maintaining the overall stimulus context in memory. Such work adds greatly to our knowledge of the individual differences that can affect performance on this particular task.
{"title":"Individual differences in absolute identification as a function of autistic trait levels.","authors":"Seyed Mohammad Mahdi Moshirian Farahi, Craig Leth-Steensen","doi":"10.1007/s10339-023-01166-w","DOIUrl":"10.1007/s10339-023-01166-w","url":null,"abstract":"<p><p>The present study aimed to examine the links between a self-report measure known to be discriminative of autism (the AQ-10) and performance on the classic unidimensional absolute identification judgment task with 10 line lengths. The interest in this task is due to the fact that discriminating absolutely between such items is quite perceptually challenging and also that it is not very amenable to generalization. Importantly, there are two currently available views of perceptual learning in autism that suggest that those higher on the autism spectrum might have an advantage on this task. Results showed, however, that for N = 291 typically developing individuals, higher scores on the AQ-10 (and also on a measure of the degree to which individuals self-report having a more spontaneous, activist-type learning style) tended to relate to lower levels of accuracy on this task in contrast to what was expected. One explanation furthered for this result was that those with higher AQ-10 scores may have had more difficulties maintaining the overall stimulus context in memory. Such work adds greatly to knowledge of the nature of the individual differences that can affect performance on this particular task.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"133-145"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71427898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Does context recollection depend on the base-rate of contextual features?
Pub Date: 2024-02-01 | Epub Date: 2023-09-11 | DOI: 10.1007/s10339-023-01153-1
Marek Nieznański, Michał Obidziński, Daria Ford
Episodic recollection is defined by the re-experiencing of contextual and target details of a past event. The base-rate dependency hypothesis assumes that the retrieval of one contextual feature from an integrated episodic trace cues the retrieval of another associated feature, and that the more often a particular configuration of features occurs, the more effective this mutual cueing will be. Alternatively, the conditional probability of one feature given another may be neglected in memory for contextual features, since the features are not directly bound to one another. Three conjoint recognition experiments investigated whether memory for context is sensitive to the base-rates of features. Participants studied frequent versus infrequent configurations of features and, during the test, were asked to recognise one of these features with (vs. without) another feature reinstated. The results showed that the context recollection parameter, which represents the re-experiencing of contextual features in the dual-recollection model, was higher for frequent than for infrequent feature configurations only when the binding of feature information was made easier and the differences in base-rates were extreme; otherwise, no difference was found. Similarly, base-rates of features influenced response guessing only in the condition with salient differences in base-rates. Bayes factor analyses showed that the evidence from two of our experiments favoured the base-rate neglect hypothesis over the base-rate dependency hypothesis; the opposite result was obtained in the third experiment, but only under conditions of high base-rate disproportion and facilitated feature binding.
{"title":"Does context recollection depend on the base-rate of contextual features?","authors":"Marek Nieznański, Michał Obidziński, Daria Ford","doi":"10.1007/s10339-023-01153-1","DOIUrl":"10.1007/s10339-023-01153-1","url":null,"abstract":"<p><p>Episodic recollection is defined by the re-experiencing of contextual and target details of a past event. The base-rate dependency hypothesis assumes that the retrieval of one contextual feature from an integrated episodic trace cues the retrieval of another associated feature, and that the more often a particular configuration of features occurs, the more effective this mutual cueing will be. Alternatively, the conditional probability of one feature given another feature may be neglected in memory for contextual features since they are not directly bound to one another. Three conjoint recognition experiments investigated whether memory for context is sensitive to the base-rates of features. Participants studied frequent versus infrequent configurations of features and, during the test, they were asked to recognise one of these features with (vs. without) another feature reinstated. The results showed that the context recollection parameter, representing the re-experience of contextual features in the dual-recollection model, was higher for frequent than infrequent feature configurations only when the binding of feature information was made easier and the differences in the base-rates were extreme, otherwise no difference was found. Similarly, base-rates of features influenced response guessing only in the condition with salient differences in base-rates. The Bayes factor analyses showed that the evidence from two of our experiments favoured the base-rate neglect hypothesis over the base-rate dependency hypothesis; the opposite result was obtained in the third experiment, but only when high base-rate disproportion and facilitated feature binding conditions were used.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"9-35"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10827963/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10194444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}