Pub Date: 2024-04-16 | DOI: 10.1007/s10339-024-01187-z
Leilani Forby, Farid Pazhoohi, Alan Kingstone
Individuals high in autistic traits can have difficulties with social interactions that may stem from impaired mentalizing abilities, yet findings from research investigating anthropomorphism of non-human objects in high-trait individuals are inconsistent. To measure the emotions and attributes ascribed to front-facing vehicles, individuals scoring high versus low on the AQ-10 were compared on ratings of angry–happy, hostile–friendly, masculine–feminine, and submissive–dominant, as a function of vehicle size (large versus small). Our results showed that participants perceived large vehicles as more angry, hostile, masculine, and dominant than small vehicles, with no significant difference in ratings between high and low AQ-10 scorers. The current findings support previous research reporting intact object processing in high-autistic-trait individuals. Our novel findings also suggest that the anthropomorphizing abilities of high-autistic-trait individuals are comparable to those of low-autistic-trait individuals.
Title: Autistic traits and anthropomorphism: the case of vehicle fascia perception
Journal: Cognitive Processing (Journal Article)
Pub Date: 2024-04-13 | DOI: 10.1007/s10339-024-01189-x
Paulo G. Laurence, Stella A. Bassetto, Natalia P. Bertolino, Mayara S. C. V. O. Barros, Elizeu C. Macedo
Various tests measure text comprehension, including the cloze gap-filling test, which is often used in language learning. Previous studies have hypothesized cognitive strategies in this type of test and their relationship with working memory and performance. However, no study has investigated the cloze test, working memory, and possible cognitive strategies while the test is being performed. This study therefore aimed to identify cognitive visual strategies in the cloze test by applying an unsupervised algorithm and to analyze the relationship between these strategies, working memory, and cloze test performance. Our sample consisted of 51 university students, the largest sample in studies of cognitive strategies with cloze tests. Participants completed an 11-item cloze test on a computer with eye-tracking, a verbal working memory test, and a visuospatial working memory test. Our analysis of participants’ scanpaths identified two main strategies, one with fewer toggles between the text and the word bank and fewer fixations than the other, indicating the existence of a global strategy. Furthermore, a model predicting participants’ efficiency in the cloze test found that item complexity, use of a global strategy, and higher working memory scores were the most significant predictors. These results confirm the hypothesis that a global strategy is related to successfully achieving higher-order reading processes.
Title: Differences in scanpath pattern and verbal working memory predicts efficient reading in the Cloze gap-filling test
Journal: Cognitive Processing (Journal Article)
Pub Date: 2024-04-12 | DOI: 10.1007/s10339-024-01188-y
L. Vainio, I. L. Myllylä, M. Vainio
It has been shown that reading the vowel [i] and the consonant [t] facilitates precision grip responses, while [ɑ] and [k] are associated with faster power grip responses. A similar effect has been observed when participants respond with small or large response keys. The present study investigated whether these vowels and consonants produce different effects on grip responses and keypresses when the speech units are read aloud (Experiment 1) or silently (Experiment 2). As a second objective, the study investigated whether a recently observed effect, in which the upper position of a visual stimulus is associated with faster vocalizations of the high vowel and the lower position with the low vowel, can be observed in manual responses, linking, for example, [i] with upper-key responses and [ɑ] with lower-key responses. First, the study showed that when the consonants are overtly articulated, the interaction effect is observed only with the grip responses, whereas vowel production was shown to systematically influence small/large keypresses as well as precision/power grip responses. Second, the vowel [i] and the consonant [t] were associated with the upper responses, while [ɑ] and [k] were associated with the lower responses, particularly in the overt articulation task. The paper considers the potential sound-symbolic implications of these phonetic elements, suggesting that their acoustic and articulatory characteristics might implicitly align them with specific response magnitudes, vertical positions, and grip types.
Title: Sound symbolism in manual and vocal responses: phoneme-response interactions associated with grasping as well as vertical and size dimensions of keypresses
Journal: Cognitive Processing (Journal Article)
Pub Date: 2024-04-08 | DOI: 10.1007/s10339-024-01185-1
Zhengye Xu, Duo Liu
A body–object interaction (BOI) rating reflects the ease with which the human body can physically interact with a word’s referent. Studies with adults have demonstrated a facilitating BOI effect in language tasks, with faster and more accurate responses for high-BOI words (e.g., cup) than for low-BOI words (e.g., coal). A few studies have explored the BOI effect in children; however, these studies have all adopted adult-rated BOIs, which may differ from children’s. Using child-rated BOIs, the present study investigated the BOI effect in Chinese children and its relationship with age, as well as whether there was a community difference in the effect. Children (aged 7–8) from Mainland China (N = 100) and Hong Kong SAR (HK; N = 90) completed a lexical decision task to measure the BOI effect: they judged whether each item was a real Chinese word, and each real word was assigned a child-rated BOI score. After controlling for nonverbal intelligence, gender, working memory, and Chinese character reading, a significant BOI effect was observed in both response accuracy and speed. The accuracy and latency analyses also revealed a community difference: the BOI effect was smaller in the HK children. This study suggests that BOI measures may be sensitive to ecological differences between tested communities. The findings support the need for further investigation of the BOI effect across Chinese communities, particularly those in Mainland China.
Title: The role of body–object interaction in children’s concept processing: insights from two Chinese communities
Journal: Cognitive Processing (Journal Article)
Pub Date: 2024-03-14 | DOI: 10.1007/s10339-024-01181-5
One objective of neuroscience is to understand a wide range of specific cognitive processes in terms of neuron activity. The huge amount of observational data about the brain makes achieving this objective challenging. Models at different levels of detail provide some insight, but the relationship between models at different levels is not clear. Complex computing systems with trillions of components, such as transistors, are fully understood in the sense that system features can be precisely related to transistor activity. Such understanding could not involve a designer simultaneously thinking about the ongoing activity of all the components active in carrying out some system feature. Brain modeling approaches like dynamical systems are inadequate for understanding computing systems, because their use relies on approximations such as treating all components as more or less identical. Understanding computing systems requires a much more sophisticated use of approximation, involving the creation of hierarchies of description in which the higher levels are more approximate, with effective translation between levels in the hierarchy made possible by using the same general types of information processes on every level. These types are instruction and data read/write. There are no direct resemblances between computers and brains, but natural selection pressures have resulted in brain resources being organized into modular hierarchies and in the existence of two general types of information processes: condition definition/detection and behavioral recommendation. As a result, it is possible to create hierarchies of description linking cognitive phenomena to neuron activity, analogous to but qualitatively different from the hierarchies of description used to understand computing systems. An intuitively satisfying understanding of cognitive processes in terms of more detailed brain activity is then possible.
Title: Hierarchies of description enable understanding of cognitive phenomena in terms of neuron activity
Journal: Cognitive Processing (Journal Article)
Pub Date: 2024-02-01 | Epub Date: 2023-11-30 | DOI: 10.1007/s10339-023-01169-7
Mira Schwarz, Kai Hamburger
Non-human animals are exceptionally good at using smell to find their way through the environment. However, the use of olfactory cues for human navigation is often underestimated. Although the sense of smell is well known for its distinct connection to memory and emotion, memory effects in human navigation using olfactory landmarks have not yet been studied. Therefore, this article compares wayfinding and recognition performance for visual and olfactory landmarks learned by 52 participants in a virtual maze. Furthermore, it is one of the first empirical studies investigating differences in memory effects on human navigation by using two separate test situations 1 month apart. The experimental task was to find the way through a maze-like virtual environment with either olfactory or visual cues at the intersections that served as decision points. Our descriptive results show that performance was above chance level for both conditions (visual and olfactory landmarks). Wayfinding performance did not decrease 1 month later when using olfactory landmarks. In contrast, wayfinding performance with visual landmarks decreased significantly, although visual landmarks overall led to better recognition than olfactory landmarks at both times of testing. The results demonstrate the unique character of human odor memory and support the conclusion that olfactory cues may be used in human spatial orientation. Furthermore, the present study expands the research field of human wayfinding by investigating memory for landmark knowledge and route decisions in both the visual and olfactory modalities. However, more studies are required to advance this important research strand.
Title: Memory effects of visual and olfactory landmark information in human wayfinding
Journal: Cognitive Processing, pp. 37–51 (open access: PMC10827900)
Pub Date: 2024-02-01 | Epub Date: 2023-11-23 | DOI: 10.1007/s10339-023-01168-8
Valéria Krepsz, Viktória Horváth, Anna Huszár, Tilda Neuberger, Dorottya Gyarmathy
Laughter is one of the most common non-verbal features; however, contrary to previous assumptions, it may also act as a signal of bonding, affection, emotional regulation, agreement, or empathy (Scott et al. Trends Cogn Sci 18:618–620, 2014). Although previous research agrees that laughter does not form a uniform group in many respects, different types of laughter have been defined differently across individual studies. Owing to these varying definitions of laughter, as well as differing methodologies, the results of previous examinations were often contradictory. The laughs analysed were often recorded in controlled, artificial situations; less is known about laughs from social conversations. Thus, the aim of the present study is to examine the acoustic realisation and the automatic classification of laughter in human interactions according to whether listeners consider it voluntary or involuntary. The study consists of three parts using a multi-method approach. First, in the perception task, participants had to decide whether a given laugh seemed involuntary or voluntary. In the second part of the experiment, the sound samples judged voluntary or involuntary by at least 66.6% of listeners were analysed acoustically. In the third part, all the sound samples were grouped into the two categories by an automatic classifier. The results showed that listeners were able to distinguish laughter extracted from spontaneous conversation into two different types, and the distinction was also possible on the basis of the automatic classification. In addition, there were significant differences in acoustic parameters between the two groups of laughter. Although the distinction between voluntary and involuntary laughter categories emerges from everyday, spontaneous conversations in terms of both perception and acoustic features, the acoustic features of the two categories often overlap. The results will enrich our previous knowledge of laughter and help to describe and explore the diversity of non-verbal vocalisations.
Title: 'Should we laugh?' Acoustic features of (in)voluntary laughters in spontaneous conversations
Journal: Cognitive Processing, pp. 89–106 (open access: PMC10828014)
Pub Date: 2024-02-01 | Epub Date: 2023-09-22 | DOI: 10.1007/s10339-023-01159-9
Miles Rooney
The nature of music improvisation continues to provide an interesting showcase of the multifaceted and skilful ways we engage with and act within our environments. Improvising musicians are somehow able to generate musical material in real time that adaptively navigates musical situations. In this article I explore the broader aspects of improvised activity, such as our bodily interactions with the instrument and environment, as they relate to improvised music-making. I do so by drawing on principles from the embodied cognitive sciences, namely ecological and dynamical systems approaches. First, I introduce the concept of affordances to illustrate the bidirectional relationship between improviser and environment. I then take a dynamical view, exploring the ways a trumpet player coordinates their body with their instrument and engages with trumpet affordances in order to navigate musical situations. Continuing this dynamical view, I take the improviser to be an adaptive system whose behaviours are self-organised responses to a set of constraints. To conclude, I situate my research within the wider 4E approach, advocating that 'E' approaches, which take seriously the role of the body–instrument–environment relationship, provide an insightful perspective on the nature of improvisation.
Title: The ecological dynamics of trumpet improvisation
Journal: Cognitive Processing, pp. 163–171 (open access: PMC10827878)
Pub Date: 2024-02-01 · Epub Date: 2023-10-18 · DOI: 10.1007/s10339-023-01165-x
Ariadne Loutrari, Aseel Alqadi, Cunmei Jiang, Fang Liu
Sentence repetition has been the focus of extensive psycholinguistic research. The notion that music training can bolster speech perception in adverse auditory conditions has been met with mixed results. In this work, we sought to gauge the effect of babble noise on immediate repetition of spoken and sung phrases of varying semantic content (expository, narrative, and anomalous), initially in 100 English-speaking monolinguals with and without music training. The two cohorts also completed several non-musical cognitive tests and the Montreal Battery of Evaluation of Amusia (MBEA). When disregarding MBEA results, musicians were found to significantly outperform non-musicians in terms of overall repetition accuracy. Sung targets were recalled significantly better than spoken ones across groups in the presence of babble noise. Sung expository targets were recalled better than spoken expository ones, and semantically anomalous content was recalled more poorly in noise. Rerunning the analysis after eliminating thirteen participants who were diagnosed with amusia showed no significant group differences. This suggests that the notion of enhanced speech perception, in noise or otherwise, in musicians needs to be evaluated with caution. Musicianship aside, this study showed for the first time that sung targets presented in babble noise seem to be recalled better than spoken ones. We discuss the present design and the methodological approach of screening for amusia as factors which may partially account for some of the mixed results in the field.
{"title":"Exploring the role of singing, semantics, and amusia screening in speech-in-noise perception in musicians and non-musicians.","authors":"Ariadne Loutrari, Aseel Alqadi, Cunmei Jiang, Fang Liu","doi":"10.1007/s10339-023-01165-x","DOIUrl":"10.1007/s10339-023-01165-x","url":null,"abstract":"<p><p>Sentence repetition has been the focus of extensive psycholinguistic research. The notion that music training can bolster speech perception in adverse auditory conditions has been met with mixed results. In this work, we sought to gauge the effect of babble noise on immediate repetition of spoken and sung phrases of varying semantic content (expository, narrative, and anomalous), initially in 100 English-speaking monolinguals with and without music training. The two cohorts also completed several non-musical cognitive tests and the Montreal Battery of Evaluation of Amusia (MBEA). When disregarding MBEA results, musicians were found to significantly outperform non-musicians in terms of overall repetition accuracy. Sung targets were recalled significantly better than spoken ones across groups in the presence of babble noise. Sung expository targets were recalled better than spoken expository ones, and semantically anomalous content was recalled more poorly in noise. Rerunning the analysis after eliminating thirteen participants who were diagnosed with amusia showed no significant group differences. This suggests that the notion of enhanced speech perception, in noise or otherwise, in musicians needs to be evaluated with caution. Musicianship aside, this study showed for the first time that sung targets presented in babble noise seem to be recalled better than spoken ones. We discuss the present design and the methodological approach of screening for amusia as factors which may partially account for some of the mixed results in the field.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"147-161"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10827916/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41239856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-01 · Epub Date: 2023-09-16 · DOI: 10.1007/s10339-023-01160-2
Enrique Canessa, Sergio E Chaigneau, Sebastián Moreno
To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list "is a pet", "has four legs", etc.). When PLT data is collected for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, a model that describes the listing course of a PLT has only recently been developed and validated. That original model describes the listing course using the order of production of properties. Here we go a step further and validate the model using response times (RT), i.e., the time from cue onset to property listing. Our results show that RT data exhibits the same regularities observed in the previous model, but now we can also analyze the time course, i.e., the dynamics of the PLT. As such, the RT-validated model may be applied to study several similar memory retrieval tasks, such as the Free Listing Task and the Verbal Fluency Task, and to research related cognitive processes. To illustrate those kinds of analyses, we present a brief example of the difference in the PLT's dynamics between listing properties for abstract versus concrete concepts, which shows that the model may be fruitfully applied to study concepts.
{"title":"Describing and understanding the time course of the property listing task.","authors":"Enrique Canessa, Sergio E Chaigneau, Sebastián Moreno","doi":"10.1007/s10339-023-01160-2","DOIUrl":"10.1007/s10339-023-01160-2","url":null,"abstract":"<p><p>To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list \"is a pet\", \"has four legs\", etc.). When PLT data is collected for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, a model that describes the listing course of a PLT has only recently been developed and validated. That original model describes the listing course using the order of production of properties. Here we go a step further and validate the model using response times (RT), i.e., the time from cue onset to property listing. Our results show that RT data exhibits the same regularities observed in the previous model, but now we can also analyze the time course, i.e., the dynamics of the PLT. As such, the RT-validated model may be applied to study several similar memory retrieval tasks, such as the Free Listing Task and the Verbal Fluency Task, and to research related cognitive processes. To illustrate those kinds of analyses, we present a brief example of the difference in the PLT's dynamics between listing properties for abstract versus concrete concepts, which shows that the model may be fruitfully applied to study concepts.</p>","PeriodicalId":47638,"journal":{"name":"Cognitive Processing","volume":" ","pages":"61-74"},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10268634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
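The abstract above describes the raw shape of PLT data: each participant produces an ordered list of properties for a cue concept, and aggregating over participants yields CPN-style frequency norms, while the within-listing positions carry the order-of-production information the original model is built on. As a minimal illustration of that data shape (not the authors' model; the listings and property strings here are invented for the example), one could compute a frequency norm and a mean order of production like this:

```python
from collections import Counter

# Hypothetical PLT data: each participant's ordered property listing for DOG.
listings = [
    ["is a pet", "has four legs", "barks"],
    ["barks", "is a pet", "is loyal"],
    ["has four legs", "barks"],
]

# CPN-style frequency norm: how many participants listed each property.
frequency = Counter(prop for listing in listings for prop in listing)

# Mean order of production (1-based rank within a listing), averaged over
# the participants who listed that property.
ranks = {}
for listing in listings:
    for rank, prop in enumerate(listing, start=1):
        ranks.setdefault(prop, []).append(rank)
mean_rank = {prop: sum(r) / len(r) for prop, r in ranks.items()}

print(frequency["barks"])     # 3: listed by all three participants
print(mean_rank["is a pet"])  # 1.5: tends to be produced early
```

An RT-based analysis of the kind the paper validates would replace the integer ranks with measured cue-onset-to-listing times, leaving the aggregation logic unchanged.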