Positive Attributable Visual Sources Attenuate the Impact of Trigger Sounds in Misophonia
Pub Date: 2024-11-12 | DOI: 10.1163/22134808-bja10137
Ghazaleh Mahzouni, Moorea M Welch, Michael Young, Veda Reddy, Patrawat Samermit, Nicolas Davidenko
Misophonia is characterized by strong negative reactions to everyday sounds, such as chewing, slurping, or breathing, that can have negative consequences for daily life. Here, we investigated the role of visual stimuli in modulating misophonic reactions. We recruited 26 individuals with misophonia and 31 healthy controls and presented them with 26 sound-swapped videos: 13 trigger sounds, each paired once with its Original Video Source (OVS) and once with a Positive Attributable Visual Source (PAVS). Our results show that PAVS stimuli significantly increase the pleasantness and reduce the intensity of bodily sensations associated with trigger sounds in both the misophonia and control groups. Importantly, participants with misophonia experienced a larger reduction in bodily sensations than control participants. An analysis of self-reported bodily sensation descriptions revealed that PAVS-paired sounds led participants to use significantly fewer words pertaining to body parts than the OVS-paired sounds did. We also found that participants who scored higher on the Duke Misophonia Questionnaire (DMQ) symptom severity scale had higher auditory imagery scores, whereas visual imagery was not associated with DMQ scores. Overall, our results show that the negative impact of misophonic trigger sounds can be attenuated by presenting them alongside PAVSs.
{"title":"Positive Attributable Visual Sources Attenuate the Impact of Trigger Sounds in Misophonia.","authors":"Ghazaleh Mahzouni, Moorea M Welch, Michael Young, Veda Reddy, Patrawat Samermit, Nicolas Davidenko","doi":"10.1163/22134808-bja10137","DOIUrl":"10.1163/22134808-bja10137","url":null,"abstract":"<p><p>Misophonia is characterized by strong negative reactions to everyday sounds, such as chewing, slurping or breathing, that can have negative consequences for daily life. Here, we investigated the role of visual stimuli in modulating misophonic reactions. We recruited 26 misophonics and 31 healthy controls and presented them with 26 sound-swapped videos: 13 trigger sounds paired with the 13 Original Video Sources (OVS) and with 13 Positive Attributable Visual Sources (PAVS). Our results show that PAVS stimuli significantly increase the pleasantness and reduce the intensity of bodily sensations associated with trigger sounds in both the misophonia and control groups. Importantly, people with misophonia experienced a larger reduction of bodily sensations compared to the control participants. An analysis of self-reported bodily sensation descriptions revealed that PAVS-paired sounds led participants to use significantly fewer words pertaining to body parts compared to the OVS-paired sounds. We also found that participants who scored higher on the Duke Misophonia Questionnaire (DMQ) symptom severity scale had higher auditory imagery scores, yet visual imagery was not associated with the DMQ. Overall, our results show that the negative impact of misophonic trigger sounds can be attenuated by presenting them alongside PAVSs.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"475-498"},"PeriodicalIF":1.5,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-Modal Cues Improve the Detection of Synchronized Targets during Human Foraging
Pub Date: 2024-11-05 | DOI: 10.1163/22134808-bja10135
Ivan Makarov, Runar Unnthorsson, Árni Kristjánsson, Ian M Thornton
In two experiments, we explored whether cross-modal cues can be used to improve foraging for multiple targets in a novel human foraging paradigm. Foraging arrays consisted of a 6 × 6 grid containing outline circles, each with a small dot on its circumference. Each dot rotated from a random starting location in steps of 30°, either clockwise or counterclockwise, around the circumference. Targets were defined by a synchronized rate of rotation, which varied from trial to trial, and there were two distractor sets, one that rotated faster and one that rotated slower than the target rate. In Experiment 1, we compared baseline performance to a condition in which a nonspatial auditory cue indicated the rate of target rotation. While overall foraging speed remained slow in both conditions, suggesting serial scanning of the display, the auditory cue reduced target detection times by a factor of two. In Experiment 2, we replicated the auditory-cue advantage and showed that a vibrotactile pulse, delivered to the wrist, could be almost as effective. Interestingly, a visual cue to rotation rate, in which the frame of the display changed polarity in step with target rotation, did not lead to the same foraging advantage. Our results clearly demonstrate that cross-modal cues to synchrony can be used to improve multitarget foraging, provided that synchrony itself is a defining feature of target identity.
{"title":"Cross-Modal Cues Improve the Detection of Synchronized Targets during Human Foraging.","authors":"Ivan Makarov, Runar Unnthorsson, Árni Kristjánsson, Ian M Thornton","doi":"10.1163/22134808-bja10135","DOIUrl":"10.1163/22134808-bja10135","url":null,"abstract":"<p><p>In two experiments, we explored whether cross-modal cues can be used to improve foraging for multiple targets in a novel human foraging paradigm. Foraging arrays consisted of a 6 × 6 grid containing outline circles with a small dot on the circumference. Each dot rotated from a random starting location in steps of 30°, either clockwise or counterclockwise, around the circumference. Targets were defined by a synchronized rate of rotation, which varied from trial-to-trial, and there were two distractor sets, one that rotated faster and one that rotated slower than the target rate. In Experiment 1, we compared baseline performance to a condition in which a nonspatial auditory cue was used to indicate the rate of target rotation. While overall foraging speed remained slow in both conditions, suggesting serial scanning of the display, the auditory cue reduced target detection times by a factor of two. In Experiment 2, we replicated the auditory cue advantage, and also showed that a vibrotactile pulse, delivered to the wrist, could be almost as effective. Interestingly, a visual-cue to rotation rate, in which the frame of the display changed polarity in step with target rotation, did not lead to the same foraging advantage. Our results clearly demonstrate that cross-modal cues to synchrony can be used to improve multitarget foraging, provided that synchrony itself is a defining feature of target identity.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"457-474"},"PeriodicalIF":1.5,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Power of Trial History: How Previous Trial Shapes Audiovisual Integration
Pub Date: 2024-11-01 | DOI: 10.1163/22134808-bja10133
Xiaoyu Tang, Wanlong Liu, Yingnan Wu, Rongxia Ren, Jiaying Sun, Jiajia Yang, Aijun Wang, Ming Zhang
Combining information from the visual and auditory modalities to form a unified and coherent percept is known as audiovisual integration. Audiovisual integration is affected by many factors; however, it remains unclear whether trial history can influence it. We used a target-target paradigm to investigate how the target modality and spatial location of the previous trial affect audiovisual integration under conditions of divided-modalities attention (Experiment 1) and modality-specific selective attention (Experiment 2). In Experiment 1, we found that audiovisual integration was enhanced at repeat locations compared with switch locations. Audiovisual integration was largest following auditory targets, compared to following visual and audiovisual targets. In Experiment 2, where participants were asked to attend only to the visual modality, we found that the audiovisual integration effect was larger in repeat-location trials than in switch-location trials only when an audiovisual target had been presented in the previous trial. The present results provide the first evidence that trial history can affect audiovisual integration. The mechanisms by which trial history modulates audiovisual integration are discussed. Future examinations of audiovisual integration should carefully control experimental conditions in light of trial-history effects.
{"title":"The Power of Trial History: How Previous Trial Shapes Audiovisual Integration.","authors":"Xiaoyu Tang, Wanlong Liu, Yingnan Wu, Rongxia Ren, Jiaying Sun, Jiajia Yang, Aijun Wang, Ming Zhang","doi":"10.1163/22134808-bja10133","DOIUrl":"10.1163/22134808-bja10133","url":null,"abstract":"<p><p>Combining information from visual and auditory modalities to form a unified and coherent perception is known as audiovisual integration. Audiovisual integration is affected by many factors. However, it remains unclear whether the trial history can influence audiovisual integration. We used a target-target paradigm to investigate how the target modality and spatial location of the previous trial affect audiovisual integration under conditions of divided-modalities attention (Experiment 1) and modality-specific selective attention (Experiment 2). In Experiment 1, we found that audiovisual integration was enhanced in the repeat locations compared with switch locations. Audiovisual integration was the largest following the auditory targets compared to following the visual and audiovisual targets. In Experiment 2, where participants were asked to focus only on visual, we found that the audiovisual integration effect was larger in the repeat location trials than switch location trials only when the audiovisual target was presented in the previous trial. The present results provide the first evidence that trial history can have an effect on audiovisual integration. The mechanisms of trial history modulating audiovisual integration are discussed. Future examining of audiovisual integration should carefully manipulate experimental conditions based on the effects of trial history.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"431-456"},"PeriodicalIF":1.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multisensory Integration of Native and Nonnative Speech in Bilingual and Monolingual Adults
Pub Date: 2024-10-08 | DOI: 10.1163/22134808-bja10132
Riham Hafez Mohamed, Niloufar Ansari, Bahaa Abdeljawad, Celina Valdivia, Abigail Edwards, Kaitlyn M A Parks, Yassaman Rafat, Ryan A Stevenson
Face-to-face speech communication is an audiovisual process during which the interlocutors use both the auditory speech signal and visual, oral articulations to understand one another. These sensory inputs are merged into a single, unified percept through a process known as multisensory integration. Audiovisual speech integration is known to be influenced by many factors, including listener experience. In this study, we investigated the roles of bilingualism and language experience in integration. We used a McGurk paradigm in which participants were presented with incongruent auditory and visual speech. This included an auditory utterance of 'ba' paired with visual articulations of 'ga', which often induces the perception of 'da' or 'tha', a fusion effect that is strong evidence of integration, as well as an auditory utterance of 'ga' paired with visual articulations of 'ba', which often induces the perception of 'bga', a combination effect that is weaker evidence of integration. We compared fusion and combination effects across three groups (N = 20 each): English monolinguals, Spanish-English bilinguals, and Arabic-English bilinguals, with stimuli presented in all three languages. Monolinguals exhibited significantly stronger multisensory integration than bilinguals in fusion effects, regardless of the stimulus language. Bilinguals exhibited a nonsignificant trend by which greater experience led to increased integration as measured by fusion. These results held regardless of whether McGurk stimuli were presented as stand-alone syllables or in the context of real words.
{"title":"Multisensory Integration of Native and Nonnative Speech in Bilingual and Monolingual Adults.","authors":"Riham Hafez Mohamed, Niloufar Ansari, Bahaa Abdeljawad, Celina Valdivia, Abigail Edwards, Kaitlyn M A Parks, Yassaman Rafat, Ryan A Stevenson","doi":"10.1163/22134808-bja10132","DOIUrl":"10.1163/22134808-bja10132","url":null,"abstract":"<p><p>Face-to-face speech communication is an audiovisual process during which the interlocuters use both the auditory speech signals as well as visual, oral articulations to understand the other. These sensory inputs are merged into a single, unified process known as multisensory integration. Audiovisual speech integration is known to be influenced by many factors, including listener experience. In this study, we investigated the roles of bilingualism and language experience on integration. We used a McGurk paradigm in which participants were presented with incongruent auditory and visual speech. This included an auditory utterance of 'ba' paired with visual articulations of 'ga' that often induce the perception of 'da' or 'tha', a fusion effect that is strong evidence of integration, as well as an auditory utterance of 'ga' paired with visual articulations of 'ba' that often induce the perception of 'bga', a combination effect that is weaker evidence of integration. We compared fusion and combination effects on three groups ( N = 20 each), English monolinguals, Spanish-English bilinguals, and Arabic-English bilinguals, with stimuli presented in all three languages. Monolinguals exhibited significantly stronger multisensory integration than bilinguals in fusion effects, regardless of the stimulus language. Bilinguals exhibited a nonsignificant trend by which greater experience led to increased integration as measured by fusion. These results held regardless of whether McGurk presentations were presented as stand-alone syllables or in the context of real words.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"413-430"},"PeriodicalIF":1.5,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of Viewing Distance and Proprioceptive Manipulations on a Virtual Reality Based Balance Test
Pub Date: 2024-08-29 | DOI: 10.1163/22134808-bja10131
Max Teaford, Zachary J Mularczyk, Alannah Gernon, Daniel M Merfeld
Our ability to maintain our balance plays a pivotal role in day-to-day activities. This ability is believed to result from interactions between several sensory modalities, including vision and proprioception. Past research has revealed that different aspects of vision, including relative visual motion (i.e., sensed motion of the visual field due to head motion), which can be manipulated by changing the viewing distance between the individual and the predominant visual cues, have an impact on balance. However, only a small number of studies have examined this in the context of virtual reality, and none examined the impact of proprioceptive manipulations at viewing distances greater than 3.5 m. To address this, we conducted an experiment in which 25 healthy adults viewed a dartboard in a virtual gymnasium while standing in a narrow stance on firm and compliant surfaces. The dartboard was presented at three viewing distances (1.5 m, 6 m, and 24 m), along with a blacked-out condition. Our results indicate that decreases in relative visual motion, due to increased viewing distance, yield decreased postural stability, but only with simultaneous proprioceptive disruptions.
{"title":"The Impact of Viewing Distance and Proprioceptive Manipulations on a Virtual Reality Based Balance Test.","authors":"Max Teaford, Zachary J Mularczyk, Alannah Gernon, Daniel M Merfeld","doi":"10.1163/22134808-bja10131","DOIUrl":"10.1163/22134808-bja10131","url":null,"abstract":"<p><p>Our ability to maintain our balance plays a pivotal role in day-to-day activities. This ability is believed to be the result of interactions between several sensory modalities including vision and proprioception. Past research has revealed that different aspects of vision including relative visual motion (i.e., sensed motion of the visual field due to head motion), which can be manipulated by changing the viewing distance between the individual and the predominant visual cues, have an impact on balance. However, only a small number of studies have examined this in the context of virtual reality, and none examined the impact of proprioceptive manipulations for viewing distances greater than 3.5 m. To address this, we conducted an experiment in which 25 healthy adults viewed a dartboard in a virtual gymnasium while standing in narrow stance on firm and compliant surfaces. The dartboard distance varied with three different conditions of 1.5 m, 6 m, and 24 m, including a blacked-out condition. Our results indicate that decreases in relative visual motion, due to an increased viewing distance, yield decreased postural stability - but only with simultaneous proprioceptive disruptions.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"395-412"},"PeriodicalIF":1.8,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What is the Relation between Chemosensory Perception and Chemosensory Mental Imagery?
Pub Date: 2024-08-27 | DOI: 10.1163/22134808-bja10130
Charles Spence
The study of chemosensory mental imagery is undoubtedly made more difficult by the profound individual differences that have been reported in the vividness of (e.g.) olfactory mental imagery. At the same time, the majority of those researchers who have attempted to study people's mental imagery abilities for taste (gustation) have actually mostly been studying flavour mental imagery. Nevertheless, there exists a body of human psychophysical research showing that chemosensory mental imagery exhibits a number of similarities with chemosensory perception. Furthermore, the two systems have frequently been shown to interact with one another. The similarities and differences between chemosensory perception and chemosensory mental imagery at the introspective, behavioural, psychophysical, and cognitive neuroscience levels in humans are considered in this narrative historical review. The latest neuroimaging evidence shows that many of the same brain areas are engaged by chemosensory mental imagery as have previously been documented to be involved in chemosensory perception. That said, the pattern of neural connectivity is reversed between the 'top-down' control of chemosensory mental imagery and the 'bottom-up' control seen in the case of chemosensory perception. At the same time, however, there remain a number of intriguing questions, such as whether it is even possible to distinguish between orthonasal and retronasal olfactory mental imagery, and the extent to which mental imagery for flavour, which most people not only describe as, but also perceive to be, the 'taste' of food and drink, is capable of reactivating the entire flavour network in the human brain.
{"title":"What is the Relation between Chemosensory Perception and Chemosensory Mental Imagery?","authors":"Charles Spence","doi":"10.1163/22134808-bja10130","DOIUrl":"10.1163/22134808-bja10130","url":null,"abstract":"<p><p>The study of chemosensory mental imagery is undoubtedly made more difficult because of the profound individual differences that have been reported in the vividness of (e.g.) olfactory mental imagery. At the same time, the majority of those researchers who have attempted to study people's mental imagery abilities for taste (gustation) have actually mostly been studying flavour mental imagery. Nevertheless, there exists a body of human psychophysical research showing that chemosensory mental imagery exhibits a number of similarities with chemosensory perception. Furthermore, the two systems have frequently been shown to interact with one another, the similarities and differences between chemosensory perception and chemosensory mental imagery at the introspective, behavioural, psychophysical, and cognitive neuroscience levels in humans are considered in this narrative historical review. The latest neuroimaging evidence show that many of the same brain areas are engaged by chemosensory mental imagery as have previously been documented to be involved in chemosensory perception. That said, the pattern of neural connectively is reversed between the 'top-down' control of chemosensory mental imagery and the 'bottom-up' control seen in the case of chemosensory perception. At the same time, however, there remain a number of intriguing questions as to whether it is even possible to distinguish between orthonasal and retronasal olfactory mental imagery, and the extent to which mental imagery for flavour, which most people not only describe as, but also perceive to be, the 'taste' of food and drink, is capable of reactivating the entire flavour network in the human brain.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"365-394"},"PeriodicalIF":1.5,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS to the Left pSTS
Pub Date: 2024-08-16 | DOI: 10.1163/22134808-bja10129
EunSeon Ahn, Areti Majumdar, Taraz G Lee, David Brang
Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept that differs from both the auditory and visual components, a phenomenon known as the McGurk effect. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect rely on largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily disrupt processing within this region while subjects were presented with either congruent or incongruent (McGurk) audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation had no effect on the benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.
{"title":"Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS to the Left pSTS.","authors":"EunSeon Ahn, Areti Majumdar, Taraz G Lee, David Brang","doi":"10.1163/22134808-bja10129","DOIUrl":"10.1163/22134808-bja10129","url":null,"abstract":"<p><p>Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept that differs from the auditory and visual components, known as the McGurk effect. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect rely on largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily disrupt processing within this region while subjects were presented with either congruent or incongruent (McGurk) audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation had no effect on the positive benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 4-5","pages":"341-363"},"PeriodicalIF":1.5,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11388023/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audiovisual Speech Perception Benefits are Stable from Preschool through Adolescence
Pub Date: 2024-07-03 | DOI: 10.1163/22134808-bja10128
Liesbeth Gijbels, Jason D Yeatman, Kaylah Lalonde, Piper Doering, Adrian K C Lee
The ability to leverage visual cues in speech perception - especially in noisy backgrounds - is well established from infancy to adulthood. Yet, the developmental trajectory of audiovisual benefits remains a topic of debate. The inconsistency in findings can be attributed to relatively small sample sizes or to tasks that are not appropriate for the age groups tested. We designed an audiovisual speech perception task that was cognitively and linguistically age-appropriate from preschool to adolescence and recruited a large sample (N = 161) of children (ages 4-15). We found that even the youngest children show reliable speech perception benefits when provided with visual cues, and that these benefits are consistent throughout development when auditory and visual signals match. Individual variability is explained by how the child experiences their speech-in-noise performance rather than by the quality of the signal itself. This underscores the importance of visual speech for young children, who are regularly in noisy environments like classrooms and playgrounds.
{"title":"Audiovisual Speech Perception Benefits are Stable from Preschool through Adolescence.","authors":"Liesbeth Gijbels, Jason D Yeatman, Kaylah Lalonde, Piper Doering, Adrian K C Lee","doi":"10.1163/22134808-bja10128","DOIUrl":"10.1163/22134808-bja10128","url":null,"abstract":"<p><p>The ability to leverage visual cues in speech perception - especially in noisy backgrounds - is well established from infancy to adulthood. Yet, the developmental trajectory of audiovisual benefits stays a topic of debate. The inconsistency in findings can be attributed to relatively small sample sizes or tasks that are not appropriate for given age groups. We designed an audiovisual speech perception task that was cognitively and linguistically age-appropriate from preschool to adolescence and recruited a large sample ( N = 161) of children (age 4-15). We found that even the youngest children show reliable speech perception benefits when provided with visual cues and that these benefits are consistent throughout development when auditory and visual signals match. Individual variability is explained by how the child experiences their speech-in-noise performance rather than the quality of the signal itself. This underscores the importance of visual speech for young children who are regularly in noisy environments like classrooms and playgrounds.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"317-340"},"PeriodicalIF":1.5,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490438/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can Multisensory Olfactory Training Improve Olfactory Dysfunction Caused by COVID-19?
Pub Date: 2024-07-03 | DOI: 10.1163/22134808-bja10127
Gözde Filiz, Simon Bérubé, Claudia Demers, Frank Cloutier, Angela Chen, Valérie Pek, Émilie Hudon, Josiane Bolduc-Bégin, Johannes Frasnelli
Approximately 30-60% of people suffer from olfactory dysfunction (OD), such as hyposmia or anosmia, after being diagnosed with COVID-19; 15-20% of these cases last beyond resolution of the acute phase. Previous studies have shown that olfactory training can be beneficial for patients affected by OD caused by viral infections of the upper respiratory tract. The aim of this study was to evaluate whether multisensory olfactory training, involving simultaneously tasting and seeing congruent stimuli, is more effective than classical olfactory training. We recruited 68 participants with OD persisting for two months or more after COVID-19 infection; they were divided into three groups. One group received olfactory training that involved smelling four odorants (strawberry, cheese, coffee, lemon; classical olfactory training). A second group received the same olfactory stimuli presented retronasally (i.e., as droplets on the tongue) while simultaneous and congruent gustatory (i.e., sweet, salty, bitter, sour) and visual (corresponding images) stimuli were presented (multisensory olfactory training). The third group received odorless propylene glycol in four bottles (control group). Training was carried out twice daily for 12 weeks. We assessed olfactory function and olfaction-specific quality of life before and after the intervention. Both intervention groups showed a similar, significant improvement in olfactory function, although there was no difference in the quality-of-life assessment. Both multisensory and classical training can thus be beneficial for OD following a viral infection; however, only the classical olfactory training paradigm led to an improvement that was significantly stronger than that of the control group.
{"title":"Can Multisensory Olfactory Training Improve Olfactory Dysfunction Caused by COVID-19?","authors":"Gözde Filiz, Simon Bérubé, Claudia Demers, Frank Cloutier, Angela Chen, Valérie Pek, Émilie Hudon, Josiane Bolduc-Bégin, Johannes Frasnelli","doi":"10.1163/22134808-bja10127","DOIUrl":"10.1163/22134808-bja10127","url":null,"abstract":"<p><p>Approximately 30-60% of people suffer from olfactory dysfunction (OD) such as hyposmia or anosmia after being diagnosed with COVID-19; 15-20% of these cases last beyond resolution of the acute phase. Previous studies have shown that olfactory training can be beneficial for patients affected by OD caused by viral infections of the upper respiratory tract. The aim of the study is to evaluate whether a multisensory olfactory training involving simultaneously tasting and seeing congruent stimuli is more effective than the classical olfactory training. We recruited 68 participants with persistent OD for two months or more after COVID-19 infection; they were divided into three groups. One group received olfactory training which involved smelling four odorants (strawberry, cheese, coffee, lemon; classical olfactory training). The other group received the same olfactory stimuli but presented retronasally (i.e., as droplets on their tongue); while simultaneous and congruent gustatory (i.e., sweet, salty, bitter, sour) and visual (corresponding images) stimuli were presented (multisensory olfactory training). The third group received odorless propylene glycol in four bottles (control group). Training was carried out twice daily for 12 weeks. We assessed olfactory function and olfactory specific quality of life before and after the intervention. Both intervention groups showed a similar significant improvement of olfactory function, although there was no difference in the assessment of quality of life. Both multisensory and classical training can be beneficial for OD following a viral infection; however, only the classical olfactory training paradigm leads to an improvement that was significantly stronger than the control group.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"299-316"},"PeriodicalIF":1.8,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Glassware Influences the Perception of Orange Juice in Simulated Naturalistic versus Urban Conditions
Pub Date: 2024-06-18 | DOI: 10.1163/22134808-bja10126
Chunmao Wu, Pei Li, Charles Spence
The latest research demonstrates that people's perception of orange juice can be influenced by the shape/type of receptacle in which it happens to be served. Two studies are reported that were designed to investigate the impact, if any, that the shape/type of glass might exert over the perception of the contents, the emotions induced on tasting the juice, and the consumer's intention to purchase orange juice. The same quantity of orange juice (100 ml) was presented and evaluated in three different glasses: a straight-sided, a curved, and a tapered glass. Questionnaires were used to assess taste (aroma, flavour intensity, sweetness, freshness, and fruitiness), pleasantness, and intention to buy orange juice. Study 2 assessed the impact of the same three glasses in two digitally rendered atmospheric conditions (nature vs urban). In Study 1, the perceived sweetness and pleasantness of the orange juice were significantly influenced by the shape/type of the glass in which it was presented. Study 2 revealed significant interactions between condition (nature vs urban) and glass shape (tapered, straight-sided, and curved). Perceived aroma, flavour intensity, and pleasantness were all significantly affected by the simulated audiovisual context or atmosphere. Compared to the urban condition, perceived aroma, freshness, fruitiness, and pleasantness were rated significantly higher in the nature condition. On the other hand, flavour intensity and sweetness were rated significantly higher in the urban condition than in the nature condition. These results are likely to be relevant to those providing food services, or to company managers offering beverages to their customers.
{"title":"Glassware Influences the Perception of Orange Juice in Simulated Naturalistic versus Urban Conditions.","authors":"Chunmao Wu, Pei Li, Charles Spence","doi":"10.1163/22134808-bja10126","DOIUrl":"10.1163/22134808-bja10126","url":null,"abstract":"<p><p>The latest research demonstrates that people's perception of orange juice can be influenced by the shape/type of receptacle in which it happens to be served. Two studies are reported that were designed to investigate the impact, if any, that the shape/type of glass might exert over the perception of the contents, the emotions induced on tasting the juice and the consumer's intention to purchase orange juice. The same quantity of orange juice (100 ml) was presented and evaluated in three different glasses: a straight-sided, a curved and a tapered glass. Questionnaires were used to assess taste (aroma, flavour intensity, sweetness, freshness and fruitiness), pleasantness and intention to buy orange juice. Study 2 assessed the impact of the same three glasses in two digitally rendered atmospheric conditions (nature vs urban). In Study 1, the perceived sweetness and pleasantness of the orange juice was significantly influenced by the shape/type of the glass in which it was presented. Study 2 reported significant interactions between condition (nature vs urban) and glass shape (tapered, straight-sided and curved). Perceived aroma, flavour intensity and pleasantness were all significantly affected by the simulated audiovisual context or atmosphere. Compared to the urban condition, perceived aroma, freshness, fruitiness and pleasantness were rated significantly higher in the nature condition. On the other hand, flavour intensity and sweetness were rated significantly higher in the urban condition than in the natural condition. These results are likely to be relevant for those interested in providing food services, or company managers offering beverages to their customers.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"275-297"},"PeriodicalIF":1.8,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141421790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}