Pub Date: 2024-07-03. DOI: 10.1163/22134808-bja10128
Liesbeth Gijbels, Jason D Yeatman, Kaylah Lalonde, Piper Doering, Adrian K C Lee
The ability to leverage visual cues in speech perception - especially in noisy backgrounds - is well established from infancy to adulthood. Yet the developmental trajectory of audiovisual benefits remains a topic of debate. The inconsistency in findings can be attributed to relatively small sample sizes or to tasks that are not appropriate for the age groups tested. We designed an audiovisual speech perception task that was cognitively and linguistically age-appropriate from preschool through adolescence and recruited a large sample (N = 161) of children (ages 4-15). We found that even the youngest children show reliable speech perception benefits when provided with visual cues, and that these benefits are consistent throughout development when auditory and visual signals match. Individual variability is explained by how the child experiences their speech-in-noise performance rather than by the quality of the signal itself. This underscores the importance of visual speech for young children, who are regularly in noisy environments such as classrooms and playgrounds.
Title: Audiovisual Speech Perception Benefits are Stable from Preschool through Adolescence (Multisensory Research, pp. 317-340).
Pub Date: 2024-07-03. DOI: 10.1163/22134808-bja10127
Gözde Filiz, Simon Bérubé, Claudia Demers, Frank Cloutier, Angela Chen, Valérie Pek, Émilie Hudon, Josiane Bolduc-Bégin, Johannes Frasnelli
Approximately 30-60% of people suffer from olfactory dysfunction (OD), such as hyposmia or anosmia, after being diagnosed with COVID-19; 15-20% of these cases persist beyond resolution of the acute phase. Previous studies have shown that olfactory training can be beneficial for patients affected by OD caused by viral infections of the upper respiratory tract. The aim of this study was to evaluate whether a multisensory olfactory training involving simultaneously tasting and seeing congruent stimuli is more effective than classical olfactory training. We recruited 68 participants with OD persisting for two months or more after COVID-19 infection; they were divided into three groups. One group received olfactory training that involved smelling four odorants (strawberry, cheese, coffee, lemon; classical olfactory training). Another group received the same olfactory stimuli presented retronasally (i.e., as droplets on the tongue) while simultaneous and congruent gustatory (i.e., sweet, salty, bitter, sour) and visual (corresponding images) stimuli were presented (multisensory olfactory training). The third group received odorless propylene glycol in four bottles (control group). Training was carried out twice daily for 12 weeks. We assessed olfactory function and olfactory-specific quality of life before and after the intervention. Both intervention groups showed a similar significant improvement in olfactory function, although there was no difference in the assessment of quality of life. Both multisensory and classical training can be beneficial for OD following a viral infection; however, only the classical olfactory training paradigm led to an improvement that was significantly stronger than that of the control group.
Title: Can Multisensory Olfactory Training Improve Olfactory Dysfunction Caused by COVID-19? (Multisensory Research, pp. 299-316).
Pub Date: 2024-06-18. DOI: 10.1163/22134808-bja10126
Chunmao Wu, Pei Li, Charles Spence
Recent research demonstrates that people's perception of orange juice can be influenced by the shape/type of receptacle in which it happens to be served. Two studies are reported that were designed to investigate the impact, if any, that the shape/type of glass might exert over the perception of the contents, the emotions induced on tasting the juice, and the consumer's intention to purchase orange juice. The same quantity of orange juice (100 ml) was presented and evaluated in three different glasses: a straight-sided, a curved, and a tapered glass. Questionnaires were used to assess taste (aroma, flavour intensity, sweetness, freshness and fruitiness), pleasantness, and intention to buy orange juice. Study 2 assessed the impact of the same three glasses in two digitally rendered atmospheric conditions (nature vs urban). In Study 1, the perceived sweetness and pleasantness of the orange juice were significantly influenced by the shape/type of the glass in which it was presented. Study 2 revealed significant interactions between condition (nature vs urban) and glass shape (tapered, straight-sided, and curved). Perceived aroma, flavour intensity, and pleasantness were all significantly affected by the simulated audiovisual context or atmosphere. Compared to the urban condition, perceived aroma, freshness, fruitiness, and pleasantness were rated significantly higher in the nature condition. On the other hand, flavour intensity and sweetness were rated significantly higher in the urban condition than in the nature condition. These results are likely to be relevant for those providing food services and for company managers offering beverages to their customers.
Title: Glassware Influences the Perception of Orange Juice in Simulated Naturalistic versus Urban Conditions (Multisensory Research, pp. 275-297).
Pub Date: 2024-05-23. DOI: 10.1163/22134808-bja10125
Faezeh Pourhashemi, Martijn Baart, Jean Vroomen
Auditory speech can be difficult to understand, but seeing the articulatory movements of a speaker can drastically improve spoken-word recognition and, in the longer term, helps listeners adapt to acoustically distorted speech. Given that individuals with developmental dyslexia (DD) have sometimes been reported to rely less on lip-read speech than typical readers, we examined lip-read-driven adaptation to distorted speech in a group of adults with DD (N = 29) and a comparison group of typical readers (N = 29). Participants were presented with acoustically distorted Dutch words (six-channel noise-vocoded speech, NVS) in audiovisual training blocks (where the speaker could be seen) interspersed with audio-only test blocks. Results showed that words were more accurately recognized if the speaker could be seen (a lip-read advantage) and that performance steadily improved across subsequent auditory-only test blocks (adaptation). There were no group differences, suggesting that perceptual adaptation to disrupted spoken words is comparable for dyslexic and typical readers. These data open up a research avenue to investigate the degree to which lip-read-driven speech adaptation generalizes across different types of auditory degradation, and across dyslexic readers with decoding versus comprehension difficulties.
Title: Perceptual Adaptation to Noise-Vocoded Speech by Lip-Read Information: No Difference between Dyslexic and Typical Readers (Multisensory Research, pp. 243-259).
Pub Date: 2024-05-17. DOI: 10.1163/22134808-bja10124
Lari Vainio, Martti Vainio
Previous research has revealed congruency effects between different spatial dimensions, such as right and up. In the audiovisual context, high-pitched sounds are associated with the spatial dimensions of up/above and front, while low-pitched sounds are associated with the spatial dimensions of down/below and back. This raises the question of whether there could also be a spatial association between above and front and/or below and back. Participants were presented with a high- or low-pitch stimulus at the onset of the visual stimulus. In one block, participants responded according to the above/below location of the visual target stimulus if the target appeared in front of the reference object; in the other block, they performed these above/below responses if the target appeared behind the reference. In general, reaction times revealed an advantage in processing the target location in the front-above and back-below locations. The front-above/back-below effect was more robust for the back-below component, and significantly larger in reaction times that were slower rather than faster than a participant's median value. However, pitch did not robustly influence responding to front/back or above/below locations. We propose that this effect might be based on conceptual associations between different spatial dimensions.
Title: Is Front associated with Above and Back with Below? Association between Allocentric Representations of Spatial Dimensions (Multisensory Research, pp. 217-241).
Pub Date: 2024-05-10. DOI: 10.1163/22134808-bja10123
Yu Nakajima, Hiroshi Ashida
Two types of disruptive effects of irrelevant sound on visual tasks have been reported: the changing-state effect and the deviation effect. The idea that the deviation effect, which arises from attentional capture, is independent of task requirements, whereas the changing-state effect is specific to tasks that require serial processing, has been examined by comparing tasks that do or do not require serial-order processing. While many previous studies used the missing-item task as the nonserial task, it is unclear whether other cognitive tasks lead to similar results regarding the different task specificity of the two effects. Kattner et al. (Memory and Cognition, 2023) used the mental-arithmetic task as the nonserial task and failed to demonstrate the deviation effect. However, several procedural factors could account for the lack of a deviation effect, such as differences in design and procedure (e.g., conducted online, intermixed conditions). In the present study, we investigated whether the deviation effect could be observed in both the serial-recall and mental-arithmetic tasks when these procedural factors were modified. We found strong evidence of the deviation effect in both the serial-recall and the mental-arithmetic tasks when stimulus presentation and experimental design were aligned with previous studies that demonstrated the deviation effect (e.g., conducted in person, blockwise presentation of sound). The results support the idea that the deviation effect is not task-specific.
Title: Revisiting the Deviation Effects of Irrelevant Sound on Serial and Nonserial Tasks (Multisensory Research, pp. 261-273).
Pub Date: 2024-04-30. DOI: 10.1163/22134808-bja10121
Ryan Horsfall, Neil Harrison, Georg Meyer, Sophie Wuerger
A vital heuristic used when judging whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies, driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed a temporal-order judgement (TOJ), a simultaneity judgement (SJ), and a simple reaction-time (RT) task, responding to audio-visual trials that were preceded by other audio-visual trials with either a bright or a dim visual stimulus. The point of subjective simultaneity shifted with the visual intensity of the preceding stimulus in the TOJ task but not in the SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.
Title: Perceived Audio-Visual Simultaneity Is Recalibrated by the Visual Intensity of the Preceding Trial (Multisensory Research, 37(2), pp. 143-162).
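The point of subjective simultaneity (PSS) in a TOJ task is typically estimated by fitting a psychometric function to the proportion of "visual first" responses across stimulus onset asynchronies (SOAs); the PSS is the SOA at which that proportion reaches 50%. A minimal stdlib-only sketch that fits a cumulative Gaussian by grid search; the function name and grid ranges are illustrative assumptions, not the authors' fitting procedure:

```python
from statistics import NormalDist

def fit_pss(soas, p_visual_first):
    """Fit a cumulative Gaussian to TOJ data by coarse grid search.

    soas: SOAs in ms (negative = auditory leads).
    p_visual_first: proportion of 'visual first' responses per SOA.
    Returns (pss_ms, sigma_ms): the 50% point and the slope parameter.
    """
    phi = NormalDist().cdf
    best = (float("inf"), None, None)
    for mu in range(-100, 101, 2):        # candidate PSS values, ms
        for sigma in range(10, 201, 5):   # candidate slopes, ms
            err = sum((phi((s - mu) / sigma) - p) ** 2
                      for s, p in zip(soas, p_visual_first))
            if err < best[0]:
                best = (err, float(mu), sigma)
    return best[1], best[2]
```

A shift in the fitted PSS between trials preceded by bright versus dim stimuli is the recalibration effect the abstract describes.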
Pub Date: 2023-12-20. DOI: 10.1163/22134808-bja10117
Silvia Zanchi, Luigi F Cuturi, Giulio Sandini, Monica Gori, Elisa R Ferrè
While navigating through our surroundings, we constantly rely on inertial vestibular signals for self-motion, along with visual and acoustic spatial references from the environment. However, the interaction between inertial cues and environmental spatial references is not yet fully understood. Here we investigated whether vestibular self-motion sensitivity is influenced by sensory spatial references. Healthy participants were administered a Vestibular Self-Motion Detection Task in which they were asked to detect vestibular self-motion sensations induced by low-intensity Galvanic Vestibular Stimulation. Participants performed this detection task with or without an external visual or acoustic spatial reference placed directly in front of them. We computed d-prime (d') as a measure of participants' vestibular sensitivity and the criterion as an index of their response bias. Results showed that the visual spatial reference increased sensitivity to detect vestibular self-motion. Conversely, the acoustic spatial reference did not influence self-motion sensitivity. Neither the visual nor the auditory spatial reference caused changes in response bias. Environmental visual spatial references provide relevant information that enhances our ability to perceive inertial self-motion cues, suggesting a specific interaction between the visual and vestibular systems in self-motion perception.
Title: Spatial Sensory References for Vestibular Self-Motion Perception (Multisensory Research, pp. 75-88).
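The sensitivity (d') and criterion measures used in the study above come from signal detection theory: both are functions of the z-transformed hit and false-alarm rates, with d' = z(H) - z(F) and criterion c = -(z(H) + z(F)) / 2. A minimal stdlib-only Python sketch; the function name and the log-linear correction for extreme rates are illustrative assumptions, not necessarily the authors' exact computation:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection d' and criterion from trial counts.

    Applies a log-linear correction (add 0.5 to each cell) so that
    hit/false-alarm rates of 0 or 1 do not produce infinite z-scores.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    d_prime = z(h) - z(f)             # sensitivity
    criterion = -0.5 * (z(h) + z(f))  # response bias (0 = unbiased)
    return d_prime, criterion
```

Higher d' with the visual reference present would reflect better detection of the stimulation, while an unchanged criterion indicates no shift in response bias.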
Pub Date: 2023-12-20. DOI: 10.1163/22134808-bja10116
Joshua R Tatz, Zehra F Peynircioğlu
Multisensory context often facilitates perception and memory. In fact, encoding items within a multisensory context can improve memory even on strictly unisensory tests (i.e., when the multisensory context is absent). Prior studies that have consistently found these multisensory facilitation effects have largely employed multisensory contexts in which the stimuli were meaningfully related to the items targeted for remembering (e.g., pairing canonical sounds and images). Other studies have used unrelated stimuli as multisensory context. A third possible type of multisensory context is one that is environmentally related simply because the stimuli are often encountered together in the real world. We predicted that encountering such a multisensory context would also enhance memory through cross-modal associations, that is, representations relating to one's prior multisensory experience with that sort of stimuli in general. In two memory experiments, we used faces and voices of unfamiliar people as everyday stimuli whose perceptual features individuals have substantial experience integrating. We assigned participants to face- or voice-recognition groups and ensured that, during the study phase, half of the face or voice targets were also encountered with information in the other modality. Voices initially encoded along with faces were consistently remembered better, providing evidence that cross-modal associations could explain the observed multisensory facilitation.
{"title":"Cross-Modal Contributions to Episodic Memory for Voices.","authors":"Joshua R Tatz, Zehra F Peynircioğlu","doi":"10.1163/22134808-bja10116","DOIUrl":"10.1163/22134808-bja10116","url":null,"abstract":"<p><p>Multisensory context often facilitates perception and memory. In fact, encoding items within a multisensory context can improve memory even on strictly unisensory tests (i.e., when the multisensory context is absent). Prior studies that have consistently found these multisensory facilitation effects have largely employed multisensory contexts in which the stimuli were meaningfully related to the items targeted for remembering (e.g., pairing canonical sounds and images). Other studies have used unrelated stimuli as multisensory context. A third possible type of multisensory context is one that is environmentally related simply because the stimuli are often encountered together in the real world. We predicted that encountering such a multisensory context would also enhance memory through cross-modal associations, or representations relating to one's prior multisensory experience with stimuli of that sort in general. In two memory experiments, we used faces and voices of unfamiliar people as everyday stimuli whose perceptual features individuals have substantial experience integrating. We assigned participants to face- or voice-recognition groups and ensured that, during the study phase, half of the face or voice targets were also encountered with information in the other modality. 
Voices initially encoded along with faces were consistently remembered better, providing evidence that cross-modal associations could explain the observed multisensory facilitation.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"47-74"},"PeriodicalIF":1.6,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138808898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-11-28DOI: 10.1163/22134808-bja10115
Lawrence R Stark, Kim Shiraishi, Tyler Sommerfeld
This study aimed to determine the extent to which haptic stimuli can influence ocular accommodation, either alone or in combination with vision. Accommodation was measured objectively in 15 young adults as they read stationary targets containing Braille letters. These cards were presented at four distances in the range 20-50 cm. In the Touch condition, the participant read by touch with their dominant hand in a dark room. Afterward, they estimated card distance with their non-dominant hand. In the Vision condition, they read by sight binocularly without touch in a lighted room. In the Touch with Vision condition, they read by sight binocularly and with touch in a lighted room. Sensory modality had a significant overall effect on the slope of the accommodative stimulus-response function. The slope in the Touch condition was not significantly different from zero, even though depth perception from touch was accurate. Nevertheless, one atypical participant had a moderate accommodative slope in the Touch condition. The accommodative slope in the Touch condition was significantly poorer than in the Vision condition. The accommodative slopes in the Vision condition and Touch with Vision condition were not significantly different. For most individuals, haptic stimuli for stationary objects do not influence the accommodation response, alone or in combination with vision. These haptic stimuli nonetheless provide accurate distance perception, a finding that questions the general validity of Heath's model of proximal accommodation as driven by perceived distance. Instead, proximally induced accommodation relies on visual rather than touch stimuli.
{"title":"Stationary Haptic Stimuli Do not Produce Ocular Accommodation in Most Individuals.","authors":"Lawrence R Stark, Kim Shiraishi, Tyler Sommerfeld","doi":"10.1163/22134808-bja10115","DOIUrl":"10.1163/22134808-bja10115","url":null,"abstract":"<p><p>This study aimed to determine the extent to which haptic stimuli can influence ocular accommodation, either alone or in combination with vision. Accommodation was measured objectively in 15 young adults as they read stationary targets containing Braille letters. These cards were presented at four distances in the range 20-50 cm. In the Touch condition, the participant read by touch with their dominant hand in a dark room. Afterward, they estimated card distance with their non-dominant hand. In the Vision condition, they read by sight binocularly without touch in a lighted room. In the Touch with Vision condition, they read by sight binocularly and with touch in a lighted room. Sensory modality had a significant overall effect on the slope of the accommodative stimulus-response function. The slope in the Touch condition was not significantly different from zero, even though depth perception from touch was accurate. Nevertheless, one atypical participant had a moderate accommodative slope in the Touch condition. The accommodative slope in the Touch condition was significantly poorer than in the Vision condition. The accommodative slopes in the Vision condition and Touch with Vision condition were not significantly different. For most individuals, haptic stimuli for stationary objects do not influence the accommodation response, alone or in combination with vision. These haptic stimuli nonetheless provide accurate distance perception, a finding that questions the general validity of Heath's model of proximal accommodation as driven by perceived distance. 
Instead, proximally induced accommodation relies on visual rather than touch stimuli.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"25-45"},"PeriodicalIF":1.6,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138453050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}