Association Between Body Tilt and Egocentric Estimates Near Upright
Keisuke Tani, Shintaro Uehara, Satoshi Tanaka
Multisensory Research 36(4): 367-386. Pub Date: 2023-04-07. DOI: 10.1163/22134808-bja10097
The mechanisms underlying geocentric estimates (the orientation of an object or of the body relative to gravity) and egocentric estimates (the orientation of an object relative to the body) have each been examined; however, little is known about the association between these estimates, especially when the body is nearly upright. To address this, we conducted two psychophysical experiments. In Experiment 1, during small-range whole-body roll tilts, participants estimated the direction of a visual line relative to gravity (subjective visual vertical; SVV), the direction of their own body relative to gravity (subjective body tilt; SBT), and the direction of a visual line relative to the body's longitudinal axis (subjective visual body axis; SVBA). We evaluated the correlations between performance on these tasks, treating actual body tilt angle as a covariate. Estimation errors on the SVBA task correlated significantly with errors on the SBT task, but not with errors on the SVV task, at the group level after adjusting for actual body tilt angle, suggesting a link between the SVBA and SBT estimates. To confirm this relationship, in Experiment 2 we assessed whether manipulating the subjective direction of the body axis, via visual feedback in the SVBA task, subsequently affected SBT performance. Feedback in the SVBA task significantly shifted SBT angles even when the actual body angles were identical. The observed association between SVBA and SBT performance supports an at least partially shared mechanism underlying body-tilt and egocentric estimates when the body is nearly upright.
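The covariate adjustment described above amounts to a partial correlation between task errors, controlling for tilt angle. A minimal sketch of that analysis in Python, with hypothetical, simulated error arrays standing in for the paper's data:

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out covariate z."""
    design = np.column_stack([z, np.ones_like(z)])  # slope + intercept
    x_resid = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    y_resid = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return stats.pearsonr(x_resid, y_resid)

# Hypothetical estimation errors (deg) over 50 trials, with the actual
# roll-tilt angle as the covariate to be partialled out.
rng = np.random.default_rng(0)
tilt = rng.uniform(-10.0, 10.0, 50)
svba_err = 0.5 * tilt + rng.normal(0.0, 1.0, 50)
sbt_err = 0.4 * tilt + rng.normal(0.0, 1.0, 50)

r, p = partial_corr(svba_err, sbt_err, tilt)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```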
{"title":"Association Between Body Tilt and Egocentric Estimates Near Upright.","authors":"Keisuke Tani, Shintaro Uehara, Satoshi Tanaka","doi":"10.1163/22134808-bja10097","DOIUrl":"https://doi.org/10.1163/22134808-bja10097","url":null,"abstract":"<p><p>The mechanisms underlying geocentric (orientations of an object or the body relative to 'gravity') and egocentric estimates (object orientation relative to the 'body') have each been examined; however, little is known regarding the association between these estimates, especially when the body is nearly upright. To address this, we conducted two psychophysical experiments. In Experiment 1, participants estimated the direction of a visual line (subjective visual vertical; SVV) and their own body relative to gravity (subjective body tilt; SBT) and the direction of a visual line relative to the body longitudinal axis (subjective visual body axis; SVBA) during a small-range whole-body roll tilt. We evaluated the correlations between performance on each of these tasks as covariates of actual body tilt angles. Our results showed a significant correlation of performance (estimation errors) on the SVBA task with performance on the SBT task but not performance on the SVV task at the group level after adjusting for the actual body tilt angles, suggesting a link between the estimates for SVBA and SBT tasks. To confirm this relationship, in Experiment 2, we further assessed whether manipulating the subjective direction of the body axis by providing visual feedback in the SVBA task subsequently affected SBT performance. We found that feedback in the SVBA task significantly shifted the SBT angles even when the actual body angles were identical. The observed association between SVBA and SBT performance supports at least a partially shared mechanism underlying body tilt and egocentric estimates when the body is nearly upright.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 4","pages":"367-386"},"PeriodicalIF":1.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9421378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explaining Visual Shape-Taste Crossmodal Correspondences
Charles Spence
Multisensory Research 36(4): 313-345. Pub Date: 2023-03-20. DOI: 10.1163/22134808-bja10096
A growing body of experimental research demonstrates that neurologically normal individuals associate different taste qualities with design features such as curvature, symmetry, orientation, texture and movement. The form of everything from the food itself through to the curvature of the plateware on which it is served, from glassware to typeface, and the shapes of/on food product packaging, has been shown to influence people's taste expectations and, on occasion, their taste/food experiences. Although the origins of shape-taste and other form-taste crossmodal correspondences have yet to be fully worked out, shape qualities appear to be elicited directly on occasion. More often, however, there may be a metaphorical attempt to translate the temporal qualities of taste sensations into a spatial analogue. At the same time, emotional mediation may sometimes play a role in the affinity people experience between shape properties and tastes. Finally, it should be acknowledged that associative learning of the relations between packaging shapes, glassware shapes, logos, labels and iconic food forms that commonly co-occur with specific taste properties (i.e., in the case of branded food products) may also play an important role in determining the nature of shape-taste correspondences. Ultimately, however, any attempt to use such shape-taste correspondences to nudge people's behaviour/perception in the real world is challenging because shape properties are associated with multiple qualities, not just taste.
{"title":"Explaining Visual Shape-Taste Crossmodal Correspondences.","authors":"Charles Spence","doi":"10.1163/22134808-bja10096","DOIUrl":"https://doi.org/10.1163/22134808-bja10096","url":null,"abstract":"<p><p>A growing body of experimental research now demonstrates that neurologically normal individuals associate different taste qualities with design features such as curvature, symmetry, orientation, texture and movement. The form of everything from the food itself through to the curvature of the plateware on which it happens to be served, and from glassware to typeface, not to mention the shapes of/on food product packaging have all been shown to influence people's taste expectations, and, on occasion, also their taste/food experiences. Although the origins of shape-taste and other form-taste crossmodal correspondences have yet to be fully worked out, it would appear that shape qualities are occasionally elicited directly. However, more often, there may be a metaphorical attempt to translate the temporal qualities of taste sensations into a spatial analogue. At the same time, emotional mediation may sometimes also play a role in the affinity people experience between shape properties and taste. And finally, it should be acknowledged that associative learning of the relation between packaging shapes, glassware shapes, logos, labels and iconic food forms that commonly co-occur with specific taste properties (i.e., in the case of branded food products) may also play an important role in determining the nature of shape-taste correspondences. Ultimately, however, any attempt to use such shape-taste correspondences to nudge people's behaviour/perception in the real world is made challenging due to the fact that shape properties are associated with multiple qualities, and not just taste.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 4","pages":"313-345"},"PeriodicalIF":1.6,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9790067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Off-Vertical Body Orientation Delays the Perceived Onset of Visual Motion
William Chung, Michael Barnett-Cowan
Multisensory Research 36(4): 347-366. Pub Date: 2023-03-08. DOI: 10.1163/22134808-bja10095
The integration of vestibular, visual and body cues is a fundamental process in the perception of self-motion and is commonly experienced in an upright posture. However, when the body is tilted to an off-vertical orientation, these signals are no longer aligned relative to the influence of gravity. In this study, the perceived timing of visual motion was examined under sensory conflict introduced by manipulating body orientation, which generated a mismatch between body cues and gravity-related vestibular cues and created an ambiguous vestibular signal of either head tilt or translation. In a series of temporal-order judgment tasks, participants in upright, supine and side-recumbent body positions reported the perceived onset of a visual scene, presented in virtual reality and simulating rotation around the yaw axis, relative to a paired auditory tone. The results revealed that the perceived onset of visual motion was delayed by approximately an additional 30 ms from zero (i.e., true simultaneity between the visual onset and the reference auditory tone) in the supine and side-recumbent orientations compared to the upright posture. There were no significant differences in the timing estimates of the visual motion among the non-upright orientations. This indicates that the perceived timing of visual motion is negatively affected by conflict between vestibular and body signals arising from the direction of gravity and body orientation, even when the mismatch is not in the plane of the axis of rotation.
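In a temporal-order judgment task of this kind, the delay is typically read off as a shift in the point of subjective simultaneity (PSS), estimated by fitting a psychometric function to the response proportions. A sketch under standard assumptions (cumulative-Gaussian fit; the SOAs and proportions below are invented for illustration, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def toj_curve(soa, pss, sigma):
    """P('visual first') as a cumulative Gaussian of the audio-visual SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical SOAs (ms; positive = visual leads) and proportions of
# 'visual first' responses at each SOA.
soa = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])
p_vis_first = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.80, 0.95])

(pss, sigma), _ = curve_fit(toj_curve, soa, p_vis_first, p0=[0.0, 50.0])
# A positive PSS means the visual onset must lead the tone to be judged
# simultaneous, i.e., the perceived visual onset is delayed.
print(f"PSS = {pss:.1f} ms, sigma = {sigma:.1f} ms")
```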
{"title":"Off-Vertical Body Orientation Delays the Perceived Onset of Visual Motion.","authors":"William Chung, Michael Barnett-Cowan","doi":"10.1163/22134808-bja10095","DOIUrl":"https://doi.org/10.1163/22134808-bja10095","url":null,"abstract":"<p><p>The integration of vestibular, visual and body cues is a fundamental process in the perception of self-motion and is commonly experienced in an upright posture. However, when the body is tilted in an off-vertical orientation these signals are no longer aligned relative to the influence of gravity. In this study, the perceived timing of visual motion was examined in the presence of sensory conflict introduced by manipulating the orientation of the body, generating a mismatch between body and vestibular cues due to gravity and creating an ambiguous vestibular signal of either head tilt or translation. In a series of temporal-order judgment tasks, participants reported the perceived onset of a visual scene simulating rotation around the yaw axis presented in virtual reality with a paired auditory tone while in an upright, supine and side-recumbent body position. The results revealed that the perceived onset of visual motion was further delayed from zero (i.e., true simultaneity between visual onset and a reference auditory tone) by approximately an additional 30 ms when viewed in a supine or side-recumbent orientation compared to an upright posture. There were also no significant differences in the timing estimates of the visual motion between all the non-upright orientations. This indicates that the perceived timing of visual motion is negatively impacted by the presence of conflict in the vestibular and body signals due to the direction of gravity and body orientation, even when the mismatch is not in the direct plane of the axis of rotation.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 4","pages":"347-366"},"PeriodicalIF":1.6,"publicationDate":"2023-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9790065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metacognition and Causal Inference in Audiovisual Speech
Faith Kimmet, Samantha Pedersen, Victoria Cardenas, Camila Rubiera, Grey Johnson, Addison Sans, Matthew Baldwin, Brian Odegaard
Multisensory Research 36(3): 289-311. Pub Date: 2023-02-23. DOI: 10.1163/22134808-bja10094
In multisensory environments, our brains perform causal inference to estimate which sources produce specific sensory signals. Decades of research have revealed the dynamics that underlie this process of causal inference for multisensory (audiovisual) signals, including how temporal, spatial, and semantic relationships between stimuli influence the brain's decision about whether to integrate or segregate. Presently, however, very little is known about the relationship between metacognition and multisensory integration, or about the characteristics of perceptual confidence for audiovisual signals. In this investigation, we ask two questions about the relationship between metacognition and multisensory causal inference: are observers' confidence ratings for judgments about Congruent, McGurk, and Rarely Integrated speech similar or different? And do confidence judgments distinguish between these three scenarios when the perceived syllable is identical? To answer these questions, 92 online participants completed experiments in which, on each trial, they reported which syllable they perceived and rated confidence in their judgment. Results from Experiment 1 showed that confidence ratings were quite similar across Congruent, McGurk, and Rarely Integrated speech. In Experiment 2, when the perceived syllable for congruent and McGurk videos was matched, confidence scores were higher for congruent stimuli than for McGurk stimuli. In Experiment 3, when the perceived syllable was matched between McGurk and Rarely Integrated stimuli, confidence judgments were similar between the two conditions. Together, these results provide evidence of the capacities and limitations of metacognition's ability to distinguish between different sources of multisensory information.
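The matched comparison in Experiment 2 can be illustrated with a toy analysis: restrict to trials on which the perceived syllable was identical across conditions, then compare confidence. The ratings below are hypothetical, and an independent-samples test is used purely for simplicity:

```python
import numpy as np
from scipy import stats

# Hypothetical confidence ratings (1-7), restricted to trials where the
# perceived syllable was identical (e.g., 'da') in both conditions.
congruent = np.array([6, 5, 7, 6, 6, 5, 7, 6])
mcgurk = np.array([5, 4, 6, 5, 4, 5, 5, 4])

t, p = stats.ttest_ind(congruent, mcgurk)
print(f"t = {t:.2f}, p = {p:.4f}")  # higher congruent confidence, as in Exp. 2
```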
{"title":"Metacognition and Causal Inference in Audiovisual Speech.","authors":"Faith Kimmet, Samantha Pedersen, Victoria Cardenas, Camila Rubiera, Grey Johnson, Addison Sans, Matthew Baldwin, Brian Odegaard","doi":"10.1163/22134808-bja10094","DOIUrl":"https://doi.org/10.1163/22134808-bja10094","url":null,"abstract":"<p><p>In multisensory environments, our brains perform causal inference to estimate which sources produce specific sensory signals. Decades of research have revealed the dynamics which underlie this process of causal inference for multisensory (audiovisual) signals, including how temporal, spatial, and semantic relationships between stimuli influence the brain's decision about whether to integrate or segregate. However, presently, very little is known about the relationship between metacognition and multisensory integration, and the characteristics of perceptual confidence for audiovisual signals. In this investigation, we ask two questions about the relationship between metacognition and multisensory causal inference: are observers' confidence ratings for judgments about Congruent, McGurk, and Rarely Integrated speech similar, or different? And do confidence judgments distinguish between these three scenarios when the perceived syllable is identical? To answer these questions, 92 online participants completed experiments where on each trial, participants reported which syllable they perceived, and rated confidence in their judgment. Results from Experiment 1 showed that confidence ratings were quite similar across Congruent speech, McGurk speech, and Rarely Integrated speech. In Experiment 2, when the perceived syllable for congruent and McGurk videos was matched, confidence scores were higher for congruent stimuli compared to McGurk stimuli. In Experiment 3, when the perceived syllable was matched between McGurk and Rarely Integrated stimuli, confidence judgments were similar between the two conditions. Together, these results provide evidence of the capacities and limitations of metacognition's ability to distinguish between different sources of multisensory information.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 3","pages":"289-311"},"PeriodicalIF":1.6,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9790066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the Effects of Exercise, Cognitive Demand, and Rest on Audiovisual Multisensory Processing in Older Adults: A Pilot Study
Aysha Basharat, Michael Barnett-Cowan
Multisensory Research 36(3): 213-262. Pub Date: 2023-01-24. DOI: 10.1163/22134808-bja10085
A single bout of aerobic exercise is related to positive changes in higher-order cognitive function among older adults; however, the impact of aerobic exercise on multisensory processing remains unclear. In this pilot study, we assessed the effects of a single bout of aerobic exercise on commonly utilized measures of audiovisual multisensory processing: response time (RT), simultaneity judgements (SJ), and temporal-order judgements (TOJ). To our knowledge, this is the first effort to investigate the effects on multisensory processing of three well-controlled intervention conditions: resting, completing a cognitively demanding task, and performing aerobic exercise for 20 minutes. Our results indicate that the window of time within which stimuli from different modalities are integrated and perceived as simultaneous (the temporal binding window; TBW) is malleable and changed after each intervention condition for both the SJ and TOJ tasks. Specifically, the TBW consistently narrowed after exercise and consistently widened after rest, suggesting that aerobic exercise may improve the precision of temporal perception via broad neural change rather than by targeting the specific networks that subserve the SJ or TOJ task individually. The results from the RT task further support the malleability of the multisensory processing system: changes in performance, assessed through cumulative probability models, were observed after each intervention condition. An increase in integration (i.e., a greater magnitude of the multisensory effect), however, was found only after a single bout of aerobic exercise. Overall, our results indicate that exercise uniquely affects the central nervous system and may broadly affect multisensory processing.
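One standard way to quantify a TBW from simultaneity-judgment data is to fit a scaled Gaussian to the proportion of 'simultaneous' responses across SOAs and take its width at a criterion level. The sketch below assumes a 75%-of-peak criterion and invented response proportions, not the study's data or exact model:

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_curve(soa, amp, center, width):
    """P('simultaneous') modelled as a scaled Gaussian over SOA (ms)."""
    return amp * np.exp(-((soa - center) ** 2) / (2.0 * width ** 2))

# Hypothetical proportions of 'simultaneous' responses at each SOA.
soa = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_sim = np.array([0.10, 0.35, 0.80, 0.95, 0.70, 0.30, 0.08])

(amp, center, width), _ = curve_fit(sj_curve, soa, p_sim, p0=[1.0, 0.0, 100.0])

# TBW taken as the SOA span over which the fitted curve stays above
# 75% of its peak: full width = 2 * width * sqrt(2 * ln(1/0.75)).
tbw = 2.0 * width * np.sqrt(2.0 * np.log(1.0 / 0.75))
print(f"TBW = {tbw:.0f} ms, centred at {center:.0f} ms")
```

A narrower TBW after an intervention would then show up directly as a smaller fitted `width`.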
{"title":"Assessing the Effects of Exercise, Cognitive Demand, and Rest on Audiovisual Multisensory Processing in Older Adults: A Pilot Study.","authors":"Aysha Basharat, Michael Barnett-Cowan","doi":"10.1163/22134808-bja10085","DOIUrl":"https://doi.org/10.1163/22134808-bja10085","url":null,"abstract":"<p><p>A single bout of aerobic exercise is related to positive changes in higher-order cognitive function among older adults; however, the impact of aerobic exercise on multisensory processing remains unclear. Here we assessed the effects of a single bout of aerobic exercise on commonly utilized tasks that measure audiovisual multisensory processing: response time (RT), simultaneity judgements (SJ), and temporal-order judgements (TOJ), in a pilot study. To our knowledge this is the first effort to investigate the effects of three well-controlled intervention conditions on multisensory processing: resting, completing a cognitively demanding task, and performing aerobic exercise for 20 minutes. Our results indicate that the window of time within which stimuli from different modalities are integrated and perceived as simultaneous (temporal binding window; TBW) is malleable and changes after each intervention condition for both the SJ and TOJ tasks. Specifically, the TBW consistently became narrower post exercise while consistently increasing in width post rest, suggesting that aerobic exercise may improve temporal perception precision via broad neural change rather than targeting the specific networks that subserve either the SJ or TOJ tasks individually. The results from the RT task further support our findings of malleability of the multisensory processing system, as changes in performance, as assessed through cumulative probability models, were observed after each intervention condition. An increase in integration (i.e., greater magnitude of multisensory effect) however, was only found after a single bout of aerobic exercise. Overall, our results indicate that exercise uniquely affects the central nervous system and may broadly affect multisensory processing.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 3","pages":"213-262"},"PeriodicalIF":1.6,"publicationDate":"2023-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9382058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Correlates of Audiovisual Speech Processing in Autistic and Non-Autistic Youth
Kacie Dunham, Alisa Zoltowski, Jacob I Feldman, Samona Davis, Baxter Rogers, Michelle D Failla, Mark T Wallace, Carissa J Cascio, Tiffany G Woynaroski
Multisensory Research 36(3): 263-288. Pub Date: 2023-01-19. DOI: 10.1163/22134808-bja10093. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10121891/pdf/
Autistic youth demonstrate differences in processing multisensory information, particularly in the temporal processing of multisensory speech. Extensive research has identified several key brain regions for multisensory speech processing in non-autistic adults, including the superior temporal sulcus (STS) and insula, but it is unclear to what extent these regions are involved in the temporal processing of multisensory speech in autistic youth. As a first step in exploring the neural substrates of multisensory temporal processing in this clinical population, we employed functional magnetic resonance imaging (fMRI) with a simultaneity-judgment audiovisual speech task. Eighteen autistic youth and a comparison group of 20 non-autistic youth matched on chronological age, biological sex, and gender participated. Results extend prior findings from studies of non-autistic adults, with non-autistic youth demonstrating responses in several of the regions previously implicated in adult temporal processing of multisensory speech. Autistic youth demonstrated responses in fewer of the multisensory regions identified in adult studies; their responses were limited to visual and motor cortices. Group responses in the middle temporal gyrus (MTG) significantly interacted with age: younger autistic individuals showed reduced MTG responses, whereas older individuals showed MTG responses comparable to those of non-autistic controls. Across groups, responses in the precuneus covaried with task accuracy, and anterior temporal and insula responses covaried with nonverbal IQ. These preliminary findings suggest possible differences in the neural mechanisms of audiovisual processing in autistic youth while highlighting the need to consider participant characteristics in future, larger-scale studies exploring the neural basis of multisensory function in autism.
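The reported group-by-age interaction corresponds to a regression with a product term. A minimal sketch with simulated region-of-interest responses (all numbers hypothetical, not the study's data; the simulated group deficit shrinks with age, matching the described pattern):

```python
import numpy as np

rng = np.random.default_rng(1)
n_aut, n_non = 18, 20
group = np.r_[np.ones(n_aut), np.zeros(n_non)]  # 1 = autistic, 0 = non-autistic
age = rng.uniform(10.0, 18.0, n_aut + n_non)

# Simulated MTG responses: group difference that diminishes with age.
mtg = (0.4 + 0.05 * age - (1.2 - 0.08 * age) * group
       + rng.normal(0.0, 0.3, n_aut + n_non))

# Design matrix with intercept, main effects, and the interaction term.
X = np.column_stack([np.ones_like(age), group, age, group * age])
beta, *_ = np.linalg.lstsq(X, mtg, rcond=None)
print(f"group-by-age interaction coefficient: {beta[3]:.3f}")
```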
{"title":"Neural Correlates of Audiovisual Speech Processing in Autistic and Non-Autistic Youth.","authors":"Kacie Dunham, Alisa Zoltowski, Jacob I Feldman, Samona Davis, Baxter Rogers, Michelle D Failla, Mark T Wallace, Carissa J Cascio, Tiffany G Woynaroski","doi":"10.1163/22134808-bja10093","DOIUrl":"https://doi.org/10.1163/22134808-bja10093","url":null,"abstract":"<p><p>Autistic youth demonstrate differences in processing multisensory information, particularly in temporal processing of multisensory speech. Extensive research has identified several key brain regions for multisensory speech processing in non-autistic adults, including the superior temporal sulcus (STS) and insula, but it is unclear to what extent these regions are involved in temporal processing of multisensory speech in autistic youth. As a first step in exploring the neural substrates of multisensory temporal processing in this clinical population, we employed functional magnetic resonance imaging (fMRI) with a simultaneity-judgment audiovisual speech task. Eighteen autistic youth and a comparison group of 20 non-autistic youth matched on chronological age, biological sex, and gender participated. Results extend prior findings from studies of non-autistic adults, with non-autistic youth demonstrating responses in several similar regions as previously implicated in adult temporal processing of multisensory speech. Autistic youth demonstrated responses in fewer of the multisensory regions identified in adult studies; responses were limited to visual and motor cortices. Group responses in the middle temporal gyrus significantly interacted with age; younger autistic individuals showed reduced MTG responses whereas older individuals showed comparable MTG responses relative to non-autistic controls. Across groups, responses in the precuneus covaried with task accuracy, and anterior temporal and insula responses covaried with nonverbal IQ. These preliminary findings suggest possible differences in neural mechanisms of audiovisual processing in autistic youth while highlighting the need to consider participant characteristics in future, larger-scale studies exploring the neural basis of multisensory function in autism.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 3","pages":"263-288"},"PeriodicalIF":1.6,"publicationDate":"2023-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10121891/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9382061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio-Visual Interference During Motion Discrimination in Starlings
Gesa Feenders, Georg M Klump
Multisensory Research 36(2): 181-212. Pub Date: 2023-01-17. DOI: 10.1163/22134808-bja10092
Motion discrimination is essential for animals to avoid collisions, escape from predators, catch prey or communicate. Although most terrestrial vertebrates can benefit from combining concurrent stimuli from sound and vision to obtain a more salient percept of a moving object, there is little research on the mechanisms involved in such cross-modal motion discrimination. We used European starlings as a model species whose visual and auditory systems are well studied. In a behavioural motion discrimination task with visual and acoustic stimuli, we investigated the effects of cross-modal interference and attentional processes. Our results showed an impairment of motion discrimination when the visual and acoustic stimuli moved in opposite directions, compared to when they moved in congruent directions. By presenting an acoustic stimulus of very short duration, which thus lacked directional motion information, we demonstrated an additional alerting effect of the acoustic stimulus. Finally, we show that a temporally leading acoustic stimulus did not improve response behaviour compared to synchronous presentation of the stimuli, as would have been expected if alerting effects were dominant. This further supports the importance of congruency and synchronicity in the current test paradigm, with attentional processes elicited by the acoustic stimulus playing only a minor role. Together, our data clearly show cross-modal interference effects in an audio-visual motion discrimination paradigm when real-life stimuli are carefully selected under parameter conditions that meet the known criteria for cross-modal binding.
{"title":"Audio-Visual Interference During Motion Discrimination in Starlings.","authors":"Gesa Feenders, Georg M Klump","doi":"10.1163/22134808-bja10092","DOIUrl":"https://doi.org/10.1163/22134808-bja10092","url":null,"abstract":"<p><p>Motion discrimination is essential for animals to avoid collisions, to escape from predators, to catch prey or to communicate. Although most terrestrial vertebrates can benefit by combining concurrent stimuli from sound and vision to obtain a most salient percept of the moving object, there is little research on the mechanisms involved in such cross-modal motion discrimination. We used European starlings as a model with a well-studied visual and auditory system. In a behavioural motion discrimination task with visual and acoustic stimuli, we investigated the effects of cross-modal interference and attentional processes. Our results showed an impairment of motion discrimination when the visual and acoustic stimuli moved in opposite directions as compared to congruent motion direction. By presenting an acoustic stimulus of very short duration, thus lacking directional motion information, an additional alerting effect of the acoustic stimulus became evident. Finally, we show that a temporally leading acoustic stimulus did not improve the response behaviour compared to the synchronous presentation of the stimuli as would have been expected in case of major alerting effects. This further supports the importance of congruency and synchronicity in the current test paradigm with a minor role of attentional processes elicited by the acoustic stimulus. Together, our data clearly show cross-modal interference effects in an audio-visual motion discrimination paradigm when carefully selecting real-life stimuli under parameter conditions that meet the known criteria for cross-modal binding.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 2","pages":"181-212"},"PeriodicalIF":1.6,"publicationDate":"2023-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10834687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can We Train Multisensory Integration in Adults? A Systematic Review
Jessica O'Brien, Amy Mason, Jason Chan, Annalisa Setti
Multisensory Research 36(2): 111-180. Pub Date: 2023-01-13. DOI: 10.1163/22134808-bja10090
The ability to efficiently combine information from different senses is an important perceptual process that underpins much of our daily activity. This process, known as multisensory integration, varies from individual to individual and is affected by ageing, with impaired processing associated with age-related conditions including balance difficulties, mild cognitive impairment and cognitive decline. Impaired multisensory perception has also been associated with a range of neurodevelopmental conditions for which novel intervention approaches are actively sought, for example dyslexia and autism. However, it remains unclear to what extent, and how, multisensory perception can be modified by training. This systematic review evaluates the evidence that multisensory perception can be trained in neurotypical adults. In all, 1521 studies were identified through a systematic search of the databases PubMed, Scopus, PsycINFO and Web of Science. Following screening against inclusion and exclusion criteria, 27 studies were included. Study quality was assessed using the Methodological Index for Non-Randomised Studies (MINORS) tool and the Cochrane Risk of Bias tool 2.0 for randomised controlled trials. We found considerable evidence that in-task feedback training using psychophysics protocols led to improved task performance. The generalisability of this training to other tasks of multisensory integration was inconclusive, with few studies and mixed findings reported. Promising findings from exercise-based training indicate that physical activity protocols warrant further investigation as potential avenues for improving multisensory integration. Future research should include trialling training protocols with clinical populations and other groups who would benefit from targeted training to improve inefficient multisensory integration.
'Tasting Imagination': What Role Chemosensory Mental Imagery in Multisensory Flavour Perception?
Charles Spence
Multisensory Research 36(1): 93-109. Pub Date: 2022-12-30. DOI: 10.1163/22134808-bja10091
A number of perplexing phenomena in the area of olfactory/flavour perception may fruitfully be explained by the suggestion that chemosensory mental imagery can be triggered automatically by perceptual inputs. These include the disconnect between the seemingly limited ability of participants in chemosensory psychophysics studies to distinguish more than two or three odorants in mixtures and the rich, detailed flavour descriptions sometimes reported by wine experts; the lack of awareness of chemosensory loss in many elderly individuals; and the insensitivity of the odour-induced taste enhancement (OITE) effect to the mode of presentation of olfactory stimuli (i.e., orthonasal or retronasal). The suggestion made here is that the theory of predictive coding, developed first in the visual modality, be extended to chemosensation. This may provide a fruitful way of thinking about the interaction between mental imagery and perception in the experience of aromas and flavours. Accepting such a suggestion also raises some important questions concerning the ecological validity/meaning of much of the chemosensory psychophysics literature published to date.
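For orientation, the core computation of predictive coding, in its standard textbook formulation rather than anything specific to this paper, compares the sensory input $s$ with a top-down prediction $g(\mu)$ generated from the current internal estimate $\mu$, then updates the estimate in proportion to the precision-weighted prediction error:

```latex
% Prediction error: sensory input minus the top-down prediction
\epsilon = s - g(\mu)
% Estimate update, weighted by the precision \Pi of the sensory channel
\Delta\mu \propto \Pi \, g'(\mu) \, \epsilon
```

On this reading, an automatically triggered chemosensory image would supply the prediction $g(\mu)$, shaping the flavour percept before, or even in the absence of, the corresponding orthonasal or retronasal input.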
{"title":"'Tasting Imagination': What Role Chemosensory Mental Imagery in Multisensory Flavour Perception?","authors":"Charles Spence","doi":"10.1163/22134808-bja10091","DOIUrl":"https://doi.org/10.1163/22134808-bja10091","url":null,"abstract":"<p><p>A number of perplexing phenomena in the area of olfactory/flavour perception may fruitfully be explained by the suggestion that chemosensory mental imagery can be triggered automatically by perceptual inputs. In particular, the disconnect between the seemingly limited ability of participants in chemosensory psychophysics studies to distinguish more than two or three odorants in mixtures and the rich and detailed flavour descriptions that are sometimes reported by wine experts; the absence of awareness of chemosensory loss in many elderly individuals; and the insensitivity of the odour-induced taste enhancement (OITE) effect to the mode of presentation of olfactory stimuli (i.e., orthonasal or retronasal). The suggestion made here is that the theory of predictive coding, developed first in the visual modality, be extended to chemosensation. This may provide a fruitful way of thinking about the interaction between mental imagery and perception in the experience of aromas and flavours. Accepting such a suggestion also raises some important questions concerning the ecological validity/meaning of much of the chemosensory psychophysics literature that has been published to date.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 1","pages":"93-109"},"PeriodicalIF":1.6,"publicationDate":"2022-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10708023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}