Pub Date: 2024-04-30 | DOI: 10.1163/22134808-bja10121
Ryan Horsfall, Neil Harrison, Georg Meyer, Sophie Wuerger
A vital heuristic used when judging whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies, driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed a temporal-order judgement (TOJ), a simultaneity judgement (SJ) and a simple reaction-time (RT) task, responding to audio-visual trials that were preceded by audio-visual trials with either a bright or a dim visual stimulus. The point of subjective simultaneity shifted with the visual intensity of the preceding stimulus in the TOJ task, but not in the SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.
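The point of subjective simultaneity (PSS) in a TOJ task is conventionally estimated by fitting a cumulative Gaussian to the proportion of "visual first" responses across stimulus onset asynchronies. The sketch below illustrates that standard analysis with hypothetical data; it is not the authors' analysis code, and the SOA values and response proportions are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    """Cumulative Gaussian psychometric function.

    soa: stimulus onset asynchrony in ms (positive = visual leads).
    pss: point of subjective simultaneity (50% crossing point).
    sigma: spread, inversely related to temporal sensitivity.
    """
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical data: SOAs (ms) and proportion of "visual first" responses.
soas = np.array([-200, -100, -50, 0, 50, 100, 200])
p_visual_first = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.97])

# Fit the psychometric function; PSS is the estimated 50% point.
(pss, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first, p0=[0.0, 80.0])
print(f"PSS = {pss:.1f} ms, sigma = {sigma:.1f} ms")
```

Rapid recalibration of the kind reported here would show up as a shift in the fitted PSS depending on the preceding trial's visual intensity.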
Title: Perceived Audio-Visual Simultaneity Is Recalibrated by the Visual Intensity of the Preceding Trial
Pub Date: 2024-04-25 | DOI: 10.1163/22134808-bja10122
Paula Soballa, Christian Frings, Simon Merz
The influence of landmarks, that is, nearby non-target stimuli, on spatial perception has been shown in multiple ways. These include altered target localization variability near landmarks and systematic spatial distortions of target localizations. Previous studies have mostly been conducted in the visual modality using temporary, artificial landmarks, or in the tactile modality with persistent landmarks on the body. It is therefore unclear whether both landmark types produce the same spatial distortions, because the two types have never been investigated within the same modality. Addressing this, we used a novel tactile setup to present temporary, artificial landmarks on the forearm and systematically manipulated their location to be either close to a persistent landmark (wrist or elbow) or between both persistent landmarks at the middle of the forearm. Initial data (Exp. 1 and Exp. 2) suggested systematic differences between temporary landmarks based on their distance from the persistent landmark, possibly indicating different distortions for temporary and persistent landmarks. Subsequent control studies (Exp. 3 and Exp. 4) showed this effect was driven by the relative landmark location within the target distribution. Specifically, landmarks in the middle of the target distribution led to systematic distortions of target localizations toward the landmark, whereas landmarks at the side led to distortions away from the landmark for nearby targets, and toward the landmark at wider distances. Our results indicate that experimental results with temporary landmarks can be generalized to more natural settings with persistent landmarks, and further reveal that the relative landmark location produces different patterns of spatial distortion.
Title: Tactile Landmarks: the Relative Landmark Location Alters Spatial Distortions
Pub Date: 2024-04-24 | DOI: 10.1163/22134808-bja10120
Shinji Nakamura
The current investigation examined whether visual motion without continuous visual displacement could effectively induce self-motion perception (vection). Four-stroke apparent motions (4SAM) were employed in the experiments as visual inducers. The 4SAM pattern contained luminance-defined motion energy equivalent to the real-motion pattern, and the participants perceived unidirectional motion according to the motion energy but without displacements (the visual elements flickered on the spot). The experiments revealed that the 4SAM stimulus could effectively induce vection in the horizontal, expanding, or rotational directions, although its strength was significantly weaker than that induced by the real-motion stimulus. This result suggests that visual displacement is not essential, and that the luminance-defined motion energy and/or the resulting perceived motion of the visual inducer is sufficient for inducing visual self-motion perception. Conversely, when the 4SAM and real-motion patterns were presented simultaneously, self-motion perception was mainly determined by the real motion, suggesting that the real-motion stimulus is a predominant determinant of vection. These outcomes bear on the perceptual and neurological mechanisms underlying self-motion perception.
Title: Four-Stroke Apparent Motion Can Effectively Induce Visual Self-Motion Perception: an Examination Using Expanding, Rotating, and Translating Motion
Pub Date: 2024-04-03 | DOI: 10.1163/22134808-bja10119
Isar Syed, M. Baart, Jean Vroomen
Trust is critical to human social interaction, and research has identified many cues that inform judgements of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself has an effect on trustworthiness; a finding that has not yet been brought into multisensory research. The current research aimed to investigate previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness, extending into multimodality. Further, the mean pitch of the voice and the fWHR of the face appeared to be useful indicators in a multimodal setting. These effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.
Title: The Multimodal Trust Effects of Face, Voice, and Sentence Content
Pub Date: 2024-02-13 | DOI: 10.1163/22134808-bja10118
M. Hamzeloo, Daria Kvasova, Salvador Soto-Faraco
Prior studies investigating the effects of routine action video game play have demonstrated improvements in a variety of cognitive processes, including improvements in attentional tasks. However, there is little evidence indicating that the cognitive benefits of playing action video games generalize from simplified unisensory stimuli to multisensory scenes — a fundamental characteristic of natural, everyday life environments. The present study addressed whether video game experience has an impact on crossmodal congruency effects when searching through such multisensory scenes. We compared the performance of action video game players (AVGPs) and non-video game players (NVGPs) on a visual search task for objects embedded in video clips of realistic scenes. We conducted two identical online experiments with gender-balanced samples, for a total of . Overall, the data replicated previous findings reporting search benefits when visual targets were accompanied by semantically congruent auditory events, compared to neutral or incongruent ones. However, according to the results, AVGPs did not consistently outperform NVGPs in the overall search task, nor did they use multisensory cues more efficiently than NVGPs. Exploratory analyses with self-reported gender as a variable revealed a potential difference in response strategy between experienced male and female AVGPs when dealing with crossmodal cues. These findings suggest that generalization of the advantage of AVG experience to realistic, crossmodal situations should be made with caution, taking gender-related issues into account.
Title: Addressing the Association Between Action Video Game Playing Experience and Visual Search in Naturalistic Multisensory Scenes
Pub Date: 2023-12-20 | DOI: 10.1163/22134808-bja10117
Silvia Zanchi, Luigi F Cuturi, Giulio Sandini, Monica Gori, Elisa R Ferrè
While navigating through the surroundings, we constantly rely on inertial vestibular signals for self-motion along with visual and acoustic spatial references from the environment. However, the interaction between inertial cues and environmental spatial references is not yet fully understood. Here we investigated whether vestibular self-motion sensitivity is influenced by sensory spatial references. Healthy participants were administered a Vestibular Self-Motion Detection Task in which they were asked to detect vestibular self-motion sensations induced by low-intensity Galvanic Vestibular Stimulation. Participants performed this detection task with or without an external visual or acoustic spatial reference placed directly in front of them. We computed d-prime (d′) as a measure of participants' vestibular sensitivity and the criterion as an index of their response bias. Results showed that the visual spatial reference increased sensitivity to detect vestibular self-motion. Conversely, the acoustic spatial reference did not influence self-motion sensitivity. Neither the visual nor the auditory spatial reference changed response bias. Environmental visual spatial references provide relevant information that enhances our ability to perceive inertial self-motion cues, suggesting a specific interaction between visual and vestibular systems in self-motion perception.
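The sensitivity and bias measures named here come from standard signal detection theory: d′ is the difference of the z-transformed hit and false-alarm rates, and the criterion c is minus half their sum. The sketch below shows that computation with hypothetical response counts (it is not the authors' code; the log-linear correction is one common convention for avoiding infinite z-scores).

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d', criterion) from raw response counts.

    Uses the log-linear correction (add 0.5 to each count) so that
    hit/false-alarm rates of exactly 0 or 1 never occur.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hr) - norm.ppf(far)          # sensitivity
    criterion = -0.5 * (norm.ppf(hr) + norm.ppf(far))  # response bias
    return d_prime, criterion

# Hypothetical counts for one participant.
d_prime, criterion = sdt_measures(hits=40, misses=10,
                                  false_alarms=12, correct_rejections=38)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

In the study's terms, a higher d′ with a visual reference present would indicate greater vestibular self-motion sensitivity, while an unchanged c indicates no shift in response bias.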
Title: Spatial Sensory References for Vestibular Self-Motion Perception
Pub Date: 2023-12-20 | DOI: 10.1163/22134808-bja10116
Joshua R Tatz, Zehra F Peynircioğlu
Multisensory context often facilitates perception and memory. In fact, encoding items within a multisensory context can improve memory even on strictly unisensory tests (i.e., when the multisensory context is absent). Prior studies that have consistently found these multisensory facilitation effects have largely employed multisensory contexts in which the stimuli were meaningfully related to the items targeted for remembering (e.g., pairing canonical sounds and images). Other studies have used unrelated stimuli as multisensory context. A third possible type of multisensory context is one that is environmentally related simply because the stimuli are often encountered together in the real world. We predicted that encountering such a multisensory context would also enhance memory through cross-modal associations, or representations relating to one's prior multisensory experience with that sort of stimuli in general. In two memory experiments, we used faces and voices of unfamiliar people as everyday stimuli whose perceptual features individuals have substantial experience integrating. We assigned participants to face- or voice-recognition groups and ensured that, during the study phase, half of the face or voice targets were also encountered with information in the other modality. Voices initially encoded along with faces were consistently remembered better, providing evidence that cross-modal associations could explain the observed multisensory facilitation.
Title: Cross-Modal Contributions to Episodic Memory for Voices
Pub Date: 2023-11-28 | DOI: 10.1163/22134808-bja10115
Lawrence R Stark, Kim Shiraishi, Tyler Sommerfeld
This study aimed to determine the extent to which haptic stimuli can influence ocular accommodation, either alone or in combination with vision. Accommodation was measured objectively in 15 young adults as they read stationary targets containing Braille letters. These cards were presented at four distances in the range 20-50 cm. In the Touch condition, the participant read by touch with their dominant hand in a dark room. Afterward, they estimated card distance with their non-dominant hand. In the Vision condition, they read by sight binocularly without touch in a lighted room. In the Touch with Vision condition, they read by sight binocularly and with touch in a lighted room. Sensory modality had a significant overall effect on the slope of the accommodative stimulus-response function. The slope in the Touch condition was not significantly different from zero, even though depth perception from touch was accurate. Nevertheless, one atypical participant had a moderate accommodative slope in the Touch condition. The accommodative slope in the Touch condition was significantly poorer than in the Vision condition. The accommodative slopes in the Vision condition and the Touch with Vision condition were not significantly different. For most individuals, haptic stimuli for stationary objects do not influence the accommodation response, alone or in combination with vision. These haptic stimuli provide accurate distance perception, a result that questions the general validity of Heath's model of proximal accommodation as driven by perceived distance. Instead, proximally induced accommodation relies on visual rather than touch stimuli.
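The accommodative stimulus-response slope reported here is the slope of a linear fit of accommodative response against accommodative demand, both in diopters (D), where demand is the reciprocal of viewing distance in metres; the four card distances of 20-50 cm correspond to demands of roughly 2.0-5.0 D. The sketch below illustrates the computation with hypothetical response values (not the study's data); a slope near zero would match the Touch condition, a slope below but near one the Vision condition.

```python
import numpy as np

# Card distances from the study (m); demand in diopters is 1/distance.
distances_m = np.array([0.50, 0.33, 0.25, 0.20])
demand = 1.0 / distances_m  # ~2.0, 3.0, 4.0, 5.0 D

# Hypothetical mean accommodative responses (D) showing the typical
# visually driven pattern: slope positive but less than 1 (lag of
# accommodation at near).
response = np.array([1.8, 2.6, 3.3, 4.0])

# Least-squares linear fit; the first coefficient is the
# stimulus-response slope.
slope, intercept = np.polyfit(demand, response, 1)
print(f"stimulus-response slope = {slope:.2f}")
```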
{"title":"Stationary Haptic Stimuli Do not Produce Ocular Accommodation in Most Individuals.","authors":"Lawrence R Stark, Kim Shiraishi, Tyler Sommerfeld","doi":"10.1163/22134808-bja10115","DOIUrl":"10.1163/22134808-bja10115","url":null,"abstract":"<p><p>This study aimed to determine the extent to which haptic stimuli can influence ocular accommodation, either alone or in combination with vision. Accommodation was measured objectively in 15 young adults as they read stationary targets containing Braille letters. These cards were presented at four distances in the range 20-50 cm. In the Touch condition, the participant read by touch with their dominant hand in a dark room. Afterward, they estimated card distance with their non-dominant hand. In the Vision condition, they read by sight binocularly without touch in a lighted room. In the Touch with Vision condition, they read by sight binocularly and with touch in a lighted room. Sensory modality had a significant overall effect on the slope of the accommodative stimulus-response function. The slope in the Touch condition was not significantly different from zero, even though depth perception from touch was accurate. Nevertheless, one atypical participant had a moderate accommodative slope in the Touch condition. The accommodative slope in the Touch condition was significantly poorer than in the Vision condition. The accommodative slopes in the Vision condition and Touch with Vision condition were not significantly different. For most individuals, haptic stimuli for stationary objects do not influence the accommodation response, alone or in combination with vision. These haptic stimuli provide accurate distance perception, thus questioning the general validity of Heath's model of proximal accommodation as driven by perceived distance. Instead, proximally induced accommodation relies on visual rather than touch stimuli.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138453050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-11-10DOI: 10.1163/22134808-bja10114
Kosuke Motoki, Lawrence E Marks, Carlos Velasco
The past two decades have seen an explosion of research on cross-modal correspondences. Broadly speaking, this term has been used to encompass associations between and among features, dimensions, or attributes across the senses. There has been an increasing interest in this topic amongst researchers from multiple fields (psychology, neuroscience, music, art, environmental design, etc.) and, importantly, an increasing breadth of the topic's scope. Here, this narrative review aims to reflect on what cross-modal correspondences are, where they come from, and what underlies them. We suggest that cross-modal correspondences are usefully conceived as relative associations between different actual or imagined sensory stimuli, many of these correspondences being shared by most people. A taxonomy of correspondences with four major kinds of associations (physiological, semantic, statistical, and affective) characterizes cross-modal correspondences. Sensory dimensions (quantity/quality) and sensory features (lower perceptual/higher cognitive) correspond in cross-modal correspondences. Cross-modal correspondences may be understood (or measured) from two complementary perspectives: the phenomenal view (perceptual experiences of subjective matching) and the behavioural response view (observable patterns of behavioural response to multiple sensory stimuli). Importantly, we reflect on remaining questions and standing issues that need to be addressed in order to develop an explanatory framework for cross-modal correspondences. Future research needs (a) to understand better when (and why) phenomenal and behavioural measures are coincidental and when they are not, and, ideally, (b) to determine whether different kinds of cross-modal correspondence (quantity/quality, lower perceptual/higher cognitive) rely on the same or different mechanisms.
{"title":"Reflections on Cross-Modal Correspondences: Current Understanding and Issues for Future Research.","authors":"Kosuke Motoki, Lawrence E Marks, Carlos Velasco","doi":"10.1163/22134808-bja10114","DOIUrl":"10.1163/22134808-bja10114","url":null,"abstract":"<p><p>The past two decades have seen an explosion of research on cross-modal correspondences. Broadly speaking, this term has been used to encompass associations between and among features, dimensions, or attributes across the senses. There has been an increasing interest in this topic amongst researchers from multiple fields (psychology, neuroscience, music, art, environmental design, etc.) and, importantly, an increasing breadth of the topic's scope. Here, this narrative review aims to reflect on what cross-modal correspondences are, where they come from, and what underlies them. We suggest that cross-modal correspondences are usefully conceived as relative associations between different actual or imagined sensory stimuli, many of these correspondences being shared by most people. A taxonomy of correspondences with four major kinds of associations (physiological, semantic, statistical, and affective) characterizes cross-modal correspondences. Sensory dimensions (quantity/quality) and sensory features (lower perceptual/higher cognitive) correspond in cross-modal correspondences. Cross-modal correspondences may be understood (or measured) from two complementary perspectives: the phenomenal view (perceptual experiences of subjective matching) and the behavioural response view (observable patterns of behavioural response to multiple sensory stimuli). Importantly, we reflect on remaining questions and standing issues that need to be addressed in order to develop an explanatory framework for cross-modal correspondences. Future research needs (a) to understand better when (and why) phenomenal and behavioural measures are coincidental and when they are not, and, ideally, (b) to determine whether different kinds of cross-modal correspondence (quantity/quality, lower perceptual/higher cognitive) rely on the same or different mechanisms.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"107592772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}