
Latest publications in Multisensory Research

Influence of Tactile Flow on Visual Heading Perception.
IF 1.6 CAS Tier 4 Psychology Q3 BIOPHYSICS Pub Date: 2021-09-27 DOI: 10.1167/jov.21.9.1915
Lisa Rosenblum, Elisa Grewe, J. Churan, F. Bremmer
The integration of information from different sensory modalities is crucial for successful navigation through an environment. Among other signals, self-motion induces distinct optic flow patterns on the retina, vestibular signals, and tactile flow, which contribute to determining traveled distance (path integration) and movement direction (heading). While the processing of combined visual-vestibular information is subject to a growing body of literature, the processing of visuo-tactile signals in the context of self-motion has received comparatively little attention. Here, we investigated whether visual heading perception is influenced by behaviorally irrelevant tactile flow. In the visual modality, we simulated an observer's self-motion across a horizontal ground plane (optic flow). Tactile self-motion stimuli were delivered by air flow from head-mounted nozzles (tactile flow). In blocks of trials, we presented only visual or tactile stimuli and subjects had to report their perceived heading. In another block of trials, tactile and visual stimuli were presented simultaneously, with the tactile flow within ±40° of the visual heading (bimodal condition). Here, importantly, participants had to report their perceived visual heading. Perceived self-motion direction in all conditions revealed a centripetal bias, i.e., heading directions were perceived as compressed toward straight ahead. In the bimodal condition, we found a small but systematic influence of task-irrelevant tactile flow on visually perceived headings as a function of their directional offset. We conclude that tactile flow is more tightly linked to self-motion perception than previously thought.
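The centripetal bias and the offset-dependent tactile pull described in the abstract can be illustrated with a toy linear model. This is purely a sketch, not the authors' fitted model; the `compression` and `tactile_weight` parameters and their values are assumptions for illustration.

```python
# Illustrative sketch (not the authors' model) of how a centripetal bias
# plus a small pull from task-irrelevant tactile flow could shape
# visually perceived heading. Parameter values are hypothetical.
def perceived_heading(visual_deg, tactile_deg, compression=0.8, tactile_weight=0.1):
    """Predict perceived heading in degrees (0 = straight ahead).

    compression < 1 compresses headings toward straight ahead
    (the centripetal bias); tactile_weight scales the influence of
    the tactile flow's directional offset from the visual heading.
    """
    biased_visual = compression * visual_deg        # centripetal compression
    offset = tactile_deg - visual_deg               # offset within +/-40 deg
    return biased_visual + tactile_weight * offset  # small systematic shift
```

Under these assumed parameters, a 20° visual heading with congruent tactile flow yields 16° (pure compression), while shifting the tactile flow to +60° nudges the estimate back toward 20°, mirroring a small but systematic offset-dependent influence.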
Citations: 1
Developmental Changes in Gaze Behavior and the Effects of Auditory Emotion Word Priming in Emotional Face Categorization.
IF 1.6 CAS Tier 4 Psychology Q3 BIOPHYSICS Pub Date: 2021-09-16 DOI: 10.1163/22134808-bja10063
Michael Vesker, Daniela Bahn, Christina Kauschke, Gudrun Schwarzer

Social interactions often require the simultaneous processing of emotions from facial expressions and speech. However, the development of the gaze behavior used for emotion recognition, and the effects of speech perception on the visual encoding of facial expressions, are less well understood. We therefore conducted a word-primed face categorization experiment, where participants from multiple age groups (six-year-olds, 12-year-olds, and adults) categorized target facial expressions as positive or negative after priming with valence-congruent or -incongruent auditory emotion words, or no words at all. We recorded our participants' gaze behavior during this task using an eye-tracker, and analyzed the data with respect to the fixation time toward the eyes and mouth regions of faces, as well as the time until participants made the first fixation within those regions (time to first fixation, TTFF). We found that the six-year-olds showed significantly higher accuracy in categorizing congruently primed faces compared to the other conditions. The six-year-olds also showed faster response times, shorter total fixation durations, and faster TTFF measures in all primed trials, regardless of congruency, as compared to unprimed trials. We also found that while adults looked first, and longer, at the eyes as compared to the mouth regions of target faces, children did not exhibit this gaze behavior. Our results thus indicate that young children are more sensitive than adults or older children to auditory emotion word primes during the perception of emotional faces, and that the distribution of gaze across the regions of the face changes significantly from childhood to adulthood.

Citations: 0
Contents Index to Volume 34
IF 1.6 CAS Tier 4 Psychology Q3 BIOPHYSICS Pub Date: 2021-09-14 DOI: 10.1163/22134808-340800ci
Citations: 0
Multisensory Effects on Illusory Self-Motion (Vection): the Role of Visual, Auditory, and Tactile Cues.
IF 1.6 CAS Tier 4 Psychology Q3 BIOPHYSICS Pub Date: 2021-08-11 DOI: 10.1163/22134808-bja10058
Brandy Murovec, Julia Spaniol, Jennifer L Campos, Behrang Keshavarz

A critical component of many immersive experiences in virtual reality (VR) is vection, defined as the illusion of self-motion. Traditionally, vection has been described as a visual phenomenon, but more recent research suggests that vection can be influenced by a variety of senses. The goal of the present study was to investigate the role of multisensory cues on vection by manipulating the availability of visual, auditory, and tactile stimuli in a VR setting. To achieve this, 24 adults (mean age = 25.04) were presented with a rotating stimulus aimed to induce circular vection. All participants completed trials that included a single sensory cue, a combination of two cues, or all three cues presented together. The size of the field of view (FOV) was manipulated across four levels (no-visuals, small, medium, full). Participants rated vection intensity and duration verbally after each trial. Results showed that all three sensory cues induced vection when presented in isolation, with visual cues eliciting the highest intensity and longest duration. The presence of auditory and tactile cues further increased vection intensity and duration compared to conditions where these cues were not presented. These findings support the idea that vection can be induced via multiple types of sensory inputs and can be intensified when multiple sensory inputs are combined.

Citations: 0
Neural Basis of the Sound-Symbolic Crossmodal Correspondence Between Auditory Pseudowords and Visual Shapes.
IF 1.8 CAS Tier 4 Psychology Q3 BIOPHYSICS Pub Date: 2021-08-11 DOI: 10.1163/22134808-bja10060
Kelly McCormick, Simon Lacey, Randall Stilla, Lynne C Nygaard, K Sathian

Sound symbolism refers to the association between the sounds of words and their meanings, often studied using the crossmodal correspondence between auditory pseudowords, e.g., 'takete' or 'maluma', and pointed or rounded visual shapes, respectively. In a functional magnetic resonance imaging study, participants were presented with pseudoword-shape pairs that were sound-symbolically congruent or incongruent. We found no significant congruency effects in the blood oxygenation level-dependent (BOLD) signal when participants were attending to visual shapes. During attention to auditory pseudowords, however, we observed greater BOLD activity for incongruent compared to congruent audiovisual pairs bilaterally in the intraparietal sulcus and supramarginal gyrus, and in the left middle frontal gyrus. We compared this activity to independent functional contrasts designed to test competing explanations of sound symbolism, but found no evidence for mediation via language, and only limited evidence for accounts based on multisensory integration and a general magnitude system. Instead, we suggest that the observed incongruency effects are likely to reflect phonological processing and/or multisensory attention. These findings advance our understanding of sound-to-meaning mapping in the brain.

Citations: 0
Exploring Reference Frame Integration Using Response Demands in a Tactile Temporal-Order Judgement Task.
IF 1.6 CAS Tier 4 Psychology Q3 BIOPHYSICS Pub Date: 2021-07-23 DOI: 10.1163/22134808-bja10057
Kaian Unwalla, Daniel Goldreich, David I Shore

Exploring the world through touch requires the integration of internal (e.g., anatomical) and external (e.g., spatial) reference frames - you only know what you touch when you know where your hands are in space. The deficit observed in tactile temporal-order judgements when the hands are crossed over the midline provides one tool to explore this integration. We used foot pedals and required participants to focus on either the hand that was stimulated first (an anatomical bias condition) or the location of the hand that was stimulated first (a spatiotopic bias condition). Spatiotopic-based responses produce a larger crossed-hands deficit, presumably by focusing observers on the external reference frame. In contrast, anatomical-based responses focus the observer on the internal reference frame and produce a smaller deficit. This manipulation thus provides evidence that observers can change the relative weight given to each reference frame. We quantify this effect using a probabilistic model that produces a population estimate of the relative weight given to each reference frame. We show that a spatiotopic bias can result in either a larger external weight (Experiment 1) or a smaller internal weight (Experiment 2) and provide an explanation of when each one would occur.
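The reference-frame weighting idea above can be sketched in a few lines; this is a hypothetical illustration, not the authors' probabilistic model, and the normalisation and example weights are assumptions. The intuition: only crossed hands put the internal and external frames in conflict, so the crossed-hands deficit should scale with the relative external weight.

```python
# Hypothetical sketch of reference-frame weighting in the crossed-hands
# deficit; the normalisation and the example weights are assumptions.
def external_conflict(internal_w, external_w, hands_crossed):
    """Return the normalised external weight as a conflict index.

    Uncrossed hands: internal and external frames agree, so no conflict.
    Crossed hands: conflict scales with the relative external weight,
    which task instructions (anatomical vs. spatiotopic) can shift.
    """
    if not hands_crossed:
        return 0.0
    return external_w / (internal_w + external_w)  # relative external weight
```

With assumed spatiotopic-bias weights (internal_w = 0.4, external_w = 0.6) the index is 0.6, versus 0.3 under an assumed anatomical bias (0.7, 0.3), mirroring the larger crossed-hands deficit reported for spatiotopic responses.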

Citations: 0
Metacognition and Crossmodal Correspondences Between Auditory Attributes and Saltiness in a Large Sample Study.
IF 1.6 CAS Tier 4 Psychology Q3 BIOPHYSICS Pub Date: 2021-07-23 DOI: 10.1163/22134808-bja10055
Qian Janice Wang, Steve Keller, Charles Spence

Mounting evidence demonstrates that people make surprisingly consistent associations between auditory attributes and a number of the commonly-agreed basic tastes. However, the sonic representation of (association with) saltiness has remained rather elusive. In the present study, a crowd-sourced online study (n = 1819 participants) was conducted to determine the acoustical/musical attributes that best match saltiness, as well as participants' confidence levels in their choices. Based on previous literature on crossmodal correspondences involving saltiness, thirteen attributes were selected to cover a variety of temporal, tactile, and emotional associations. The results revealed that saltiness was associated most strongly with a long decay time, high auditory roughness, and a regular rhythm. In terms of emotional associations, saltiness was matched with negative valence, high arousal, and minor mode. Moreover, significantly higher average confidence ratings were observed for those saltiness-matching choices for which there was majority agreement, suggesting that individuals were more confident about their own judgments when it matched with the group response, therefore providing support for the so-called 'consensuality principle'. Taken together, these results help to uncover the complex interplay of mechanisms behind seemingly surprising crossmodal correspondences between sound attributes and taste.

Citations: 0
Perceptions of Audio-Visual Impact Events in Younger and Older Adults.
IF 1.6 CAS Tier 4 Psychology Q3 BIOPHYSICS Pub Date: 2021-07-21 DOI: 10.1163/22134808-bja10056
Katherine Bak, George S W Chan, Michael Schutz, Jennifer L Campos

Previous studies have examined whether audio-visual integration changes in older age, with some studies reporting age-related differences and others reporting no differences. Most studies have either used very basic and ambiguous stimuli (e.g., flash/beep) or highly contextualized, causally related stimuli (e.g., speech). However, few have used tasks that fall somewhere between the extremes of this continuum, such as those that include contextualized, causally related stimuli that are not speech-based; for example, audio-visual impact events. The present study used a paradigm requiring duration estimates and temporal order judgements (TOJ) of audio-visual impact events. Specifically, the Schutz-Lipscomb illusion, in which the perceived duration of a percussive tone is influenced by the length of the visual striking gesture, was examined in younger and older adults. Twenty-one younger and 21 older adult participants were presented with a visual point-light representation of a percussive impact event (i.e., a marimbist striking their instrument with a long or short gesture) combined with a percussive auditory tone. Participants completed a tone duration judgement task and a TOJ task. Five audio-visual temporal offsets (-400 to +400 ms) and five spatial offsets (from -90 to +90°) were randomly introduced. Results demonstrated that the strength of the illusion did not differ between older and younger adults and was not influenced by spatial or temporal offsets. Older adults showed an 'auditory first bias' when making TOJs. The current findings expand what is known about age-related differences in audio-visual integration by considering them in the context of impact-related events.

{"title":"Perceptions of Audio-Visual Impact Events in Younger and Older Adults.","authors":"Katherine Bak, George S W Chan, Michael Schutz, Jennifer L Campos","doi":"10.1163/22134808-bja10056","DOIUrl":"10.1163/22134808-bja10056","url":null,"abstract":"<p><p>Previous studies have examined whether audio-visual integration changes in older age, with some studies reporting age-related differences and others reporting no differences. Most studies have either used very basic and ambiguous stimuli (e.g., flash/beep) or highly contextualized, causally related stimuli (e.g., speech). However, few have used tasks that fall somewhere between the extremes of this continuum, such as those that include contextualized, causally related stimuli that are not speech-based; for example, audio-visual impact events. The present study used a paradigm requiring duration estimates and temporal order judgements (TOJ) of audio-visual impact events. Specifically, the Schutz-Lipscomb illusion, in which the perceived duration of a percussive tone is influenced by the length of the visual striking gesture, was examined in younger and older adults. Twenty-one younger and 21 older adult participants were presented with a visual point-light representation of a percussive impact event (i.e., a marimbist striking their instrument with a long or short gesture) combined with a percussive auditory tone. Participants completed a tone duration judgement task and a TOJ task. Five audio-visual temporal offsets (-400 to +400 ms) and five spatial offsets (from -90 to +90°) were randomly introduced. Results demonstrated that the strength of the illusion did not differ between older and younger adults and was not influenced by spatial or temporal offsets. Older adults showed an 'auditory first bias' when making TOJs. 
The current findings expand what is known about age-related differences in audio-visual integration by considering them in the context of impact-related events.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-30"},"PeriodicalIF":1.6,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39213139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
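The five audiovisual temporal offsets (-400 to +400 ms) used in the TOJ task above are conventionally summarized by fitting a psychometric function to the order judgements and reading off the point of subjective simultaneity (PSS). A minimal sketch, assuming made-up response proportions (not the paper's data) and a plain grid-search fit; under this sign convention, the 'auditory first bias' the abstract reports would show up as a negative PSS:

```python
import math

# Hypothetical TOJ response proportions (illustrative only, NOT the paper's data).
# Sign convention: SOA > 0 means the tone physically led the visual impact.
soas = [-400, -200, 0, 200, 400]                 # audiovisual offsets in ms
p_audio_first = [0.08, 0.25, 0.65, 0.92, 0.98]   # proportion of "audio first" reports

def logistic(soa, pss, slope):
    """Probability of an 'audio first' report at a given SOA."""
    return 1.0 / (1.0 + math.exp(-(soa - pss) / slope))

def fit_pss(soas, probs):
    """Least-squares grid search over (PSS, slope); returns the best PSS in ms."""
    best_sse, best_pss = float("inf"), 0
    for pss in range(-300, 301, 2):
        for slope in range(20, 301, 5):
            sse = sum((logistic(s, pss, slope) - p) ** 2
                      for s, p in zip(soas, probs))
            if sse < best_sse:
                best_sse, best_pss = sse, pss
    return best_pss

pss = fit_pss(soas, p_audio_first)
# A negative PSS means the tone is judged "first" even at physical synchrony --
# the direction of the auditory-first bias described above.
print(f"PSS = {pss} ms")
```

In practice a maximum-likelihood fit (e.g., `scipy.optimize.curve_fit`) would replace the grid search; it is avoided here only to keep the sketch dependency-free.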
Cited by: 0
The Lightness/Pitch Crossmodal Correspondence Modulates the Rubin Face/Vase Perception.
IF 1.6 CAS Q4 Psychology Q3 BIOPHYSICS Pub Date : 2021-06-16 DOI: 10.1163/22134808-bja10054
Mick Zeljko, Philip M Grove, Ada Kritikos

We examine whether crossmodal correspondences (CMCs) modulate perceptual disambiguation by considering the influence of lightness/pitch congruency on the perceptual resolution of the Rubin face/vase (RFV). We randomly paired a black-and-white RFV (black faces and white vase, or vice versa) with either a high- or low-pitch tone and found that CMC congruency biases the dominant visual percept. The perceptual option that was CMC-congruent with the tone (white/high pitch or black/low pitch) was reported significantly more often than the option that was CMC-incongruent with it (white/low pitch or black/high pitch). However, the effect was only observed for stimuli presented for longer, not shorter, durations, suggesting a perceptual effect rather than a response bias; moreover, we infer an effect on perceptual reversals rather than on initial percepts. We found that the CMC congruency effect for longer-duration stimuli only emerged after several minutes of prior exposure to the stimuli, suggesting that the congruency effect develops over time. These findings extend the observed effects of CMCs from relatively low-level, feature-based effects to higher-level, object-based perceptual effects (specifically, resolving ambiguity) and demonstrate that an entirely new category of crossmodal factor (CMC congruency) influences perceptual disambiguation in bistability.

Multisensory Research, pp. 1-21. Published online 2021-06-16.
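One simple way to quantify the congruency bias reported above is to count CMC-congruent reports (high pitch with the white percept, low pitch with the black percept) and test the count against the 50% chance level. A minimal sketch with illustrative trial counts — the pairings follow the abstract, the numbers do not:

```python
from math import comb

# Hypothetical trial records (tone pitch, reported dominant percept) -- NOT real data.
# CMC-congruent pairings per the abstract: high pitch <-> white, low pitch <-> black.
trials = ([("high", "white")] * 34 + [("high", "black")] * 16 +
          [("low", "black")] * 31 + [("low", "white")] * 19)

congruent = {("high", "white"), ("low", "black")}
k = sum(1 for t in trials if t in congruent)   # number of congruent reports
n = len(trials)

# Two-sided exact binomial test against the 50% chance level
# (this doubling shortcut is valid here because k > n/2).
p_one_sided = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
p_value = min(1.0, 2 * p_one_sided)

print(f"{k}/{n} congruent reports, p = {p_value:.4f}")
```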
Cited by: 0
The Effects of Cue Reliability on Crossmodal Recalibration in Adults and Children.
IF 1.6 CAS Q4 Psychology Q3 BIOPHYSICS Pub Date : 2021-05-31 DOI: 10.1163/22134808-bja10053
Sophie Rohlf, Patrick Bruns, Brigitte Röder

Reliability-based cue combination is a hallmark of multisensory integration, while the role of cue reliability in crossmodal recalibration is less well understood. The present study investigated whether visual cue reliability affects audiovisual recalibration in adults and children. Participants had to localize sounds that were presented either alone or in combination with a spatially discrepant high- or low-reliability visual stimulus. In a previous study we had shown that the ventriloquist effect (indicating multisensory integration) was overall larger in the children's groups and that the shift in sound localization toward the spatially discrepant visual stimulus decreased with visual cue reliability in all groups. The present study replicated the onset of the immediate ventriloquist aftereffect (a shift in unimodal sound localization following a single exposure to a spatially discrepant audiovisual stimulus) at the age of 6-7 years. In adults the immediate ventriloquist aftereffect depended on visual cue reliability, whereas the cumulative ventriloquist aftereffect (reflecting the audiovisual spatial discrepancies over the complete experiment) did not. In 6-7-year-olds the immediate ventriloquist aftereffect was independent of visual cue reliability. The present results are compatible with the idea that immediate and cumulative crossmodal recalibration are dissociable processes and that the immediate ventriloquist aftereffect is more closely related to genuine multisensory integration.

Multisensory Research, pp. 1-19. Published online 2021-05-31.
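The reliability-based cue combination named in the opening sentence is standardly formalized as inverse-variance weighting: each cue contributes in proportion to its reliability r = 1/σ², so a high-reliability visual cue pulls the fused location estimate strongly (the ventriloquist effect), while a low-reliability one pulls it far less. A minimal sketch of that rule with illustrative numbers (degrees of azimuth are an assumption, not the paper's units):

```python
# Minimal sketch of the standard reliability-weighted (maximum-likelihood)
# cue-combination rule referenced above; all numbers are illustrative.

def combine(est_v, sigma_v, est_a, sigma_a):
    """Fuse visual and auditory location estimates, weighting each cue by its
    reliability (inverse variance). Returns (fused estimate, fused sd)."""
    r_v, r_a = 1.0 / sigma_v ** 2, 1.0 / sigma_a ** 2
    w_v = r_v / (r_v + r_a)
    fused = w_v * est_v + (1.0 - w_v) * est_a
    fused_sd = (1.0 / (r_v + r_a)) ** 0.5   # fused estimate beats either cue alone
    return fused, fused_sd

# High-reliability vision (small sigma_v) dominates: the ventriloquist effect.
loc, sd = combine(est_v=10.0, sigma_v=2.0, est_a=0.0, sigma_a=8.0)
# Equally unreliable vision pulls the fused estimate far less.
loc2, _ = combine(est_v=10.0, sigma_v=8.0, est_a=0.0, sigma_a=8.0)
print(round(loc, 2), round(loc2, 2))   # fused locations in (assumed) degrees azimuth
```

With equal reliabilities the weights are 0.5 each, so the fused estimate lands midway between the two cues; as the visual cue grows more reliable, the fused estimate moves toward it.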
Cited by: 0