Examining tactile spatial remapping using transcranial magnetic stimulation
Pub Date: 2012-01-01. DOI: 10.1163/187847612X647757. Seeing and Perceiving, p. 143.
Jared Medina, S. Khurshid, Roy H. Hamilton, H. Coslett
Previous research has provided evidence for two stages of tactile processing (e.g., Azanon and Soto-Faraco, 2008; Groh and Sparks, 1996). First, tactile stimuli are represented in a somatotopic representation that does not take body position in space into account, followed by a representation of body position in external space (body posture representation; see Medina and Coslett, 2010). To explore potential functional and neural dissociations between these two stages of processing, eight participants performed a tactile temporal order judgment (TOJ) task (see Yamamoto and Kitazawa, 2001) before and after TMS. Participants were tested with their hands crossed and uncrossed before and after 20 min of 1 Hz repetitive TMS (rTMS). Stimulation was applied to the left anterior intraparietal sulcus (aIPS, somatotopic representation) or left Brodmann Area 5 (BA5, body posture representation) in two separate sessions. We predicted that left aIPS TMS would affect the somatotopic representation of the body and would therefore disrupt performance in both the uncrossed and crossed conditions. In contrast, we predicted that TMS of a body posture area (BA5) would disrupt mechanisms for updating limb position with the hands crossed, resulting in a paradoxical improvement in performance after TMS. Using thresholds derived from adaptive staircase procedures, we found that left aIPS TMS disrupted performance in the uncrossed condition, whereas left BA5 TMS resulted in a significant improvement in performance with the hands crossed. We discuss these results with reference to potential dissociations within the traditional body schema.
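The abstract does not describe the adaptive staircase in detail; as an illustration of how such SOA thresholds are commonly obtained, the following Python sketch implements a generic 1-up/2-down staircase for a tactile TOJ task. The function names, step sizes, and the simulated observer are assumptions for demonstration, not the authors' procedure.

```python
import numpy as np

def run_staircase(respond, start_soa=200.0, step=20.0, min_step=5.0, n_reversals=10):
    """1-up/2-down adaptive staircase on SOA (ms) for a tactile TOJ task.
    `respond(soa)` returns True for a correct order judgment. Converges on
    roughly the 70.7%-correct SOA; all parameters are illustrative."""
    soa, streak, direction, reversals = start_soa, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(soa):
            streak += 1
            if streak == 2:                      # two correct in a row -> make it harder
                streak = 0
                if direction == +1:              # we were moving up: log a reversal
                    reversals.append(soa)
                    step = max(step / 2.0, min_step)
                direction = -1
                soa = max(soa - step, 1.0)
        else:                                    # one error -> make it easier
            streak = 0
            if direction == -1:
                reversals.append(soa)
                step = max(step / 2.0, min_step)
            direction = +1
            soa += step
    return float(np.mean(reversals[-6:]))        # threshold: mean of the last reversals

# Simulated observer whose order sensitivity improves with SOA (scale ~80 ms)
rng = np.random.default_rng(0)
observer = lambda soa: rng.random() < 0.5 + 0.5 * (1.0 - np.exp(-soa / 80.0))
print(run_staircase(observer))
```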
{"title":"Examining tactile spatial remapping using transcranial magnetic stimulation","authors":"Jared Medina, S. Khurshid, Roy H. Hamilton, H. Coslett","doi":"10.1163/187847612X647757","DOIUrl":"https://doi.org/10.1163/187847612X647757","url":null,"abstract":"Previous research has provided evidence for two stages of tactile processing (e.g., Azanon and Soto-Faraco, 2008; Groh and Sparks, 1996). First, tactile stimuli are represented in a somatotopic representation that does not take into account body position in space, followed by a representation of body position in external space (body posture representation, see Medina and Coslett, 2010). In order to explore potential functional and neural dissociations between these two stages of processing, we presented eight participants with TMS before and after a tactile temporal order judgment (TOJ) task (see Yamamoto and Kitazawa, 2001). Participants were tested with their hands crossed and uncrossed before and after 20 min of 1 Hz repetitive TMS (rTMS). Stimulation occurred at the left anterior intraparietal sulcus (aIPS, somatotopic representation) or left Brodmann Area 5 (BA5, body posture) during two separate sessions. We predicted that left aIPS TMS would affect a somatotopic representation of the body, and would disrupt performance in both the uncrossed and crossed conditions. However, we predicted that TMS of body posture areas (BA5) would disrupt mechanisms for updating limb position with the hands crossed, resulting in a paradoxical improvement in performance after TMS. Using thresholds derived from adaptive staircase procedures, we found that left aIPS TMS disrupted performance in the uncrossed condition. However, left BA5 TMS resulted in a significant improvement in performance with the hands crossed. We discuss these results with reference to potential dissociations of the traditional body schema.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"143-143"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647757","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensorimotor temporal recalibration within and across limbs
Pub Date: 2012-01-01. DOI: 10.1163/187847612X647694. Seeing and Perceiving, p. 137.
K. Yarrow, Ingvild Sverdrup-Stueland, Derek H. Arnold
Repeated presentation of artificially induced delays between actions and events leads to shifts in participants’ subjective simultaneity towards the adapted lag. This sensorimotor temporal recalibration generalises across sensory modalities, presumably via a shift in the motor component. Here we examined two overlapping questions regarding (1) the level of representation of temporal recalibration (by testing whether it also generalises across limbs) and (2) the neural underpinning of the shift in the motor component (by comparing adaptation magnitude in the foot relative to the hand). An adaptation-test paradigm was used, with hand or foot adaptation, and same-limb and cross-limb test phases that used a synchrony judgement (SJ) task. By demonstrating that temporal recalibration occurs in the foot, we confirmed that it is a robust motor phenomenon. Shifts in the distribution of participants’ synchrony responses were quantified using a detection-theoretic model of the SJ task, in which a shift of both boundaries together gives a stronger indication that the effect is not simply a result of decision bias. The results showed a significant shift in both boundaries in the same-limb conditions, whereas there was only a shift of the higher boundary in the cross-limb conditions. These two patterns most likely reflect a genuine shift in neural timing and a criterion shift, respectively.
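For readers unfamiliar with this class of analysis, the sketch below shows one way a two-boundary (low/high criterion) detection-theoretic SJ model can be fit by maximum likelihood: a 'synchronous' response is predicted whenever the noisy perceived SOA falls between the two boundaries. The parameterization, starting values, and toy data are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def p_sync(soa, low, high, sigma):
    """P('synchronous') when the noisy perceived SOA falls between two boundaries."""
    return norm.cdf((high - soa) / sigma) - norm.cdf((low - soa) / sigma)

def fit_sj(soas, n_sync, n_total):
    """Fit (low, high, sigma) by maximum likelihood to synchrony-judgement counts."""
    def nll(params):
        low, high, sigma = params
        if high <= low or sigma <= 0:
            return np.inf
        p = np.clip(p_sync(soas, low, high, sigma), 1e-6, 1 - 1e-6)
        return -np.sum(n_sync * np.log(p) + (n_total - n_sync) * np.log(1 - p))
    return minimize(nll, x0=[-100.0, 100.0, 50.0], method="Nelder-Mead").x

# Toy data: SOAs in ms (negative = sound first), 'synchronous' counts out of 20
soas = np.array([-300, -200, -100, 0, 100, 200, 300], float)
n_sync = np.array([1, 4, 14, 19, 15, 5, 2])
low, high, sigma = fit_sj(soas, n_sync, np.full_like(n_sync, 20))
print(low, high, sigma)   # a joint shift of both boundaries would indicate recalibration
```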
{"title":"Sensorimotor temporal recalibration within and across limbs","authors":"K. Yarrow, Ingvild Sverdrup-Stueland, Derek H. Arnold","doi":"10.1163/187847612X647694","DOIUrl":"https://doi.org/10.1163/187847612X647694","url":null,"abstract":"Repeated presentation of artificially induced delays between actions and events leads to shifts in participants’ subjective simultaneity towards the adapted lag. This sensorimotor temporal recalibration generalises across sensory modalities, presumably via a shift in the motor component. Here we examined two overlapping questions regarding (1) the level of representation of temporal recalibration (by testing whether it also generalises across limbs) and (2) the neural underpinning of the shift in the motor component (by comparing adaption magnitude in the foot relative to the hand). An adaption-test paradigm was used, with hand or foot adaptation, and same-limb and cross-limb test phases that used a synchrony judgement task. By demonstrating that temporal recalibration occurs in the foot, we confirmed that it is a robust motor phenomenon. Shifts in the distribution of participants’ synchrony responses were quantified using a detection-theoretic model of the SJ task, where a shift of both boundaries together gives a stronger indication that the effect is not simply a result of decision bias. The results showed a significant shift in both boundaries in the same-limb conditions, whereas there was only a shift of the higher boundary in the cross-limb conditions. These two patterns most likely reflect a genuine shift in neural timing, and a criterion shift, respectively.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"38 1","pages":"137-137"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647694","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual benefit in bimodal training with highly distorted speech sound
Pub Date: 2012-01-01. DOI: 10.1163/187847612X647883. Seeing and Perceiving, p. 157.
Mika Sato, T. Kawase, S. Sakamoto, Yôiti Suzuki, Toshimitsu Kobayashi
Artificial auditory devices such as cochlear implants (CIs) and auditory brainstem implants (ABIs) have become standard means to manage profound sensorineural hearing loss. However, because of their structural limitations compared to the cochlea and the cochlear nucleus, the generated auditory sensations are still imperfect, and recipients need postoperative auditory rehabilitation. To improve these rehabilitation programs, this study evaluated the effects of bimodal (audio–visual) training under seven experimental conditions of distorted speech sound, termed noise-vocoded speech sound (NVSS), which is processed in a manner similar to the speech processor of a CI/ABI. Word intelligibilities under the seven conditions of two-band noise-vocoded speech were measured for auditory (A), visual (V) and auditory–visual (AV) modalities after a few hours of bimodal (AV) training. The experiment was performed with 56 normal-hearing subjects. A and AV word recognition performance differed significantly across the seven auditory conditions. V word intelligibility was not influenced by the accompanying auditory condition; however, V word intelligibility was correlated with AV word recognition under all frequency conditions. The correlation between A and AV word intelligibilities was ambiguous. These findings suggest the importance of visual cues in AV speech perception under extremely degraded auditory conditions and underscore the possible effectiveness of bimodal audio–visual training in postoperative rehabilitation for patients with postlingual deafness who have undergone artificial auditory device implantation.
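As an illustration of the kind of signal processing behind two-band noise-vocoded speech, here is a minimal generic noise vocoder: the signal is split into two bands, each band's envelope is extracted and used to modulate band-limited noise, and the bands are summed. The band edges, filter orders, and envelope cutoff are assumptions, not the parameters used in this study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode_2band(speech, fs, split_hz=1500.0, band=(80.0, 6000.0)):
    """Two-band noise vocoder: replace the fine structure in each band with
    envelope-modulated noise. Cutoffs and orders are illustrative; band[1]
    must lie below the Nyquist frequency fs/2."""
    rng = np.random.default_rng(0)
    edges = [(band[0], split_hz), (split_hz, band[1])]
    out = np.zeros(len(speech), dtype=float)
    for lo, hi in edges:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band_sig = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band_sig))                     # band envelope
        env_sos = butter(2, 30.0, btype="lowpass", fs=fs, output="sos")
        env = sosfiltfilt(env_sos, env)                     # smooth the envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12   # unit-RMS noise carrier
        out += env * carrier                                # envelope-modulated band noise
    return out / (np.max(np.abs(out)) + 1e-12)

# Usage: vocoded = noise_vocode_2band(waveform, fs=16000)
```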
{"title":"Visual benefit in bimodal training with highly distorted speech sound","authors":"Mika Sato, T. Kawase, S. Sakamoto, Yôiti Suzuki, Toshimitsu Kobayashi","doi":"10.1163/187847612X647883","DOIUrl":"https://doi.org/10.1163/187847612X647883","url":null,"abstract":"Artificial auditory devices such as cochlear implants (CIs) and auditory brainstem implants (ABIs) have become standard means to manage profound sensorineural hearing loss. However, because of their structural limitations compared to the cochlea and the cochlear nucleus, the generated auditory sensations are still imperfect. Recipients need postoperative auditory rehabilitation. To improve these rehabilitation programs, this study evaluated the effects of bimodal (audio–visual) training under seven experimental conditions of distorted speech sound, named noise-vocoded speech sound (NVSS), which is similarly processed with a speech processor of CI/ABI. Word intelligibilities under the seven conditions of two-band noise-vocoded speech were measured for auditory (A), visual (V) and auditory–visual (AV) modalities after a few hours of bimodal (AV) training. The experiment was performed with 56 subjects with normal hearing. Performance of A and AV word recognition was significantly different under the seven auditory conditions. The V word intelligibility was not influenced by the condition of combined auditory cues. However, V word intelligibility was correlated with AV word recognition under all frequency conditions. Correlation between A and AV word intelligibilities was ambiguous. These findings suggest the importance of visual cues in AV speech perception under extremely degraded auditory conditions, and underscore the importance of the possible effectiveness of bimodal audio–visual training in postoperative rehabilitation for patients with postlingual deafness who have undergone artificial auditory device implantation.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"68 1","pages":"157-157"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647883","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An ERP study of audiovisual simultaneity perception
Pub Date: 2012-01-01. DOI: 10.1163/187847612X647900. Seeing and Perceiving, p. 159.
M. Binder
The aim of this study was to examine the relationship between the conscious perception of the temporal relation between the elements of an audiovisual pair and the dynamics of the accompanying neural activity. This was done using a simultaneity judgment task and EEG event-related potentials (ERPs). In Experiment 1, pairs of 10 ms white-noise bursts and flashes were used. After each pair was presented, subjects pressed one of two buttons to indicate whether the stimuli were synchronous. Values of stimulus onset asynchrony (SOA) were based on individual estimates of simultaneity thresholds (50/50 probability of either response), obtained prior to the EEG measurement using an interleaved staircase procedure involving both sound-first and flash-first stimulus pairs. Experiment 2 had an identical setup, except that subjects indicated whether the audio–visual pair began simultaneously (stimulus termination was synchronous). ERP waveforms were time-locked to the second stimulus in the pair. Effects of synchrony perception were studied by comparing ERPs in trials judged as simultaneous and non-simultaneous. Subjects were divided into two subgroups with similar SOA values. In both experiments, at about 200 ms after the onset of the second stimulus, a stronger ERP positivity for trials judged as non-simultaneous was observed at parieto-central sites. This effect was observed for both sound-first and flash-first pairs and for both SOA subgroups. The results demonstrate that the perception of temporal relations between multimodal stimuli with identical physical parameters is reflected in localized ERP differences. Given their localization over posterior parietal regions, these differences may be viewed as correlates of the conscious perception of temporal integration vs. separation of audiovisual stimuli.
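A minimal sketch of the ERP analysis described above (epochs time-locked to the second stimulus, baseline-corrected, and averaged separately for trials judged simultaneous versus non-simultaneous) is given below in plain NumPy. Array layouts, the baseline window, and function names are assumptions.

```python
import numpy as np

def erp_by_judgment(eeg, fs, onsets, judged_sync, tmin=-0.2, tmax=0.6):
    """Average epochs time-locked to the second stimulus of each pair, split by
    the simultaneity judgment. eeg: (n_channels, n_samples); onsets: sample index
    of the second stimulus per trial; judged_sync: bool per trial."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[:, o - pre:o + post] for o in onsets])    # (trials, ch, time)
    epochs -= epochs[:, :, :pre].mean(axis=2, keepdims=True)         # baseline correction
    judged_sync = np.asarray(judged_sync, bool)
    erp_sync = epochs[judged_sync].mean(axis=0)
    erp_nonsync = epochs[~judged_sync].mean(axis=0)
    return erp_sync, erp_nonsync, erp_nonsync - erp_sync             # difference wave

# The ~200 ms parieto-central positivity reported above would appear in the
# difference wave around sample index int((0.2 - tmin) * fs).
```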
{"title":"An ERP study of audiovisual simultaneity perception","authors":"M. Binder","doi":"10.1163/187847612X647900","DOIUrl":"https://doi.org/10.1163/187847612X647900","url":null,"abstract":"The aim of this study was to examine relation between conscious perception of temporal relation between the elements of an audiovisual pair and the dynamics of accompanying neural activity. This was done by using a simultaneity judgment task and EEG event-related potentials (ERP). During Experiment 1 the pairs of 10 ms white-noise bursts and flashes were used. On presenting each pair subjects pressed one of two buttons to indicate their synchrony. Values of stimulus onset asynchrony (SOA) were based on individual estimates of simultaneity thresholds (50∕50 probability of either response). They were estimated prior to EEG measurement using interleaved staircase involving both sound-first and flash-first stimulus pairs. Experiment 2 had the identical setup, except subjects indicated if audio–visual pair began simultaneously (termination was synchronous). ERP waveforms were time-locked to the second stimulus in the pair. Effects of synchrony perception were studied by comparing ERPs in trials that were judged as simultaneous and non-simultaneous. Subjects were divided into two subgroups with similar SOA values. In both experiments at about 200 ms after the second stimulus onset a stronger ERP wave positivity for trials judged as non-simultaneous was observed in parieto-central sites. This effect was observed for both sound-first and video-first pairs and for both SOA subgroups. The results demonstrate that the perception of temporal relations between multimodal stimuli with identical physical parameters is reflected in localized ERP differences. Given their localization in the posterior parietal regions, these differences may be viewed as correlates of conscious perception of temporal integration vs. separation of audiovisual stimuli.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"159-159"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647900","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The size of the ventriloquist effect is modulated by emotional valence
Pub Date: 2012-01-01. DOI: 10.1163/187847612X647964. Seeing and Perceiving, p. 166.
Mario Maiworm, Marina Bellantoni, C. Spence, B. Roeder
It is currently unknown to what extent the integration of inputs from different modalities is subject to the influence of attention, emotion, and/or motivation. The ventriloquist effect is widely assumed to be an automatic crossmodal phenomenon, normally shifting the perceived location of an auditory stimulus toward a concurrently presented visual stimulus. The present study examined whether audiovisual binding, as indicated by the magnitude of the ventriloquist effect, is influenced by threatening auditory stimuli presented prior to the ventriloquist experiment. Syllables spoken in a fearful voice were presented from one of eight loudspeakers while syllables spoken in a neutral voice were presented from the other seven locations. Subsequently, participants had to localize pure tones while trying to ignore concurrent light flashes (both of which were emotionally neutral). A reliable ventriloquist effect was observed. The emotional stimulus manipulation resulted in a reduced ventriloquist effect in both hemifields compared to a control group exposed to a similarly attention-capturing but non-emotional manipulation. These results suggest that the emotional system is capable of influencing crossmodal binding processes which have heretofore been considered automatic.
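One common way to quantify the ventriloquist effect is the slope of auditory localization error against the audiovisual spatial disparity (0 = no capture, 1 = complete capture by the flash). The sketch below illustrates this measure on toy data; it is offered as a generic example, not the authors' analysis pipeline.

```python
import numpy as np

def ventriloquist_shift(sound_pos, light_pos, responses):
    """Estimate the ventriloquist effect as the proportion of the audiovisual
    disparity by which localization responses are pulled toward the light.
    All inputs in degrees; slope of 0 = no capture, 1 = full capture."""
    disparity = np.asarray(light_pos, float) - np.asarray(sound_pos, float)
    error = np.asarray(responses, float) - np.asarray(sound_pos, float)
    slope, _ = np.polyfit(disparity, error, 1)    # least-squares slope
    return slope

# Toy example: responses pulled ~40% of the way toward the flash
rng = np.random.default_rng(1)
sound = rng.choice([-20, -10, 0, 10, 20], size=200).astype(float)
light = sound + rng.choice([-15, 0, 15], size=200)
resp = sound + 0.4 * (light - sound) + rng.normal(0, 2, size=200)
print(ventriloquist_shift(sound, light, resp))
```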
{"title":"The size of the ventriloquist effect is modulated by emotional valence","authors":"Mario Maiworm, Marina Bellantoni, C. Spence, B. Roeder","doi":"10.1163/187847612X647964","DOIUrl":"https://doi.org/10.1163/187847612X647964","url":null,"abstract":"It is currently unknown to what extent the integration of inputs from different modalities are subject to the influence of attention, emotion, and/or motivation. The ventriloquist effect is widely assumed to be an automatic, crossmodal phenomenon, normally shifting the perceived location of an auditory stimulus toward a concurrently-presented visual stimulus. The present study examined whether audiovisual binding, as indicated by the magnitude of the ventriloquist effect, is influenced by threatening auditory stimuli presented prior to the ventriloquist experiment. Syllables spoken in a fearful voice were presented from one of eight loudspeakers while syllables spoken in a neutral voice were presented from the other seven locations. Subsequently, participants had to localize pure tones while trying to ignore concurrent light flashes (both of which were emotionally neutral). A reliable ventriloquist effect was observed. The emotional stimulus manipulation resulted in a reduced ventriloquist effect in both hemifields, as compared to a control group exposed to a similar attention-capturing but non-emotional manipulation. These results suggest that the emotional system is capable of influencing crossmodal binding processes which have heretofore been considered as being automatic.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"166-166"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647964","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Redundancy gains in audio–visual search
Pub Date: 2012-01-01. DOI: 10.1163/187847612X648116. Seeing and Perceiving, p. 181.
Tifanie Bouchara, B. Katz
This study concerns stimulus-driven perceptual processes involved in searching for a target among concurrent distractors, with a focus on comparing auditory, visual, and audio–visual search tasks. Previous work on unimodal search tasks has highlighted different preattentive features that can enhance target saliency, making it ‘pop out’, e.g., a visually sharp target among blurred distractors. A cue from another modality can also help direct attention towards the target. Our study investigates a new kind of search task, in which stimuli consist of audio–visual objects presented in both the audio and visual modalities simultaneously. Redundancy effects are evaluated, first from the combination of the audio and visual modalities, and second from the combination of each unimodal cue in such a bimodal search task. A perceptual experiment was performed in which the task was to identify an audio–visual object from a set of six competing stimuli. We employed static visual blur and developed an auditory blur analogue to cue the search. Results show that both visual and auditory blur render distractors less prominent and automatically attract attention toward a sharp target. The combination of both unimodal blurs, i.e., audio–visual blur, also proved to be an efficient cue to facilitate the bimodal search task. Results also showed that search tasks were performed faster in redundant bimodal conditions than in unimodal ones. That gain was due to a redundant-target effect only, without any redundancy gain from the cue combination: cueing the visual component alone was sufficient, and no improvement was found from adding the redundant audio cue in bimodal search tasks.
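The abstract does not state how the redundant-target effect was assessed; a standard analysis in this literature is Miller's race-model inequality, which compares the redundant-condition RT distribution with the sum of the unimodal RT distributions. The sketch below illustrates that test on toy data and is offered only as an example, not as the authors' method.

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, quantiles=np.linspace(0.05, 0.95, 19)):
    """Race-model inequality test (illustrative): compare the CDF of redundant
    (audio-visual) RTs with the summed unimodal CDFs. Positive values at fast
    quantiles indicate coactivation beyond what a race between cues predicts."""
    t = np.quantile(rt_av, quantiles)                         # probe times
    cdf = lambda rt: np.searchsorted(np.sort(rt), t, side="right") / len(rt)
    violation = cdf(rt_av) - np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
    return t, violation

# Toy RTs (ms): bimodal search faster than either unimodal condition
rng = np.random.default_rng(2)
rt_a, rt_v = rng.normal(620, 80, 300), rng.normal(600, 80, 300)
rt_av = rng.normal(560, 70, 300)
t, v = race_model_violation(rt_av, rt_a, rt_v)
print(np.round(v, 3))
```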
{"title":"Redundancy gains in audio–visual search","authors":"Tifanie Bouchara, B. Katz","doi":"10.1163/187847612X648116","DOIUrl":"https://doi.org/10.1163/187847612X648116","url":null,"abstract":"This study concerns stimuli-driven perceptual processes involved in target search among concurrent distractors with a focus on comparing auditory, visual, and audio–visual search tasks. Previous works, concerning unimodal search tasks, highlighted different preattentive features that can enhance target saliency, making it ‘pop-out’, e.g., a visually sharp target among blurred distractors. A cue from another modality can also help direct attention towards the target. Our study investigates a new kind of search task, where stimuli consist of audio–visual objects presented using both audio and visual modalities simultaneously. Redundancy effects are evaluated, first from the combination of audio and visual modalities, second from the combination of each unimodal cue in such a bimodal search task. A perceptual experiment was performed where the task was to identify an audio–visual object from a set of six competing stimuli. We employed static visual blur and developed an auditory blur analogue to cue the search. Results show that both visual and auditory blurs render distractors less prominent and automatically attracts attention toward a sharp target. The combination of both unimodal blurs, i.e., audio–visual blur, also proved to be an efficient cue to facilitate bimodal search task. Results also showed that search tasks were performed faster in redundant bimodal conditions than in unimodal ones. That gain was due to redundant target effect only without any redundancy gain due to the cue combination, as solely cueing the visual component was sufficient, with no improvement found by the addition of the redundant audio cue in bimodal search tasks.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"181-181"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648116","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From observation to enactment: Can dance experience enhance multisensory temporal integration?
Pub Date: 2012-01-01. DOI: 10.1163/187847612X648170. Seeing and Perceiving, p. 188.
Helena Sgouramani, Chris Muller, L. V. Noorden, M. Leman, A. Vatakis
We report two experiments aiming to define how experience and stimulus enactment affect multisensory temporal integration for ecologically valid stimuli. In both experiments, a number of different dance steps were used as audiovisual displays, presented at a range of stimulus onset asynchronies using the method of constant stimuli. Participants were either professional dancers or non-dancers. In Experiment 1, using a simultaneity judgment (SJ) task, we aimed at defining — for the first time — the temporal window of integration (TWI) for dancers and non-dancers and the role of experience in SJ performance. Preliminary results showed that dancers had smaller TWIs than non-dancers for all stimuli tested, with higher-complexity (participant-rated) dance steps requiring larger auditory leads for both participant groups. In Experiment 2, we adopted a more embodied point of view by examining how enactment of the stimulus modulates the TWIs. Participants were presented with simple audiovisual dance steps that could be synchronous or asynchronous and were asked to synchronize with the audiovisual display by actually performing the step indicated. A motion capture system recorded their performance with millisecond accuracy. Based on the optimal integration hypothesis, we are currently examining the data in terms of which modality is dominant, considering that dance is a spatially (visual) and temporally (audio) coordinated action. Any corrective adjustments, accelerations–decelerations, or hesitations will be interpreted as indicators of perceived ambiguity relative to performance in the synchronous condition; thus, for the first time, an implicit SJ response will be measured.
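As an illustration of how a TWI can be derived from SJ data collected with the method of constant stimuli, the sketch below fits a Gaussian-shaped psychometric function to the proportion of 'simultaneous' responses and reports the SOA range over which that proportion exceeds a criterion. The functional form, the 75% criterion, and the toy data are assumptions, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sj(soa, amp, mu, sigma):
    """Proportion of 'simultaneous' responses as a Gaussian function of SOA."""
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

def temporal_window(soas, p_sync, criterion=0.75):
    """Fit the SJ curve and return the width of the SOA range where the fitted
    proportion 'simultaneous' exceeds criterion x peak (the TWI estimate)."""
    (amp, mu, sigma), _ = curve_fit(gaussian_sj, soas, p_sync,
                                    p0=[0.9, 0.0, 150.0], maxfev=10000)
    half_width = sigma * np.sqrt(-2.0 * np.log(criterion))  # amp*exp(...) = criterion*amp
    return 2 * half_width, mu

# Toy data: negative SOA = audio lead (ms)
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
p_sync = np.array([0.05, 0.20, 0.55, 0.85, 0.95, 0.80, 0.45, 0.15, 0.05])
print(temporal_window(soas, p_sync))
```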
{"title":"From observation to enactment: Can dance experience enhance multisensory temporal integration?","authors":"Helena Sgouramani, Chris Muller, L. V. Noorden, M. Leman, A. Vatakis","doi":"10.1163/187847612X648170","DOIUrl":"https://doi.org/10.1163/187847612X648170","url":null,"abstract":"We report two experiments aiming to define how experience and stimulus enactment affect multisensory temporal integration for ecologically-valid stimuli. In both experiments, a number of different dance steps were used as audiovisual displays at a range of stimulus onset asynchronies using the method of constant stimuli. Participants were either professional dancers or non-dancers. In Experiment 1, using a simultaneity judgment (SJ) task, we aimed at defining — for the first time — the temporal window of integration (TWI) for dancers and non-dancers and the role of experience in SJ performance. Preliminary results showed that dancers had smaller TWI in comparison to non-dancers for all stimuli tested, with higher complexity (participant rated) dance steps requiring larger auditory leads for both participant groups. In Experiment 2, we adapted a more embodied point of view by examining how enactment of the stimulus modulates the TWIs. Participants were presented with simple audiovisual dance steps that could be synchronous or asynchronous and were asked to synchronize with the audiovisual display by actually performing the step indicated. A motion capture system recorded their performance at a millisecond level of accuracy. Based on the optimal integration hypothesis, we are currently looking at the data in terms of which modality will be dominant, considering that dance is a spatially (visual) and temporally (audio) coordinated action. Any corrective adjustments, accelerations–decelerations, hesitations will be interpreted as indicators of the perception of ambiguity in comparison to their performance at the synchronous condition, thus, for the first time, an implicit SJ response will be measured.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"188-188"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648170","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of imagery-evoking category labels on perceived variety
Pub Date: 2012-01-01. DOI: 10.1163/187847612X648189. Seeing and Perceiving, p. 189.
Tamara L. Ansons, Aradhna Krishna, N. Schwarz
Does sensory imagery influence consumers’ perception of variety in a set of products? We tested this possibility in two studies in which participants received one of three alternative coffee menus: all of the coffees were the same, but the category labels varied in how imagery-evocative they were. The less evocative labels (i) were more generic in nature (e.g., ‘Sweet’ or ‘Category A’), whereas the more evocative ones related either (ii) to the sensory experience of coffee (e.g., ‘Sweet Chocolate Flavor’ or ‘Smokey-Sweet Charred Dark Roast’) or (iii) to imagery of where the coffee was grown (e.g., ‘Rich Volcanic Soil’ or ‘Dark Rich Volcanic Soil’). The labels relating to where the coffee was grown were included as a second control to show that merely increasing imagery does not increase perceived variety; it is increasing the sensory imagery relating to the items that does so. As expected, only category labels that evoked sensory imagery increased consumers’ perception of variety, whereas imagining where the coffee was grown did not enhance perceived variety. This finding extends recent research showing that the type of sensory information included in an ad alters perceptions of a product (Elder and Krishna, 2010) by illustrating that the inclusion of sensory information can also alter the perceived variety of a set of products. Thus, sensory information can be used flexibly to alter perceptions of both a single product and a set of choice alternatives.
{"title":"The impact of imagery-evoking category labels on perceived variety","authors":"Tamara L. Ansons, Aradhna Krishna, N. Schwarz","doi":"10.1163/187847612X648189","DOIUrl":"https://doi.org/10.1163/187847612X648189","url":null,"abstract":"Does sensory imagery influence consumers’ perception of variety for a set of products? We tested this possibility across two studies in which participants received one of three alternate coffee menus where all the coffees were the same but the category labels were varied on how imagery-evocative they were. The less evocative labels (i) were more generic in nature (e.g., ‘Sweet’ or ‘Category A’), whereas the more evocative ones related either (ii) to the sensory experience of coffee (e.g., ‘Sweet Chocolate Flavor’ or ‘Smokey-Sweet Charred Dark Roast’) or (iii) to imagery related to where the coffee was grown (e.g., ‘Rich Volcanic Soil’ or ‘Dark Rich Volcanic Soil’). The labels relating to where the coffee was grown was included as a second control to show that merely increasing imagery does not increase perceived variety; it is increasing the sensory imagery relating to the items that does so. As expected, only category labels that evoked sensory imagery increased consumers’ perception of variety, whereas imagining where the coffee was grown did not enhance perception of variety. This finding extends recent research that shows that the type of sensory information included in an ad alters the perceptions of a product (Elder and Krishna, 2010) by illustrating that the inclusion of sensory information can also alter the perceived variety of a set of products. Thus, the inclusion of sensory information can be used flexibly to alter perceptions of both a single product and a set of choice alternatives.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"189-189"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648189","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Body and gaze centered coding of touch locations during a dynamic task
Pub Date: 2012-01-01. DOI: 10.1163/187847612X648242. Seeing and Perceiving, p. 195.
Lisa M. Pritchett, Michael J. Carnevale, L. Harris
We have previously reported that head position affects the perceived location of touch differently depending on the dynamics of the task the subject is involved in. When touch was delivered and responses were made with the head rotated, touch location shifted in the direction opposite to the head position, consistent with body-centered coding. When touch was delivered with the head rotated but the response was made with the head centered, touch location shifted in the same direction as the head, consistent with gaze-centered coding. Here we tested whether moving the head between touch and response would modulate the effects of head position on touch location. Each trial consisted of three periods: in the first, arrows and LEDs guided the subject to a randomly chosen head orientation (90° left, right, or center) and a vibration stimulus was delivered. Next, subjects were either guided to turn their head or to remain in the same orientation. In the final period they were again guided to turn or to remain in the same orientation before reporting the perceived location of the touch on a visual scale using a mouse and computer screen. Reported touch location was shifted in the direction opposite to the head orientation during touch presentation, regardless of the orientation during the response or whether a movement was made before the response. The size of the effect was much reduced compared to our previous results. These results are consistent with touch location being coded in both a gaze-centered and a body-centered reference frame under dynamic conditions.
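A simple way to summarize the reported shifts is to regress localization error on head orientation at the time of touch, so that a negative slope corresponds to a shift opposite to head position. The sketch below illustrates this on toy data; variable names and units are assumptions.

```python
import numpy as np

def head_effect_slope(head_deg_at_touch, reported_pos, actual_pos):
    """Regress localization error (reported - actual) on head orientation at the
    time of touch (-90, 0, +90 deg). A negative slope corresponds to the shift
    opposite to head orientation described above."""
    error = np.asarray(reported_pos, float) - np.asarray(actual_pos, float)
    slope, intercept = np.polyfit(np.asarray(head_deg_at_touch, float), error, 1)
    return slope, intercept

# Toy example: a small shift opposite to head orientation
rng = np.random.default_rng(3)
head = rng.choice([-90, 0, 90], size=120).astype(float)
actual = np.zeros(120)
reported = actual - 0.02 * head + rng.normal(0, 1.0, 120)
print(head_effect_slope(head, reported, actual))
```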
{"title":"Body and gaze centered coding of touch locations during a dynamic task","authors":"Lisa M. Pritchett, Michael J. Carnevale, L. Harris","doi":"10.1163/187847612X648242","DOIUrl":"https://doi.org/10.1163/187847612X648242","url":null,"abstract":"We have previously reported that head position affects the perceived location of touch differently depending on the dynamics of the task the subject is involved in. When touch was delivered and responses were made with head rotated touch location shifted in the opposite direction to the head position, consistent with body-centered coding. When touch was delivered with head rotated but response was made with head centered touch shifted in the same direction as the head, consistent with gaze-centered coding. Here we tested whether moving the head in-between touch and response would modulate the effects of head position on touch location. Each trial consisted of three periods, in the first arrows and LEDs guided the subject to a randomly chosen head orientation (90° left, right, or center) and a vibration stimulus was delivered. Next, they were either guided to turn their head or to remain in the same location. In the final period they again were guided to turn or to remain in the same location before reporting the perceived location of the touch on a visual scale using a mouse and computer screen. Reported touch location was shifted in the opposite direction of head orientation during touch presentation regardless of the orientation during response or whether a movement was made before the response. The size of the effect was much reduced compared to our previous results. These results are consistent with touch location being coded in both a gaze centered and body centered reference frame during dynamic conditions.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"195-195"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648242","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal processing of self-motion: Translations are processed slower than rotations
Pub Date: 2012-01-01. DOI: 10.1163/187847612X648369. Seeing and Perceiving, pp. 207-208.
F. Soyka, M. Cowan, P. Giordano, H. Bülthoff
Reaction times (RTs) to purely inertial self-motion stimuli have only infrequently been studied, and comparisons of RTs for translations and rotations are, to our knowledge, nonexistent. We recently proposed a model (Soyka et al., 2011) which describes direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semi-circular canals). This model also predicts differences in RTs between different motion profiles (e.g., trapezoidal versus triangular acceleration profiles, or varying profile durations). In order to assess these predictions, we measured RTs in 20 participants for 8 supra-threshold motion profiles (4 translations, 4 rotations). A two-alternative forced-choice task, discriminating leftward from rightward motions, was used, and 30 correct responses per condition were evaluated. The results agree with the predicted RT differences between motion profiles, derived from model parameters previously identified from threshold measurements. To describe absolute RT, a constant is added to the predictions, representing both the discrimination process and the time needed to press the response button. This constant is approximately 160 ms shorter for rotations, indicating that additional processing time is required for translational motion. As this additional latency cannot be explained by our model based on the dynamics of the sensory organs, we speculate that it originates at a later stage, e.g., during tilt-translation disambiguation. The varying processing latencies for different self-motion stimuli (translations or rotations) that our model can account for must be considered when assessing the perceived timing of vestibular stimulation relative to other senses (Barnett-Cowan and Harris, 2009; Sanders et al., 2011).
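The published model is not reproduced here; purely as an illustration of threshold-crossing RT prediction from sensor dynamics, the sketch below passes an acceleration profile through an assumed first-order high-pass transfer function and reads off the time at which the response exceeds a fixed threshold, plus a constant non-sensory latency. The transfer function, time constant, and threshold are assumptions for illustration, not the parameters of Soyka et al. (2011).

```python
import numpy as np
from scipy.signal import lti, lsim

def predicted_rt(accel, t, tau=5.0, threshold=0.1, constant_ms=300.0):
    """Illustrative threshold-crossing RT prediction: filter the stimulus
    acceleration profile through simplified first-order high-pass sensor
    dynamics H(s) = tau*s / (tau*s + 1), find the first threshold crossing,
    and add a fixed non-sensory constant (ms)."""
    system = lti([tau, 0.0], [tau, 1.0])
    _, response, _ = lsim(system, U=accel, T=t)
    above = np.flatnonzero(np.abs(response) >= threshold)
    if above.size == 0:
        return np.nan                                    # stimulus never detected
    return t[above[0]] * 1000.0 + constant_ms            # RT in ms

# Compare a triangular and a trapezoidal acceleration profile of equal duration
t = np.linspace(0.0, 2.0, 2000)
tri = np.interp(t, [0.0, 1.0, 2.0], [0.0, 1.0, 0.0])               # triangular
trap = np.interp(t, [0.0, 0.4, 1.6, 2.0], [0.0, 1.0, 1.0, 0.0])    # trapezoidal
print(predicted_rt(tri, t), predicted_rt(trap, t))
```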
{"title":"Temporal processing of self-motion: Translations are processed slower than rotations","authors":"F. Soyka, M. Cowan, P. Giordano, H. Bülthoff","doi":"10.1163/187847612X648369","DOIUrl":"https://doi.org/10.1163/187847612X648369","url":null,"abstract":"Reaction times (RTs) to purely inertial self-motion stimuli have only infrequently been studied, and comparisons of RTs for translations and rotations, to our knowledge, are nonexistent. We recently proposed a model (Soyka et al., 2011) which describes direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semi-circular canals). This model also predicts differences in RTs for different motion profiles (e.g., trapezoidal versus triangular acceleration profiles or varying profile durations). In order to assess these predictions we measured RTs in 20 participants for 8 supra-threshold motion profiles (4 translations, 4 rotations). A two-alternative forced-choice task, discriminating leftward from rightward motions, was used and 30 correct responses per condition were evaluated. The results agree with predictions for RT differences between motion profiles as derived from previously identified model parameters from threshold measurements. To describe absolute RT, a constant is added to the predictions representing both the discrimination process, and the time needed to press the response button. This constant is approximately 160 ms shorter for rotations, thus indicating that additional processing time is required for translational motion. As this additional latency cannot be explained by our model based on the dynamics of the sensory organs, we speculate that it originates at a later stage, e.g., during tilt-translation disambiguation. Varying processing latencies for different self-motion stimuli (either translations or rotations) which our model can account for must be considered when assessing the perceived timing of vestibular stimulation in comparison with other senses (Barnett-Cowan and Harris, 2009; Sanders et al., 2011).","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"207-208"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648369","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}