Musical training generalises across modalities and reveals efficient and adaptive mechanisms for judging temporal intervals
Pub Date: 2012-01-01. DOI: 10.1163/187847612X646361
David Aagten-Murphy, G. Cappagli, D. Burr
Expert musicians are able to accurately and consistently time their actions during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises to vision in a ‘ready-set-go’ paradigm. Subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, musicians performed more veridically than non-musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, non-musicians, particularly with visual intervals, consistently exhibited a substantial and systematic regression towards the mean of the interval distribution. When subjects judged intervals from distributions of longer total length they tended to exhibit more regression towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimises reproduction errors by incorporating a central tendency prior weighted by the subject’s own temporal precision relative to the current interval distribution (Cicchini et al., 2012; Jazayeri and Shadlen, 2010). Finally, a strong correlation was observed between the duration of formal musical training and total reproduction error in both modalities (accounting for 30% of the variance). Taken together, these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors.
{"title":"Musical training generalises across modalities and reveals efficient and adaptive mechanisms for judging temporal intervals","authors":"David Aagten-Murphy, G. Cappagli, D. Burr","doi":"10.1163/187847612X646361","DOIUrl":"https://doi.org/10.1163/187847612X646361","url":null,"abstract":"Expert musicians are able to accurately and consistently time their actions during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises to vision in a ‘ready-set-go’ paradigm. Subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall musicians performed more veridically than non-musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However non-musicians, particularly with visual intervals, consistently exhibited a substantial and systematic regression towards the mean of the interval. When subjects judged intervals from distributions of longer total length they tended to exhibit more regression towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model which minimizes reproduction errors by incorporating a central tendency prior weighted by the subject’s own temporal precision relative to the current intervals distribution (Cicchini et al., 2012; Jazayeri and Shadlen, 2010). Finally a strong correlation was observed between all durations of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"13-13"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646361","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64426316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstracts from the 13th International Multisensory Research Forum, University of Oxford, June 19th–22nd 2012
Pub Date: 2012-01-01. DOI: 10.1163/187847612X646235
V. Harrar, G. Meyer, C. Spence
{"title":"Abstracts from the 13th International Multisensory Research Forum, University of Oxford, June 19th–22nd 2012","authors":"V. Harrar, G. Meyer, C. Spence","doi":"10.1163/187847612X646235","DOIUrl":"https://doi.org/10.1163/187847612X646235","url":null,"abstract":"","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646235","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64426503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Observing social stimuli influences detection of subtle somatic sensations differently for pain synaesthetes and controls
Pub Date: 2012-01-01. DOI: 10.1163/187847612X646415
S. Vandenbroucke, G. Crombez, D. V. Ryckeghem, V. Harrar, L. Goubert, C. Spence, Wouter Durnez, S. Damme
Introduction: There is preliminary evidence that viewing touch or pain can modulate the experience of tactile stimulation. The aim of this study was to develop an experimental paradigm to investigate whether the observation of needle pricks to another person’s hand facilitates the detection of subtle somatic sensations. Furthermore, differences between control persons and persons reporting synaesthesia for pain (i.e., experiencing observed pain as if it were their own) were examined. Method: Synaesthetes (n = 15) and controls (n = 20) were presented with a series of videos showing left or right hands being pricked, as well as control videos (e.g., a sponge being pricked), whilst occasionally receiving subtle threshold-level sensations themselves on the hand, either in the same spatial location as the visual stimuli (congruent trials) or in the opposite location (incongruent trials). Participants were asked to detect the sensory stimulus. Signal detection theory was used to test whether sensitivity differed between the two groups and between the two categories of visual stimuli. Results: Overall, perceptual sensitivity (d′) was significantly higher when the visual stimuli involved a painful situation (e.g., a needle pricking another’s hand) than for the control videos, and was significantly lower in synaesthetes than in control participants. When no sensory stimulus was administered, participants reported significantly more illusory sensations when a painful situation was depicted than when a non-painful situation was depicted. Discussion: This study suggests that the detection of somatic sensations can be facilitated or inhibited by observing visual stimuli. Synaesthetes were generally less sensitive, suggesting that they experience more difficulty in disentangling somatic and visual stimuli.
{"title":"Observing social stimuli influences detection of subtle somatic sensations differently for pain synaesthetes and controls","authors":"S. Vandenbroucke, G. Crombez, D. V. Ryckeghem, V. Harrar, L. Goubert, C. Spence, Wouter Durnez, S. Damme","doi":"10.1163/187847612X646415","DOIUrl":"https://doi.org/10.1163/187847612X646415","url":null,"abstract":"Introduction: There is preliminary evidence that viewing touch or pain can modulate the experience of tactile stimulation. The aim of this study was to develop an experimental paradigm to investigate whether the observation of needle pricks to another person’s hand facilitates the detection of subtle somatic sensations. Furthermore, differences between control persons and persons reporting synaesthesia for pain (i.e., experiencing observed pain as if it is their own pain) will be examined. Method: Synaesthetes ( n = 15 ) and controls ( n = 20 ) were presented a series of videos showing left or right hands being pricked and control videos (e.g., a sponge being pricked), whilst receiving occasionally subtle threshold sensations themselves on the hand in the same spatial location (congruent trials) or in the opposite location (incongruent trials) as the visual stimuli. Participants were asked to detect the sensory stimulus. Signal detection theory was used to compare whether sensitivity was different for both groups and both categories of visual stimuli. Results: Overall, perceptual sensitivity (d′) was significantly higher when the visual stimuli involved a painful situation (e.g., needle pricking another’s hand) compared to the control videos, and was significantly lower in synaesthetes compared to control participants. When no sensory stimulus was administered, participants reported significantly more illusory sensations when a painful situation was depicted compared to a non-painful situation. Discussion: This study suggests that the detection of somatic sensations can be facilitated or inhibited by observing visual stimuli. Synaesthetes were generally less sensitive, suggesting that they experience more difficulties in disentangling somatic and visual stimuli.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"162 3 1","pages":"19-19"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646415","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64426636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complexity of sensorimotor transformations alters hand perception
Pub Date: 2012-01-01. DOI: 10.1163/187847612X646505
C. Sutter, Stefan Ladwig, S. Sülzenbrück
When using tools, effects in body space and in distant space often do not correspond, or are even in conflict. The ideomotor principle holds that actors select, initiate and execute movements by activating the anticipatory codes of the movements’ sensory effects (Greenwald, 1970; James, 1890). These may be representations of body-related effects and/or representations of more distal effects. Previous studies have demonstrated that distant action effects dominate action control, while body-related effects are attenuated (e.g., Müsseler and Sutter, 2009). In two experiments, participants performed closed-loop controlled movements on a covered digitizer tablet to control a cursor on a monitor. Different gains perturbed the relation between hand and cursor amplitude, so that the hand amplitude varied while the cursor amplitude remained constant, and vice versa. Within a block, the location of the amplitude perturbation either varied randomly (low predictability) or did not (high predictability). In Experiment 1 the trajectories of hand and cursor followed the same linear path; in Experiment 2 a linear hand trajectory produced a curved cursor trajectory on the monitor. When participants were asked to evaluate their hand movement, they were extremely uncertain about their trajectories. Both the predictability of the amplitude perturbation and the shape of the cursor trajectory modulated awareness of one’s own hand movements. We will discuss whether the low awareness of proximal action effects originates from an insufficient quality of the human tactile and proprioceptive system or from an insufficient spatial reconstruction of this information in memory.
{"title":"Complexity of sensorimotor transformations alters hand perception","authors":"C. Sutter, Stefan Ladwig, S. Sülzenbrück","doi":"10.1163/187847612X646505","DOIUrl":"https://doi.org/10.1163/187847612X646505","url":null,"abstract":"When using tools effects in body space and distant space often do not correspond or are even in conflict. The ideomotor principle holds that actors select, initiate and execute movements by activating the anticipatory codes of the movements’ sensory effects (Greenwald, 1970; James, 1890). These may be representations of body-related effects and/or representations of more distal effects. Previous studies have demonstrated that distant action effects dominate action control, while body-related effects are attenuated (e.g., Musseler and Sutter, 2009). In two experiments, participants performed closed-loop controlled movements on a covered digitizer tablet to control a cursor on a monitor. Different gains perturbed the relation between hand and cursor amplitude, so that the hand amplitude varied and the cursor amplitude remained constant, and vice versa. Within a block the location of amplitude perturbation randomly varied (low predictability) or not (high predictability). In Experiment 1 both trajectories of hand and cursor followed the same linear path, in Experiment 2 a linear hand trajectory produced a curved cursor trajectory on the monitor. When participants were asked to evaluate their hand movement, they were extremely uncertain about their trajectories. Both, predictability of amplitude perturbation and shape of cursor trajectory modulated the awareness of one’s own hand movements. We will discuss whether the low awareness of proximal action effects originates from an insufficient quality of the humans’ tactile and proprioceptive system or from an insufficient spatial reconstruction of this information in memory.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"28-28"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646505","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64426864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developmental processes in audiovisual object recognition and object location
Pub Date: 2012-01-01. DOI: 10.1163/187847612X646604
Maeve M. Barrett, F. Newell
This study investigated whether performance in recognising and locating target objects benefited from the simultaneous presentation of a cross-modal cue. Furthermore, we examined whether these ‘what’ and ‘where’ tasks were affected by developmental processes by testing across different age groups. Using the same set of stimuli, participants performed either an object recognition task or an object location task. For the recognition task, participants were required to respond to two of four target objects (animals) and withhold response to the remaining two objects. For the location task, participants responded when an object occupied either of two target locations and withheld response if the object occupied a different location. Target stimuli were presented by vision alone, audition alone, or bimodally. In both tasks cross-modal cues were either congruent or incongruent. The results revealed that response-time performance in both the object recognition task and the object location task benefited from the presence of a congruent cross-modal cue, relative to incongruent or unisensory conditions. In the younger adult group the effect was strongest for response times, although the same pattern was found for accuracy in the object location task but not in the recognition task. Following recent studies on multisensory integration in children (e.g., Brandwein, 2010; Gori, 2008), we then tested performance in children (8–14 year olds) using the same task. Although overall performance was affected by age, our findings suggest interesting parallels between children and adults in the benefit of congruent cross-modal cues, for both object recognition and location tasks.
{"title":"Developmental processes in audiovisual object recognition and object location","authors":"Maeve M. Barrett, F. Newell","doi":"10.1163/187847612X646604","DOIUrl":"https://doi.org/10.1163/187847612X646604","url":null,"abstract":"This study investigated whether performance in recognising and locating target objects benefited from the simultaneous presentation of a crossmodal cue. Furthermore, we examined whether these ‘what’ and ‘where’ tasks were affected by developmental processes by testing across different age groups. Using the same set of stimuli, participants conducted either an object recognition task, or object location task. For the recognition task, participants were required to respond to two of four target objects (animals) and withhold response to the remaining two objects. For the location task, participants responded when an object occupied either of two target locations and withheld response if the object occupied a different location. Target stimuli were presented either by vision alone, audition alone, or bimodally. In both tasks cross-modal cues were either congruent or incongruent. The results revealed that response time performance in both the object recognition task and in the object location task benefited from the presence of a congruent cross-modal cue, relative to incongruent or unisensory conditions. In the younger adult group, the effect was strongest for response times although the same pattern was found for accuracy in the object location task but not for the recognition task. Following recent studies on multisensory integration in children (e.g., Brandwein, 2010; Gori, 2008), we then tested performance in children (i.e., 8–14 year olds) using the same task. Although overall performance was affected by age, our findings suggest interesting parallels in the benefit of congruent, cross-modal cues between children and adults, for both object recognition and location tasks.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"38-38"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646604","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Garner’s paradigm and audiovisual correspondence in dynamic stimuli: Pitch and vertical direction
Pub Date: 2012-01-01. DOI: 10.1163/187847612X646910
Z. Eitan, L. Marks
Garner’s speeded discrimination paradigm is a central tool in studying crossmodal interaction, revealing automatic perceptual correspondences between dimensions in different modalities. To date, however, the paradigm has been used solely with static, unchanging stimuli, limiting its ecological validity. Here, we use Garner’s paradigm to examine interactions between dynamic (time-varying) audiovisual dimensions — pitch direction and vertical visual motion. In Experiment 1, 32 participants rapidly discriminated ascending vs. descending pitch glides, ignoring concurrent visual motion (auditory task), and ascending vs. descending visual motion, ignoring pitch change (visual task). Results in both tasks revealed strong congruence effects, but no Garner interference, an unusual pattern inconsistent with some interpretations of Garner interference. To examine whether this pattern of results is specific to dynamic stimuli, Experiment 2 (testing another 64 participants) used a modified Garner design with two baseline conditions: The irrelevant stimuli were dynamic in one baseline and static in the other, the test stimuli always being dynamic. The results showed significant Garner interference relative to the static baseline (for both the auditory and visual tasks), but not relative to the dynamic baseline. Congruence effects were evident throughout. We suggest that dynamic stimuli reduce attention to and memory of between-trial variation, thereby reducing Garner interference. Because congruence effects depend primarily on within-trial relations, however, congruence effects are unaffected. Results indicate how a classic tool such as Garner’s paradigm, used productively to examine dimensional interactions between static stimuli, may be readily adapted to probe the radically different behavior of dynamic, time-varying multisensory stimuli.
{"title":"Garner’s paradigm and audiovisual correspondence in dynamic stimuli: Pitch and vertical direction","authors":"Z. Eitan, L. Marks","doi":"10.1163/187847612X646910","DOIUrl":"https://doi.org/10.1163/187847612X646910","url":null,"abstract":"Garner’s speeded discrimination paradigm is a central tool in studying crossmodal interaction, revealing automatic perceptual correspondences between dimensions in different modalities. To date, however, the paradigm has been used solely with static, unchanging stimuli, limiting its ecological validity. Here, we use Garner’s paradigm to examine interactions between dynamic (time-varying) audiovisual dimensions — pitch direction and vertical visual motion. In Experiment 1, 32 participants rapidly discriminated ascending vs. descending pitch glides, ignoring concurrent visual motion (auditory task), and ascending vs. descending visual motion, ignoring pitch change (visual task). Results in both tasks revealed strong congruence effects, but no Garner interference, an unusual pattern inconsistent with some interpretations of Garner interference. To examine whether this pattern of results is specific to dynamic stimuli, Experiment 2 (testing another 64 participants) used a modified Garner design with two baseline conditions: The irrelevant stimuli were dynamic in one baseline and static in the other, the test stimuli always being dynamic. The results showed significant Garner interference relative to the static baseline (for both the auditory and visual tasks), but not relative to the dynamic baseline. Congruence effects were evident throughout. We suggest that dynamic stimuli reduce attention to and memory of between-trial variation, thereby reducing Garner interference. Because congruence effects depend primarily on within-trial relations, however, congruence effects are unaffected. Results indicate how a classic tool such as Garner’s paradigm, used productively to examine dimensional interactions between static stimuli, may be readily adapted to probe the radically different behavior of dynamic, time-varying multisensory stimuli.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"70-70"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646910","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved tactile acuity following perceptual learning generalises to untrained fingers
Pub Date: 2012-01-01. DOI: 10.1163/187847612X647027
V. Harrar, C. Spence, T. Makin
The body is represented in a somatotopic framework such that adjacent body parts are represented next to each other in the brain. We utilised the organisation of the somatosensory cortex to study the generalisation pattern of tactile perceptual learning. Perceptual learning refers to the process of long-lasting improvement in the performance of a perceptual task following persistent sensory exposure. In order to test whether perceptual learning generalises to neighbouring brain/body areas, 12 participants were trained on a tactile discrimination task on one fingertip (using oriented tactile gratings) over the course of four days. Thresholds for tactile acuity were estimated prior to, and following, the training for the ‘trained’ finger and three additional fingers: ‘adjacent’, ‘homologous’ (the same finger as trained but on the opposite hand) and ‘other’ (which was neither adjacent nor homologous to the trained finger). Identical threshold estimation, without training, was also carried out for a control group. Following training, tactile thresholds were improved (as compared to the control group). Importantly, the improved performance was not exclusive to the trained finger; it generalised to the adjacent and homologous fingers, but not to the other finger. We found that perceptual learning indeed generalises in a way that can be predicted by the topography of the somatosensory cortex, suggesting that direct sensory experience is not necessary for perceptual learning. These findings may be translated into rehabilitation procedures that train the partially-deprived cortex using similar principles of perceptual learning generalisation, such as following amputation or blindness in adults.
{"title":"Improved tactile acuity following perceptual learning generalises to untrained fingers","authors":"V. Harrar, C. Spence, T. Makin","doi":"10.1163/187847612X647027","DOIUrl":"https://doi.org/10.1163/187847612X647027","url":null,"abstract":"The body is represented in a somatotopic framework such that adjacent body parts are represented next to each other in the brain. We utilised the organisation of the somatosensory cortex to study the generalisation pattern of tactile perceptual learning. Perceptual learning refers to the process of long-lasting improvement in the performance of a perceptual task following persistent sensory exposure. In order to test if perceptual learning generalises to neighbouring brain/body areas, 12 participants were trained on a tactile discrimination task on one fingertip (using tactile oriented gratings) over the course of four days. Thresholds for tactile acuity were estimated prior to, and following, the training for the ‘trained’ finger and three additional fingers: ‘adjacent’, ‘homologous’ (the same finger as trained but on the opposite hand) and ‘other’ (which was neither adjacent nor homologous to the trained finger). Identical threshold estimating with no training was also carried out for a control group. Following training, tactile thresholds were improved (as compared to the control group). Importantly, improved performance was not exclusive for the trained finger; it generalised to the adjacent and homologous fingers, but not the other finger. We found that perceptual learning indeed generalises in a way that can be predicted by the topography of the somatosensory cortex, suggesting that sensory experience is not necessary for perceptual learning. These findings may be translated to rehabilitation procedures that train the partially-deprived cortex using similar principles of perceptual learning generalisation, such as following amputation or blindness in adults.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"69 1","pages":"82-82"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647027","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human sounds facilitates conscious processing of emotional faces
Pub Date: 2012-01-01. DOI: 10.1163/187847612X647513
Bernard M. C. Stienen, F. Newell
The interaction of audio–visual signals transferring information about the emotional state of others may play a significant role in social engagement. There is ample evidence that recognition of visual emotional information does not necessarily depend on conscious processing. However, little is known about how multisensory integration of affective signals relates to visual awareness. Previous research using masking experiments has shown that audio–visual integration is relatively independent of visual awareness. However, masking does not capture the dynamic nature of consciousness, in which dynamic stimulus selection depends on a multitude of signals. We therefore presented neutral and happy faces to one eye and houses to the other, resulting in perceptual rivalry between the two stimuli, while at the same time presenting laughing, coughing or no sound. Participants were asked to report when they saw the faces, the houses or a mixture of the two, and were instructed to ignore the playback of the sounds. When happy facial expressions were shown, participants reported seeing fewer houses than when neutral expressions were shown. In addition, human sounds increased the viewing time of faces compared to when there was no sound. Taken together, emotional expressions affect whether a face is selected for visual awareness, and this selection is facilitated by human sounds.
{"title":"Human sounds facilitates conscious processing of emotional faces","authors":"Bernard M. C. Stienen, F. Newell","doi":"10.1163/187847612X647513","DOIUrl":"https://doi.org/10.1163/187847612X647513","url":null,"abstract":"The interaction of audio–visual signals transferring information about the emotional state of others may play a significant role in social engagement. There is ample evidence that recognition of visual emotional information does not necessarily depend on conscious processing. However, little is known about how multisensory integration of affective signals relates to visual awareness. Previous research using masking experiments has shown relative independence of audio–visual integration on visual awareness. However, masking does not capture the dynamic nature of consciousness in which dynamic stimulus selection depends on a multitude of signals. Therefore, we presented neutral and happy faces in one eye and houses in the other resulting in perceptual rivalry between the two stimuli while at the same time we presented laughing, coughing or no sound. The participants were asked to report when they saw the faces, houses or their mixtures and were instructed to ignore the playback of sounds. When happy facial expressions were shown participants reported seeing fewer houses in comparison to when neutral expressions were shown. In addition, human sounds increase the viewing time of faces in comparison when there was no sound. Taken together, emotional expressions of the face affect which face is selected for visual awareness and at the same time, this is facilitated by human sounds.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"118-118"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647513","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TMS entrainment of pre-stimulus oscillatory activity in tactile perception
Pub Date: 2012-01-01. DOI: 10.1163/187847612X647838
M. Ruzzoli, S. Soto-Faraco
It is widely recognized that oscillatory activity plays an important functional role in neural systems. Decreases in alpha (∼10 Hz) EEG/MEG activity in the parietal cortex correlate with the deployment of spatial attention contralateral to the target location in the visual, auditory and tactile domains. Recently, repetitive transcranial magnetic stimulation (rTMS) has been successfully applied to entrain a specific frequency in the parietal cortex (IPS) and the visual cortex. A short burst of 10 Hz rTMS impaired contralateral visual target detection and improved it ipsilaterally, compared to other control frequencies. This finding suggests a causal role of rhythmic activity in the alpha range in perception. The aim of the present study is to address whether entraining alpha frequency in the IPS plays a role in tactile orienting, indicating similarities between the senses (vision and touch) in the communication between top-down (parietal) and primary sensory areas (V1 or S1). We applied rhythmic TMS at 10 and 20 Hz to the (right or left) IPS and S1, immediately before a masked vibrotactile target stimulus (present in 50% of the trials) delivered to the left or right hand. Preliminary results suggest that entraining alpha frequency in the IPS affects tactile detection, decreasing tactile perception contralaterally and increasing it ipsilaterally, compared to beta-frequency stimulation.
{"title":"TMS entrainment of pre-stimulus oscillatory activity in tactile perception","authors":"M. Ruzzoli, S. Soto-Faraco","doi":"10.1163/187847612X647838","DOIUrl":"https://doi.org/10.1163/187847612X647838","url":null,"abstract":"It is widely recognized that oscillatory activity plays an important functional role in neural systems. Decreases in alpha (∼10 Hz) EEG/MEG activity in the parietal cortex correlate with the deployment of spatial attention controlateral to target location in visual, auditory and tactile domains. Recently, repetitive Transcranial Magnetic Stimulation (rTMS) has been successfully applied to entrain a specific frequency at the parietal cortex (IPS) and the visual cortex. A short burst of 10 Hz rTMS impaired contralateral visual target detection and improved it ipsilaterally, compared to other control frequencies. This finding suggests a causal role of rhythmic activity in the alfa range in perception. The aim of the present study is to address whether entraining alpha frequency in the IPS plays a role in tactile orienting, indicating similarities between senses (vision and touch) in the communication between top-down (parietal) and primary sensory areas (V1 or S1). We applied rhythmic TMS at 10 and 20 Hz to the (right or left) IPS and S1, immediately before a masked vibrotactile target stimulus (present in 50% of the trials) to the left or right hand. Preliminary results lean towards the consequential effects of entraining alpha frequency into IPS for tactile detection such that it decreases tactile perception contralaterally and increases it ipsilaterally, compared to Beta frequency.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"152-152"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647838","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two-point touch discrimination depends on the perceived length of the arm
Pub Date: 2012-01-01. DOI: 10.1163/187847612X647595
L. Harris, Sarah D’Amour, Lisa M. Pritchett
Two-point discrimination threshold depends on the number and size of receptive fields between the touches. But what determines the size of the receptive fields? Are they anatomically fixed? Or are they related to perceived body size? To answer this question we manipulated perceived arm length using the Pinocchio illusion. The test arm was held at the wrist, and the holding arm was made to feel perceptually more extended than it was by applying vibration to the tendon of the biceps (cf. de Vignemont et al., 2005). For control trials the holding arm was vibrated elsewhere. An array of tactors, separated by 3 cm, was placed on the upper surface of the arm and covered with a cloth. Vibro-tactile stimulation was applied in two periods, with one tactor stimulated in one period and two tactors in the other; subjects identified which period contained two stimuli. A psychometric function was fitted to the probability of correct response as a function of tactor separation to determine the threshold distance. In a separate experiment, subjects estimated the perceived location of each tactor against a scale laid on top of the cloth. The estimated locations of the tactors on the tested arm were displaced by tendon vibration of the holding arm, consistent with a perceptual lengthening of the arm. The threshold for two-touch discrimination was significantly increased from 4.5 (±0.6) cm with no tendon stimulation to 5.7 (±0.5) cm when the arm was perceptually extended. We conclude that two-point touch discrimination depends on the size of central receptive fields, which become larger when the arm is perceptually lengthened.
{"title":"Two-point touch discrimination depends on the perceived length of the arm","authors":"L. Harris, Sarah D’Amour, Lisa M. Pritchett","doi":"10.1163/187847612X647595","DOIUrl":"https://doi.org/10.1163/187847612X647595","url":null,"abstract":"Two-point discrimination threshold depends on the number and size of receptive fields between the touches. But what determines the size of the receptive fields? Are they anatomically fixed? Or are they related to perceived body size? To answer this question we manipulated perceived arm length using the Pinocchio illusion. The test arm was held at the wrist and the holding arm was made to feel perceptually more extended than it was by applying vibration to the tendon of the biceps (cf. de Vignemont et al., 2005). For control trials the holding arm was vibrated elsewhere. An array of tactors, separated by 3 cm, was placed on the upper surface of the arm and covered with a cloth. Vibro-tactile stimulation was applied to either one or two tactors in two periods. Subjects identified which period contained two stimuli. A psychometric function was drawn through the probability of correct response as a function of tactor separation to determine the threshold distance. In a separate experiment, subjects estimated the perceived location of each tactor against a scale laid on top of the cloth. The estimated locations of the tactors on the tested arm were displaced by tendon vibration of the holding arm compatible with a perceptual lengthening of the arm. The threshold for two-touch discrimination was significantly increased from 4.5 (±0.6) cm with no tendon stimulation to 5.7 (±0.5) cm when the arm was perceptually extended. We conclude that two-point touch discrimination depends on the size of central receptive fields that become larger when the arm is perceptually lengthened.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"126-126"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647595","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}