Call Me Maybe: Effects of Notification Modality on Visual Sustained Attention
Pub Date: 2025-04-29 | DOI: 10.1163/22134808-bja10147 | Multisensory Research, pp. 61-75
Kathryn Nason, Jonathan Wilbiks
Smartphone use has been examined in a variety of contexts, including its influence on sustained attention. Most importantly, notifications received while completing the Sustained Attention to Response Task (SART) have led to deficits in sustained attention performance. The present study re-examined this phenomenon by differentiating auditory and visual notifications in order to examine their individual influence. It was hypothesized that trials on which notifications were received would result in slower reaction times for both notification types. Data were collected using the SART in both the fixed and random conditions. Visual pop-up notifications were sent on half of the trials, while auditory cues were sent on the other half. Results were in accordance with previous findings, demonstrating an overall effect of notifications on sustained attention performance. Furthermore, visual notifications led to more errors than auditory notifications.
{"title":"Call Me Maybe: Effects of Notification Modality on Visual Sustained Attention.","authors":"Kathryn Nason, Jonathan Wilbiks","doi":"10.1163/22134808-bja10147","DOIUrl":"10.1163/22134808-bja10147","url":null,"abstract":"<p><p>Smartphone use has been examined in a variety of contexts, including their influence on sustained attention. Most importantly, notifications received while completing the Sustained Attention to Response Task (SART) have led to deficits in sustained attention performance. The present study re-examined this phenomenon by differentiating audio and visual notifications, to examine their individual influence. It was hypothesized that trials that notifications were received would result in slower reaction times across both notification types. Data were collected using the SART in both the fixed and random conditions. Visual pop-up notifications were sent for half the trials, while auditory cues were sent for the other half. Results were in accordance with previous findings, demonstrating an overall effect on sustained attention performance. Furthermore, visual notifications led to more errors than the auditory condition.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"61-75"},"PeriodicalIF":1.8,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grasping New Material Densities
Pub Date: 2025-04-23 | DOI: 10.1163/22134808-bja10146 | Multisensory Research, pp. 1-30
Wendy J Adams, Sina Mehraeen, Marc O Ernst
When picking up objects, we prefer stable grips with minimal torque by seeking grasp points that straddle the object's centre of mass (CoM). For homogeneous objects, the CoM is at the geometric centre (GC), computable from shape cues. However, everyday objects often include components of different materials and densities. In this case, the CoM depends on the object's geometry and the components' densities. We asked how participants estimate the CoM of novel, two-part objects. Across four experiments, participants used a precision grip to lift cylindrical objects composed of steel and PVC in varying proportions (steel three times denser than PVC). In all experiments, initial grasps were close to the objects' GCs; neither everyday experience (metals are denser than PVC) nor pre-exposure to the stimulus materials in isolation moved first grasps away from the GC. Within a few trials, however, grasps shifted towards the CoM, reducing but not eliminating torque. Learning transferred across the stimulus set, that is, observers learnt the materials' densities (or their ratio) rather than learning each object's CoM. In addition, there was a stable 'under-reaching' bias towards the grasping hand. An 'inverted density' stimulus set (PVC 3 × denser than steel) induced similarly fast learning, confirming that prior knowledge of materials has little effect on grasp point selection. When stimulus sets were covertly switched during an experiment, the unexpected force feedback caused even faster grasp adaptation. Torque minimisation is a strong driver of grasp point adaptation, but there is a surprising lack of transfer following pre-exposure to relevant materials.
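To make concrete how the CoM of a two-part object depends on its geometry and the components' densities, here is a minimal illustrative calculation. The segment lengths, units, and the assumption of a uniform cross-section are hypothetical and not taken from the study; only the 3:1 steel-to-PVC density ratio comes from the abstract.

```python
def com_two_part_rod(length_a, length_b, density_a, density_b):
    """Centre of mass (measured from the free end of part A) of a rod made of
    two homogeneous segments joined end to end, assuming a uniform cross-section."""
    mass_a = density_a * length_a
    mass_b = density_b * length_b
    com_a = length_a / 2.0                # centre of segment A
    com_b = length_a + length_b / 2.0     # centre of segment B
    return (mass_a * com_a + mass_b * com_b) / (mass_a + mass_b)

# Example: a 20 cm rod, half steel and half PVC, with steel ~3x denser than PVC.
# The geometric centre is at 10 cm, but the CoM shifts towards the steel half.
print(com_two_part_rod(10, 10, 3.0, 1.0))  # -> 7.5 cm from the steel end
```

In this hypothetical example the CoM sits 2.5 cm from the GC, towards the denser steel end; grasping at the GC of such an object therefore produces a residual torque.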
{"title":"Grasping New Material Densities.","authors":"Wendy J Adams, Sina Mehraeen, Marc O Ernst","doi":"10.1163/22134808-bja10146","DOIUrl":"10.1163/22134808-bja10146","url":null,"abstract":"<p><p>When picking up objects, we prefer stable grips with minimal torque by seeking grasp points that straddle the object's centre of mass (CoM). For homogeneous objects, the CoM is at the geometric centre (GC), computable from shape cues. However, everyday objects often include components of different materials and densities. In this case, the CoM depends on the object's geometry and the components' densities. We asked how participants estimate the CoM of novel, two-part objects. Across four experiments, participants used a precision grip to lift cylindrical objects comprised of steel and PVC in varying proportions (steel three times denser than PVC). In all experiments, initial grasps were close to objects' GCs; neither every-day experience (metals are denser than PVC) nor pre-exposure to the stimulus materials in isolation moved first grasps away from the GC. Within a few trials, however, grasps shifted towards the CoM, reducing but not eliminating torque. Learning transferred across the stimulus set, that is, observers learnt the materials' densities (or their ratio) rather than learning each object's CoM. In addition, there was a stable 'under-reaching' bias towards the grasping hand. An 'inverted density' stimulus set (PVC 3 × denser than steel) induced similarly fast learning, confirming that prior knowledge of materials has little effect on grasp point selection. When stimulus sets were covertly switched during an experiment, the unexpected force feedback caused even faster grasp adaptation. Torque minimisation is a strong driver of grasp point adaptation, but there is a surprising lack of transfer following pre-exposure to relevant materials.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-30"},"PeriodicalIF":1.8,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Role of Taste-Shape Correspondences and Semantic Congruence in Product Preference and Taste Expectations
Pub Date: 2025-03-24 | DOI: 10.1163/22134808-bja10144 | Multisensory Research, pp. 27-59
Erick G Chuquichambi, Nina Veflen, Enric Munar, Carlos Velasco
People infer the taste of products based on semantic knowledge (e.g., associations with the category and brand elements). They also link shape features with certain taste qualities through inherent associations commonly referred to as crossmodal correspondences. This research examined how shape features influence the evaluation of familiar and unfamiliar products, which entail varying levels of semantic knowledge. Participants evaluated expected taste, familiarity, liking, and willingness to purchase for products with curved and angular logos presented with sweet, bitter, and neutral characteristics, as well as for unfamiliar products. The results of Experiment 1 indicated that the curved logos were preferred and associated with greater sweetness, while the angular logos were less preferred and associated with bitterness. However, in Experiment 2, these differences disappeared when the logos were presented with packages of familiar (sweet, bitter, and neutral) and unfamiliar products. In Experiment 3, the expected tastes for the logos were more pronounced when they were framed as representing new or unfamiliar products than when they were framed as representing sweet and bitter familiar products. The difference in expected sweetness between curved and angular logos was greater for new or unfamiliar products than for familiar sweet products, and the same pattern was found for expected bitterness. Together, these results suggest that feature-based expectations of taste are absent or less pronounced when semantic knowledge about the products is greater.
{"title":"The Role of Taste-Shape Correspondences and Semantic Congruence in Product Preference and Taste Expectations.","authors":"Erick G Chuquichambi, Nina Veflen, Enric Munar, Carlos Velasco","doi":"10.1163/22134808-bja10144","DOIUrl":"10.1163/22134808-bja10144","url":null,"abstract":"<p><p>People infer the taste of products based on semantic knowledge (e.g., associations with the category and brand elements). They also link shape features with certain taste qualities through inherent associations commonly referred to as crossmodal correspondences. This research examined how shape features influence the evaluation of familiar and unfamiliar products, and thus varying levels of semantic knowledge. Participants evaluated the expected taste, familiarity, liking, and willingness to purchase products with curved and angular logos presented with sweet, bitter, and neutral characteristics, as well as unfamiliar products. The results of Experiment 1 indicated that the curved logos were preferred and associated with greater sweetness, while the angular logos were less preferred and associated with bitterness. However, in Experiment 2, these differences disappeared when the logos were presented with packages of familiar (sweet, bitter, and neutral) and unfamiliar products. In Experiment 3, the expected tastes for the logos were more pronounced when they were framed as representing new or unfamiliar products than sweet and bitter familiar products. The difference in expected sweetness between curved and angular logos was greater for new or unfamiliar products than for familiar sweet products, and the same pattern was found for expected bitterness. Together, these results suggest that feature-based expectations of taste are absent or less pronounced when semantic knowledge about the products is greater.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"27-59"},"PeriodicalIF":1.8,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Historical Note on Multisensory and Motor Facilitation and its Dependence on Brain Excitability Deficit
Pub Date: 2025-03-13 | DOI: 10.1163/22134808-bja10143 | Multisensory Research, pp. 181-197
Isabel Gonzalo-Fonrodona
In the context of the great boom in research on multisensory processes initiated with the publication of The Merging of the Senses by Stein and Meredith (1993), and the great achievements since then, we note here the recent posthumous publication of Justo Gonzalo, which is the first English translation of his original publications. He described multisensory phenomena at a functional and macroscopic physiological level in patients with unilateral parieto-occipital cortical lesions in an associative area equidistant from the visual, tactile and auditory areas. The disorder is a multisensory and bilateral alteration called 'central syndrome'. Here we focus on some aspects related to the facilitation effect, i.e., the improvement in the perception of a test stimulus with the help of another stimulus. The greater the lesion and the lower the intensity of the test stimulus, the greater the facilitation effect. One of the most effective facilitating stimuli in these patients was found to come from the motor system, such as muscular effort. The gradation observed between different cortical syndromes led Gonzalo to introduce the concept of functional cortical gradients, whose superposition would result in multisensory zones. The fact that functional behaviour in the central syndrome is considered similar to that of a normal individual, but on a reduced scale of excitability, allows scaling concepts to be applied and some generalisations to be made.
{"title":"Historical Note on Multisensory and Motor Facilitation and its Dependence on Brain Excitability Deficit.","authors":"Isabel Gonzalo-Fonrodona","doi":"10.1163/22134808-bja10143","DOIUrl":"10.1163/22134808-bja10143","url":null,"abstract":"<p><p>In the context of the great boom in research on multisensory processes initiated with the publication of The Merging of the Senses by Stein and Meredith (1993), and the great achievements since then, we note here the recent posthumous publication of Justo Gonzalo, which is the first English translation of his original publications. He described multisensory phenomena at a functional and macroscopic physiological level in patients with unilateral parieto-occipital cortical lesions in an associative area equidistant from the visual, tactile and auditory areas. The disorder is a multisensory and bilateral alteration called 'central syndrome'. Here we focus on some aspects related to the facilitation effect, i.e., the improvement in the perception of a test stimulus with the help of another stimulus. The greater the lesion and the lower the intensity of the test stimulus, the greater the facilitation effect. One of the most effective facilitating stimuli in these patients was found to come from the motor system, such as muscular effort. The gradation observed between different cortical syndromes led Gonzalo to introduce the concept of functional cortical gradients, whose superposition would result in multisensory zones. The fact that functional behaviour in the central syndrome is considered similar to that of a normal individual, but on a reduced scale of excitability, allows scaling concepts to be applied and some generalisations to be made.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"181-197"},"PeriodicalIF":1.5,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Going Beyond the Ordinary - User Perceptions of the Impact of Multisensory Elements on Presence in Virtual Reality at the Royal Opera House
Pub Date: 2025-02-27 | DOI: 10.1163/22134808-bja10141 | Multisensory Research, pp. 1-25
Andy T Woods, Marusa Levstek, Jamie Moffatt, Mark Lycett, Laryssa Whittaker, Polly Dalton
This exploratory study investigates the relative impacts of incorporating additional sensory- and embodiment-enhancing elements into virtual reality (VR) experiences beyond standard headset features, including vibrating floors, blowing wind, accurately rendered hands, free-roam walking and seeing avatars of real people; such a combination is sometimes called a hyper-reality experience. After taking part in the Current Rising immersive experience at the Royal Opera House, 726 participants completed a survey examining how much impact each of the additional elements was perceived to have on presence. Blowing wind and free-roam walking were thought to be most impactful on presence, followed by floor vibration (contrary to expectations), along with seeing avatars. Conversely, virtual hands were thought to exhibit the least influence, despite being rendered with greater detail and precision than those commonly found in standard VR applications. Past VR experience only minimally affected these reported impacts, suggesting that hyper-reality experiences introduce novel elements even to experienced users. By looking at the perceived impact on presence of a rich, holistic range of factors (multisensory elements, virtual bodies, prior experience and enjoyment) in a real-world cultural experience, these findings offer practical guidance for immersive experience designers and researchers seeking to optimise presence. Future research should explore more nuanced assessments of presence and consider non-correlational experimental designs that mitigate the potential biases and confounding factors highlighted here.
{"title":"Going Beyond the Ordinary - User Perceptions of the Impact of Multisensory Elements on Presence in Virtual Reality at the Royal Opera House.","authors":"Andy T Woods, Marusa Levstek, Jamie Moffatt, Mark Lycett, Laryssa Whittaker, Polly Dalton","doi":"10.1163/22134808-bja10141","DOIUrl":"10.1163/22134808-bja10141","url":null,"abstract":"<p><p>This exploratory study investigates the relative impacts of incorporating additional sensory- and embodiment-enhancing elements into virtual reality (VR) experiences beyond standard headset features, including vibrating floors, blowing wind, accurately rendered hands, free-roam walking and seeing avatars of real people; the outcome is sometimes called a hyper-reality experience. After taking part in the Current Rising immersive experience at the Royal Opera House, 726 participants completed a survey examining the different perceived impacts the various additional elements were thought to have on presence. Blowing wind and free-roam walking were thought to be most impactful on presence, followed by floor vibration (contrary to expectations), along with seeing avatars. Conversely, virtual hands were thought to exhibit the least influence, despite being rendered with greater detail and precision than those commonly found in standard VR applications. Past VR experience only minimally affected these reported impacts, suggesting that hyper-reality experiences introduce novel elements even to experienced users. By looking at the perceived impact on presence over a rich, holistic range of factors (multisensory elements, virtual bodies, prior experience and enjoyment) in a real-world cultural experience, these findings offer practical guidance for immersive experience designers and researchers to optimise presence. Future research should explore more nuanced assessments of presence and consider non-correlational experimental designs that mitigate various highlighted potential biases and confounding factors.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-25"},"PeriodicalIF":1.8,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interpersonal Distance Preferences in Deaf Signers and Hearing Individuals
Pub Date: 2025-02-27 | DOI: 10.1163/22134808-bja10142 | Multisensory Research, pp. 123-135
Maria Arioli, Francesco Ruotolo, Gennaro Ruggiero, Michela Candini, Tina Iachini, Zaira Cattaneo
Interpersonal distance plays a critical role in communication and social interactions. Here we investigated whether deaf individuals who use sign language differ from hearing (non-signer) individuals in their preferred interpersonal distance. Specifically, we asked a group of deaf participants (all signers) and a control group of hearing participants to report their preferred social distance from a stranger using a computer-presented stop-distance paradigm. The results show that deaf participants prefer larger interpersonal distances than hearing individuals. We suggest that this preference for a larger interpersonal distance in deaf participants may relate to factors such as optimization of distance for sign language communication or the feelings of social exclusion that are often reported by individuals with hearing loss. Our experiment reports preliminary data that may pave the way for further research on proxemic behaviour in conditions of hearing loss.
{"title":"Interpersonal Distance Preferences in Deaf Signers and Hearing Individuals.","authors":"Maria Arioli, Francesco Ruotolo, Gennaro Ruggiero, Michela Candini, Tina Iachini, Zaira Cattaneo","doi":"10.1163/22134808-bja10142","DOIUrl":"10.1163/22134808-bja10142","url":null,"abstract":"<p><p>Interpersonal distance plays a critical role in communication and social interactions. Here we investigated whether deaf individuals that use sign language differ from hearing (non-signer) individuals in their preferred interpersonal distance. Specifically, we asked a group of deaf participants (all signers) and control hearing participants to report their preferred social distance from a stranger using a computer-presented stop-distance paradigm. Results show that deaf participants prefer larger interpersonal distances than hearing individuals. We suggest that preference for a larger interpersonal distance in deaf participants may relate to different factors such as optimization of distance for sign language communication or the feelings of social exclusion that are often reported in individuals with hearing loss. Our experiment reports preliminary data that may pave the way for further research on proxemic behaviour in conditions of hearing loss.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"123-135"},"PeriodicalIF":1.8,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research Priorities for Autonomous Sensory Meridian Response: An Interdisciplinary Delphi Study
Pub Date: 2024-11-28 | DOI: 10.1163/22134808-bja10136 | Multisensory Research 37(6-8), pp. 499-528
Thomas J Hostler, Giulia L Poerio, Clau Nader, Safiyya Mank, Andrew C Lin, Mario Villena-González, Nate Plutzik, Nitin K Ahuja, Daniel H Baker, Scott Bannister, Emma L Barratt, Stacey A Bedwell, Pierre-Edouard Billot, Emma Blakey, Flavia Cardini, Daniella K Cash, Nick J Davis, Bleiz M Del Sette, Mercede Erfanian, Josephine R Flockton, Beverley Fredborg, Helge Gillmeister, Emma Gray, Sarah M Haigh, Laura L Heisick, Agnieszka Janik McErlean, Helle Breth Klausen, Hirohito M Kondo, Franzisca Maas, L Taylor Maurand, Lawrie S McKay, Marco Mozzoni, Gabriele Navyte, Jessica A Ortega-Balderas, Emma C Palmer-Cooper, Craig A H Richard, Natalie Roberts, Vincenzo Romei, Felix Schoeller, Steven D Shaw, Julia Simner, Stephen D Smith, Eva Specker, Angelica Succi, Niilo V Valtakari, Jennie Weinheimer, Jasper Zehetgrube
Autonomous Sensory Meridian Response (ASMR) is a multisensory experience most often associated with feelings of relaxation and altered consciousness, elicited by stimuli which include whispering, repetitive movements, and close personal attention. Since 2015, ASMR research has grown rapidly, spanning disciplines from neuroscience to media studies but lacking a collaborative or interdisciplinary approach. To build a cohesive and connected structure for ASMR research moving forwards, a modified Delphi study was conducted with ASMR experts, practitioners, community members, and researchers from various disciplines. Ninety-eight participants provided 451 suggestions for ASMR research priorities which were condensed into 13 key areas: (1) Definition, conceptual clarification, and measurement of ASMR; (2) Origins and development of ASMR; (3) Neurophysiology of ASMR; (4) Understanding ASMR triggers; (5) Factors affecting the likelihood of experiencing/eliciting ASMR; (6) ASMR and individual/cultural differences; (7) ASMR and the senses; (8) ASMR and social intimacy; (9) Positive and negative consequences of ASMR in the general population; (10) Therapeutic applications of ASMR in clinical contexts; (11) Effects of long-term ASMR use; (12) ASMR platforms and technology; (13) ASMR community, culture, and practice. These were voted on by 70% of the initial participant pool using best/worst scaling methods. The resulting agenda provides a clear map for ASMR research to enable new and existing researchers to orient themselves towards important questions for the field and to inspire interdisciplinary collaborations.
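As background on the best/worst scaling step, a common and simple way to score such data is to count how often each item is chosen as most important minus how often it is chosen as least important. The abstract does not specify the authors' exact scoring model, so the sketch below, including its data format and item names, is purely illustrative.

```python
from collections import Counter

def best_worst_scores(choices):
    """Simple best-minus-worst count scores for best/worst scaling data.
    `choices` is a list of (best_item, worst_item) pairs, one per judgement.
    This is the basic counting approach, not necessarily the scoring model
    used in the study."""
    best = Counter(b for b, _ in choices)
    worst = Counter(w for _, w in choices)
    items = set(best) | set(worst)
    return {item: best[item] - worst[item] for item in items}

# Example with three hypothetical research-priority areas:
votes = [("Definition", "Technology"), ("Triggers", "Technology"), ("Definition", "Triggers")]
print(best_worst_scores(votes))  # {'Definition': 2, 'Triggers': 0, 'Technology': -2}
```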
{"title":"Research Priorities for Autonomous Sensory Meridian Response: An Interdisciplinary Delphi Study.","authors":"Thomas J Hostler, Giulia L Poerio, Clau Nader, Safiyya Mank, Andrew C Lin, Mario Villena-González, Nate Plutzik, Nitin K Ahuja, Daniel H Baker, Scott Bannister, Emma L Barratt, Stacey A Bedwell, Pierre-Edouard Billot, Emma Blakey, Flavia Cardini, Daniella K Cash, Nick J Davis, Bleiz M Del Sette, Mercede Erfanian, Josephine R Flockton, Beverley Fredborg, Helge Gillmeister, Emma Gray, Sarah M Haigh, Laura L Heisick, Agnieszka Janik McErlean, Helle Breth Klausen, Hirohito M Kondo, Franzisca Maas, L Taylor Maurand, Lawrie S McKay, Marco Mozzoni, Gabriele Navyte, Jessica A Ortega-Balderas, Emma C Palmer-Cooper, Craig A H Richard, Natalie Roberts, Vincenzo Romei, Felix Schoeller, Steven D Shaw, Julia Simner, Stephen D Smith, Eva Specker, Angelica Succi, Niilo V Valtakari, Jennie Weinheimer, Jasper Zehetgrube","doi":"10.1163/22134808-bja10136","DOIUrl":"10.1163/22134808-bja10136","url":null,"abstract":"<p><p>Autonomous Sensory Meridian Response (ASMR) is a multisensory experience most often associated with feelings of relaxation and altered consciousness, elicited by stimuli which include whispering, repetitive movements, and close personal attention. Since 2015, ASMR research has grown rapidly, spanning disciplines from neuroscience to media studies but lacking a collaborative or interdisciplinary approach. To build a cohesive and connected structure for ASMR research moving forwards, a modified Delphi study was conducted with ASMR experts, practitioners, community members, and researchers from various disciplines. Ninety-eight participants provided 451 suggestions for ASMR research priorities which were condensed into 13 key areas: (1) Definition, conceptual clarification, and measurement of ASMR; (2) Origins and development of ASMR; (3) Neurophysiology of ASMR; (4) Understanding ASMR triggers; (5) Factors affecting the likelihood of experiencing/eliciting ASMR; (6) ASMR and individual/cultural differences; (7) ASMR and the senses; (8) ASMR and social intimacy; (9) Positive and negative consequences of ASMR in the general population; (10) Therapeutic applications of ASMR in clinical contexts; (11) Effects of long-term ASMR use; (12) ASMR platforms and technology; (13) ASMR community, culture, and practice. These were voted on by 70% of the initial participant pool using best/worst scaling methods. The resulting agenda provides a clear map for ASMR research to enable new and existing researchers to orient themselves towards important questions for the field and to inspire interdisciplinary collaborations.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"499-528"},"PeriodicalIF":1.5,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extending Tactile Space With Handheld Tools: A Re-Analysis and Review
Pub Date: 2024-11-19 | DOI: 10.1163/22134808-bja10134 | Multisensory Research, pp. 1-19
Luke E Miller, Alessandro Farnè
Tools can extend the sense of touch beyond the body, allowing the user to extract sensory information about distal objects in their environment. Though research on this topic has trickled in over the last few decades, little is known about the neurocomputational mechanisms of extended touch. In 2016, along with our late collaborator Vincent Hayward, we began a series of studies that attempted to fill this gap. We specifically focused on the ability to localize touch on the surface of a rod, as if it were part of the body. We have conducted eight behavioral experiments over the last several years, all of which have found that humans are incredibly accurate at tool-extended tactile localization. In the present article, we perform a model-driven re-analysis of these findings with an eye toward estimating the underlying parameters that map sensory input into spatial perception. This re-analysis revealed that users can almost perfectly localize touch on handheld tools. This raises the question of how humans can be so good at localizing touch on an inert noncorporeal object. The remainder of the paper focuses on three aspects of this process that occupied much of our collaboration with Vincent: the mechanical information used by participants for localization; the speed by which the nervous system can transform this information into a spatial percept; and whether body-based computations are repurposed for tool-extended touch. In all, these studies underscore the special relationship between bodies and tools.
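As a rough illustration of what estimating parameters that map sensory input onto spatial perception can look like in a localization task, the sketch below fits a simple linear mapping between actual and judged contact locations along a rod. The regression itself, the variable names, and the numbers are assumptions made for illustration and are not the authors' model-driven analysis.

```python
import numpy as np

def localization_fit(actual_cm, judged_cm):
    """Illustrative sketch: fit judged = gain * actual + bias by least squares.
    A gain near 1 and a bias near 0 would correspond to near-veridical
    localization of touch along the rod. This is a generic regression,
    not the specific model used in the re-analysis."""
    gain, bias = np.polyfit(actual_cm, judged_cm, 1)
    return gain, bias

# Hypothetical data: contact points along a 60 cm rod and judged locations.
actual = np.array([10, 20, 30, 40, 50])
judged = np.array([11, 19, 31, 41, 49])
print(localization_fit(actual, judged))  # gain close to 1.0, bias close to 0
```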
{"title":"Extending Tactile Space With Handheld Tools: A Re-Analysis and Review.","authors":"Luke E Miller, Alessandro Farnè","doi":"10.1163/22134808-bja10134","DOIUrl":"https://doi.org/10.1163/22134808-bja10134","url":null,"abstract":"<p><p>Tools can extend the sense of touch beyond the body, allowing the user to extract sensory information about distal objects in their environment. Though research on this topic has trickled in over the last few decades, little is known about the neurocomputational mechanisms of extended touch. In 2016, along with our late collaborator Vincent Hayward, we began a series of studies that attempted to fill this gap. We specifically focused on the ability to localize touch on the surface of a rod, as if it were part of the body. We have conducted eight behavioral experiments over the last several years, all of which have found that humans are incredibly accurate at tool-extended tactile localization. In the present article, we perform a model-driven re-analysis of these findings with an eye toward estimating the underlying parameters that map sensory input into spatial perception. This re-analysis revealed that users can almost perfectly localize touch on handheld tools. This raises the question of how humans can be so good at localizing touch on an inert noncorporeal object. The remainder of the paper focuses on three aspects of this process that occupied much of our collaboration with Vincent: the mechanical information used by participants for localization; the speed by which the nervous system can transform this information into a spatial percept; and whether body-based computations are repurposed for tool-extended touch. In all, these studies underscore the special relationship between bodies and tools.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-19"},"PeriodicalIF":1.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Upward/Downward Motion Elicits Fast and Fluent High-/Low-Pitched Speech Production
Pub Date: 2024-11-19 | DOI: 10.1163/22134808-bja10138 | Multisensory Research 37(6-8), pp. 529-555
Yusuke Suzuki, Masayoshi Nagai
Participants tend to produce a higher or lower vocal pitch in response to upward or downward visual motion, suggesting a pitch-motion correspondence between the visual and speech production processes. However, previous studies were contaminated by factors such as the meaning of vocalized words and the intrinsic pitch or tongue movements associated with the vowels. To address these issues, we examined the pitch-motion correspondence between simple visual motion and pitched speech production. Participants were required to produce a high- or low-pitched meaningless single vowel [a] in response to the upward or downward direction of a visual motion stimulus. Using a single vowel, we eliminated the artifacts related to the meaning, intrinsic pitch, and tongue movements of multiple vocalized vowels. The results revealed that vocal responses were faster when the pitch corresponded to the visual motion (consistent condition) than when it did not (inconsistent condition). This result indicates that the pitch-motion correspondence in speech production does not depend on the stimulus meaning, intrinsic pitch, or tongue movement of the vocalized words. In other words, the present study suggests that the pitch-motion correspondence can be explained more parsimoniously as an association between simple sensory (visual motion) and motoric (vocal pitch) features. Additionally, acoustic analysis revealed that speech production aligned with visual motion exhibited lower stress, greater confidence, and higher vocal fluency.
{"title":"Visual Upward/Downward Motion Elicits Fast and Fluent High-/Low-Pitched Speech Production.","authors":"Yusuke Suzuki, Masayoshi Nagai","doi":"10.1163/22134808-bja10138","DOIUrl":"10.1163/22134808-bja10138","url":null,"abstract":"<p><p>Participants tend to produce a higher or lower vocal pitch in response to upward or downward visual motion, suggesting a pitch-motion correspondence between the visual and speech production processes. However, previous studies were contaminated by factors such as the meaning of vocalized words and the intrinsic pitch or tongue movements associated with the vowels. To address these issues, we examined the pitch-motion correspondence between simple visual motion and pitched speech production. Participants were required to produce a high- or low-pitched meaningless single vowel [a] in response to the upward or downward direction of a visual motion stimulus. Using a single vowel, we eliminated the artifacts related to the meaning, intrinsic pitch, and tongue movements of multiple vocalized vowels. The results revealed that vocal responses were faster when the pitch corresponded to the visual motion (consistent condition) than when it did not (inconsistent condition). This result indicates that the pitch-motion correspondence in speech production does not depend on the stimulus meaning, intrinsic pitch, or tongue movement of the vocalized words. In other words, the present study suggests that the pitch-motion correspondence can be explained more parsimoniously as an association between simple sensory (visual motion) and motoric (vocal pitch) features. Additionally, acoustic analysis revealed that speech production aligned with visual motion exhibited lower stress, greater confidence, and higher vocal fluency.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"529-555"},"PeriodicalIF":1.5,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}