Catherine A Fromm, Ross K Maddox, Melissa J Polonenko, Krystel R Huxlin, Gabriel J Diaz
{"title":"多感官刺激促进了虚拟现实中一项艰巨的全局运动任务的低级感知学习。","authors":"Catherine A Fromm, Ross K Maddox, Melissa J Polonenko, Krystel R Huxlin, Gabriel J Diaz","doi":"10.1371/journal.pone.0319007","DOIUrl":null,"url":null,"abstract":"<p><p>The present study investigates the feasibility of inducing visual perceptual learning on a peripheral, global direction discrimination and integration task in virtual reality, and tests whether audio-visual multisensory training induces faster or greater visual learning than unisensory visual training. Seventeen participants completed a 10-day training experiment wherein they repeatedly performed a 4-alternative, combined visual global-motion and direction discrimination task at 10° azimuth/elevation in a virtual environment. A visual-only group of 8 participants was trained using a unimodal visual stimulus. An audio-visual group of 9 participants underwent training whereby the visual stimulus was always paired with a pulsed, white-noise auditory cue that simulated auditory motion in a direction consistent with the horizontal component of the visual motion stimulus. Our results reveal that, for both groups, learning occurred and transferred to untrained locations. For the AV group, there was an additional performance benefit to training from the AV cue to horizontal motion. This benefit extended into the unisensory post-test, where the auditory cue was removed. However, this benefit did not generalize spatially to previously untrained areas. 
This spatial specificity suggests that AV learning may have occurred at a lower level in the visual pathways, compared to visual-only learning.</p>","PeriodicalId":20189,"journal":{"name":"PLoS ONE","volume":"20 3","pages":"e0319007"},"PeriodicalIF":2.9000,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11878941/pdf/","citationCount":"0","resultStr":"{\"title\":\"Multisensory stimuli facilitate low-level perceptual learning on a difficult global motion task in virtual reality.\",\"authors\":\"Catherine A Fromm, Ross K Maddox, Melissa J Polonenko, Krystel R Huxlin, Gabriel J Diaz\",\"doi\":\"10.1371/journal.pone.0319007\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The present study investigates the feasibility of inducing visual perceptual learning on a peripheral, global direction discrimination and integration task in virtual reality, and tests whether audio-visual multisensory training induces faster or greater visual learning than unisensory visual training. Seventeen participants completed a 10-day training experiment wherein they repeatedly performed a 4-alternative, combined visual global-motion and direction discrimination task at 10° azimuth/elevation in a virtual environment. A visual-only group of 8 participants was trained using a unimodal visual stimulus. An audio-visual group of 9 participants underwent training whereby the visual stimulus was always paired with a pulsed, white-noise auditory cue that simulated auditory motion in a direction consistent with the horizontal component of the visual motion stimulus. Our results reveal that, for both groups, learning occurred and transferred to untrained locations. For the AV group, there was an additional performance benefit to training from the AV cue to horizontal motion. This benefit extended into the unisensory post-test, where the auditory cue was removed. 
However, this benefit did not generalize spatially to previously untrained areas. This spatial specificity suggests that AV learning may have occurred at a lower level in the visual pathways, compared to visual-only learning.</p>\",\"PeriodicalId\":20189,\"journal\":{\"name\":\"PLoS ONE\",\"volume\":\"20 3\",\"pages\":\"e0319007\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2025-03-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11878941/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PLoS ONE\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://doi.org/10.1371/journal.pone.0319007\",\"RegionNum\":3,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PLoS ONE","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1371/journal.pone.0319007","RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Multisensory stimuli facilitate low-level perceptual learning on a difficult global motion task in virtual reality.
The present study investigates the feasibility of inducing visual perceptual learning on a peripheral, global direction discrimination and integration task in virtual reality, and tests whether audio-visual (AV) multisensory training induces faster or greater visual learning than unisensory visual training. Seventeen participants completed a 10-day training experiment wherein they repeatedly performed a 4-alternative, combined visual global-motion and direction discrimination task at 10° azimuth/elevation in a virtual environment. A visual-only group of 8 participants was trained using a unimodal visual stimulus. An audio-visual group of 9 participants underwent training in which the visual stimulus was always paired with a pulsed, white-noise auditory cue that simulated auditory motion in a direction consistent with the horizontal component of the visual motion stimulus. Our results reveal that, for both groups, learning occurred and transferred to untrained locations. For the AV group, the AV cue to horizontal motion conferred an additional performance benefit during training. This benefit extended into the unisensory post-test, where the auditory cue was removed. However, it did not generalize spatially to previously untrained areas. This spatial specificity suggests that AV learning may have occurred at a lower level in the visual pathways than visual-only learning.
About the journal:
PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open access: freely accessible online, authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage