The Audio-Corsi: an acoustic virtual reality-based technological solution for evaluating audio-spatial memory abilities

IF 2.2 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Journal on Multimodal User Interfaces · Pub Date: 2021-11-24 · DOI: 10.1007/s12193-021-00383-x
Walter Setti, Isaac Alonso-Martinez Engel, Luigi F. Cuturi, Monica Gori, Lorenzo Picinali
{"title":"The Audio-Corsi: an acoustic virtual reality-based technological solution for evaluating audio-spatial memory abilities","authors":"Walter Setti, Isaac Alonso-Martinez Engel, Luigi F. Cuturi, Monica Gori, Lorenzo Picinali","doi":"10.1007/s12193-021-00383-x","DOIUrl":null,"url":null,"abstract":"<p>Spatial memory is a cognitive skill that allows the recall of information about the space, its layout, and items’ locations. We present a novel application built around 3D spatial audio technology to evaluate audio-spatial memory abilities. The sound sources have been spatially distributed employing the 3D Tune-In Toolkit, a virtual acoustic simulator. The participants are presented with sequences of sounds of increasing length emitted from virtual auditory sources around their heads. To identify stimuli positions and register the test responses, we designed a custom-made interface with buttons arranged according to sound locations. We took inspiration from the <i>Corsi-Block</i> test for the experimental procedure, a validated clinical approach for assessing visuo-spatial memory abilities. In two different experimental sessions, the participants were tested with the classical <i>Corsi-Block</i> and, blindfolded, with the proposed task, named <i>Audio-Corsi</i> for brevity. Our results show comparable performance across the two tests in terms of the estimated memory parameter precision. Furthermore, in the <i>Audio-Corsi</i> we observe a lower span compared to the <i>Corsi-Block</i> test. We discuss these results in the context of the theoretical relationship between the auditory and visual sensory modalities and potential applications of this system in multiple scientific and clinical contexts.\n</p>","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":"36 3","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2021-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal on Multimodal User Interfaces","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12193-021-00383-x","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 3

Abstract

Spatial memory is a cognitive skill that allows the recall of information about space, its layout, and the locations of items within it. We present a novel application built around 3D spatial audio technology to evaluate audio-spatial memory abilities. The sound sources are spatially distributed using the 3D Tune-In Toolkit, a virtual acoustic simulator. Participants are presented with sound sequences of increasing length, emitted from virtual auditory sources around their heads. To identify stimulus positions and record test responses, we designed a custom interface with buttons arranged according to the sound locations. The experimental procedure takes inspiration from the Corsi-Block test, a validated clinical approach for assessing visuo-spatial memory abilities. In two separate experimental sessions, participants were tested with the classical Corsi-Block and, while blindfolded, with the proposed task, named Audio-Corsi for brevity. Our results show comparable performance across the two tests in terms of precision, the estimated memory parameter. Furthermore, we observe a lower span in the Audio-Corsi than in the Corsi-Block test. We discuss these results in the context of the theoretical relationship between the auditory and visual sensory modalities, and the potential applications of this system in multiple scientific and clinical contexts.
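
The paper itself does not include code, but the procedure described above (sequences of increasing length, reproduced via a button interface, with the span as the outcome measure) can be made concrete with a short sketch. The following Python fragment is purely illustrative: the number of sources, trial counts, stopping rule, and all function names are assumptions made for this example, not the authors' implementation or the 3D Tune-In Toolkit API.

```python
# Minimal illustrative sketch of a Corsi-style auditory span procedure.
# Assumptions (not from the paper): 8 virtual sources, 2 trials per length,
# stop when no trial at a given length is recalled correctly.
import random

N_SOURCES = 8          # hypothetical number of virtual sources around the head
START_LEN = 2          # shortest sequence presented
MAX_LEN = N_SOURCES    # longest sequence presented (no repeated positions)
TRIALS_PER_LEN = 2     # sequences presented at each length


def present_sequence(sequence, recall_prob=0.85):
    """Stand-in for stimulus delivery and response collection.

    In the real task each item would be rendered binaurally (e.g. through a
    spatialiser such as the 3D Tune-In Toolkit) and the response read from
    the button interface; here we simply simulate a participant who recalls
    each item with probability recall_prob.
    """
    return [s if random.random() < recall_prob else random.randrange(N_SOURCES)
            for s in sequence]


def run_span_procedure():
    """Increase sequence length until recall fails on every trial of a length.

    Returns the span: the longest length at which at least one sequence was
    reproduced correctly.
    """
    span = 0
    for length in range(START_LEN, MAX_LEN + 1):
        correct_at_length = 0
        for _ in range(TRIALS_PER_LEN):
            sequence = random.sample(range(N_SOURCES), length)  # no repeated positions
            if present_sequence(sequence) == sequence:
                correct_at_length += 1
        if correct_at_length == 0:   # stopping rule: no correct recall at this length
            break
        span = length
    return span


if __name__ == "__main__":
    print("Estimated span:", run_span_procedure())
```

In the published task, the memory measures (span and precision) are estimated from actual participant responses; the simulated-participant stub above only keeps the sketch self-contained and runnable.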

Source journal: Journal on Multimodal User Interfaces (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS)
CiteScore: 6.90
Self-citation rate: 3.40%
Articles published: 12
Review time: >12 weeks
About the journal

The Journal of Multimodal User Interfaces publishes work on the design, implementation and evaluation of multimodal interfaces. Research in the domain of multimodal interaction is by its very essence a multidisciplinary area involving several fields, including signal processing, human-machine interaction, computer science, cognitive science and ergonomics. The journal focuses on multimodal interfaces involving advanced modalities, several modalities and their fusion, user-centric design, usability and architectural considerations. Use cases and descriptions of specific application areas are welcome, including for example e-learning, assistance, serious games, affective and social computing, and interaction with avatars and robots.
Latest articles in this journal

- Human or robot? Exploring different avatar appearances to increase perceived security in shared automated vehicles
- AirWhisper: enhancing virtual reality experience via visual-airflow multimodal feedback
- Truck drivers' views on the road safety benefits of advanced driver assistance systems and Intelligent Transport Systems in Tanzania
- In-vehicle nudging for increased Adaptive Cruise Control use: a field study
- Prediction of pedestrian crossing behaviour at unsignalized intersections using machine learning algorithms: analysis and comparison