Combining audio and visual displays to highlight temporal and spatial seismic patterns

IF 2.2 | CAS Tier 3 (Computer Science) | JCR Q3 (Computer Science, Artificial Intelligence) | Journal on Multimodal User Interfaces | Pub Date: 2021-07-27 | DOI: 10.1007/s12193-021-00378-8
Arthur Paté, Gaspard Farge, Benjamin K. Holtzman, Anna C. Barth, Piero Poli, Lapo Boschi, Leif Karlstrom
{"title":"Combining audio and visual displays to highlight temporal and spatial seismic patterns","authors":"Arthur Paté, Gaspard Farge, Benjamin K. Holtzman, Anna C. Barth, Piero Poli, Lapo Boschi, Leif Karlstrom","doi":"10.1007/s12193-021-00378-8","DOIUrl":null,"url":null,"abstract":"<p>Data visualization, and to a lesser extent data sonification, are classic tools to the scientific community. However, these two approaches are very rarely combined, although they are highly complementary: our visual system is good at recognizing spatial patterns, whereas our auditory system is better tuned for temporal patterns. In this article, data representation methods are proposed that combine visualization, sonification, and spatial audio techniques, in order to optimize the user’s perception of spatial and temporal patterns in a single display, to increase the feeling of immersion, and to take advantage of multimodal integration mechanisms. Three seismic data sets are used to illustrate the methods, covering different physical phenomena, time scales, spatial distributions, and spatio-temporal dynamics. The methods are adapted to the specificities of each data set, and to the amount of information that the designer wants to display. This leads to further developments, namely the use of audification with two time scales, the switch from pure audification to time-modulated noise, and the switch from pure audification to sonic icons. First user feedback from live demonstrations indicates that the methods presented in this article seem to enhance the perception of spatio-temporal patterns, which is a key parameter to the understanding of seismically active systems, and a step towards apprehending the processes that drive this activity.\n</p>","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2021-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal on Multimodal User Interfaces","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12193-021-00378-8","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Data visualization, and to a lesser extent data sonification, are classic tools for the scientific community. However, these two approaches are very rarely combined, although they are highly complementary: our visual system is good at recognizing spatial patterns, whereas our auditory system is better tuned to temporal patterns. In this article, data representation methods are proposed that combine visualization, sonification, and spatial audio techniques, in order to optimize the user’s perception of spatial and temporal patterns in a single display, to increase the feeling of immersion, and to take advantage of multimodal integration mechanisms. Three seismic data sets are used to illustrate the methods, covering different physical phenomena, time scales, spatial distributions, and spatio-temporal dynamics. The methods are adapted to the specificities of each data set, and to the amount of information that the designer wants to display. This leads to further developments, namely the use of audification with two time scales, the switch from pure audification to time-modulated noise, and the switch from pure audification to sonic icons. Initial user feedback from live demonstrations indicates that the methods presented in this article seem to enhance the perception of spatio-temporal patterns, which is a key parameter for the understanding of seismically active systems, and a step towards apprehending the processes that drive this activity.
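
The abstract refers to audification, i.e. playing a seismic time series back fast enough that its frequency content falls in the audible range. The paper itself does not provide code; the Python sketch below is only a generic illustration of that basic idea, and the `audify` helper, the sampling rates, the speed-up factor, and the synthetic input are all assumptions made for the example.

```python
# A minimal, illustrative sketch of seismic audification (not the authors' code):
# a seismogram recorded at ~100 Hz is replayed at an audio sampling rate, so that
# hours of ground motion compress into seconds of sound.
import numpy as np
from scipy.io import wavfile

def audify(trace, trace_rate_hz, speedup, out_path="audified.wav"):
    """Map a seismic time series directly to audio samples (audification).

    trace         : 1-D array of ground-motion samples
    trace_rate_hz : original sampling rate of the seismogram (e.g. 100 Hz)
    speedup       : time-compression factor; playback rate = trace_rate_hz * speedup
    """
    audio_rate = int(trace_rate_hz * speedup)        # e.g. 100 Hz * 441 = 44.1 kHz
    x = trace - np.mean(trace)                       # remove the DC offset
    x = x / (np.max(np.abs(x)) + 1e-12)              # normalize to [-1, 1]
    wavfile.write(out_path, audio_rate, (x * 32767).astype(np.int16))
    return out_path

# Example: 24 hours of 100 Hz data compressed 441x plays back in about 3.3 minutes.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic = rng.standard_normal(24 * 3600 * 100)  # placeholder for a real seismogram
    audify(synthetic, trace_rate_hz=100, speedup=441)
```

The further developments named in the abstract (two-time-scale audification, time-modulated noise, sonic icons) would modulate or replace this direct sample-to-sound mapping rather than use it as-is.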

Source journal: Journal on Multimodal User Interfaces
Category: Computer Science, Artificial Intelligence; Computer Science, Cybernetics
CiteScore: 6.90
Self-citation rate: 3.40%
Articles published: 12
Review time: >12 weeks
Journal description: The Journal of Multimodal User Interfaces publishes work in the design, implementation and evaluation of multimodal interfaces. Research in the domain of multimodal interaction is by its very essence a multidisciplinary area involving several fields, including signal processing, human-machine interaction, computer science, cognitive science and ergonomics. This journal focuses on multimodal interfaces involving advanced modalities, several modalities and their fusion, user-centric design, usability and architectural considerations. Use cases and descriptions of specific application areas are welcome, including for example e-learning, assistance, serious games, affective and social computing, and interaction with avatars and robots.
Latest articles in this journal:
Human or robot? Exploring different avatar appearances to increase perceived security in shared automated vehicles
AirWhisper: enhancing virtual reality experience via visual-airflow multimodal feedback
Truck drivers’ views on the road safety benefits of advanced driver assistance systems and Intelligent Transport Systems in Tanzania
What is good? Exploring the applicability of a one item measure as a proxy for measuring acceptance in driver-vehicle interaction studies
In-vehicle nudging for increased Adaptive Cruise Control use: a field study