
Journal on Multimodal User Interfaces: Latest Publications

Combining audio and visual displays to highlight temporal and spatial seismic patterns
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-07-27 | DOI: 10.1007/s12193-021-00378-8
Arthur Paté, Gaspard Farge, Benjamin K. Holtzman, Anna C. Barth, Piero Poli, Lapo Boschi, Leif Karlstrom

Data visualization, and to a lesser extent data sonification, are classic tools for the scientific community. However, these two approaches are very rarely combined, although they are highly complementary: our visual system is good at recognizing spatial patterns, whereas our auditory system is better tuned for temporal patterns. In this article, data representation methods are proposed that combine visualization, sonification, and spatial audio techniques, in order to optimize the user’s perception of spatial and temporal patterns in a single display, to increase the feeling of immersion, and to take advantage of multimodal integration mechanisms. Three seismic data sets are used to illustrate the methods, covering different physical phenomena, time scales, spatial distributions, and spatio-temporal dynamics. The methods are adapted to the specificities of each data set, and to the amount of information that the designer wants to display. This leads to further developments, namely the use of audification with two time scales, the switch from pure audification to time-modulated noise, and the switch from pure audification to sonic icons. Initial user feedback from live demonstrations indicates that the methods presented in this article seem to enhance the perception of spatio-temporal patterns, which is key to understanding seismically active systems, and a step towards apprehending the processes that drive this activity.
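
As a concrete illustration of the audification technique the abstract refers to, here is a minimal sketch (an illustration under stated assumptions, not the authors' code): a seismic trace sampled at 100 Hz is written out at an audio sample rate, a 441x speed-up that shifts a day of slow ground motion into a few minutes of audible sound. The synthetic trace, rates, and file name are all assumptions for the example.

```python
# Minimal audification sketch (illustrative, not the paper's code):
# reinterpret a low-rate seismic trace as audio, so the speed-up factor
# AUDIO_RATE_HZ / SEISMIC_RATE_HZ moves slow signals into the audible range.
import numpy as np
from scipy.io import wavfile

SEISMIC_RATE_HZ = 100    # assumed seismometer sampling rate
AUDIO_RATE_HZ = 44100    # playback rate; here a 441x time compression

# Stand-in for a real recording: 24 h of low-level noise plus a few
# exponentially decaying "events" (purely synthetic data).
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.01, SEISMIC_RATE_HZ * 86400)
for onset in (2_000_000, 5_500_000, 7_200_000):
    t = np.arange(20_000)
    trace[onset:onset + 20_000] += np.exp(-t / 4000.0) * np.sin(2 * np.pi * 0.002 * t)

# Audification proper: normalise and write the samples at the audio rate.
normalised = trace / np.max(np.abs(trace))
wavfile.write("audified_trace.wav", AUDIO_RATE_HZ,
              (normalised * 32767).astype(np.int16))
```

The two-time-scale variant mentioned in the abstract would presumably render the same data at two different compression factors; the details of that design, and of the time-modulated-noise and sonic-icon variants, are in the paper itself.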

Citations: 0
SoundSight: a mobile sensory substitution device that sonifies colour, distance, and temperature
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-07-02 | DOI: 10.1007/s12193-021-00376-w
Giles Hamilton-Fletcher, James Alvarez, Marianna Obrist, Jamie Ward
{"title":"SoundSight: a mobile sensory substitution device that sonifies colour, distance, and temperature","authors":"Giles Hamilton-Fletcher, James Alvarez, Marianna Obrist, Jamie Ward","doi":"10.1007/s12193-021-00376-w","DOIUrl":"https://doi.org/10.1007/s12193-021-00376-w","url":null,"abstract":"","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s12193-021-00376-w","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42978218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
A wearable virtual touch system for IVIS in cars
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-06-22 | DOI: 10.1007/s12193-021-00377-9
Gowdham Prabhakar, Priyam Rajkhowa, Dharmesh Harsha, Pradipta Biswas

In the automotive domain, secondary tasks such as operating the infotainment system or adjusting the air conditioning vents and side mirrors distract drivers from driving. Though existing modalities like gesture and speech recognition systems facilitate secondary tasks by reducing the time the driver's eyes are off the road, they often require remembering a set of gestures or screen sequences. In this paper, we propose two different modalities that let drivers virtually touch the dashboard display using a laser tracker, one with a mechanical switch and one with an eye gaze switch. We compared the performance of our proposed modalities against the conventional touch modality in an automotive environment by comparing pointing and selection times on a representative secondary task, and also analysed the effect on driving performance in terms of deviation from lane, average speed, variation in perceived workload, and system usability. We did not find a significant difference in driving and pointing performance between the laser tracking system and the existing touchscreen system. Our results also showed that the driving and pointing performance of the virtual touch system with the eye gaze switch was significantly better than with the mechanical switch. We evaluated the efficacy of the proposed virtual touch system with the eye gaze switch inside a real car and investigated its acceptance by professional drivers using qualitative research. The quantitative and qualitative studies indicated the importance of multimodal systems inside the car and highlighted several criteria for the acceptance of new automotive user interfaces.
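
As a sketch of how an eye gaze switch like the one described above might confirm a selection, the following dwell-time logic (a common gaze-interaction pattern, assumed here rather than taken from the paper) fires once gaze samples stay within a target radius for a fixed time. The thresholds and the GazeSample shape are hypothetical.

```python
# Dwell-based gaze switch sketch (assumed pattern, not the paper's code):
# a target is selected once gaze stays within radius_px of it for dwell_s.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float   # timestamp in seconds
    x: float   # gaze coordinates in screen pixels
    y: float

class DwellSwitch:
    def __init__(self, dwell_s: float = 0.8, radius_px: float = 60.0):
        self.dwell_s = dwell_s
        self.radius_px = radius_px
        self._enter_t: float | None = None

    def update(self, sample: GazeSample, target_xy: tuple[float, float]) -> bool:
        """Feed one gaze sample; return True when the target is selected."""
        dx, dy = sample.x - target_xy[0], sample.y - target_xy[1]
        on_target = (dx * dx + dy * dy) ** 0.5 <= self.radius_px
        if not on_target:
            self._enter_t = None          # gaze left the target: reset dwell
            return False
        if self._enter_t is None:
            self._enter_t = sample.t      # gaze just entered the target
        return sample.t - self._enter_t >= self.dwell_s

# Usage: stream 50 Hz samples; selection fires after 0.8 s of steady gaze.
switch = DwellSwitch()
for i in range(100):
    if switch.update(GazeSample(t=i * 0.02, x=400.0, y=300.0), (410.0, 305.0)):
        print(f"selected at t={i * 0.02:.2f}s")
        break
```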

Citations: 3
Interactive exploration of a hierarchical spider web structure with sound
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-06-21 | DOI: 10.1007/s12193-021-00375-x
Isabelle Su, Ian Hattwick, Christine Southworth, Evan Ziporyn, Ally Bisshop, R. Mühlethaler, Tomás Saraceno, M. Buehler
{"title":"Interactive exploration of a hierarchical spider web structure with sound","authors":"Isabelle Su, Ian Hattwick, Christine Southworth, Evan Ziporyn, Ally Bisshop, R. Mühlethaler, Tomás Saraceno, M. Buehler","doi":"10.1007/s12193-021-00375-x","DOIUrl":"https://doi.org/10.1007/s12193-021-00375-x","url":null,"abstract":"","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s12193-021-00375-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"52688825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Correction to: A gaze-based interactive system to explore artwork imagery
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-05-31 | DOI: 10.1007/s12193-021-00374-y
Piercarlo Dondi, Marco Porta, Angelo Donvito, Giovanni Volpe

A Correction to this paper has been published: https://doi.org/10.1007/s12193-021-00373-z

Citations: 0
A gaze-based interactive system to explore artwork imagery
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-05-21 | DOI: 10.1007/s12193-021-00373-z
Piercarlo Dondi, Marco Porta, Angelo Donvito, Giovanni Volpe

Interactive and immersive technologies can significantly enhance the experience of museums and exhibits. Several studies have shown that multimedia installations can attract visitors, presenting cultural and scientific information in an appealing way. In this article, we present our workflow for achieving gaze-based interaction with artwork imagery. We designed both a tool for creating interactive “gaze-aware” images and an eye tracking application conceived for interacting with those images through gaze. Users can display different pictures, perform pan and zoom operations, and search for regions of interest with associated multimedia content (text, image, audio, or video). Besides being an assistive technology for motor-impaired people (like most gaze-based interaction applications), our solution can also be a valid alternative to the touch screen panels commonly found in museums, in accordance with the new safety guidelines imposed by the COVID-19 pandemic. Experiments carried out with a panel of volunteer testers have shown that the tool is usable, effective, and easy to learn.
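
One plausible way to represent the “gaze-aware” images described above is to store regions of interest in normalised image coordinates, each linked to a multimedia payload, so that ROI lookup survives pan and zoom. The sketch below is an assumption-based illustration; the field names and data format are not the authors' tool format.

```python
# Hypothetical "gaze-aware" image sketch: ROIs in normalised [0, 1]
# image coordinates, with a screen-to-image mapping that undoes pan/zoom.
from dataclasses import dataclass

@dataclass
class ROI:
    name: str
    x0: float; y0: float; x1: float; y1: float   # normalised bounds
    media: str                                   # path/URL of linked content

    def contains(self, u: float, v: float) -> bool:
        return self.x0 <= u <= self.x1 and self.y0 <= v <= self.y1

def screen_to_image(px: float, py: float, pan: tuple[float, float],
                    zoom: float, img_w: int, img_h: int) -> tuple[float, float]:
    """Map a screen-space gaze point to normalised image coordinates,
    undoing the current pan offset and zoom factor."""
    return ((px - pan[0]) / (zoom * img_w), (py - pan[1]) / (zoom * img_h))

rois = [
    ROI("signature", 0.70, 0.85, 0.95, 0.98, media="audio/signature.mp3"),
    ROI("face",      0.40, 0.10, 0.60, 0.35, media="text/face_notes.html"),
]

# A fixation at screen (820, 190) with the image panned to (100, 50)
# and zoomed 1.5x on a 1000x800 image lands on the "face" region.
u, v = screen_to_image(820, 190, pan=(100, 50), zoom=1.5, img_w=1000, img_h=800)
hits = [r for r in rois if r.contains(u, v)]
print(hits[0].name if hits else "no ROI under gaze")
```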

Citations: 9
Grounding behaviours with conversational interfaces: effects of embodiment and failures
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-03-24 | DOI: 10.1007/s12193-021-00366-y
Dimosthenis Kontogiorgos, Andre Pereira, Joakim Gustafson

Conversational interfaces that interact with humans need to continuously establish, maintain and repair common ground in task-oriented dialogues. Uncertainty, repairs and acknowledgements are expressed in user behaviour in the continuous efforts of the conversational partners to maintain mutual understanding. Users change their behaviour when interacting with systems in different forms of embodiment, which affects the abilities of these interfaces to observe users’ recurrent social signals. Additionally, humans are intellectually biased towards social activity when facing anthropomorphic agents or when presented with subtle social cues. This paper presents two studies examining how humans interact with wizarded interfaces in different forms of embodiment during a referential communication task. In study 1 (N = 30), we test whether humans respond the same way to agents across different forms of embodiment and social behaviour. In study 2 (N = 44), we replicate the same task and agents but introduce conversational failures that disrupt the process of grounding. Findings indicate that it is not always favourable for agents to be anthropomorphised or to communicate with non-verbal cues, as human grounding behaviours change when embodiment and failures are manipulated.

Citations: 7
RFID-based tangible and touch tabletop for dual reality in crisis management context
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-03-19 | DOI: 10.1007/s12193-021-00370-2
Walid Merrad, A. Héloir, C. Kolski, Antonio Krüger
{"title":"RFID-based tangible and touch tabletop for dual reality in crisis management context","authors":"Walid Merrad, A. Héloir, C. Kolski, Antonio Krüger","doi":"10.1007/s12193-021-00370-2","DOIUrl":"https://doi.org/10.1007/s12193-021-00370-2","url":null,"abstract":"","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s12193-021-00370-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"52688649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Behavior and usability analysis for multimodal user interfaces
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-03-16 | DOI: 10.1007/s12193-021-00372-0
Hamdi Dibeklioğlu, Elif Surer, A. A. Salah, T. Dutoit
{"title":"Behavior and usability analysis for multimodal user interfaces","authors":"Hamdi Dibeklioğlu, Elif Surer, A. A. Salah, T. Dutoit","doi":"10.1007/s12193-021-00372-0","DOIUrl":"https://doi.org/10.1007/s12193-021-00372-0","url":null,"abstract":"","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s12193-021-00372-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43329247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Identifying and evaluating conceptual representations for auditory-enhanced interactive physics simulations
IF 2.9 | CAS Tier 3 (Computer Science) | Q2 Computer Science | Pub Date: 2021-03-15 | DOI: 10.1007/s12193-021-00365-z
Brianna J. Tomlinson, B. Walker, Emily B. Moore
{"title":"Identifying and evaluating conceptual representations for auditory-enhanced interactive physics simulations","authors":"Brianna J. Tomlinson, B. Walker, Emily B. Moore","doi":"10.1007/s12193-021-00365-z","DOIUrl":"https://doi.org/10.1007/s12193-021-00365-z","url":null,"abstract":"","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s12193-021-00365-z","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42301305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2