NUX IVE - a research tool for comparing voice user interface and graphical user interface in VR

Karolina Buchta, Piotr Wójcik, Mateusz Pelc, Agnieszka Górowska, Duarte Mota, Kostiantyn Boichenko, Konrad Nakonieczny, K. Wrona, Marta Szymczyk, Tymoteusz Czuchnowski, Justyna Janicka, Damian Galuszka, Radoslaw Sterna, Magdalena Igras-Cybulska
DOI: 10.1109/VRW55335.2022.00342
Published in: 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Publication date: 2022-03-01
Citations: 6

Abstract

A trend toward natural interaction such as speech is clearly visible in human-computer interaction, yet in interactive virtual environments (IVE) it has not become common practice. Most input interface elements are graphical and are usually implemented as non-diegetic 2D boards hanging in 3D space. Such holographic interfaces are often hard to learn and operate, especially for inexperienced users. We have observed a need to explore the potential of multimodal interfaces in VR and to conduct systematic research comparing interaction modes in order to optimize the interface and increase the quality of user experience (UX). We introduce a new IVE designed to compare user interaction between a mode with a traditional graphical user interface (GUI) and a mode in which every interface element is replaced by a voice user interface (VUI). In each version, four scenarios of interaction with a virtual assistant in a sci-fi location are implemented in Unreal Engine, each lasting several minutes. The IVE is supplemented with tools for automatically generating reports on user behavior (click tracking, audio tracking, and eye tracking), which makes it useful for UX and usability studies.