
Latest Publications: 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)

Designing a Physiological Loop for the Adaptation of Virtual Human Characters in a Social VR Scenario
Francesco Chiossi, Robin Welsch, Steeven Villa, Lewis L. Chuang, Sven Mayer
Social virtual reality is becoming mainstream, not only for entertainment but also for productivity and education. This makes it important to design social VR scenarios that support the operator's performance. We present a physiologically-adaptive system that optimizes visual complexity in a dual-task scenario based on electrodermal activity. Specifically, we propose a system that adapts the number of non-player characters while users jointly perform an N-Back task (primary) and a visual detection task (secondary). Our preliminary results show that when the complexity of the secondary task is optimized, users report an improved user experience.
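To make the adaptation mechanism concrete, here is a minimal sketch of how such a physiological loop might be structured. The target value, deadband, sensor reader, and NPC-spawning helper are all hypothetical placeholders; the abstract does not publish the authors' control law.

```python
# Minimal sketch of a physiologically-adaptive loop: electrodermal
# activity (EDA) drives the number of non-player characters (NPCs).
# TARGET_EDA, DEADBAND, read_eda() and set_npc_count() are assumptions.
import time

TARGET_EDA = 4.0      # hypothetical baseline skin conductance (microsiemens)
DEADBAND = 0.5        # tolerance before the scene is adapted
MIN_NPCS, MAX_NPCS = 0, 20

def read_eda() -> float:
    """Return the latest smoothed EDA sample from the sensor (stub)."""
    raise NotImplementedError

def set_npc_count(n: int) -> None:
    """Ask the VR scene to spawn/despawn NPCs until n are present (stub)."""
    raise NotImplementedError

def adaptation_loop(npcs: int = 10, period_s: float = 2.0) -> None:
    while True:
        eda = read_eda()
        if eda > TARGET_EDA + DEADBAND:      # over-aroused: reduce visual complexity
            npcs = max(MIN_NPCS, npcs - 1)
        elif eda < TARGET_EDA - DEADBAND:    # under-aroused: add complexity
            npcs = min(MAX_NPCS, npcs + 1)
        set_npc_count(npcs)
        time.sleep(period_s)
```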
Citations: 1
Adding Difference Flow between Virtual and Actual Motion to Reduce Sensory Mismatch and VR Sickness while Moving
Kwan Yun, G. Kim
Enjoying virtual reality in a moving vehicle is problematic because of sensory mismatch and the resulting sickness. While moving, the vestibular sense perceives actual motion in one direction while the visual sense perceives virtual motion in another. We propose to zero out this physiological mismatch by mixing in motion information computed as the difference between the actual and the virtual motion, namely, “Difference” flow. We present a system for computing and visualizing the difference flow and validate our approach through a small pilot field experiment. Although tested with only a small number of subjects, the initial results are promising.
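As an illustration of the core idea, a minimal sketch follows, assuming both motions are available as per-pixel 2D flow fields; the array shapes and the overlay step are assumptions, not the authors' implementation.

```python
# Sketch of the "difference flow" idea: subtract the virtual camera's
# optical flow from the flow implied by actual vehicle motion, then
# overlay the residual so the net visual motion matches the vestibular
# signal. Shapes (H, W, 2) and the overlay step are assumptions.
import numpy as np

def difference_flow(actual_flow: np.ndarray, virtual_flow: np.ndarray) -> np.ndarray:
    """Per-pixel 2D motion vectors (H, W, 2): actual minus virtual."""
    assert actual_flow.shape == virtual_flow.shape
    return actual_flow - virtual_flow

# Example: vehicle moving right while the virtual camera is static.
h, w = 4, 4
actual = np.tile(np.array([1.0, 0.0]), (h, w, 1))   # uniform rightward flow
virtual = np.zeros((h, w, 2))                        # no virtual self-motion
residual = difference_flow(actual, virtual)          # rendered as extra visual motion
```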
Citations: 0
Exploring Factors Associated with Retention in Computer Science Using Virtual Reality
Vidya Gaddy, F. Ortega
The goal of this research was to dissect the main attributes associated with student engagement in introductory Computer Science (CS) courses. A Virtual Reality simulation and a survey were designed. Results indicated a strong positive reaction to goal orientation and a strong negative reaction to demographic characteristics.
Citations: 1
Keynote Speaker: The Hitchhiker's Guide to the Metaverse
P. Hui
We envision that in the future the virtual world will mix and co-exist with the physical world in such an immersive way that we cannot tell what is real and what is virtual. We will live and interact with virtual objects that are blended into our environments through advanced holographic technology or high-quality head-mounted displays, and the boundary of virtuality will dissolve. We call such a new reality Surreality. Our vision of the “metaverse” is a multi-world: there are multiple virtual worlds developed by different technology companies, and there is also the Surreality where the real and the virtual merge. While the metaverse may seem futuristic, catalysed by emerging technologies such as Extended Reality, 5G, and Artificial Intelligence, the digital “big bang” of our cyberspace is not far away. This talk aims to offer a comprehensive framework that examines the latest metaverse development along the dimensions of state-of-the-art technologies and metaverse ecosystems, illustrates the possibility of the digital “big bang”, and proposes a concrete research agenda for the development of the metaverse. Reality will die; long live Surreality.
Citations: 1
Challenges and Opportunities for Playful Technology in Health Prevention: Using Virtual Reality to Supplement Breastfeeding Education
Kymeng Tang, K. Gerling, L. Geurts
Playful technology offers the opportunity to engage users, convey knowledge, and prompt reflection. We built on this potential and designed a VR simulation to give parents-to-be insights into the lived breastfeeding experience. An evaluation with 10 participants revealed that users appreciated the system but perceived similarities between the simulation and games, leading to conflicting expectations. Reflecting on this, we outline challenges for playful VR simulation design in healthcare contexts.
Citations: 1
Local Free-View Neural 3D Head Synthesis for Virtual Group Meetings
Sebastian Rings, Frank Steinicke
Virtual group meetings provide enormous potential for remote group communication. However, today's video conferences incur numerous challenges compared to face-to-face meetings. For instance, perception of correct gaze, deictic relations, or eye-to-eye contact is impeded because the camera is offset from the eyes of the other users' avatars and the gallery view is different for each group member. In this paper, we describe how 3D neural heads can be synthesized to overcome these limitations. To this end, we use a state-of-the-art generative adversarial network to generate different head poses from a given source image frame. These head poses can then be viewed in a local space to freely control the gaze of the head. We introduce and discuss five use cases for these synthesized head poses that aim to improve intelligent agents and virtual avatar representations in regular video group meetings.
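A hedged sketch of the pipeline the abstract outlines, with the GAN behind a stub interface; the function names and pose grid are hypothetical, since the paper builds on existing reenactment models rather than specifying this API.

```python
# Hypothetical shape of the synthesis step: a pretrained head-reenactment
# GAN maps (source frame, target head pose) to a new frame, and a small
# grid of poses is precomputed so gaze can be controlled locally.
import numpy as np

def synthesize_head(source_frame: np.ndarray, yaw: float, pitch: float) -> np.ndarray:
    """Return the source head re-rendered at the given pose (stub GAN call)."""
    raise NotImplementedError

def pose_grid(source_frame: np.ndarray,
              yaws=(-30, 0, 30), pitches=(-15, 0, 15)) -> dict:
    """Precompute a small grid of head poses for local free-view playback."""
    return {(y, p): synthesize_head(source_frame, y, p)
            for y in yaws for p in pitches}
```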
Citations: 1
Ragdoll Recovery: Manipulating Virtual Mannequins to Aid Action Sequence Proficiency
Paul Watson, Swen E. Gaudl
In this paper, we present a Virtual Reality (VR) prototype to support the demonstration and practice of the First Aid recovery position. When someone is unconscious and awaiting medical attention, they are placed in the recovery position to keep their airways clear. The recovery position is a commonly taught action sequence for medical professionals and trained first-aiders across industries. VR is a potential pathway for recovery position training, as it can deliver spatial information about a demonstrated action for subsequent imitation. However, due to the limits of physical interaction with virtual avatars, this motor sequence is normally practised in the real world on training partners and body mannequins. This limits remote practice, a key strength of any digital educational resource. We present Ragdoll Recovery (RR), a VR prototype designed to aid training of the recovery position through avatar demonstration and virtual practice mannequins. Users can view the recovery position sequence by walking around two demonstrator avatars. The observed motor skill sequence can then be practised on a virtual mannequin that uses ragdoll physics for realistic, real-time limb behaviour. RR enables remote access to motor skill training that bridges the gap between knowledge of a demonstrated action sequence and real-world performance. We aim to use this prototype to test the viability of action sequence training within a VR educational space.
Citations: 0
NUX IVE - a research tool for comparing voice user interface and graphical user interface in VR
Karolina Buchta, Piotr Wójcik, Mateusz Pelc, Agnieszka Górowska, Duarte Mota, Kostiantyn Boichenko, Konrad Nakonieczny, K. Wrona, Marta Szymczyk, Tymoteusz Czuchnowski, Justyna Janicka, Damian Galuszka, Radoslaw Sterna, Magdalena Igras-Cybulska
A trend toward natural interaction such as speech is clearly visible in human-computer interaction, yet in interactive virtual environments (IVE) it has still not become common practice. Most input interface elements are graphical, and they are usually implemented as non-diegetic 2D boards hanging in 3D space. Such holographic interfaces are usually hard to learn and operate, especially for inexperienced users. We have observed a need to explore the potential of multimodal interfaces in VR and to conduct systematic research comparing interaction modes in order to optimize the interface and increase the quality of the user experience (UX). We introduce a new IVE designed to compare user interaction between a mode with a traditional graphical user interface (GUI) and a mode in which every element of the interface is replaced by a voice user interface (VUI). In each version, four scenarios of interaction with a virtual assistant in a sci-fi location are implemented using Unreal Engine, each of them lasting several minutes. The IVE is supplemented with tools for automatically generating reports on user behavior (clicktracking, audiotracking, and eyetracking), which makes it useful for UX and usability studies.
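As a sketch of how such automatic reports might be assembled, the snippet below aggregates a hypothetical CSV log of interaction events; the schema (timestamp, modality, target) is an assumption, not the tool's published format.

```python
# Hypothetical post-processing for the kind of interaction logs NUX IVE
# records (click-, audio- and eye-tracking events). Assumes one CSV row
# per event with columns: timestamp, modality, target.
import csv
from collections import Counter

def summarize_events(path: str) -> dict:
    """Count events per modality and per (modality, target) pair."""
    per_modality, per_target = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            per_modality[row["modality"]] += 1
            per_target[(row["modality"], row["target"])] += 1
    return {"modality_totals": dict(per_modality),
            "top_targets": per_target.most_common(10)}
```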
Citations: 6
Photogrammabot: An Autonomous ROS-Based Mobile Photography Robot for Precise 3D Reconstruction and Mapping of Large Indoor Spaces for Mixed Reality
Soroosh Mortezapoor, Christian Schönauer, Julien Rüggeberg, H. Kaufmann
Precise 3D reconstruction of environments and real objects for Mixed-Reality applications can be burdensome. Photogrammetry can help to create accurate representations of actual objects in the virtual world using a large number of photos of a subject or an environment. Photogrammabot is an affordable mobile robot that facilitates photogrammetry and 3D reconstruction by autonomously and systematically capturing images. It explores an unknown indoor environment and uses map-based localization and navigation to maintain the camera direction at different shooting points. Photogrammabot employs a Raspberry Pi 4B and the Robot Operating System (ROS) to control the exploration and capturing processes. The photos are taken with a point-and-shoot camera mounted on a 2-DOF micro turret, which enables photography from different angles and compensates for possible robot orientation errors to ensure parallel photos. Photogrammabot has been designed as a general solution to facilitate precise 3D reconstruction of unknown environments. In addition, we developed tools to integrate it with and extend the Immersive Deck™ MR system [23], where it aids the setup of the system in new locations.
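The capture pattern described above can be sketched as a small ROS node: drive to a shooting point with move_base, hold a fixed heading, and trigger the camera. This is a minimal sketch assuming a standard ROS 1 navigation stack; the waypoint list and the trigger_camera() helper are placeholders, not the authors' actual interfaces.

```python
# Sketch of the capture loop: navigate to each shooting point in the map
# frame with a fixed orientation (to keep the camera direction constant),
# then fire the camera. Waypoints and trigger_camera() are placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def goto(client, x, y, qz, qw):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"    # map-based localization
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.z = qz    # hold camera heading
    goal.target_pose.pose.orientation.w = qw
    client.send_goal(goal)
    client.wait_for_result()

def trigger_camera():
    """Fire the point-and-shoot camera on the micro turret (stub)."""
    raise NotImplementedError

if __name__ == "__main__":
    rospy.init_node("photogrammabot_capture")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    for x, y in [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]:  # placeholder shooting points
        goto(client, x, y, 0.0, 1.0)
        trigger_camera()
```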
Citations: 2
Extended Reality and Internet of Things for Hyper-Connected Metaverse Environments
Jie Guan, Jay Irizawa, Alexis Morris
The Metaverse encompasses technologies related to the internet, virtual and augmented reality, and other domains, moving toward smart interfaces that are hyper-connected, immersive, and engaging. However, Metaverse applications face inherent disconnects between virtual and physical components and interfaces. This work explores how an Extended Metaverse framework can be used to increase the seamless integration of interoperable agents between virtual and physical environments. It contributes an early theory and practice toward the synthesis of virtual and physical smart environments, anticipating future designs and their potential for connected experiences.
Citations: 13