
Latest publications: 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

Study on Pseudo-haptics during Swimming Motion in a Virtual Reality Space
H. Aoki
This study examines the creation of pseudo-haptics during swimming in a virtual reality (VR) space. The user swims in the VR space while spheres are visualized floating around them in the water. The spheres move from the front of the user to the rear as the user performs a breaststroke. Perceiving the movement of these spheres creates the sensation of swimming against the flow of water; the developed system therefore presents pseudo-haptics by controlling the amount of movement of the spheres. Four presentation methods were experimentally examined and compared, and their effects were verified by a psychophysical method. The results suggest that the pseudo-haptic sensation can be finely separated into different levels by generating a constant fluid force against the user.
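The core idea of the abstract, spheres drifting backward past the swimmer by an amount that sets the perceived "fluid force", can be sketched as follows. The gain constant and function names are illustrative assumptions, not the author's implementation:

```python
# Hypothetical sketch: each frame, spheres shift toward the user's rear in
# proportion to the user's stroke speed; a larger gain is perceived as a
# stronger counter-flow. All names and constants here are illustrative.

def sphere_displacement(stroke_speed: float, gain: float, dt: float) -> float:
    """Backward (negative) displacement of each sphere for one frame."""
    return -gain * stroke_speed * dt

def update_spheres(positions, stroke_speed, gain, dt):
    """Shift every sphere's along-track coordinate toward the user's rear."""
    dz = sphere_displacement(stroke_speed, gain, dt)
    return [z + dz for z in positions]
```

Varying the gain across trials would correspond to the paper's different presentation levels of pseudo-haptic force.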
DOI: 10.1109/AIVR50618.2020.00068 (published 2020-12-01)
Citations: 1
MoveBox: Democratizing MoCap for the Microsoft Rocketbox Avatar Library
Mar González-Franco, Zelia Egan, Matt Peachey, Angus Antley, Tanmay Randhavane, Payod Panda, Yaying Zhang, Cheng Yao Wang, Derek F. Reilly, Tabitha C. Peck, A. S. Won, A. Steed, E. Ofek
This paper presents MoveBox, an open-source toolbox for animating motion-captured (MoCap) movements onto the Microsoft Rocketbox library of avatars. Motion capture is performed in real time using a single depth sensor, such as the Azure Kinect or Windows Kinect V2, or extracted offline from existing RGB videos using deep-learning computer vision techniques. Our toolbox enables real-time animation of the user’s avatar by converting transformations between systems that have different joints and hierarchies. Additional features include recording, playback, and looping of animations, as well as basic audio lip sync, blinking, avatar resizing, and finger and hand animations. Our main contribution lies both in the creation of this open-source tool and in its validation on different devices and the discussion of MoveBox’s capabilities by end users.
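The retargeting step described above, transferring per-joint transformations between skeletons with different joint names, can be sketched minimally. The joint-name map below is invented for illustration and is not MoveBox's actual mapping; the real toolbox also reconciles differing hierarchies:

```python
# Hypothetical sketch of name-based joint retargeting: copy per-joint
# rotations from a capture skeleton onto an avatar skeleton whose joints
# use different names. The mapping below is illustrative only.

# Kinect-style joint name -> Rocketbox-style joint name (assumed names)
JOINT_MAP = {
    "SpineBase": "Hips",
    "SpineShoulder": "Chest",
    "ElbowLeft": "LeftForeArm",
}

def retarget(capture_rotations: dict, joint_map: dict) -> dict:
    """Transfer rotations to the avatar's joint names, skipping joints
    the avatar does not have a mapping for."""
    return {
        joint_map[j]: rot
        for j, rot in capture_rotations.items()
        if j in joint_map
    }
```

A full solution would additionally convert coordinate conventions and re-express rotations relative to each skeleton's bind pose, which this sketch omits.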
DOI: 10.1109/AIVR50618.2020.00026 (published 2020-12-01)
Citations: 22
Using Marker Based Augmented Reality to teach autistic eating skills
Rahma Bouaziz, Maimounah Alhejaili, Raneem Al-Saedi, Abrar Mihdhar, Jawaher Alsarrani
Autistic children suffer from distraction and difficulty in learning, and research is ongoing to find suitable ways to help them learn and live normally. Recently, the use of digital technologies to support children with autism has increased dramatically. We focus on the problems autistic children face in the learning process and propose a new learning system based on Augmented Reality that overlays digital objects on top of physical cards and renders them as 3D objects on mobile devices, helping to teach eating skills through related phrases and sounds. We aim to improve the children's ability to learn and repeat the correct behavior.
DOI: 10.1109/AIVR50618.2020.00050 (published 2020-12-01)
Citations: 6
A serious VR game for acrophobia therapy in an urban environment
Costa Anton, Oana Mitrut, A. Moldoveanu, F. Moldoveanu, J. Kosinka
Virtual reality (VR) can remove much of the cost and danger of exposure therapy in phobia treatment. Exposing people to heights, for instance, might sound easy, but it still requires time and money to reach a tall building, mountain, or bridge. People suffering from milder forms of acrophobia might not be treated at all because the cost is not worth it. This paper presents a prototype that allows exposure therapy to be conducted in a controlled environment in a more comfortable, quicker, and cheaper way. By administering acrophobia questionnaires, collecting biophysical data, and developing a virtual reality game, we can expose volunteers to heights and analyze whether their fear and anxiety levels change. This way, regardless of the initial anxiety level and phobia severity, we can check for post-therapy improvement and verify whether virtual reality is a viable alternative to real-world exposure.
DOI: 10.1109/AIVR50618.2020.00054 (published 2020-12-01)
Citations: 4
Verbal Mimicry Predicts Social Distance and Social Attraction to an Outgroup Member in Virtual Reality
Salvador Alvídrez, Jorge Peña
The present study analyzes the extent to which verbal mimicry contributes to improving outgroup perceptions in virtual reality (VR) interactions. In particular, it examines the interplay between avatar customization, the salience of a common ingroup identity, and verbal mimicry in 54 VR dyads comprising users from different ethnic backgrounds. Participants were asked to customize their avatars to look either like themselves or like someone completely different, and interacted wearing either similar avatar uniforms (salient common identity) or different clothes (non-salient identity). The linguistic style matching (LSM) algorithm was employed to quantify verbal mimicry in the communication exchanged during a joint task. The results suggest that verbal mimicry significantly predicted lower social distance and greater social attraction toward the outgroup member. These results are discussed in terms of their contribution to potential intergroup models of avatar communication in immersive virtual environments (IVEs).
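The LSM score mentioned above is commonly computed, per function-word category, as one minus the normalized difference between the two speakers' usage rates, averaged across categories (following Ireland and Pennebaker's formulation). A minimal sketch, with illustrative category rates expressed as percentages of each speaker's total words:

```python
# Sketch of linguistic style matching (LSM) as commonly defined in the
# literature; the paper does not give its exact variant, so treat this as
# an assumed formulation. Inputs: {category: usage rate in %} per speaker.

def lsm(rates_a: dict, rates_b: dict) -> float:
    """LSM in [0, 1]; 1.0 means identical function-word usage."""
    scores = []
    for cat in rates_a:
        a, b = rates_a[cat], rates_b[cat]
        # Small constant in the denominator avoids division by zero
        # when neither speaker uses the category.
        scores.append(1.0 - abs(a - b) / (a + b + 0.0001))
    return sum(scores) / len(scores)
```

For example, two speakers with identical pronoun and article rates score 1.0, while a category used by only one speaker drives that category's score toward 0.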
DOI: 10.1109/AIVR50618.2020.00023 (published 2020-12-01)
Citations: 1
“The Robot-Arm Talks Back to Me” - Human Perception of Augmented Human-Robot Collaboration in Virtual Reality
Alexander Arntz, S. Eimler, H. Hoppe
The use of AI-enhanced robots in shared task environments is likely to become more and more common as digitalization increases across industrial sectors. To take up this challenge, research on the design of Human-Robot Collaboration (HRC) involving AI-based systems has yet to establish common targets and guidelines. This paper presents results from an explorative qualitative study. Participants (N = 80) were exposed either to a virtual representation of an industrial robot arm equipped with several augmentation channels for communicating with the human operator (lights, textual statements about intentions, etc.) or to one with no communicative functions at all. Across all conditions, participants recognized the benefit of collaborating with robots in industrial scenarios with regard to work efficiency and the alleviation of working conditions. However, a communication channel from the robot to the human is crucial for achieving these benefits: participants interacting with the non-communicative robot expressed dissatisfaction with the workflow. In both conditions, participants remarked that the robot arm was too slow for an efficient collaborative process. Our results point to a wider spectrum of questions to be explored further in designing collaborative experiences with intelligent technological counterparts, considering efficiency, safety, economic success, and well-being.
DOI: 10.1109/AIVR50618.2020.00062 (published 2020-12-01)
Citations: 10
Mill Instructor: Teaching Industrial CNC Procedures Using Virtual Reality
D. Keßler, Alexander Arntz, J. Friedhoff, S. Eimler
Virtual Reality (VR) holds great potential for new didactic concepts in teaching, since environments, information, and objects can be represented and manipulated digitally. Especially for training environments that involve potentially dangerous processes, are expensive, or carry the risk of damage to important tools, VR offers an alternative way of approaching a new subject. This paper presents a VR application used in mechanical engineering studies. It includes a virtual representation of a Hermle CNC C42U milling machine, which serves to convey basic knowledge of controlling such a system while avoiding safety risks and logistical constraints. Results from an evaluation with the target group show good usability and a (perceived) positive impact on the user’s learning gain.
DOI: 10.1109/AIVR50618.2020.00048 (published 2020-12-01)
Citations: 3
A Review of Electrostimulation-based Cybersickness Mitigations
Gang Li, Mark Mcgill, S. Brewster, F. Pollick
With the development of consumer virtual reality (VR), people have increasing opportunities to experience cybersickness (CS), a kind of visually induced motion sickness (MS). In view of the importance of CS mitigation (CSM), this paper reviews methods of electrostimulation-based CSM (e-CSM), broadly categorised as either “VR-centric” or “Human-centric”. “VR-centric” refers to approaches in which knowledge of the visual motion being experienced in VR directly affects how the neurostimulation is delivered, whereas “Human-centric” approaches focus on inhibiting or enhancing human functions per se, without knowledge of the experienced visual motion. We found that 1) most e-CSM approaches are based on visual-vestibular sensory conflict theory, one of the generally accepted aetiologies of MS; 2) the majority of e-CSM approaches are vestibular-system-centric, either stimulating the vestibular system to compensate for mismatched vestibular sensory responses or inhibiting it to induce an artificial, temporary dysfunction in vestibular sensory organs or cortical areas; 3) solutions based on vestibular sensory organs can mitigate CS with immediate effect, while the real-time effect of methods based on vestibular cortical areas remains unclear due to limited public data; and 4) based on subjective assessment, VR-centric approaches can relieve all three kinds of symptoms (nausea, oculomotor, and disorientation), which appears superior to the Human-centric ones, which can only alleviate one symptom type or provide an overall relief effect. Finally, we propose promising directions for future research on e-CSM.
DOI: 10.1109/AIVR50618.2020.00034 (published 2020-12-01)
Citations: 4
Thermodynamics Reloaded: Experiencing Heating, Ventilation and Air Conditioning in AR
Alexander Arntz, S. Eimler, D. Keßler, A. Nabokova, S. Schädlich
Augmented Reality (AR) has great potential for new didactic concepts in teaching. Environments, information, and objects can be represented comprehensively and dynamically, supporting self-paced and holistic learning. This paper presents the implementation of a multimodal AR application for teaching the complex features and mechanics of a heating, ventilation, and air conditioning system in a situated and engaging way. The application was designed and implemented by an interdisciplinary team and evaluated using a mixed-methods approach. Results show high usability and acceptance of the application. Students recognized its benefit for their motivation and learning gains and made suggestions for further improvement.
DOI: 10.1109/AIVR50618.2020.00064 (published 2020-12-01)
Citations: 2
Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence
Philipp Ladwig, Alexander Pech, R. Dörner, C. Geiger
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs). A significant portion of a participant’s face is hidden, and facial expressions are difficult to perceive. Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware. In this paper, we propose one of the first low-cost systems for this task, which uses only open-source, free software and affordable hardware. Our approach is to track the user’s face underneath the HMD with a Convolutional Neural Network (CNN) and to generate corresponding expressions with Generative Adversarial Networks (GANs) that produce RGBD images of the person’s face. We use commodity hardware with low-cost extensions such as 3D-printed mounts and miniature cameras. Our approach learns end-to-end without manual intervention, runs in real time, and can be trained and executed on an ordinary gaming computer. We report evaluation results showing that our low-cost system does not achieve the same fidelity as research prototypes using high-end hardware and closed-source software, but it is capable of creating individual facial avatars with person-specific characteristics in movements and expressions.
{"title":"Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence","authors":"Philipp Ladwig, Alexander Pech, R. Dörner, C. Geiger","doi":"10.1109/AIVR50618.2020.00025","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00025","url":null,"abstract":"Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMD). A significant portion of a participant's face is hidden and facial expressions are difficult to perceive. Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware. In this paper, we propose one of the first low-cost systems for this task which uses only open-source, free software and affordable hardware. Our approach is to track the user's face underneath the HMD utilizing a Convolutional Neural Network (CNN) and generate corresponding expressions with Generative Adversarial Networks (GAN) for producing RGBD images of the person's face. We use commodity hardware with low-cost extensions such as 3D-printed mounts and miniature cameras. Our approach learns end-to-end without manual intervention, runs in real time, and can be trained and executed on an ordinary gaming computer. We report evaluation results showing that our low-cost system does not achieve the same fidelity as research prototypes using high-end hardware and closed-source software, but it is capable of creating individual facial avatars with person-specific characteristics in movements and expressions.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131103092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
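The two-stage pipeline this abstract describes (a CNN that tracks the visible part of the face under the HMD, feeding a GAN generator that emits an RGBD face image) can be sketched with stand-in components. Everything below is an illustrative assumption rather than the paper's implementation: the 32-dimensional expression vector, the 48×48 input crop, the 64×64 output resolution, and the random-projection "networks" are placeholders for the trained CNN and GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_track_expression(face_crop: np.ndarray) -> np.ndarray:
    """Stand-in for the expression-tracking CNN: flatten the visible
    face crop and project it to a 32-dim expression vector.  A real
    system would use trained convolutional layers here."""
    w = rng.standard_normal((face_crop.size, 32)) * 0.01
    return face_crop.reshape(-1) @ w

def gan_generate_rgbd(expr: np.ndarray, h: int = 64, w: int = 64) -> np.ndarray:
    """Stand-in for the GAN generator: project the expression vector up
    to an h x w x 4 image (RGB + depth) and squash values into [0, 1]."""
    proj = rng.standard_normal((expr.size, h * w * 4)) * 0.01
    img = 1.0 / (1.0 + np.exp(-(expr @ proj)))  # sigmoid activation
    return img.reshape(h, w, 4)

# One frame through the pipeline: a 48x48 grayscale crop of the
# unoccluded lower face goes in, a 64x64 RGBD face image comes out.
crop = rng.random((48, 48))
expr = cnn_track_expression(crop)
rgbd = gan_generate_rgbd(expr)
print(expr.shape, rgbd.shape)  # (32,) (64, 64, 4)
```

The point of the sketch is the data flow, not the models: the tracker reduces the partially occluded face to a compact expression code, and the generator conditions only on that code, which is what lets the avatar reproduce person-specific expressions without ever seeing the full face at runtime.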