
Latest publications from the 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

neomento SAD - VR Treatment For Social Anxiety
Adam Streck, Philipp Stepnicka, Jens Klaubert, T. Wolbers
We present the neomento project, a solution for virtual reality (VR) exposure therapy (VRET). In our work we have created specific rendering methods and virtual environments (VEs); designed and published a novel form of behavioural control for virtual agents (VAs); included biophysiological measurements directly in the experience; and created an asymmetric gameplay system to ensure the correct progress of a therapy session. This is a reprint of [A. Streck, P. Stepnicka, J. Klaubert, and T. Wolbers. neomento - Towards Building a Universal Solution for Virtual Reality Exposure Psychotherapy. 2019 IEEE Conference on Games (CoG), IEEE Press, 2019] ©2019 IEEE, with the title, acknowledgement, references, Fig. 1, and phrasing updated.
DOI: 10.1109/AIVR46125.2019.00054 (published 2019-12-01)
Citations: 2
Exploring Perspective Dependency in a Shared Body with Virtual Supernumerary Robotic Arms
Ryo Takizawa, Adrien Verhulst, Katie Seaborn, M. Fukuoka, Atsushi Hiyama, M. Kitazaki, M. Inami, M. Sugimoto
With advancements in robotics, systems featuring wearable robotic arms teleoperated by a third party are appearing. An important aspect of these systems is the visual feedback provided to the third-party operator. This can be achieved by placing a wearable camera on the robotic arm's "host", but such a setup makes the visual feedback dependent on the movements of the main body. Here we reproduced this view dependency in VR using a shared body: the "host" shares their virtual body with the virtual Supernumerary Robotic Arms of the teleoperator. In our research, the two users perform two tasks: (i) a "synchronization task" to improve their joint-action performance and (ii) a "building task" in which they work together to build a tower. In a user study, we evaluated the embodiment, workload, and performance of the teleoperator in the "building task" under three different view dependency modes. We present some early outcomes as trends that may give direction to future investigations.
DOI: 10.1109/AIVR46125.2019.00014 (published 2019-12-01)
Citations: 7
Using Eye Tracked Virtual Reality to Classify Understanding of Vocabulary in Recall Tasks
J. Orlosky, Brandon Huynh, Tobias Höllerer
In recent years, augmented and virtual reality (AR/VR) have started to take a foothold in markets such as training and education. Although AR and VR have tremendous potential, current interfaces and applications are still limited in their ability to recognize context, user understanding, and intention, which can limit the options for customized individual user support and the ease of automation. This paper addresses the problem of automatically recognizing whether or not a user understands a certain term, which is directly applicable to AR/VR interfaces for language and concept learning. To do so, we first designed an interactive word recall task in VR that required non-native English speakers to assess their knowledge of English words, many of which were difficult or uncommon. Using an eye tracker integrated into the VR display, we collected a variety of eye movement metrics that might correspond to the user's knowledge or memory of a particular word. Through experimentation, we show that both eye movement and pupil radius have a high correlation with user memory, and that several other metrics can also be used to help classify the state of word understanding. This allowed us to build a support vector machine (SVM) that can predict a user's knowledge with an accuracy of 62% in the general case and 75% for easy versus medium words, tested using cross-fold validation. We discuss these results in the context of in-situ learning applications.
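The SVM classification step described above can be illustrated in miniature. The sketch below trains a linear SVM by sub-gradient descent on the hinge loss to separate "known" from "unknown" words; the two features (gaze dispersion, pupil radius) and all data are synthetic assumptions, not the paper's metrics or results.

```python
import numpy as np

# Hypothetical stand-in features: the paper uses eye-movement and
# pupil-radius metrics; the two columns here (gaze dispersion, pupil
# radius) and the cluster means are invented for illustration.
rng = np.random.default_rng(0)
known = rng.normal([0.3, 0.6], 0.1, size=(100, 2))    # words the user knows
unknown = rng.normal([0.7, 0.3], 0.1, size=(100, 2))  # words the user does not
X = np.vstack([known, unknown])
y = np.hstack([np.ones(100), -np.ones(100)])          # +1 known, -1 unknown

# Linear SVM trained with sub-gradient descent on the regularized hinge loss.
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.01
for epoch in range(200):
    for i in rng.permutation(len(X)):
        margin = y[i] * (X[i] @ w + b)
        grad_w = lam * w - (y[i] * X[i] if margin < 1 else 0)
        grad_b = -y[i] if margin < 1 else 0.0
        w -= lr * grad_w
        b -= lr * grad_b

acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.2f}")
```

In the paper the reported numbers come from cross-fold validation on real eye-tracking data; the single training-set accuracy printed here is only a sanity check on the toy setup.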
DOI: 10.1109/AIVR46125.2019.00019 (published 2019-12-01)
Citations: 12
Measuring User Responses to Driving Simulators: A Galvanic Skin Response Based Study
Atiqul Islam, Jinshuai Ma, Tom Gedeon, Md. Zakir Hossain, Ying-Hsang Liu
The use of simulator technology has become popular for providing training, investigating driving activity, and performing research, as it is a suitable alternative to actual field study. The transferability of results from driving simulators to the real world is a critical issue given later real-world risks, and is important to the ethics of experiments. Moreover, researchers have to trade off simulator sophistication against the cost incurred to achieve a given level of realism. This study is a first step towards investigating, from drivers' galvanic skin response (GSR) signals, the plausibility of driving simulator configurations of varying verisimilitude. GSR is a widely used indicator of behavioural response. By analyzing GSR signals in a simulation environment, our results aim to support or contradict the use of simple low-level driving simulators. We investigate the GSR signals of 23 participants performing virtual driving tasks in 5 different configurations of simulation environments. A number of features are extracted from the GSR signals after data preprocessing. With a simple neural network classifier, the prediction accuracy for the different simulator configurations reaches up to 90% during driving. Our results suggest that participants are more engaged when realistic controls are used in normal driving, and are less affected by visible context when driving in emergency situations. The implications for future research are that for emergency situations realistic controls are important and research can be conducted with simple simulators in lab settings, whereas for normal driving the research should be conducted with full context in a real driving setting.
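The feature-extraction stage described above can be sketched on a synthetic trace. The sampling rate, the bump-shaped phasic responses, and the particular features below (mean, standard deviation, maximum slope, and an SCR count from upward threshold crossings of the derivative) are illustrative assumptions, not the paper's actual preprocessing pipeline.

```python
import numpy as np

# Synthetic GSR trace: a slow tonic drift plus three phasic skin
# conductance responses (SCRs). All parameters are invented.
fs = 10  # sampling rate in Hz (assumption)
t = np.arange(0, 60, 1 / fs)
signal = 2.0 + 0.01 * t                           # tonic level with drift
for onset in (10, 25, 40):                        # three phasic bumps
    signal += 0.5 * np.exp(-((t - onset) ** 2) / 4) * (t > onset - 3)

def gsr_features(x, fs, thresh=0.05):
    """Summary features from a GSR signal (illustrative feature set)."""
    dx = np.diff(x) * fs                          # first derivative, uS/s
    # Count an SCR onset wherever the derivative crosses the threshold upward.
    onsets = np.sum((dx[1:] >= thresh) & (dx[:-1] < thresh))
    return {"mean": x.mean(), "std": x.std(),
            "max_slope": dx.max(), "scr_count": int(onsets)}

feats = gsr_features(signal, fs)
print(feats)  # the three injected bumps yield scr_count == 3
```

A feature dictionary like this, computed per trial, would then be the input to the simple neural network classifier mentioned in the abstract.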
DOI: 10.1109/AIVR46125.2019.00015 (published 2019-12-01)
Citations: 5
Influence of Motion Speed on the Perception of Latency in Avatar Control
Ludovic Hoyet, Clément Spies, Pierre Plantard, A. Sorel, R. Kulpa, F. Multon
For full-body interaction, avatar simulation and control involve several steps that lead to a delay (or latency) between the users' motion and their avatar's motion. Previous works have shown an impact of this delay on the perception-action loop, with possible effects on presence and embodiment. In this paper we explore how the speed of the motion can affect the way people perceive and react to such a delay. We conducted an experiment where users were asked to follow a moving object with their finger while embodied in a realistic avatar. We artificially increased the latency (up to 300ms) and measured their performance in this task. Our results show that motion speed influenced the perception of latency: we found critical latencies of 80ms for medium and fast motion speeds, while the critical latency reached 120ms for a slow motion speed. We also found that performance was affected by latencies below the critical latency for medium and fast speeds, but not for the slower speed.
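The artificial latency manipulation above can be modelled as a FIFO delay line on the tracking stream. The sketch below is a minimal illustration under assumed parameters: a 90 Hz tracker and a simple frame buffer are inventions of this example, since the paper specifies only the latency values (up to 300ms).

```python
from collections import deque

class LatencyBuffer:
    """Delay a stream of tracking samples by a fixed latency.

    At an assumed 90 Hz tracking rate, 300 ms corresponds to 27 frames
    of delay; both the rate and the FIFO scheme are illustrative.
    """
    def __init__(self, latency_ms, frame_rate_hz=90):
        self.delay_frames = round(latency_ms * frame_rate_hz / 1000)
        self.buffer = deque()

    def push(self, sample):
        """Feed the newest tracked pose; return the delayed pose to render."""
        self.buffer.append(sample)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()
        return self.buffer[0]  # not enough history yet: hold the oldest pose

buf = LatencyBuffer(latency_ms=300, frame_rate_hz=90)
delayed = [buf.push(frame) for frame in range(100)]
print(delayed[50])  # prints 23: at frame 50 the avatar shows frame 50 - 27
```

Rendering the avatar from `delayed` poses while the experiment logs the user's live finger position is one way such a condition could be implemented.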
DOI: 10.1109/AIVR46125.2019.00066 (published 2019-12-01)
Citations: 4
AR Food Changer using Deep Learning And Cross-Modal Effects
Junya Ueda, K. Okajima
We propose an AR application that can change the appearance of food without AR markers by applying machine learning and image processing. Modifying the appearance of real food is a difficult task because the shape of the food is atypical and deforms while eating. We therefore developed a real-time object region extraction method that combines two approaches in a complementary manner to extract food regions with high accuracy and stability. These approaches are based on color and edge information processing, with a deep learning module trained on a small amount of data. In addition, we implemented some novel methods to improve the accuracy and reliability of the system. We then ran an experiment, and the results show that taste and oral texture were affected by visual textures. Our application can change not only the appearance in real time but also the taste and texture of actual food. Therefore, our application can aptly be termed an "AR food changer".
DOI: 10.1109/AIVR46125.2019.00025 (published 2019-12-01)
Citations: 11
The Role of Virtual Reality in Autonomous Vehicles’ Safety
A. M. Nascimento, A. B. M. Queiroz, L. Vismari, J. Bailenson, P. Cugnasca, J. B. C. Junior, J. R. Almeida
Virtual Reality (VR) has played an important role in the development of autonomous robots. From Computer Aided Design (CAD) to simulators for testing automation algorithms without risking expensive equipment, VR has been used in a wide range of applications. Most recently, Autonomous Vehicles (AVs), a special application of autonomous robots, became a major focus of the scientific and practitioner community for road and vehicle safety improvements. However, recent AV accidents shed light on the new safety challenges that need to be addressed to fulfill those safety expectations. This paper presents a systematic literature mapping of the use of VR for AV safety and assimilates this literature to create a vision of how VR will play an important role in the development of AV safety.
DOI: 10.1109/AIVR46125.2019.00017 (published 2019-12-01)
Citations: 18
XR for Augmented Utilitarianism
Nadisha-Marie Aliman, L. Kester, P. Werkhoven
Steady progress in the AI field creates enriching possibilities for society while simultaneously posing new, complex challenges of an ethical, legal, and safety-relevant nature. In order to achieve efficient human-centered governance of artificial intelligent systems, it has been proposed to harness augmented utilitarianism (AU), a novel non-normative ethical framework grounded in science which can be assisted, e.g., by Extended Reality (XR) technologies. While AU provides a scaffold to encode human ethical and legal conceptions in a machine-readable form, filling in these conceptions requires a transdisciplinary amalgamation of scientific insights and preconditions from manifold research areas. In this short paper, we present a compact review of how XR technologies could leverage the underlying transdisciplinary AI governance approach utilizing the AU framework. Towards that end, we outline pertinent needs for XR in two related contexts: as an experiential testbed for AU-relevant moral psychology studies, and as a proactive AI safety measure and policy-by-simulation enhancement preceding the deployment of AU-based ethical goal functions.
DOI: 10.1109/AIVR46125.2019.00065 (published 2019-12-01)
Citations: 4
VRescuer: A Virtual Reality Application for Disaster Response Training
V. Nguyen, Kwanghee Jung, Tommy Dang
With the advancement of modern technologies, Virtual Reality plays an essential role in training rescuers, particularly in simulation-based training for disaster responders. By being wholly immersed in the virtual environment, rescuers can practice the required skills without risking their lives before experiencing the real-world situation. This paper presents a work-in-progress Virtual Reality application called VRescuer that helps trainees get used to various disaster circumstances. A city scenario was created with an ambulance rescuer and several rescuees in the scene. The intelligent ambulance rescuer was introduced as a rescuer/guide that automatically searches for the optimal paths for saving all rescuees. The trainee can interfere with the rescuing process by placing obstacles or adding more rescuees along the way, causing the rescue agent to re-route its paths. VRescuer was implemented in Unity3D with an Oculus Rift device and was assessed by twenty users to improve the proposed application.
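The agent's optimal-path search described above could be realized with any shortest-path routine. Below is a minimal breadth-first-search sketch on a grid "city", including the re-routing that follows a trainee-placed obstacle; the 5×5 map, the obstacle wall, and the choice of BFS are illustrative assumptions (the paper does not name its search algorithm).

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 4-connected grid; cells marked 1 are blocked."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:                       # reconstruct the path backwards
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None                               # no route to the rescuee

# 5x5 city block: rescuer at (0, 0), rescuee at (4, 4).
city = [[0] * 5 for _ in range(5)]
route = shortest_path(city, (0, 0), (4, 4))
print(len(route) - 1)  # prints 8: minimum moves on an empty grid

# The trainee drops an obstacle wall across row 2; the agent re-routes.
for c in range(4):
    city[2][c] = 1
rerouted = shortest_path(city, (0, 0), (4, 4))
print(len(rerouted) - 1)  # prints 8: detours through the gap at (2, 4)
```

Calling `shortest_path` again after each trainee intervention is the re-planning behaviour the abstract describes; in the actual Unity3D application the search would run over the navigable geometry of the city scene rather than a toy grid.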
DOI: 10.1109/AIVR46125.2019.00042 (published 2019-10-31)
Citations: 4
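The abstract describes an ambulance agent that plans optimal routes and re-plans whenever the trainee drops an obstacle on the way. The paper does not specify its search algorithm, but that behaviour is commonly implemented as a grid-based A* search that is simply re-run from the agent's current position when the map changes — a hypothetical sketch in Python:

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """Grid-based A* with unit step costs; cells set to 1 are blocked.

    Returns the list of (row, col) cells from start to goal, or None
    if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()  # tie-breaker so heap never compares cells/parents
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # walk parents back to reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None

# The agent plans a route across an open 5x5 city block, the trainee
# blocks a cell, and the agent re-plans around the new obstacle.
grid = [[0] * 5 for _ in range(5)]
path = a_star(grid, (0, 0), (4, 4))
grid[2][2] = 1                        # trainee places an obstacle mid-route
replanned = a_star(grid, (0, 0), (4, 4))
```

Re-running the full search on every map edit is the simplest scheme and is adequate for small grids; incremental planners such as D* Lite avoid redundant work when obstacles change frequently.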
Creating Virtual Reality and Augmented Reality Development in Classroom: Is it a Hype?
V. Nguyen, Kwanghee Jung, Tommy Dang
The fast-growing number of high-performance computer processors and hand-held devices has paved the way for the development of Virtual Reality and Augmented Reality hardware and software in the education sector. The question of whether students can adopt these new technologies has not been fully addressed, and answering it plays an essential role for instructors and course designers. The objectives of this study are: (1) to investigate the feasibility of Virtual Reality/Augmented Reality development for undergraduate students, and (2) to highlight practical challenges in creating and sharing Virtual Reality and Augmented Reality applications from the students' perspective. The study design for the coursework is described in detail. Over a 16-week course, 63 Virtual Reality/Augmented Reality applications were created on a variety of topics using various development tools. 43 survey questions were prepared and administered to students at each phase of the projects to address technical difficulties, and an exploratory method was used for data analysis.
{"title":"Creating Virtual Reality and Augmented Reality Development in Classroom: Is it a Hype?","authors":"V. Nguyen, Kwanghee Jung, Tommy Dang","doi":"10.1109/AIVR46125.2019.00045","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00045","url":null,"abstract":"The fast-growing number of high-performance computer processor and hand-held devices have paved the way for the development of Virtual Reality and Augmented Reality in terms of hardware and software in the education sector. The question of whether students can adopt these new technologies is not fully addressed. Answering this question thus plays an essential role for instructors and course designers. The objectives of this study are: (1) to investigate the feasibility of the Virtual Reality/Augmented Reality development for undergraduate students, and (2) to highlight some practical challenges when creating and sharing Virtual Reality and Augmented Reality applications from student's perspective. Study design for the coursework was given with detail. During a 16-week long, 63 Virtual Reality/Augmented Reality applications were created from a variety of topics and various development tools. 43 survey questions are prepared and administered to students for each phase of the projects to address technical difficulties. The exploration method was used for data analysis.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128343298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
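The classroom study above collects Likert-style responses to 43 survey questions and analyses them with an exploratory method. Exploratory analysis of such data usually starts from simple descriptive statistics per question; a minimal stdlib sketch (the question texts and scores below are invented for illustration — the paper's raw data is not reproduced here):

```python
from statistics import mean, median, stdev

# Hypothetical 5-point Likert responses for three survey questions.
responses = {
    "Q1: prior VR experience helped":     [4, 5, 3, 4, 4, 5, 2, 4],
    "Q2: sharing the AR app was easy":    [2, 3, 2, 1, 3, 2, 4, 2],
    "Q3: dev tools were well documented": [3, 4, 3, 3, 5, 4, 3, 3],
}

def summarize(scores):
    """Descriptive statistics for one question's responses."""
    return {"mean": round(mean(scores), 2),
            "median": median(scores),
            "sd": round(stdev(scores), 2)}

summary = {question: summarize(scores) for question, scores in responses.items()}
```

Per-question means and medians like these make it easy to spot which phases of the projects caused the most technical difficulty before moving to deeper analysis.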
Journal
2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)