
Latest publications from the Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology

Experimenting novel virtual-reality immersion strategy to alleviate cybersickness
S. F. M. Zaidi, T. Male
Cybersickness, which arises in virtual reality (VR) delivered through head-mounted devices (HMDs), is also known as motion sickness in VR environments. Researchers and developers have been working to find appropriate technological means of alleviating this sickness. In this paper, we aim to further improve HMD-based VR immersion by strengthening users' sense of presence in, and engagement with, the virtual world. Our results show that, with alternative approaches in the same VR environment, cybersickness can be overcome, improving user acceptance of VR technology.
DOI: 10.1145/3281505.3281613 · Published: 2018-11-28
Citations: 7
The effect of chair type on users' viewing experience for 360-degree video
Yang Hong, Andrew MacQuarrie, A. Steed
The consumption of 360-degree videos with head-mounted displays (HMDs) is increasing rapidly. A large number of HMD users watch 360-degree videos at home, often on non-swivel seats; however, videos are frequently designed to require the user to turn around. This work explores how differences in users' chair type might influence their viewing experience. A between-subjects experiment was conducted with 41 participants. Three chair conditions were used: fixed, half-swivel and full-swivel. A variety of measures were explored using eye tracking, questionnaires, tasks and semi-structured interviews. Results suggest that the fixed and half-swivel chairs discouraged exploration for certain videos compared with the full-swivel chair. Additionally, participants in the fixed chair had worse spatial awareness and greater concern about missing something for certain videos than those in the full-swivel chair. No significant differences were found in terms of incidental memory, general engagement and simulator sickness among the three chair conditions. Furthermore, thematic analysis of post-experiment interviews revealed four themes regarding the restrictive chairs: physical discomfort, difficulty following moving objects, reduced orientation and guided attention. Based on the findings, practical implications, limitations and future work are discussed.
DOI: 10.1145/3281505.3281519 · Published: 2018-11-28
Citations: 7
Evaluating ray casting and two gaze-based pointing techniques for object selection in virtual reality
Tomi Nukarinen, J. Kangas, Jussi Rantala, Olli Koskinen, R. Raisamo
Selecting an object is a basic interaction task in virtual reality (VR) environments. Interaction techniques based on gaze pointing have potential for this elementary task, yet there appears to be little empirical evidence concerning their benefits and drawbacks in VR. We ran an experiment studying three interaction techniques: ray casting, dwell time, and gaze trigger, where gaze trigger combined gaze pointing with controller-based selection. We studied user experience and interaction speed in a simple object selection task. The results indicated that ray casting outperforms both gaze-based methods, while gaze trigger performs better than dwell time.
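The three techniques differ mainly in how a selection is confirmed. As a minimal illustration, assuming a per-frame update loop and an arbitrary one-second dwell threshold (this sketch is ours, not code from the paper), dwell time and gaze trigger can be contrasted as follows:

```python
# Illustrative sketch of the two gaze-based selection techniques; names and
# the dwell threshold are assumptions, not the study's implementation.

DWELL_THRESHOLD_S = 1.0  # assumed dwell time needed to confirm a selection


class GazeSelector:
    def __init__(self):
        self.dwell_target = None
        self.dwell_elapsed = 0.0

    def update_dwell(self, gazed_object, dt):
        """Dwell time: selection fires once the gaze rests on one object long enough."""
        if gazed_object is None or gazed_object is not self.dwell_target:
            # Gaze moved to a different object (or nothing): restart the timer.
            self.dwell_target = gazed_object
            self.dwell_elapsed = 0.0
            return None
        self.dwell_elapsed += dt
        if self.dwell_elapsed >= DWELL_THRESHOLD_S:
            self.dwell_elapsed = 0.0
            return gazed_object  # selected
        return None

    def update_gaze_trigger(self, gazed_object, trigger_pressed):
        """Gaze trigger: gaze points, a controller button press confirms."""
        if gazed_object is not None and trigger_pressed:
            return gazed_object
        return None
```

Ray casting follows the same confirm-on-trigger pattern, except that the pointing ray is attached to the handheld controller rather than to the gaze.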
DOI: 10.1145/3281505.3283382 · Published: 2018-11-28
Citations: 21
EXG wearable human-machine interface for natural multimodal interaction in VR environment
Ker-Jiun Wang, Quanbo Liu, Soumya Vhasure, Quanfeng Liu, C. Zheng, Prakash Thakur
Current assistive technologies are complicated, cumbersome, and not portable, and users still need to apply extensive fine motor control to operate the device. Brain-Computer Interfaces (BCIs) could provide an alternative approach to solving these problems. However, current BCIs have low classification accuracy and require tedious human-learning procedures. The use of complicated electroencephalogram (EEG) caps, where many electrodes must be attached to the user's head to identify imagined motor commands, is highly inconvenient. In this demonstration, we showcase EXGbuds, a compact, non-obtrusive, and comfortable wearable device with non-invasive biosensing technology. People can comfortably wear it for long hours without fatigue. With our machine learning algorithms, we can identify various eye movements and facial expressions with over 95% accuracy, so that people with motor disabilities can play VR games completely "hands-free".
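As a rough sketch of how short windows of such a biosignal might be classified into gestures (this is an illustrative pipeline of our own, not the EXGbuds implementation; the sampling rate, frequency bands, and labels are assumptions), band-power features can be fed to an off-the-shelf classifier:

```python
# Hypothetical gesture-classification sketch: band-power features per window,
# classified with a random forest. Constants are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 250                                      # assumed sampling rate in Hz
BANDS = [(1, 4), (4, 8), (8, 13), (13, 30)]   # illustrative frequency bands


def band_power_features(window):
    """Mean spectral power per band for one 1-D signal window (>= FS samples)."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS])


def train(windows, labels):
    """windows: list of 1-D signal windows; labels: gesture names (e.g. 'blink')."""
    X = np.vstack([band_power_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf


def predict(clf, window):
    """Return the predicted gesture label for a single window."""
    return clf.predict(band_power_features(window).reshape(1, -1))[0]
```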
DOI: 10.1145/3281505.3281577 · Published: 2018-11-28
Citations: 4
Effect of accompanying onomatopoeia with sound feedback toward presence and user experience in virtual reality
Jiwon Oh, G. Kim
Onomatopoeia refers to a word that phonetically imitates the sound it describes. In comics or video, it is often used in captions as a way to dramatize, emphasize, exaggerate, and draw attention to a situation. In this paper, we explore whether the use of onomatopoeia can bring about similar effects and improve the user experience in virtual reality. We present an experiment comparing users' subjective experiences and attentive performance in two virtual worlds, each configured in two test conditions: (1) sound feedback without onomatopoeia and (2) sound feedback with it. Our experiment found that moderate and strategic use of onomatopoeia can indeed help direct user attention, offer object affordances, and thereby enhance the user experience and even the sense of presence and immersion.
DOI: 10.1145/3281505.3283401 · Published: 2018-11-28
Citations: 5
AR DeepCalorieCam V2: food calorie estimation with CNN and AR-based actual size estimation
Ryosuke Tanno, Takumi Ege, Keiji Yanai
In most cases, estimated calories are simply associated with the estimated food categories, or with a relative size compared to the standard size of each food category, which is usually provided manually by the user. In addition, for calorie estimation based on the amount of the meal, the user conventionally needs to register a reference object of known size in advance and take the food photo together with that reference object. In this demo, we propose a new approach to food calorie estimation that combines a CNN with Augmented Reality (AR)-based actual size estimation. Using the Apple ARKit framework, our demo app measures the actual size of the meal area by acquiring real-world coordinates as three-dimensional vectors. As a result, the meal area is measured directly, its size can be calculated more accurately than with the previous method, and calorie estimation accuracy has improved.
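The estimation step described above can be summarized, under our own assumptions, as scaling a per-category reference serving by the ratio of the ARKit-measured area to a reference area, with the category supplied by the CNN. The reference table and function names below are hypothetical, not values from the paper:

```python
# Hypothetical calorie-scaling sketch: category from a CNN, real-world area
# from ARKit, calories scaled from an assumed per-category reference serving.

REFERENCE = {
    # category: (reference serving area in cm^2, calories of that serving)
    "rice":  (150.0, 250.0),
    "salad": (200.0, 80.0),
}


def estimate_calories(category: str, measured_area_cm2: float) -> float:
    """Scale the reference serving's calories by the measured/reference area ratio."""
    ref_area, ref_kcal = REFERENCE[category]
    return ref_kcal * (measured_area_cm2 / ref_area)


# Example: a rice dish whose ARKit-measured area is 180 cm^2
print(estimate_calories("rice", 180.0))  # -> 300.0 kcal under these assumptions
```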
DOI: 10.1145/3281505.3281580 · Published: 2018-11-28
Citations: 16
Hamlet
Krzysztof Pietroszek, C. Eckhardt, Liudmila Tahai
We present "Hamlet", a prototype implementation of a virtual reality experience in which the player takes on the role of a theater director. The objective of the experience is to direct Adam, a virtual actor, to deliver the best possible performance of Hamlet's famous "To be, or not to be" soliloquy. The player interacts with Adam using voice commands, gestures, and body motion. Adam responds to acting directions, offers his own interpretations of the soliloquy, acquires choreography from the player's body motion, and learns the scene blocking by following the player's pointing gestures.
DOI: 10.1145/3281505.3281600 · Published: 2018-11-28
Citations: 2
Automatic 3D modeling of artwork and visualizing audio in an augmented reality environment
Elijah Schwelling, Kyungjin Yoo
In recent years, traditional art museums have begun to use AR/VR technology to make visits more engaging and interactive. This paper details an application that provides features designed to be immediately engaging and educational for museum visitors within an AR view. The application superimposes an automatically generated 3D representation over a scanned artwork, along with the work's authorship, title, and date of creation. A GUI allows the user to exaggerate or reduce the depth scale of the 3D representation, as well as to search for related works of music. Given this music as audio input, the generated 3D model acts as an audio visualizer by changing its depth scale based on the input frequency.
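A minimal sketch of the visualizer logic, under our own assumptions (the paper does not give an implementation; the sample rate, frequency range, and scale bounds are illustrative): take an audio frame, find its dominant frequency, and map it to a depth-scale factor.

```python
# Illustrative audio-to-depth-scale mapping; all constants are assumptions.
import numpy as np

FS = 44100                      # assumed audio sample rate in Hz
MIN_SCALE, MAX_SCALE = 0.5, 2.0  # assumed depth-scale range of the 3D relief


def depth_scale_for_frame(samples: np.ndarray) -> float:
    """Return a depth scale that grows with the dominant frequency of the frame."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
    dominant = freqs[np.argmax(spectrum)]
    # Normalize the dominant frequency into [0, 1] over an assumed audible range.
    t = np.clip(dominant / 5000.0, 0.0, 1.0)
    return float(MIN_SCALE + t * (MAX_SCALE - MIN_SCALE))
```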
DOI: 10.1145/3281505.3281576 · Published: 2018-11-28
Citations: 0
Tactile hand motion and pose guidance for 3D interaction
Alexander Marquardt, Jens Maiero, E. Kruijff, Christina Trepkowski, A. Schwandt, André Hinkenjann, Johannes Schöning, W. Stuerzlinger
We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface, and outline future work.
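One plausible way to realize such guidance, sketched here purely as an illustration (the actuator layout and selection rule are our assumptions, not the authors' design), is to pulse whichever actuator best matches the direction in which the hand should move:

```python
# Hypothetical actuator-selection sketch for directional tactile guidance.
import numpy as np

ACTUATORS = {            # assumed actuator directions in the hand's local frame
    "dorsal": np.array([0.0, 1.0, 0.0]),
    "palmar": np.array([0.0, -1.0, 0.0]),
    "radial": np.array([1.0, 0.0, 0.0]),
    "ulnar":  np.array([-1.0, 0.0, 0.0]),
}


def actuator_for_error(hand_pos, target_pos, deadzone=0.01):
    """Return the actuator whose direction best matches the required correction."""
    error = np.asarray(target_pos, dtype=float) - np.asarray(hand_pos, dtype=float)
    distance = np.linalg.norm(error)
    if distance < deadzone:
        return None  # close enough: no cue needed
    direction = error / distance
    return max(ACTUATORS, key=lambda name: float(np.dot(ACTUATORS[name], direction)))
```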
DOI: 10.1145/3281505.3281526 · Published: 2018-11-28
Citations: 11
GravityCup
Chih-Hao Cheng, Chia-Chi Chang, Ying-Hsuan Chen, Ying-Li Lin, Jing-Yuan Huang, Ping-Hsuan Han, Ju-Chun Ko, Lai-Chung Lee
During interaction in a virtual environment, haptic displays provide users with sensations such as vibration, texture simulation, and electrical muscle stimulation. However, as humans perceive object weights naturally in daily life, objects picked up in virtual reality feel unrealistically light. To create an immersive experience in virtual reality that includes weight sensation, we propose GravityCup, a liquid-based haptic feedback device that simulates realistic object weights and inertia when moving virtual handheld objects. In different scenarios, GravityCup uses liquid to provide users with a dynamic weight sensation experience that enhances interaction with handheld objects in virtual reality.
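A minimal sketch of the underlying weight-matching idea, under our own assumptions about the device (the constants and function below are hypothetical, not from the paper): compute how much liquid to pump so that the held device approximates the virtual object's mass.

```python
# Hypothetical liquid-weight mapping; device constants are assumptions.

EMPTY_DEVICE_KG = 0.30    # assumed mass of the empty cup and hardware
MAX_LIQUID_KG = 0.50      # assumed capacity of the reservoir
WATER_KG_PER_ML = 0.001   # density of water


def target_liquid_ml(virtual_mass_kg: float) -> float:
    """Liquid (in ml) needed so the held device approximates the virtual mass."""
    needed = max(0.0, virtual_mass_kg - EMPTY_DEVICE_KG)
    return min(needed, MAX_LIQUID_KG) / WATER_KG_PER_ML


# Example: a 0.6 kg virtual mug -> pump roughly 300 ml of water
print(target_liquid_ml(0.6))
```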
DOI: 10.1145/3281505.3281569 · Published: 2018-11-28
Citations: 27