
Symposium on Spatial User Interaction: Latest Publications

Autonomous control of human-robot spacing: a socially situated approach
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491402
Ross Mead, M. Matarić
To enable socially situated human-robot interaction, a robot must both understand and control proxemics, the social use of space, to employ communication mechanisms analogous to those used by humans. In this work, we investigate speech and gesture production and recognition as a function of social agent spacing during both human-human and human-robot interactions. These models were used to implement an autonomous proxemic robot controller. The controller utilizes a sampling-based method, wherein each sample represents inter-agent pose, as well as agent speech and gesture production and recognition estimates; a particle filter uses these estimates to maximize the performance of both the robot and the human during the interaction. This functional approach yields pose, speech, and gesture estimates consistent with related literature. This work contributes to the understanding of the underlying pre-cultural processes that govern proxemic behavior, and has implications for robust proxemic controllers for robots in complex interactions and environments.
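The sampling-based controller described above can be pictured as a particle filter over candidate inter-agent poses, with each particle weighted by an estimate of how well speech and gestures would be produced and recognized at that pose. The sketch below is a minimal illustration of that loop, not the authors' implementation; the Gaussian performance model, the 1.2 m preferred distance, and all other parameter values are assumptions chosen for demonstration.

```python
import numpy as np

# Minimal particle-filter sketch of a proxemic controller. Each particle is a
# candidate inter-agent pose (distance, orientation); its weight is a
# hypothetical estimate of joint speech/gesture production and recognition
# performance at that pose.

RNG = np.random.default_rng(0)

def performance_estimate(distance, orientation):
    """Hypothetical performance model: recognition is assumed to peak near a
    preferred interpersonal distance (~1.2 m) and face-to-face orientation."""
    return (np.exp(-((distance - 1.2) ** 2) / 0.2)
            * np.exp(-(orientation ** 2) / 0.5))

def proxemic_step(particles):
    """One predict-weight-resample cycle over candidate poses."""
    particles = particles + RNG.normal(0.0, 0.05, particles.shape)  # motion noise
    weights = performance_estimate(particles[:, 0], particles[:, 1])
    weights /= weights.sum()
    best = particles[np.argmax(weights)]  # pose the controller would steer toward
    idx = RNG.choice(len(particles), size=len(particles), p=weights)  # resample
    return particles[idx], best

particles = np.column_stack([RNG.uniform(0.5, 3.0, 500),    # distance (m)
                             RNG.uniform(-1.0, 1.0, 500)])  # orientation (rad)
for _ in range(50):
    particles, best = proxemic_step(particles)
print("selected pose (distance m, orientation rad):", best)
```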
Citations: 1
Visualization of off-surface 3D viewpoint locations in spatial augmented reality
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491378
Matt Adcock, David Feng, B. Thomas
Spatial Augmented Reality (SAR) systems can be used to convey guidance in a physical task from a remote expert. Sometimes the remote expert is provided with a single camera view of the workspace, but if they are given a live-captured 3D model and can freely control their point of view, the local worker needs to know what the remote expert can see. We present three new SAR techniques, Composite Wedge, Vector Boxes, and Eyelight, for visualizing off-surface 3D viewpoints and supporting the required workspace awareness. Our study showed that the Composite Wedge cue was best for providing location awareness, and the Eyelight cue was best for providing visibility-map awareness.
Citations: 19
Fusing depth, color, and skeleton data for enhanced real-time hand segmentation
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491401
Yu-Jen Huang, M. Bolas, Evan A. Suma
As sensing technology has evolved, spatial user interfaces have become increasingly popular platforms for interacting with video games and virtual environments. In particular, recent advances in consumer-level motion tracking devices such as the Microsoft Kinect have sparked a dramatic increase in user interfaces controlled directly by the user's hands and body. However, existing skeleton tracking middleware created for these sensors, such as that developed by Microsoft and OpenNI, tends to focus on coarse full-body motions and suffers from several well-documented limitations when attempting to track the positions of the user's hands and segment them from the background. In this paper, we present an approach for more robustly handling these failure cases by combining the original skeleton tracking positions with the color and depth information returned from the sensor.
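As a rough illustration of the fusion idea (not the paper's exact pipeline), the sketch below gates pixels by depth around the tracked hand joint, by an assumed skin-tone range in YCrCb, and by connectivity to the joint pixel; all thresholds are assumptions.

```python
import numpy as np
import cv2

def segment_hand(depth_mm, color_bgr, hand_px, hand_depth_mm, band_mm=120):
    """Gate pixels by depth, color, and connectivity to the tracked joint.

    depth_mm      -- HxW depth image in millimetres
    color_bgr     -- HxWx3 color image registered to the depth image
    hand_px       -- (col, row) hand joint position from the skeleton tracker
    hand_depth_mm -- depth reported by the tracker at the hand joint
    """
    # 1. Depth gate: keep pixels within a band around the tracked hand depth.
    depth_mask = np.abs(depth_mm.astype(np.int32) - hand_depth_mm) < band_mm

    # 2. Color gate: a simple skin-tone range in YCrCb (assumed thresholds).
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135)) > 0

    # 3. Spatial gate: keep only the connected component containing the joint
    #    (assumes the joint pixel itself survived the two gates above).
    fused = (depth_mask & skin_mask).astype(np.uint8)
    _, labels = cv2.connectedComponents(fused)
    return labels == labels[hand_px[1], hand_px[0]]
```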
Citations: 2
Spatial user interface for experiencing Mogao caves
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491372
L. Chan, S. Kenderdine, J. Shaw
In this paper, we describe the design and implementation of Pure Land AR, an installation that employs a spatial user interface and allows users to virtually visit a UNESCO World Heritage site, the Mogao Caves, using handheld devices. The installation was shown to the public at several museums and galleries. The results of the work and the user responses are discussed.
Citations: 11
Real-time image-based animation using morphing with human skeletal tracking
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491395
Wataru Naya, Kazuya Fukumoto, Tsuyoshi Yamamoto, Y. Dobashi
We propose a real-time image-based animation technique for virtual fitting applications. Our method finds key images in a database using skeletal data as the search key, and then creates in-between images by image morphing. Compared to a conventional method using 3DCG rendering, our method achieves a higher frame rate and more realistic textile representation. Unlike [1], data size and search time are reduced through database optimization.
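The lookup stage can be pictured as a nearest-neighbor search over stored skeletal poses followed by a blend between the two closest key images. The sketch below assumes a flat (pose, image) data layout, which is our assumption rather than the authors' database schema, and substitutes a plain cross-dissolve for the paper's feature-based morphing.

```python
import numpy as np

def find_key_frames(query_pose, key_poses):
    """Return the indices of the two key poses nearest to the live skeleton,
    plus an interpolation weight derived from their distances."""
    d = np.linalg.norm(key_poses - query_pose, axis=1)
    i, j = np.argsort(d)[:2]
    t = d[i] / (d[i] + d[j] + 1e-9)  # t == 0 means exactly at key pose i
    return i, j, t

def morph(img_a, img_b, t):
    """Stand-in for image morphing: a plain cross-dissolve. A real morph
    would warp both images along feature correspondences before blending."""
    return ((1.0 - t) * img_a + t * img_b).astype(img_a.dtype)
```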
Citations: 0
Free-hands interaction in augmented reality
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491370
D. Datcu, S. Lukosch
The ability to use free-hand gestures is extremely important for mobile augmented reality applications. This paper proposes a computer-vision-driven model for natural free-hands interaction in augmented reality. The novelty of the research is the use of robust hand modeling combining Viola-Jones detection and Active Appearance Models. A usability study evaluates the free-hands interaction model with a focus on the accuracy of hand-based pointing for menu navigation and menu item selection. The results indicate high accuracy of pointing and high usability of the free-hands interaction in augmented reality. The research is part of a joint project of TU Delft and the Netherlands Forensic Institute in The Hague, aiming at the development of novel technologies for crime scene investigation.
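The detection stage of such a pipeline might look like the OpenCV sketch below. It covers only the Viola-Jones cascade step, since OpenCV has no built-in Active Appearance Model for the refinement stage, and "hand_cascade.xml" is a hypothetical trained cascade file.

```python
import cv2

# "hand_cascade.xml" is a hypothetical trained cascade; OpenCV does not ship
# a hand cascade out of the box, so one would have to be trained or obtained.
cascade = cv2.CascadeClassifier("hand_cascade.xml")

def detect_hands(frame_bgr):
    """Return bounding boxes of hand candidates in a video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # normalize illumination before detection
    return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                    minNeighbors=5, minSize=(40, 40))
```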
Citations: 32
FocalSpace: multimodal activity tracking, synthetic blur and adaptive presentation for video conferencing
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491377
Lining Yao, Anthony DeVincenzi, Anna Pereira, H. Ishii
We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users' focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
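The synthetic blur effect can be approximated with a depth-gated composite: blur the whole frame, then keep the original pixels only inside a depth band around the focused subject. The sketch below is our illustration of that idea with assumed parameter values, not the FocalSpace implementation.

```python
import numpy as np
import cv2

def focal_blur(color_bgr, depth_mm, focus_depth_mm, band_mm=400):
    """Blur everything outside a depth band around the focused subject."""
    blurred = cv2.GaussianBlur(color_bgr, (31, 31), 0)
    in_focus = np.abs(depth_mm.astype(np.int32) - focus_depth_mm) < band_mm
    # Broadcast the HxW mask across the three color channels.
    return np.where(in_focus[..., None], color_bgr, blurred)
```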
Citations: 16
Up- and downwards motions in 3D pointing
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491393
Sidrah Laldin, Robert J. Teather, W. Stuerzlinger
We present an experiment that examines 3D pointing in fish-tank VR using the ISO 9241-9 standard. The experiment used three pointing techniques: mouse, ray, and touch using a stylus. It evaluated user pointing performance with stereoscopically displayed targets at varying heights above an upward-facing display. Results show differences between upward and downward motions for the 3D touch technique.
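ISO 9241-9 evaluations typically report effective throughput, TP = IDe / MT, with IDe = log2(De/We + 1) and We = 4.133 * SDx. A minimal computation of that metric follows; it is a sketch of the standard formula, not the study's analysis code.

```python
import math
import statistics

def throughput(distances, movement_times, endpoint_errors):
    """ISO 9241-9 effective throughput (bits/s) for one condition.

    distances       -- nominal target distance per trial
    movement_times  -- completion time per trial, in seconds
    endpoint_errors -- signed endpoint deviation along the task axis per trial
    """
    we = 4.133 * statistics.stdev(endpoint_errors)  # effective target width
    de = statistics.mean(distances)                 # effective distance
    ide = math.log2(de / we + 1)                    # effective index of difficulty
    return ide / statistics.mean(movement_times)
```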
Citations: 0
Seamless interaction using a portable projector in perspective corrected multi display environments
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491375
Jorge H. dos S. Chernicharo, Kazuki Takashima, Y. Kitamura
In this work, we study ways to use a portable projector to extend the workspace in a perspective-corrected multi-display environment (MDE). The system uses the relative position between the user and the displays to show content perpendicular to the user's point of view, in a deformation-free fashion. We introduce the image created by the portable projector as a new, temporary, and movable image in the perspective-corrected MDE, creating a more flexible workspace for the user. In our study, we combined two ways of using the projector (handheld or head-mounted) with two ways of moving the cursor across the screens (using a mouse or a laser-pointing based strategy), yielding four techniques for users to try. Two exploratory experiments were performed to evaluate our system. The first experiment (5 participants) evaluated how using a movable screen to fill the gaps between displays affects user performance in a cross-display pointing task, while the second (6 participants) evaluated how using the projector to extend the workspace impacts task completion time in an off-screen content recognition task. Our results showed that while no significant improvement in user performance could be seen in the pointing task, users were significantly faster when recognizing off-screen content. The introduction of the portable projector also reduced the overall task load in both tasks.
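The geometric core of such perspective correction is casting rays from the tracked eye position through the corners of a virtual, user-facing rectangle and intersecting them with the display plane. The sketch below shows one assumed formulation of that step; mapping the resulting world-plane corners to display pixels (for example, with a precomputed homography) is omitted.

```python
import numpy as np

def corrected_corners(eye, virtual_quad, plane_point, plane_normal):
    """Intersect eye-through-corner rays with the display plane.

    eye          -- tracked 3D eye position
    virtual_quad -- 4x3 array of corners of the user-facing virtual rectangle
    plane_point, plane_normal -- the display plane in world coordinates
    """
    hits = []
    for corner in virtual_quad:
        d = corner - eye                        # ray direction from the eye
        t = (np.dot(plane_point - eye, plane_normal)
             / np.dot(d, plane_normal))         # ray-plane intersection
        hits.append(eye + t * d)
    return np.array(hits)  # on-plane corners, still in world units
```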
Citations: 8
Evaluating performance benefits of head tracking in modern video games
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491376
Arun K. Kulshreshth, J. Laviola
We present a study that investigates the user performance benefits of head tracking in modern video games. We explored four carefully chosen commercial games with tasks that can potentially benefit from head tracking. For each game, quantitative and qualitative measures were taken to determine whether users performed better and learned faster in the experimental group (with head tracking) than in the control group (without head tracking). A game expertise pre-questionnaire was used to classify participants into casual and expert categories to analyze a possible impact on performance differences. Our results indicate that head tracking provided a significant performance benefit for experts in two of the games tested. In addition, our results indicate that head tracking is more enjoyable in slow-paced video games and can hurt performance in fast-paced modern video games. The reasoning behind our results is discussed and forms the basis of our recommendations to game developers who want to use head tracking to enhance game experiences.
Citations: 30