
Latest publications from Symposium on Spatial User Interaction

Extramission: A Large Scale Interactive Virtual Environment Using Head Mounted Projectors and Retro-reflectors
Pub Date : 2019-10-19 DOI: 10.1145/3357251.3357592
Hiroto Aoki, J. Rekimoto
We present Extramission, a method for building large-scale interactive virtual environments. It consists of dual head-mounted pico projectors and retro-reflective materials. With high-accuracy retro-reflective materials, laser beams scanned onto the user's retina produce clear, focus-free vision. In this retinal-scanning configuration, scanned images remain clearly visible even at low projector luminance, which helps avoid overlap between projected images. Because overlap is small, Extramission can provide multi-user virtual experiences that show different images to each individual, and the dual pico projectors give each user stereoscopic vision. Moreover, the tolerance for low luminance allows a larger distance between users and retro-reflectors, which is required for large-scale virtual experiences using head-mounted projectors. In this paper, we describe the principle and implementation of Extramission and evaluate its image-display performance.
Citations: 1
Object Manipulation by Absolute Pointing with a Smartphone Gyro Sensor
Pub Date : 2019-10-19 DOI: 10.1145/3357251.3360006
Koki Sato, Mitsunori Matsushita
The purpose of this study is to let users operate the various computers around them using their own smartphones. Methods for operating computers around the home by voice, such as Internet of Things (IoT) appliances, are now widespread. However, voice operation has problems: it is limited in the instruction patterns it can express, and it cannot be used by many users simultaneously. To solve this, we propose a method that determines the location a user points to using a smartphone gyro sensor. This method achieves controller integration, multiple functions, and simultaneous use by multiple people.
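The core of such absolute pointing can be sketched as a ray cast from the phone's orientation, intersected with known device positions. The following is a minimal illustrative sketch, not the authors' implementation; the device names, positions, and the distance threshold are all hypothetical:

```python
import math

# Hypothetical sketch: given a smartphone's yaw/pitch from its gyro sensor and
# the user's position, cast a pointing ray and pick the registered device
# whose position lies nearest that ray.

def ray_direction(yaw_deg, pitch_deg):
    """Unit direction vector for a ray given yaw (about z) and pitch (elevation)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))

def point_ray_distance(point, origin, direction):
    """Distance from a 3D point to the ray origin + t*direction, t >= 0."""
    v = tuple(p - o for p, o in zip(point, origin))
    t = max(0.0, sum(a * b for a, b in zip(v, direction)))  # clamp behind-ray hits
    closest = tuple(o + t * d for o, d in zip(origin, direction))
    return math.dist(point, closest)

def pointed_device(devices, origin, yaw_deg, pitch_deg, threshold=0.5):
    """Return the device name closest to the pointing ray, or None if too far."""
    d = ray_direction(yaw_deg, pitch_deg)
    name, dist = min(((n, point_ray_distance(p, origin, d))
                      for n, p in devices.items()), key=lambda x: x[1])
    return name if dist <= threshold else None

# Example: user at (0, 0, 1.5) points along +x; the TV sits 3 m away on that axis.
devices = {"tv": (3.0, 0.0, 1.0), "lamp": (0.0, 3.0, 2.0)}
print(pointed_device(devices, (0.0, 0.0, 1.5), 0.0, 0.0))
```

In practice the gyro alone drifts, so a deployed system would fuse it with the magnetometer/accelerometer for an absolute heading; the geometry above is unchanged.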
Citations: 1
Effects of Shared Gaze Parameters on Visual Target Identification Task Performance in Augmented Reality
Pub Date : 2019-10-19 DOI: 10.1145/3357251.3357587
Nahal Norouzi, A. Erickson, Kangsoo Kim, Ryan Schubert, J. Laviola, G. Bruder, G. Welch
Augmented reality (AR) technologies provide a shared platform for users to collaborate in a physical context involving both real and virtual content. To enhance the quality of interaction between AR users, researchers have proposed augmenting users' interpersonal space with embodied cues such as their gaze direction. While beneficial for improving interpersonal spatial communication, such shared gaze environments suffer from multiple types of errors related to eye tracking and networking that can reduce objective performance and subjective experience. In this paper, we conducted a human-subject study to understand the impact of errors in accuracy, precision, latency, and dropout on users' performance when using shared gaze cues to identify a target among a crowd of people. We simulated varying amounts of error and varied target distances, measuring participants' objective performance through their response time and error rate, and their subjective experience and cognitive load through questionnaires. We found significant differences suggesting that the simulated error levels had stronger effects on participants' performance than target distance, with accuracy and latency having a high impact on participants' error rate. We also observed that participants assessed their own performance as lower than it objectively was, and we discuss implications for practical shared gaze applications.
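The four simulated error types map naturally onto a simple signal-degradation model. Below is an illustrative sketch (an assumed parameterization, not the study's code): a constant offset for accuracy error, Gaussian jitter for precision error, delayed samples for latency, and missing samples for dropout:

```python
import random

# Hypothetical model of degrading an ideal 2D gaze stream with the four error
# types examined in the paper. All parameter names and defaults are assumptions.

def degrade_gaze(samples, offset=(0.0, 0.0), jitter_std=0.0,
                 latency_frames=0, dropout_rate=0.0, rng=None):
    """Return a degraded copy of `samples` (list of (x, y)); lost samples are None."""
    rng = rng or random.Random(0)
    out = []
    for i in range(len(samples)):
        if rng.random() < dropout_rate:   # dropout: this frame's cue is lost
            out.append(None)
            continue
        j = max(0, i - latency_frames)    # latency: an older sample is shown
        x, y = samples[j]
        out.append((x + offset[0] + rng.gauss(0, jitter_std),   # accuracy + precision
                    y + offset[1] + rng.gauss(0, jitter_std)))
    return out

# Example: a gaze sweep along x, with a 1-unit accuracy offset and 2 frames of latency.
clean = [(float(i), 0.0) for i in range(5)]
print(degrade_gaze(clean, offset=(1.0, 0.0), latency_frames=2))
```

Sweeping each parameter independently while holding the others at zero is one way to reproduce the factorial structure such a study manipulates.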
Citations: 18
Extending Virtual Reality Display Wall Environments Using Augmented Reality
Pub Date : 2019-10-19 DOI: 10.1145/3357251.3357579
Arthur Nishimoto, Andrew E. Johnson
Two major form factors for virtual reality are head-mounted displays and large display environments such as CAVE® and its LCD-based successor CAVE2®. Each has distinct advantages and limitations based on how it is used. This work explores preserving the high resolution and sense of presence of CAVE2 environments in full stereoscopic mode by using a see-through augmented reality HMD to expand the user's field of regard beyond the physical display walls. In our explorative study, we found that in a visual search task in a stereoscopic CAVE2, adding the HoloLens to expand the field of regard did not hinder participants' performance or accuracy, but promoted more physical navigation, which participants felt in post-study interviews aided their spatial awareness of the virtual environment.
Citations: 13
Effects of Depth Layer Switching between an Optical See-Through Head-Mounted Display and a Body-Proximate Display
Pub Date : 2019-09-06 DOI: 10.1145/3357251.3357588
Anna Eiberger, P. Kristensson, S. Mayr, M. Kranz, Jens Grubert
Optical see-through head-mounted displays (OST HMDs) typically display virtual content at a fixed focal distance while users need to integrate this information with real-world information at different depth layers. This problem is pronounced in body-proximate multi-display systems, such as when an OST HMD is combined with a smartphone or smartwatch. While such joint systems open up a new design space, they also reduce users’ ability to integrate visual information. We quantify this cost by presenting the results of an experiment (n=24) that evaluates human performance in a visual search task across an OST HMD and a body-proximate display at 30 cm. The results reveal that task completion time increases significantly by approximately 50% and the error rate increases significantly by approximately 100% compared to visual search on a single depth layer. These results highlight a design trade-off when designing joint OST HMD-body proximate display systems.
Citations: 24
Improving Usability, Efficiency, and Safety of UAV Path Planning through a Virtual Reality Interface
Pub Date : 2019-04-18 DOI: 10.1145/3357251.3362742
Jesse Paterson, Jiwoong Han, T. Cheng, P. Laker, D. McPherson, Joseph Menke, A. Yang
As the capability and complexity of UAVs continue to increase, specifying the complex 3D flight paths necessary to instruct them becomes more complicated. Immersive interfaces, such as those afforded by virtual reality (VR), have several unique traits that may improve the user's ability to perceive and specify 3D information. These traits include stereoscopic depth cues, which induce a sense of physical space, as well as six-degree-of-freedom (DoF) natural head-pose and gesture interactions. This work introduces an open-source platform for 3D aerial path planning in VR and compares it to existing UAV piloting interfaces. Our study found statistically significant improvements in safety over a manual control interface and in efficiency over a 2D touchscreen interface. The results illustrate that immersive interfaces provide a viable alternative to touchscreen interfaces for UAV path planning.
Citations: 12
Effects of stereo and head tracking in 3d selection tasks
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491392
Bartosz Bajer, Robert J. Teather, W. Stuerzlinger
We report a 3D selection study comparing stereo and head-tracking with both mouse and pen pointing. Results indicate stereo was primarily beneficial to the pen mode, but slightly hindered mouse speed. Head tracking had fewer noticeable effects.
Citations: 5
Effectiveness of commodity BCI devices as means to control an immersive virtual environment
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491403
Jerald Thomas, Steve Jungst, P. Willemsen
This poster focuses on research investigating the control of an immersive virtual environment using the Emotiv EPOC, a consumer-grade brain computer interface. The primary emphasis of the work is to determine the feasibility of the Emotiv EPOC at manipulating elements of an interactive virtual environment. We have developed a system utilizing the Emotiv EPOC as the main interface to a custom testing environment comprised of the Blender Game Engine, Python, and a VRPN system. A series of experiments that measure response time, reliability, and accuracy have been developed and the current results are described. Our poster presents the current state of the project including preliminary efforts in piloting the experiments. These findings provide insight into potential results from experimentation with active subjects and prove to be promising.
Citations: 1
Bimanual spatial haptic interface for assembly tasks
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491398
Jonas Forsslund, Sara C. Schvartzman, S. Girod, Rebeka G. Silva, J. Salisbury, Sonny Chan, B. Jo
We have created a novel virtual assembly tool that uses two haptic devices for bimanual manipulation. The project is focused on the manipulation of fractured jaw bones for patient-specific surgical planning, but can be extended to any assembly task involving organically shaped objects (Figure 1). Spatial input devices that support virtual object manipulation through direct mapping are easier and more natural to use for tasks that are fundamentally 3D, like assembly tasks. Employing both hands further provides a frame of reference that improves spatial understanding of the manipulated objects [2]. Few studies have examined the importance of haptic feedback for bimanual interactions, but it has been shown to be meaningful even for unimanual tasks [4]. We are showing a demo of our work in progress to bring high-fidelity haptic rendering to bimanually operated spatial interfaces. As bimanual direct manipulation improves performance even without collision response, we hypothesize that haptic feedback improves it further.
Citations: 0
To touch or not to touch?: comparing 2D touch and 3D mid-air interaction on stereoscopic tabletop surfaces
Pub Date : 2013-07-20 DOI: 10.1145/2491367.2491369
G. Bruder, Frank Steinicke, W. Stuerzlinger
Recent developments in touch and display technologies have laid the groundwork for combining touch-sensitive display systems with stereoscopic three-dimensional (3D) displays. Although this combination provides a compelling user experience, interaction with objects displayed stereoscopically in front of the screen poses some fundamental challenges: traditionally, touch-sensitive surfaces capture only direct contacts, so the user has to penetrate the visually perceived object to touch the 2D surface behind it. Conversely, recent technologies support capturing finger positions in front of the display, enabling users to interact with intangible objects in mid-air 3D space. In this paper we compare such 2D touch and 3D mid-air interactions in a Fitts' law experiment for objects with varying stereoscopic parallax. The results show that the 2D touch technique is more efficient close to the screen, whereas for targets further from the screen, 3D selection outperforms 2D touch. Based on the results, we present implications for the design and development of future touch-sensitive interfaces for stereoscopic displays.
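A Fitts' law comparison like this one is typically summarized with the index of difficulty and throughput of each condition. As a minimal sketch of those standard quantities (Shannon formulation; the numbers below are illustrative, not the paper's data):

```python
import math

# Standard Fitts' law quantities; example values are made up for illustration.

def index_of_difficulty(distance, width):
    """Index of difficulty (bits) for target distance D and width W: log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput (bits/s): index of difficulty divided by mean movement time."""
    return index_of_difficulty(distance, width) / movement_time_s

# e.g. a 30 cm reach to a 2 cm target selected in 1.2 s on average
ID = index_of_difficulty(30.0, 2.0)   # log2(16) = 4.0 bits
print(ID, throughput(30.0, 2.0, 1.2))
```

Comparing throughput per condition (2D touch vs. 3D mid-air, near vs. far parallax) normalizes speed against task difficulty, which is why such experiments report it alongside raw completion time.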
Citations: 69