
Latest publications from the 2009 IEEE Symposium on 3D User Interfaces

Tech-note: Device-free interaction spaces
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811203
D. Stødle, O. Troyanskaya, K. Li, Otto J. Anshus
Existing approaches to 3D input on wall-sized displays include tracking users with markers, using stereo or depth cameras, or having users carry devices like the Nintendo Wiimote. Markers make ad hoc usage difficult, and in public settings devices may easily get lost or stolen. Further, most camera-based approaches limit the area where users can interact.
Citations: 10
Tech-note: Spatial interaction using depth camera for miniature AR
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811216
Kyungdahm Yun, Woontack Woo
Spatial Interaction (SPINT) is a non-contact, passive interaction method that exploits a depth-sensing camera for monitoring the spaces around an augmented virtual object and interpreting their occupancy states as user input. The proposed method provides 3D hand interaction requiring no wearable device. The interaction schemes can be extended by combining virtual space sensors with different types of interpretation units. The depth perception anomaly caused by incorrect occlusion between real and virtual objects is also alleviated for more precise interaction. The resulting fluid interface will be used in a new exhibit platform, such as the Miniature AR System (MINARS), to support dynamic content manipulation by multiple users without severe tracking constraints.
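The core idea — treating regions of space around a virtual object as sensors whose occupancy is read from a depth image — can be illustrated with a minimal sketch. Everything below (the `occupied` helper, the region format, the thresholds) is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

def occupied(depth_mm, region, min_points=50):
    """Report whether a virtual box-shaped space sensor is occupied.

    depth_mm : (H, W) depth image in millimetres, 0 = no reading
    region   : pixel bounds plus a depth interval, e.g.
               {"u": (100, 160), "v": (80, 140), "z": (400.0, 600.0)}
    """
    u0, u1 = region["u"]
    v0, v1 = region["v"]
    z0, z1 = region["z"]
    patch = depth_mm[v0:v1, u0:u1]
    # Count pixels whose depth falls inside the sensor's volume
    hits = np.count_nonzero((patch >= z0) & (patch <= z1))
    return hits >= min_points

# Example: a sensor beside a virtual object acts as a "touch" input
frame = np.zeros((240, 320), dtype=np.float32)
frame[90:130, 110:150] = 500.0          # simulated hand at ~50 cm
left = {"u": (100, 160), "v": (80, 140), "z": (400.0, 600.0)}
print(occupied(frame, left))            # True
```

A real system would likely also debounce occupancy over several frames to reject depth-sensor noise.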
Citations: 3
Effects of tracking technology, latency, and spatial jitter on object movement
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811204
Robert J. Teather, Andriy Pavlovych, W. Stuerzlinger, I. MacKenzie
We investigate the effects of input device latency and spatial jitter on 2D pointing tasks and 3D object movement tasks. First, we characterize jitter and latency in a 3D tracking device and in an optical mouse used as a baseline comparison. We then present an experiment based on ISO 9241-9, which measures performance characteristics of pointing devices. We artificially introduce latency and jitter to the mouse and compare the results to the 3D tracker. Results indicate that latency has a much stronger effect on human performance than low amounts of spatial jitter. In a second study, we use a subset of conditions from the first to test latency and jitter in 3D object movement. The results indicate that large, uncharacterized jitter “spikes” significantly impact 3D performance.
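Artificially degrading a baseline input stream of this kind can be done by buffering timestamped samples and perturbing them on release. The class below is a minimal sketch under assumed parameters (sample format, Gaussian jitter model), not the paper's actual apparatus:

```python
import collections
import random

class DegradedPointer:
    """Wrap a 2D input stream, adding artificial latency and spatial jitter."""

    def __init__(self, latency_ms=50.0, jitter_px=0.3):
        self.latency_ms = latency_ms
        self.jitter_px = jitter_px
        self._buffer = collections.deque()   # (timestamp_ms, x, y)

    def push(self, t_ms, x, y):
        """Record a raw sample as it arrives from the device."""
        self._buffer.append((t_ms, x, y))

    def poll(self, now_ms):
        """Return the newest sample at least `latency_ms` old, with
        zero-mean Gaussian jitter added, or None if nothing is due yet."""
        sample = None
        while self._buffer and now_ms - self._buffer[0][0] >= self.latency_ms:
            sample = self._buffer.popleft()
        if sample is None:
            return None
        _, x, y = sample
        return (x + random.gauss(0.0, self.jitter_px),
                y + random.gauss(0.0, self.jitter_px))
```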
Citations: 156
Poster: MVCE - a design pattern to guide the development of next generation user interfaces
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811232
Jörg Stöcklein, C. Geiger, V. Paelke, Patrick Pogscheba
The development of next-generation user interfaces that employ novel sensors and additional output modalities has high potential to improve the usability of applications used in non-desktop environments. The design of such interfaces requires an exploratory design approach to handle the interplay of newly developed interaction techniques with complex hardware. As a first step towards a structured design process, we extended the MVC design pattern with an additional dimension, “Environment”, to capture elements and constraints from the real world.
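One plausible reading of MVC-plus-Environment is an environment object that gates controller actions with real-world constraints. The class names and the constraint-callback design below are illustrative assumptions, not the authors' actual pattern definition:

```python
class Model:
    """Application state, as in classic MVC."""
    def __init__(self):
        self.state = {}

    def apply(self, action):
        self.state.update(action)

class View:
    """Output side; a stub that just prints the state."""
    def render(self, model):
        print("render:", model.state)

class Environment:
    """The added 'E': real-world elements and constraints (tracker range,
    physical props, lighting) that gate what the controller may do."""
    def __init__(self):
        self.constraints = []               # callables: action -> bool

    def allows(self, action):
        return all(check(action) for check in self.constraints)

class Controller:
    def __init__(self, model, view, env):
        self.model, self.view, self.env = model, view, env

    def handle(self, action):
        # The classic MVC update loop, gated by real-world constraints
        if self.env.allows(action):
            self.model.apply(action)
            self.view.render(self.model)

# Example: reject moves outside an (assumed) 1.5 m tracker working volume
env = Environment()
env.constraints.append(lambda a: abs(a.get("x", 0.0)) < 1.5)
Controller(Model(), View(), env).handle({"x": 0.4})
```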
Citations: 1
Poster: Vibration as a wayfinding aid
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811223
C. P. Quintero, P. Figueroa
There are several ways to guide users to a destination in a virtual world, most of them inherited from real-world counterparts and typically based on visual feedback. Although these aids are generally very useful, we want to avoid distracting users from the main scene and the visual clutter that may occur when visual feedback is used for wayfinding. We present our work on a “vibrating belt”, a belt of motors that can be used as an orientation aid. We conducted a set of experiments comparing this device with a low-cognitive-load visual aid for wayfinding, and found our device to be as effective as the visual aids in our study. We believe this device could improve users' performance and concentration on the main activities in the scene.
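The core mapping such a belt needs — from the bearing of the destination relative to the user's facing direction to one of N motors — is simple to sketch. The function below is a hypothetical illustration (motor count, coordinate conventions, and the snapping rule are all assumptions):

```python
import math

def motor_for_target(user_pos, user_heading_rad, target_pos, n_motors=8):
    """Pick the belt motor whose direction points toward the target.

    user_pos, target_pos : (x, y) world coordinates
    user_heading_rad     : the user's facing direction, radians, from +x axis
    """
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    # Bearing of the target relative to where the user is facing
    bearing = (math.atan2(dy, dx) - user_heading_rad) % (2.0 * math.pi)
    sector = 2.0 * math.pi / n_motors
    # Snap to the nearest motor; motor 0 is assumed to sit at the navel
    return int((bearing + sector / 2.0) // sector) % n_motors

print(motor_for_target((0, 0), 0.0, (1, 1)))   # target 45° off heading -> motor 1
```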
Citations: 0
Poster: A virtual body for augmented virtuality by chroma-keying of egocentric videos
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811218
Frank Steinicke, G. Bruder, K. Rothaus, K. Hinrichs
A fully articulated visual representation of oneself in an immersive virtual environment has considerable impact on the subjective sense of presence in the virtual world. Therefore, many approaches address this challenge and incorporate a virtual model of the user's body in the VE. Such a “virtual body” (VB) is manipulated according to user motions, which are defined by feature points detected by a tracking system. The required tracking devices are unsuitable in scenarios that involve multiple persons simultaneously or in which participants change frequently. Furthermore, individual characteristics such as skin pigmentation, hairiness, or clothes are not considered by this procedure. In this paper we present a software-based approach that makes it possible to incorporate a realistic visual representation of oneself in the VE. The idea is to make use of images captured by cameras attached to video-see-through head-mounted displays. These egocentric frames can be segmented into a foreground showing parts of the human body and a background. The extremities can then be overlaid on the user's current view of the virtual world, and thus a high-fidelity virtual body can be visualized.
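A minimal sketch of the segment-and-overlay step using OpenCV, assuming a uniformly colored (green-screen) background; the paper's actual segmentation may be more sophisticated, and the HSV key range below is an assumption:

```python
import cv2
import numpy as np

def composite_body(egocentric_bgr, virtual_view_bgr,
                   key_lo=(35, 60, 60), key_hi=(85, 255, 255)):
    """Chroma-key the background out of an egocentric camera frame and
    overlay the remaining foreground (the user's body) on the VE render.

    key_lo/key_hi bound the background hue in HSV (a green screen here).
    """
    hsv = cv2.cvtColor(egocentric_bgr, cv2.COLOR_BGR2HSV)
    background = cv2.inRange(hsv, np.array(key_lo), np.array(key_hi))
    body_mask = (background == 0)[..., np.newaxis]   # True where body pixels
    # Keep camera pixels where the body is, VE pixels everywhere else
    return np.where(body_mask, egocentric_bgr, virtual_view_bgr)
```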
Citations: 16
A tactile distribution sensor which enables stable measurement under high and dynamic stretch
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811210
Hassan Alirezaei, Akihiko Nagakubo, Y. Kuniyoshi
Recently, we have been studying various tactile distribution sensors based on Electrical Impedance Tomography (EIT), a non-invasive technique that measures the resistance distribution of a conductive material from its boundary alone and needs no wiring inside the sensing area. In this paper, we present a newly developed conductive structure that is pressure-sensitive but stretch-insensitive, based on the contact resistance between (1) a network of stretchable, wave-like conductive yarns with high resistance and (2) a conductive stretchable sheet with low resistance. Based on this structure, we have realized a novel tactile distribution sensor that enables stable measurement under dynamic and large stretch from various directions. Stable measurement of pressure distribution under dynamic and complex deformation, such as pinching and pushing on a balloon surface, is demonstrated. The sensor was originally designed for implementation on interactive robots with soft and highly deformable bodies, but it can also be used in novel user interface devices or as an ordinary pressure distribution sensor. Among the most remarkable specifications of the developed tactile sensor are stretchability of up to 140% and toughness under adverse load conditions. The sensor also has realistic potential to become as thin and stretchable as stocking fabric. A goal of this research is to combine this thin sensor with stretch distribution sensors so that richer and more sophisticated tactile interactions can be realized.
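For context, EIT commonly drives a current between one pair of boundary electrodes and measures voltages across the remaining pairs, rotating through all drive positions to build one frame; a reconstruction algorithm then maps that frame to a resistance (and hence pressure) image. Below is a minimal sketch of the standard adjacent-pair scan loop, with `drive` and `measure` as hypothetical hardware stubs — the paper's actual protocol and its reconstruction step are not shown:

```python
def eit_scan(drive, measure, n_electrodes=16):
    """Collect one EIT frame with the adjacent-pair protocol.

    drive(i, j)   : inject current between boundary electrodes i and j (stub)
    measure(i, j) : return the voltage between electrodes i and j (stub)
    """
    frame = []
    for d in range(n_electrodes):
        d2 = (d + 1) % n_electrodes
        drive(d, d2)                        # rotate the current-injection pair
        for m in range(n_electrodes):
            m2 = (m + 1) % n_electrodes
            if {m, m2} & {d, d2}:           # skip pairs touching the driven pair
                continue
            frame.append(measure(m, m2))
    return frame
```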
Citations: 69
Tech-note: ScrutiCam: Camera manipulation technique for 3D objects inspection
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811200
Fabrice Decle, M. Hachet, P. Guitton
Inspecting a 3D object is a common task in 3D applications. However, the required camera movement is not trivial, and standard toolkits do not provide a single, efficient tool for it. ScrutiCam is a new 3D camera manipulation technique. It is based on a “click-and-drag” mouse move, where the user “drags” the point of interest on the screen to perform different camera movements such as zooming, panning, and rotating around a model. ScrutiCam can stay aligned with the surface of the model in order to keep the area of interest visible. ScrutiCam is also based on the point-of-interest (POI) approach, where the final camera position is specified by clicking on the screen. Contrary to other POI techniques, ScrutiCam allows the user to control the animation of the camera along the trajectory. It is also inspired by the “trackball” technique, where the virtual camera moves along the bounding sphere of the model. However, ScrutiCam's camera stays close to the surface of the model, whatever its shape. It can be used with mice as well as with touch screens, as it only needs 2D input and a single button.
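The key geometric step behind staying aligned with the surface — placing the camera a fixed distance from the clicked surface point, facing back along the surface normal — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the up-vector handling in particular is an assumption:

```python
import numpy as np

def camera_pose(hit_point, surface_normal, distance):
    """Place the camera `distance` away from the clicked surface point,
    along the normal, looking back at the point.

    hit_point, surface_normal : 3-element numpy arrays
    Returns (eye, forward, up) vectors for building a view matrix.
    """
    n = surface_normal / np.linalg.norm(surface_normal)
    eye = hit_point + distance * n
    forward = -n
    up_hint = np.array([0.0, 1.0, 0.0])
    if abs(np.dot(forward, up_hint)) > 0.99:     # looking straight up/down
        up_hint = np.array([1.0, 0.0, 0.0])
    right = np.cross(forward, up_hint)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    return eye, forward, up
```

Recomputing this pose each frame as the dragged point of interest slides over the mesh would keep the view glued to the surface regardless of the model's shape.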
Citations: 12
Visual clutter management in augmented reality: Effects of three label separation methods on spatial judgments
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811215
Stephen D. O'Connell, Magnus Axholt, M. Cooper, S. Ellis
This paper reports an experiment comparing three label separation methods for reducing visual clutter in Augmented Reality (AR) displays. We contrasted two common methods of avoiding visual overlap by moving labels in the 2D view plane with a third that distributes overlapping labels in stereoscopic depth. The experiment measured user identification performance during spatial judgment tasks in static scenes. The three methods were compared with a control condition in which no label separation method was employed. The results showed significant performance improvements, generally 15–30%, for all three methods over the control; however, these methods were statistically indistinguishable from each other. In-depth analysis showed significant performance degradation when the 2D view plane methods produced potentially confusing spatial correlations between labels and the markers they designate. Stereoscopically separated labels were subjectively judged harder to read than view-plane-separated labels. Since measured performance was affected both by label legibility and by the spatial correlation of labels and their designated objects, it is likely that the improved spatial correlation of stereoscopically separated labels and their designated objects compensated for poorer stereoscopic text legibility. Future testing with dynamic scenes is expected to distinguish the three label separation techniques more clearly.
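A view-plane separation method of the kind compared here can be approximated by a greedy pass that nudges overlapping label rectangles apart. The sketch below is a simplified stand-in (step size, push direction, and iteration cap are assumptions), not one of the paper's three methods:

```python
def separate_labels(rects, step=4, max_iter=100):
    """Nudge overlapping label rectangles apart in the view plane.

    rects : list of mutable [x, y, w, h] boxes; y grows downward.
    """
    def overlap(a, b):
        return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
                a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

    for _ in range(max_iter):
        moved = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if overlap(rects[i], rects[j]):
                    # Push the lower of the two labels further down
                    lower = rects[i] if rects[i][1] >= rects[j][1] else rects[j]
                    lower[1] += step
                    moved = True
        if not moved:
            break
    return rects

print(separate_labels([[0, 0, 40, 12], [10, 6, 40, 12]]))
```

A drawback this makes visible is the one the study measured: the further a label drifts from its marker, the weaker their spatial correlation becomes.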
Citations: 26
Tech-note: Multimodal feedback in 3D target acquisition
Pub Date: 2009-03-14 DOI: 10.1109/3DUI.2009.4811212
Dalia El-Shimy, G. Marentakis, J. Cooperstock
We investigated dynamic target acquisition within a 3D scene rendered on a 2D display. Our focus was on the relative effects of specific perceptual cues provided as feedback. Participants were asked to use a specially designed input device to control the position of a volumetric cursor and acquire targets as they appeared one by one on the screen. To compensate for the limited depth cues afforded by 2D rendering, additional feedback was offered through audio, visual and haptic modalities. Cues were delivered either as discrete multimodal feedback given only when the target was completely contained within the cursor, or continuously in proportion to the distance between the cursor and the target. Discrete feedback prevailed by improving accuracy without compromising selection times. Continuous feedback resulted in lower accuracy than discrete feedback. In addition, reaction to the haptic stimulus was faster than for visual feedback. Finally, while the haptic modality helped decrease completion time, it led to a lower success rate.
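The distinction between the two delivery schemes — discrete feedback on full containment versus feedback proportional to cursor-target distance — reduces to a small function. This sketch assumes spherical cursor and target volumes and a particular distance normalization, both of which are illustrative simplifications:

```python
import math

def feedback_level(cursor_pos, cursor_r, target_pos, target_r, mode="discrete"):
    """Feedback intensity in [0, 1] for a volumetric cursor acquiring a target.

    cursor_pos, target_pos : 3D coordinates; cursor_r, target_r : sphere radii.
    """
    d = math.dist(cursor_pos, target_pos)
    contained = d + target_r <= cursor_r        # target fully inside cursor
    if mode == "discrete":
        return 1.0 if contained else 0.0
    # Continuous: scales with cursor-target proximity (assumed normalization)
    return max(0.0, min(1.0, 1.0 - d / (cursor_r + target_r)))

print(feedback_level((0, 0, 0), 5.0, (0, 0, 1), 2.0))              # 0.0
print(feedback_level((0, 0, 0), 5.0, (0, 0, 1), 2.0, "continuous"))
```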
Citations: 6