Tech-note: Device-free interaction spaces
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811203
D. Stødle, O. Troyanskaya, K. Li, Otto J. Anshus
Existing approaches to 3D input on wall-sized displays include tracking users with markers, using stereo or depth cameras, or having users carry devices like the Nintendo Wiimote. Markers make ad hoc usage difficult, and in public settings devices may easily be lost or stolen. Further, most camera-based approaches limit the area in which users can interact.
{"title":"Tech-note: Device-free interaction spaces","authors":"D. Stødle, O. Troyanskaya, K. Li, Otto J. Anshus","doi":"10.1109/3DUI.2009.4811203","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811203","url":null,"abstract":"Existing approaches to 3D input on wall-sized displays include tracking users with markers, using stereo- or depth-cameras or have users carry devices like the Nintendo Wiimote. Markers makes ad hoc usage difficult, and in public settings devices may easily get lost or stolen. Further, most camera-based approaches limit the area where users can interact.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124413884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tech-note: Spatial interaction using depth camera for miniature AR
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811216
Kyungdahm Yun, Woontack Woo
Spatial Interaction (SPINT) is a non-contact, passive interaction method that exploits a depth-sensing camera to monitor the spaces around an augmented virtual object and interpret their occupancy states as user input. The proposed method provides 3D hand interaction without any wearable device. The interaction schemes can be extended by combining virtual space sensors with different types of interpretation units. The depth perception anomaly caused by incorrect occlusion between real and virtual objects is also alleviated for more precise interaction. The resulting fluid interface will be used for a new exhibit platform, such as the Miniature AR System (MINARS), to support dynamic content manipulation by multiple users without severe tracking constraints.
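The core mechanism, monitoring virtual sensor volumes with a depth camera and treating their occupancy as input, can be sketched in a few lines. The class below is a hypothetical illustration (the paper's own space sensors and interpretation units are not reproduced here), and it assumes the depth frame has already been back-projected to a 3D point cloud in the object's coordinate frame.

```python
import numpy as np

class BoxSpaceSensor:
    """Hypothetical axis-aligned virtual sensor volume next to an augmented object."""
    def __init__(self, center, size, min_points=50):
        self.lo = np.asarray(center) - np.asarray(size) / 2.0
        self.hi = np.asarray(center) + np.asarray(size) / 2.0
        self.min_points = min_points  # points required to count the volume as occupied

    def occupied(self, points):
        """points: (N, 3) array from the depth camera, already in object coordinates."""
        inside = np.all((points >= self.lo) & (points <= self.hi), axis=1)
        return int(inside.sum()) >= self.min_points

# Example: a 'push' sensor hovering in front of a virtual object at the origin.
push_sensor = BoxSpaceSensor(center=[0.0, 0.0, 0.15], size=[0.1, 0.1, 0.1])
frame = np.random.uniform(-0.3, 0.3, size=(5000, 3))  # stand-in for one depth frame
if push_sensor.occupied(frame):
    print("hand detected in the push volume -> trigger interaction")
```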
{"title":"Tech-note: Spatial interaction using depth camera for miniature AR","authors":"Kyungdahm Yun, Woontack Woo","doi":"10.1109/3DUI.2009.4811216","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811216","url":null,"abstract":"Spatial Interaction (SPINT) is a non-contact passive interaction method that exploits a depth-sensing camera for monitoring the spaces around an augmented virtual object and interpreting their occupancy states as user input. The proposed method provides 3D hand interaction requiring no wearable device. The interaction schemes can be extended by combining virtual space sensors with different types of interpretation units. The depth perception anomaly caused by an incorrect occlusion between real and virtual objects is also alleviated for more precise interaction. The fluid interface will be used for a new exhibit platform, such as Miniature AR System (MINARS), to support a dynamic content manipulation by multiple users without severe tracking constraints.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133773549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of tracking technology, latency, and spatial jitter on object movement
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811204
Robert J. Teather, Andriy Pavlovych, W. Stuerzlinger, I. MacKenzie
We investigate the effects of input device latency and spatial jitter on 2D pointing tasks and 3D object movement tasks. First, we characterize jitter and latency in a 3D tracking device and an optical mouse used as a baseline comparison. We then present an experiment based on ISO 9241-9, which measures performance characteristics of pointing devices. We artificially introduce latency and jitter on the mouse and compare the results to the 3D tracker. Results indicate that latency has a much stronger effect on human performance than low amounts of spatial jitter. In a second study, we use a subset of conditions from the first to test latency and jitter in 3D object movement. The results indicate that large, uncharacterized jitter “spikes” significantly impact 3D performance.
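As a rough illustration of how latency and spatial jitter might be injected into a pointer stream for such an experiment, one can delay samples through a small buffer and perturb them with zero-mean Gaussian noise. The wrapper below is a sketch under those assumptions, not the authors' apparatus, and its parameters are illustrative.

```python
import random
from collections import deque

class DegradedPointer:
    """Hypothetical wrapper adding constant latency and Gaussian jitter to pointer samples."""
    def __init__(self, latency_frames=0, jitter_sd=0.0):
        # latency modeled as a fixed delay of N frames; jitter as per-axis Gaussian noise (pixels)
        self.buffer = deque(maxlen=latency_frames + 1)
        self.jitter_sd = jitter_sd

    def update(self, x, y):
        self.buffer.append((x, y))
        # until the buffer is full, keep reporting the oldest available sample
        dx, dy = self.buffer[0]
        return (dx + random.gauss(0.0, self.jitter_sd),
                dy + random.gauss(0.0, self.jitter_sd))

# Example: two frames (~33 ms at 60 Hz) of extra latency and 1.5 px of jitter.
pointer = DegradedPointer(latency_frames=2, jitter_sd=1.5)
for t in range(5):
    print(pointer.update(100 + 10 * t, 200))
```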
{"title":"Effects of tracking technology, latency, and spatial jitter on object movement","authors":"Robert J. Teather, Andriy Pavlovych, W. Stuerzlinger, I. MacKenzie","doi":"10.1109/3DUI.2009.4811204","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811204","url":null,"abstract":"We investigate the effects of input device latency and spatial jitter on 2D pointing tasks and 3D object movement tasks. First, we characterize jitter and latency in a 3D tracking device and an optical mouse used as a baseline comparison. We then present an experiment based on ISO 9241-9, which measures performance characteristics of pointing devices. We artificially introduce latency and jitter to the mouse and compared the results to the 3D tracker. Results indicate that latency has a much stronger effect on human performance than low amounts of spatial jitter. In a second study, we use a subset of conditions from the first to test latency and jitter on 3D object movement. The results indicate that large, uncharacterized jitter “spikes” significantly impact 3D performance.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131645692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: MVCE - a design pattern to guide the development of next generation user interfaces
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811232
Jörg Stöcklein, C. Geiger, V. Paelke, Patrick Pogscheba
The development of next-generation user interfaces that employ novel sensors and additional output modalities has high potential to improve the usability of applications used in non-desktop environments. The design of such interfaces requires an exploratory design approach to handle the interplay of newly developed interaction techniques with complex hardware. As a first step towards a structured design process, we extend the MVC design pattern with an additional dimension, “Environment”, to capture elements and constraints from the real world.
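A minimal sketch of what such an MVCE split might look like in code follows; the class names and the way the Environment feeds real-world constraints into the Controller are illustrative assumptions, not the authors' reference implementation.

```python
class Model:
    """Application state, as in classic MVC."""
    def __init__(self):
        self.position = [0.0, 0.0, 0.0]

class View:
    """Renders the model; here just a print stub."""
    def render(self, model):
        print("object at", model.position)

class Environment:
    """New MVCE role: exposes elements and constraints sensed from the real world."""
    def __init__(self):
        self.table_height = 0.75  # e.g. a tracked physical table the object must rest on

    def constrain(self, position):
        position[2] = max(position[2], self.table_height)
        return position

class Controller:
    """Maps user input to model updates, filtered through the environment's constraints."""
    def __init__(self, model, view, environment):
        self.model, self.view, self.env = model, view, environment

    def move(self, dx, dy, dz):
        p = [a + b for a, b in zip(self.model.position, (dx, dy, dz))]
        self.model.position = self.env.constrain(p)
        self.view.render(self.model)

Controller(Model(), View(), Environment()).move(0.1, 0.0, -0.2)
```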
{"title":"Poster: MVCE - a design pattern to guide the development of next generation user interfaces","authors":"Jörg Stöcklein, C. Geiger, V. Paelke, Patrick Pogscheba","doi":"10.1109/3DUI.2009.4811232","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811232","url":null,"abstract":"The development of next generation user interfaces that employ novel sensors and additional output modalities has high potential to improve the usability of applications used in non-desktop environments. The design of such interfaces requires an exploratory design approach to handle the interaction of newly developed interaction techniques with complex hardware. As a first step towards a structured design process we extended the MVC design pattern by an additional dimension “Environment” to capture elements and constraint from the real world.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132519863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Vibration as a wayfinding aid
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811223
C. P. Quintero, P. Figueroa
There are several ways to guide users to a destination in a virtual world, most of them inherited from real-world counterparts and typically based on visual feedback. Although these aids are generally very useful, we want to avoid distracting users from the main scene and the visual clutter that can occur when visual feedback is used for wayfinding. We present our work on a “vibrating belt”, a belt of motors that can be used as an orientation aid. We conducted a set of experiments comparing this device with a low-cognitive-load visual aid for wayfinding and found it to be as effective as the visual aid in our study. We believe this device could improve users' performance and concentration on the main activities in the scene.
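A plausible mapping from target direction to belt motor, assuming evenly spaced motors and a known user heading (details the poster does not spell out), could look like this:

```python
import math

def motor_for_target(user_pos, user_heading_deg, target_pos, n_motors=8):
    """Pick the belt motor that points toward the target.
    user_pos/target_pos are (x, y) in the ground plane; heading is in degrees, 0 = +y."""
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))        # world-frame bearing to the target
    relative = (bearing - user_heading_deg) % 360.0   # bearing relative to where the user faces
    sector = 360.0 / n_motors
    return int((relative + sector / 2.0) // sector) % n_motors  # 0 = front motor

# User facing +y, target directly to the right -> motor 2 on an 8-motor belt.
print(motor_for_target((0, 0), 0.0, (5, 0)))
```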
{"title":"Poster: Vibration as a wayfinding aid","authors":"C. P. Quintero, P. Figueroa","doi":"10.1109/3DUI.2009.4811223","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811223","url":null,"abstract":"There are several ways to guide users to a destination in a Virtual World, most of them inherited from real counterparts, and typically based on visual feedback. Although these aids are very useful in general, we want to avoid user's distractions from the main scene and visual cluttering that may occur when visual feedback for wayfinding is used. We present our work on a “vibrating belt”, a belt of motors that can be used as an orientation aid. We conducted a set of experiments that compared such device with a low cognitive load visual aid for wayfinding, and we have found our device as effective as the visual aids in our study. We believe this device could improve the user's performance and concentration on the main activities in the scene.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127815721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: A virtual body for augmented virtuality by chroma-keying of egocentric videos
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811218
Frank Steinicke, G. Bruder, K. Rothaus, K. Hinrichs
A fully articulated visual representation of oneself in an immersive virtual environment has considerable impact on the subjective sense of presence in the virtual world. Therefore, many approaches address this challenge and incorporate a virtual model of the user's body in the VE. Such a “virtual body” (VB) is manipulated according to user motions, which are derived from feature points detected by a tracking system. The required tracking devices are unsuitable in scenarios that involve multiple persons simultaneously or in which participants frequently change. Furthermore, individual characteristics such as skin pigmentation, hairiness or clothing are not captured by this procedure. In this paper we present a software-based approach that incorporates a realistic visual representation of oneself in the VE. The idea is to use images captured by cameras attached to video-see-through head-mounted displays. These egocentric frames can be segmented into a foreground showing parts of the user's body and a background. The extremities can then be overlaid onto the user's current view of the virtual world, and thus a high-fidelity virtual body can be visualized.
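The segmentation step described here amounts to chroma-keying each egocentric camera frame and compositing the foreground (the user's limbs) over the rendered virtual world. A minimal OpenCV-style sketch, assuming a keyable (e.g. green) physical surrounding and camera frames already registered to the rendered view, is:

```python
import numpy as np
import cv2  # OpenCV

def composite_virtual_body(camera_frame_bgr, virtual_view_bgr,
                           key_lo=(35, 60, 60), key_hi=(85, 255, 255)):
    """Keep pixels that are NOT close to the key colour (assumed green surroundings)
    and paste them over the rendered virtual environment."""
    hsv = cv2.cvtColor(camera_frame_bgr, cv2.COLOR_BGR2HSV)
    background = cv2.inRange(hsv, np.array(key_lo), np.array(key_hi))  # 255 where backdrop
    foreground = cv2.bitwise_not(background)                           # 255 where body/limbs
    out = virtual_view_bgr.copy()
    out[foreground > 0] = camera_frame_bgr[foreground > 0]
    return out

# Usage with two same-sized BGR images (stand-ins here):
cam = np.full((480, 640, 3), (60, 200, 60), np.uint8)   # mostly green camera frame
ve  = np.zeros((480, 640, 3), np.uint8)                 # rendered virtual world
result = composite_virtual_body(cam, ve)
```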
{"title":"Poster: A virtual body for augmented virtuality by chroma-keying of egocentric videos","authors":"Frank Steinicke, G. Bruder, K. Rothaus, K. Hinrichs","doi":"10.1109/3DUI.2009.4811218","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811218","url":null,"abstract":"A fully-articulated visual representation of oneself in an immersive virtual environment has considerable impact on the subjective sense of presence in the virtual world. Therefore, many approaches address this challenge and incorporate a virtual model of the user's body in the VE. Such a “virtual body” (VB) is manipulated according to user motions which are defined by feature points detected by a tracking system. The required tracking devices are unsuitable in scenarios which involve multiple persons simultaneously or in which participants frequently change. Furthermore, individual characteristics such as skin pigmentation, hairiness or clothes are not considered by this procedure. In this paper we present a software-based approach that allows to incorporate a realistic visual representation of oneself in the VE. The idea is to make use of images captured by cameras that are attached to video-see-through head-mounted displays. These egocentric frames can be segmented into foreground showing parts of the human body and background. Then the extremities can be overlayed with the user's current view of the virtual world, and thus a high-fidelity virtual body can be visualized.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124311369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A tactile distribution sensor which enables stable measurement under high and dynamic stretch
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811210
Hassan Alirezaei, Akihiko Nagakubo, Y. Kuniyoshi
Recently, we have been studying various tactile distribution sensors based on Electrical Impedance Tomography (EIT), a non-invasive technique that measures the resistance distribution of a conductive material from its boundary alone and needs no wiring inside the sensing area. In this paper, we present a newly developed conductive structure that is pressure-sensitive but stretch-insensitive, based on the contact resistance between (1) a network of stretchable, wave-like conductive yarns with high resistance and (2) a conductive stretchable sheet with low resistance. Based on this structure, we have realized a novel tactile distribution sensor that enables stable measurement under dynamic and large stretch from various directions. Stable measurement of pressure distribution under dynamic and complex deformation, such as pinching and pushing on a balloon surface, is demonstrated. The sensor was originally designed for implementation on interactive robots with soft and highly deformable bodies, but it can also serve as a novel user interface device or as an ordinary pressure distribution sensor. Among its most remarkable specifications are stretchability of up to 140% and toughness under adverse load conditions. The sensor also has realistic potential to become as thin and stretchable as stocking fabric. A goal of this research is to combine this thin sensor with stretch distribution sensors so that richer and more sophisticated tactile interactions can be realized.
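For readers unfamiliar with EIT, the boundary-only measurement mentioned above typically follows a standard pattern: current is driven across one pair of boundary electrodes while voltages are read from the remaining pairs, and the driving pair is rotated around the boundary; a reconstruction algorithm then estimates the interior resistance distribution from these readings. The loop below is a generic sketch of the common adjacent protocol, with hypothetical hardware callbacks, not the authors' hardware or reconstruction code.

```python
def eit_measurement_frame(n_electrodes, inject, measure):
    """One EIT frame using the common adjacent ('neighbouring') protocol.
    inject(a, b) drives current between boundary electrodes a and b;
    measure(c, d) returns the voltage between electrodes c and d.
    Both are hypothetical hardware callbacks."""
    frame = []
    for k in range(n_electrodes):
        a, b = k, (k + 1) % n_electrodes           # adjacent injection pair
        inject(a, b)
        for m in range(n_electrodes):
            c, d = m, (m + 1) % n_electrodes       # adjacent measurement pair
            if len({a, b, c, d}) == 4:             # skip pairs touching the driven electrodes
                frame.append(measure(c, d))
    return frame  # fed to a reconstruction algorithm to estimate interior resistance

# With 16 electrodes the adjacent protocol yields 16 * 13 = 208 voltage readings per frame.
readings = eit_measurement_frame(16, inject=lambda a, b: None, measure=lambda c, d: 0.0)
print(len(readings))  # 208
```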
{"title":"A tactile distribution sensor which enables stable measurement under high and dynamic stretch","authors":"Hassan Alirezaei, Akihiko Nagakubo, Y. Kuniyoshi","doi":"10.1109/3DUI.2009.4811210","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811210","url":null,"abstract":"Recaently, we have been studying various tactile distribution sensors based on Electrical Impedance Tomography (EIT) which is a non-invasive technique to measure the resistance distribution of a conductive material only from a boundary, and needs no wiring inside the sensing area. In this paper, we present a newly developed conductive structure which is pressure sensitive but stretch insensitive and is based on the concept of contact resistance between (1)a network of stretchable wave-like conductive yarns with high resistance and (2)a conductive stretchable sheet with low resistance. Based on this newly developed structure, we have realized a novel tactile distribution sensor which enables stable measurement under dynamic and large stretch from various directions. Stable measurement of pressure distribution under dynamic and complex deformation cases such as pinching and pushing on a balloon surface are demonstrated. The sensor has been originally designed for implementation over interactive robots with soft and highly deformable bodies, but can also be used as novel user interface devices, or ordinary pressure distribution sensors. Some of the most remarkable specifications of the developed tactile sensor are high stretchability up to 140% and toughness under adverse load conditions. The sensor also has a realistic potential of becoming as thin and stretchable as stocking fabric. A goal of this research is to combine this thin sensor with stretch distribution sensors so that richer and more sophisticated tactile interactions can be realized.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126395769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tech-note: ScrutiCam: Camera manipulation technique for 3D objects inspection
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811200
Fabrice Decle, M. Hachet, P. Guitton
Inspecting a 3D object is a common task in 3D applications. However, the required camera movements are not trivial, and standard applications do not provide a single, efficient tool for them. ScrutiCam is a new 3D camera manipulation technique. It is based on a “click-and-drag” mouse gesture in which the user drags the point of interest on the screen to perform different camera movements such as zooming, panning and rotating around a model. ScrutiCam can stay aligned with the surface of the model in order to keep the area of interest visible. ScrutiCam is also based on the Point-of-Interest (POI) approach, where the final camera position is specified by clicking on the screen. Unlike other POI techniques, ScrutiCam lets the user control the animation of the camera along the trajectory. It is also inspired by the “trackball” technique, where the virtual camera moves along the bounding sphere of the model; however, ScrutiCam's camera stays close to the surface of the model, whatever its shape. It can be used with mice as well as touch screens, since it needs only 2D input and a single button.
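The surface-aligned behaviour can be approximated as follows: once the user has dragged to a point of interest, place the camera a fixed distance along the surface normal at that point, look back at it, and animate from the current pose. The functions below are an illustrative sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def scruticam_target_pose(poi, surface_normal, distance=2.0):
    """Camera pose that keeps the dragged point of interest in view:
    sit 'distance' units out along the surface normal and look back at the point."""
    n = np.asarray(surface_normal, float)
    n /= np.linalg.norm(n)
    eye = np.asarray(poi, float) + distance * n
    return eye, np.asarray(poi, float)            # (camera position, look-at target)

def animate(eye_from, eye_to, look_from, look_to, steps=30):
    """Linearly interpolate the camera along the trajectory so the user can follow it."""
    for t in np.linspace(0.0, 1.0, steps):
        yield (1 - t) * eye_from + t * eye_to, (1 - t) * look_from + t * look_to

# Dragging onto a point on the +x side of a model:
eye_to, look_to = scruticam_target_pose(poi=[1, 0, 0], surface_normal=[1, 0, 0])
for eye, look in animate(np.array([0.0, 0.0, 5.0]), eye_to, np.zeros(3), look_to, steps=3):
    print(eye, look)
```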
{"title":"Tech-note: ScrutiCam: Camera manipulation technique for 3D objects inspection","authors":"Fabrice Decle, M. Hachet, P. Guitton","doi":"10.1109/3DUI.2009.4811200","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811200","url":null,"abstract":"Inspecting a 3D object is a common task in 3D applications. However, such a camera movement is not trivial and standard tools do not provide an efficient and unique tool for such a move. ScrutiCam is a new 3D camera manipulation technique. It is based on the “click-and-drag” mouse move, where the user “drags” the point of interest on the screen to perform different camera movements such as zooming, panning and rotating around a model. ScrutiCam can stay aligned with the surface of the model in order to keep the area of interest visible. ScrutiCam is also based on the Point-Of-Interest (POI) approach, where the final camera position is specified by clicking on the screen. Contrary to other POI techniques, ScrutiCam allows the user to control the animation of the camera along the trajectory. It is also inspired by the “Trackball” technique, where the virtual camera moves along the bounding sphere of the model. However, ScrutiCam's camera stays close to the surface of the model, whatever its shape. It can be used with mice as well as with touch screens as it only needs a 2D input and a single button.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114156926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual clutter management in augmented reality: Effects of three label separation methods on spatial judgments
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811215
Stephen D. O'Connell, Magnus Axholt, M. Cooper, S. Ellis
This paper reports an experiment comparing three label separation methods for reducing visual clutter in Augmented Reality (AR) displays. We contrasted two common methods of avoiding visual overlap by moving labels in the 2D view plane with a third that distributes overlapping labels in stereoscopic depth. The experiment measured user identification performance during spatial judgment tasks in static scenes. The three methods were compared with a control condition in which no label separation method was employed. The results showed significant performance improvements, generally 15–30%, for all three methods over the control; however, these methods were statistically indistinguishable from each other. In-depth analysis showed significant performance degradation when the 2D view plane methods produced potentially confusing spatial correlations between labels and the markers they designate. Stereoscopically separated labels were subjectively judged harder to read than view-plane separated labels. Since measured performance was affected both by label legibility and spatial correlation of labels and their designated objects, it is likely that the improved spatial correlation of stereoscopically separated labels and their designated objects has compensated for poorer stereoscopic text legibility. Future testing with dynamic scenes is expected to more clearly distinguish the three label separation techniques.
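The contrast between the view-plane and depth-based methods can be made concrete with a small sketch: overlapping labels are either pushed apart in screen space or left in place and spread across stereoscopic depth offsets. The code below is only a schematic of these two ideas, not the layout algorithms evaluated in the paper.

```python
def separate_in_view_plane(labels, height=20, gap=4):
    """labels: list of dicts with screen 'x', 'y'. Stack overlapping labels downwards
    so each keeps at least one label height plus a gap of vertical clearance."""
    labels = sorted(labels, key=lambda l: l["y"])
    for prev, cur in zip(labels, labels[1:]):
        cur["y"] = max(cur["y"], prev["y"] + height + gap)
    return labels

def separate_in_depth(labels, step=0.05):
    """Leave screen positions alone and spread overlapping labels in stereoscopic depth."""
    for i, lab in enumerate(sorted(labels, key=lambda l: (l["x"], l["y"]))):
        lab["depth"] = 1.0 + i * step              # later labels rendered progressively deeper
    return labels

cluster = [{"x": 100, "y": 50}, {"x": 110, "y": 55}, {"x": 105, "y": 52}]
print(separate_in_view_plane([dict(l) for l in cluster]))
print(separate_in_depth([dict(l) for l in cluster]))
```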
{"title":"Visual clutter management in augmented reality: Effects of three label separation methods on spatial judgments","authors":"Stephen D. O'Connell, Magnus Axholt, M. Cooper, S. Ellis","doi":"10.1109/3DUI.2009.4811215","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811215","url":null,"abstract":"This paper reports an experiment comparing three label separation methods for reducing visual clutter in Augmented Reality (AR) displays. We contrasted two common methods of avoiding visual overlap by moving labels in the 2D view plane with a third that distributes overlapping labels in stereoscopic depth. The experiment measured user identification performance during spatial judgment tasks in static scenes. The threemethods were compared with a control condition in which no label separation method was employed. The results showed significant performance improvements, generally 15–30%, for all three methods over the control; however, these methods were statistically indistinguishable from each other. Indepth analysis showed significant performance degradation when the 2D view plane methods produced potentially confusing spatial correlations between labels and the markers they designate. Stereoscopically separated labels were subjectively judged harder to read than view-plane separated labels. Since measured performance was affected both by label legibility and spatial correlation of labels and their designated objects, it is likely that the improved spatial correlation of stereoscopically separated labels and their designated objects has compensated for poorer stereoscopic text legibility. Future testing with dynamic scenes is expected to more clearly distinguish the three label separation techniques.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126387013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tech-note: Multimodal feedback in 3D target acquisition
2009 IEEE Symposium on 3D User Interfaces | Pub Date: 2009-03-14 | DOI: 10.1109/3DUI.2009.4811212
Dalia El-Shimy, G. Marentakis, J. Cooperstock
We investigated dynamic target acquisition within a 3D scene rendered on a 2D display. Our focus was on the relative effects of specific perceptual cues provided as feedback. Participants used a specially designed input device to control the position of a volumetric cursor and acquire targets as they appeared one by one on the screen. To compensate for the limited depth cues afforded by 2D rendering, additional feedback was offered through the audio, visual and haptic modalities. Cues were delivered either as discrete multimodal feedback, given only when the target was completely contained within the cursor, or continuously, in proportion to the distance between the cursor and the target. Discrete feedback prevailed, improving accuracy without compromising selection times; continuous feedback resulted in lower accuracy. In addition, reaction to the haptic stimulus was faster than to the visual stimulus. Finally, while the haptic modality helped decrease completion time, it led to a lower success rate.
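The distinction between the two feedback schemes can be made concrete with a short sketch: discrete feedback fires only when the target is fully contained in the volumetric cursor, while continuous feedback scales intensity with the cursor-target distance. The thresholds and intensity mapping below are illustrative assumptions.

```python
import math

def contains(cursor_center, cursor_radius, target_center, target_radius):
    """True when a spherical target lies completely inside a spherical volumetric cursor."""
    d = math.dist(cursor_center, target_center)
    return d + target_radius <= cursor_radius

def discrete_feedback(cursor_center, cursor_radius, target_center, target_radius):
    # single on/off pulse across the audio, visual and haptic channels
    return 1.0 if contains(cursor_center, cursor_radius, target_center, target_radius) else 0.0

def continuous_feedback(cursor_center, target_center, max_distance=0.5):
    # intensity grows as the cursor approaches the target, saturating at contact
    d = math.dist(cursor_center, target_center)
    return max(0.0, 1.0 - d / max_distance)

print(discrete_feedback((0, 0, 0), 0.1, (0.02, 0, 0), 0.05))   # 1.0: target contained
print(continuous_feedback((0, 0, 0), (0.25, 0, 0)))            # 0.5: halfway there
```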
{"title":"Tech-note: Multimodal feedback in 3D target acquisition","authors":"Dalia El-Shimy, G. Marentakis, J. Cooperstock","doi":"10.1109/3DUI.2009.4811212","DOIUrl":"https://doi.org/10.1109/3DUI.2009.4811212","url":null,"abstract":"We investigated dynamic target acquisition within a 3D scene, rendered on a 2D display. Our focus was on the relative effects of specific perceptual cues provided as feedback. Participants were asked to use a specially designed input device to control the position of a volumetric cursor, and acquire targets as they appeared one by one on the screen. To compensate for the limited depth cues afforded by 2D rendering, additional feedback was offered through audio, visual and haptic modalities. Cues were delivered either as discrete multimodal feedback given only when the target was completely contained within the cursor, or continuously in proportion to the distance between the cursor and the target. Discrete feedback prevailed by improving accuracy without compromising selection times. Continuous feedback resulted in lower accuracy compared to discrete. In addition, reaction to the haptic stimulus was faster than for visual feedback. Finally, while the haptic modality helped decrease completion time, it led to a lower success rate.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128253600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}