We describe two approaches to augmenting multi-touch user input with commodity devices (Kinect and Wiimote).
Ortega, F., Barreto, A., Rishe, N. "Augmenting multi-touch with commodity devices." Symposium on Spatial User Interaction, 2013. doi:10.1145/2491367.2491399
Steven J. Castellucci, Robert J. Teather, Andriy Pavlovych
We introduce new metrics to help explain the movement characteristics of 3D pointing devices. We present a study assessing these metrics by comparing two cursor-control modes using a Sony PS Move. "Laser" mode used ray casting, while "position" mode mapped absolute device movement to cursor motion. Mouse pointing was also included, and all techniques were also analyzed with existing 2D accuracy measures. Results suggest that position mode shows promise due to its accurate and smooth pointer movements. Our 3D movement metrics do not correlate well with performance, but may be beneficial in understanding how devices are used.
"Novel metrics for 3D remote pointing." Symposium on Spatial User Interaction, 2013. doi:10.1145/2491367.2491373
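The two cursor-control modes contrasted in the abstract above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: the screen geometry, tracked-volume dimensions, and device-pose fields (`pos`, `yaw`, `pitch`) are all assumptions.

```python
import math

SCREEN_W, SCREEN_H = 1920, 1080      # assumed display resolution, pixels
TRACKED_W, TRACKED_H = 1.0, 0.6      # assumed tracked volume, metres

def to_pixels(x, y):
    """Map metres (origin at the tracked-volume centre) to pixel coordinates."""
    px = (x / TRACKED_W + 0.5) * SCREEN_W
    py = (0.5 - y / TRACKED_H) * SCREEN_H
    return (round(px), round(py))

def laser_cursor(pos, yaw, pitch, screen_dist):
    """'Laser' mode: ray-cast from the device along its pointing direction
    onto a screen plane screen_dist metres away."""
    x = pos[0] + screen_dist * math.tan(yaw)
    y = pos[1] + screen_dist * math.tan(pitch)
    return to_pixels(x, y)

def position_cursor(pos):
    """'Position' mode: absolute device position mapped directly to the cursor."""
    return to_pixels(pos[0], pos[1])
```

Note the structural difference: in laser mode, small angular jitter is amplified with distance to the screen, whereas position mode transfers hand motion to the cursor at a fixed gain, which is consistent with the smoother movements the study reports.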
IllusionHole (IH) is an interactive stereoscopic tabletop display that allows multiple users to interactively observe and directly point at a particular position on a stereoscopic object in a shared workspace. We explored a mid-air direct multi-finger interaction technique to efficiently perform fundamental single-user object manipulations (e.g., selection, rotation, translation, and scaling) on IH. Performance of the proposed technique was compared with a cursor-based single-pointing technique in a 3D docking task. The results showed that direct object manipulation with the proposed technique provides a better user experience in a collaborative environment.
Özacar, K., Takashima, K., Kitamura, Y. "Direct 3D object manipulation on a collaborative stereoscopic display." Symposium on Spatial User Interaction, 2013. doi:10.1145/2491367.2491374
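A common way to derive the rotation, translation, and scaling manipulations named in the abstract above is from the relative motion of two tracked finger points. The sketch below shows that standard two-point formulation in the screen plane; it is an illustrative assumption, not the paper's mid-air technique, which may compute these transforms differently.

```python
import math

def two_point_transform(p1_old, p2_old, p1_new, p2_new):
    """Return (translation, rotation_radians, scale) from the old and new
    positions of two tracked finger points in a plane."""
    def centroid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])

    # Translation: displacement of the two-finger centroid.
    c_old, c_new = centroid(p1_old, p2_old), centroid(p1_new, p2_new)
    translation = vec(c_old, c_new)

    # Rotation and scale: change in angle and length of the inter-finger vector.
    v_old, v_new = vec(p1_old, p2_old), vec(p1_new, p2_new)
    rotation = math.atan2(v_new[1], v_new[0]) - math.atan2(v_old[1], v_old[0])
    scale = math.hypot(*v_new) / math.hypot(*v_old)
    return translation, rotation, scale
```

Applying the returned translation, rotation about the centroid, and uniform scale to the selected object each frame yields the familiar direct-manipulation feel of pinch-and-turn gestures.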
We propose a virtually tangible 3D interaction system that enables direct interaction with three-dimensional virtual objects presented on an autostereoscopic display.
Kusano, T., Niikura, T., Komuro, T. "A virtually tangible 3D interaction system using an autostereoscopic display." Symposium on Spatial User Interaction, 2013. doi:10.1145/2491367.2491394
Multi-sensory displays provide information to users through multiple senses, not only through visuals. They can be designed to create a more natural interface for users or to reduce the cognitive load of a visual-only display. However, because multi-sensory displays are often application-specific, their general advantages over visual-only displays are not yet well understood. Moreover, the optimal amount of information that can be perceived through multi-sensory displays without making them more cognitively demanding than a visual-only display is also not yet clear. Finally, the effects of redundant feedback across senses on multi-sensory displays have not been fully explored. To shed some light on these issues, this study evaluates the effects of increasing the amount of multi-sensory feedback on an interface, specifically in a virtual teleoperation context. While objective data showed that increasing the number of senses in the interface from two to three improved performance, subjective feedback indicated that multi-sensory interfaces with redundant feedback may impose an extra cognitive burden on users.
Barros, P. D., Lindeman, R. "Performance effects of multi-sensory displays in virtual teleoperation environments." Symposium on Spatial User Interaction, 2013. doi:10.1145/2491367.2491371
Alexis Clay, J. Lombardo, Julien Conan, N. Couture
We aim to combine surface generation by hand with 3D painting in a large space, from 10 to ~200 m2 (for a stage setup). Our long-term goal is to integrate 3D surface generation into choreography, in order to produce augmented dance shows where the dancer can draw elements (characters, sets) in 3D while dancing. We present two systems: one in a CAVE environment, and a second better adapted to a stage setup. We compare the two systems and report an exploratory user experiment with both laypersons and dancers.
"Towards bi-manual 3D painting: generating virtual shapes with hands." Symposium on Spatial User Interaction, 2013. doi:10.1145/2491367.2491396
Alexis Clay, Anissa Samar, Maroua Ben Younes, R. Mollard, M. Wolff
In this poster we present an exploratory bottom-up experiment assessing users' choices of bodily interaction when facing a set of tasks. Twenty-nine subjects were asked to perform basic tasks on a large-screen TV in three positions — standing, sitting, and lying on a couch — without any guidance on how to perform them. We thus obtained spontaneous interaction propositions for each task. Subjects were then interviewed about their choices and their internal representation of the information and its dynamics. A statistical analysis highlighted the preferred interactions in each position.
"User-defined SUIs: an exploratory study." Symposium on Spatial User Interaction, 2013. doi:10.1145/2491367.2491397