BodyAvatar: creating freeform 3D avatars using first-person body gestures
Yupeng Zhang, Teng Han, Zhimin Ren, Nobuyuki Umetani, Xin Tong, Yang Liu, Takaaki Shiratori, Xiang Cao
BodyAvatar is a Kinect-based interactive system that allows users without professional skills to create freeform 3D avatars using body gestures. Unlike existing gesture-based 3D modeling tools, BodyAvatar centers on a first-person "you're the avatar" metaphor, in which the user treats their own body as a physical proxy of the virtual avatar. Based on an intuitive body-centric mapping, the user performs gestures on their own body as if to modify it, and these gestures produce corresponding modifications to the avatar. BodyAvatar offers an intuitive, immersive, and playful creation experience. We present the formative study that led to BodyAvatar's design, the system's interactions and underlying algorithms, and results from initial user trials.
{"title":"BodyAvatar: creating freeform 3D avatars using first-person body gestures","authors":"Yupeng Zhang, Teng Han, Zhimin Ren, Nobuyuki Umetani, Xin Tong, Yang Liu, Takaaki Shiratori, Xiang Cao","doi":"10.1145/2501988.2502015","DOIUrl":"https://doi.org/10.1145/2501988.2502015","url":null,"abstract":"BodyAvatar is a Kinect-based interactive system that allows users without professional skills to create freeform 3D avatars using body gestures. Unlike existing gesture-based 3D modeling tools, BodyAvatar centers around a first-person \"you're the avatar\" metaphor, where the user treats their own body as a physical proxy of the virtual avatar. Based on an intuitive body-centric mapping, the user performs gestures to their own body as if wanting to modify it, which in turn results in corresponding modifications to the avatar. BodyAvatar provides an intuitive, immersive, and playful creation experience for the user. We present a formative study that leads to the design of BodyAvatar, the system's interactions and underlying algorithms, and results from initial user trials.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115174558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tactile rendering of 3D features on touch surfaces
Seung-Chan Kim, A. Israr, I. Poupyrev
We present a tactile-rendering algorithm for simulating 3D geometric features, such as bumps, on touch-screen surfaces. This is achieved by modulating friction forces between the user's finger and the touch screen, rather than physically moving the touch surface. We propose that the percept of a 3D bump is created when local gradients of the rendered virtual surface are mapped to lateral friction forces. To validate this approach, we first establish a psychophysical model that relates the perceived friction force to the voltage applied to the tactile feedback device. We then use this model to demonstrate that participants are three times more likely to prefer gradient force profiles over other commonly used rendering profiles. Finally, we present a generalized algorithm and conclude with a set of applications using our tactile rendering technology.
{"title":"Tactile rendering of 3D features on touch surfaces","authors":"Seung-Chan Kim, A. Israr, I. Poupyrev","doi":"10.1145/2501988.2502020","DOIUrl":"https://doi.org/10.1145/2501988.2502020","url":null,"abstract":"We present a tactile-rendering algorithm for simulating 3D geometric features, such as bumps, on touch screen surfaces. This is achieved by modulating friction forces between the user's finger and the touch screen, instead of physically moving the touch surface. We proposed that the percept of a 3D bump is created when local gradients of the rendered virtual surface are mapped to lateral friction forces. To validate this approach, we first establish a psychophysical model that relates the perceived friction force to the controlled voltage applied to the tactile feedback device. We then use this model to demonstrate that participants are three times more likely to prefer gradient force profiles than other commonly used rendering profiles. Finally, we present a generalized algorithm and conclude the paper with a set of applications using our tactile rendering technology.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128405515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Skweezee system: enabling the design and the programming of squeeze interactions
Karen Vanderloock, V. Abeele, J. Suykens, L. Geurts
The Skweezee System is an easy, flexible, and open system for designing and developing squeeze-based gestural interactions. It consists of Skweezees: soft objects filled with conductive padding that can be deformed or squeezed by applying pressure. Each object contains a number of electrodes dispersed over its shape. The electrodes sense the shape-shifting of the conductive filling by measuring the changing resistance between every possible pair of electrodes. In addition, the Skweezee System includes user-friendly software that allows end users to define and record their own squeeze gestures. Gestures are distinguished using a Support Vector Machine (SVM) classifier. In this paper we introduce the concept and underlying technology of the Skweezee System and demonstrate the robustness of the SVM-based classifier in two experimental user studies. The results show accuracies ranging from 81% (8 user-defined gestures) to 97% (3 user-defined gestures), with 90% for 7 pre-defined gestures.
{"title":"The skweezee system: enabling the design and the programming of squeeze interactions","authors":"Karen Vanderloock, V. Abeele, J. Suykens, L. Geurts","doi":"10.1145/2501988.2502033","DOIUrl":"https://doi.org/10.1145/2501988.2502033","url":null,"abstract":"The Skweezee System is an easy, flexible and open system for designing and developing squeeze-based, gestural interactions. It consists of Skweezees, which are soft objects, filled with conductive padding, that can be deformed or squeezed by applying pressure. These objects contain a number of electrodes that are dispersed over the shape. The electrodes sense the shape shifting of the conductive filling by measuring the changing resistance between every possible pair of electrodes. In addition, the Skweezee System contains user-friendly software that allows end-users to define and to record their own squeeze gestures. These gestures are distinguished using a Support Vector Machine (SVM) classifier. In this paper we introduce the concept and the underlying technology of the Skweezee System and we demonstrate the robustness of the SVM based classifier via two experimental user studies. The results of these studies demonstrate accuracies of 81% (8 gestures, user-defined) to 97% (3 gestures, user-defined), with an accuracy of 90% for 7 pre-defined gestures.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129434840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traxion: a tactile interaction device with virtual force sensation
J. Rekimoto
This paper introduces a new mechanism for inducing a virtual force based on a human illusory sensation. An asymmetric signal is applied to a tactile actuator consisting of an electromagnetic coil, a metal weight, and a spring, such that the user feels the device being pulled (or pushed) in a particular direction, even though it has no mechanical connection to other objects or to the ground. The proposed tactile device is smaller (35.0 mm x 5.0 mm x 7.5 mm) and lighter (5.2 g) than previous force-feedback devices, which must be linked mechanically to the ground. This small form factor allows the device to be used in novel interactive applications, such as a pedestrian navigation system with a finger-mounted tactile device, or an untethered input device that conveys virtual force. Our experimental results indicate that the illusory sensation actually occurs and that the device can switch the direction of the virtual force within a short period. We combined this technology with visible-light transmission via a digital micromirror device (DMD) projector to build a position-guiding input device with force perception.
{"title":"Traxion: a tactile interaction device with virtual force sensation","authors":"J. Rekimoto","doi":"10.1145/2501988.2502044","DOIUrl":"https://doi.org/10.1145/2501988.2502044","url":null,"abstract":"This paper introduces a new mechanism to induce a virtual force based on human illusory sensations. An asymmetric signal is applied to a tactile actuator consisting of an electromagnetic coil, a metal weight, and a spring, such that the user feels that the device is being pulled (or pushed) in a particular direction, although it is not supported by any mechanical connection to other objects or the ground. The proposed tactile device is smaller (35.0 mm x 5.0 mm x 7.5 mm) and lighter (5.2 g) than any previous force-feedback devices, which have to be connected to the ground with mechanical links. This small form factor allows the device to be implemented in several novel interactive applications, such as a pedestrian navigation system that includes a finger-mounted tactile device or an (untethered) input device that features virtual force. Our experimental results indicate that this illusory sensation actually exists and the proposed device can switch the virtual force direction within a short period. We combined this new technology with visible light transmission via a digital micromirror device (DMD) projector and developed a position guiding input device with force perception.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"43 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132359482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video collections in panoramic contexts
J. Tompkin, Fabrizio Pece, R. Shah, S. Izadi, J. Kautz, C. Theobalt
Video collections of places show contrasts and changes in our world, but current interfaces to video collections make it hard for users to explore these changes. Recent state-of-the-art interfaces attempt to solve this problem for 'outside->in' collections, but cannot connect 'inside->out' collections of the same place, which do not visually overlap. We extend the focus+context paradigm to create a video-collections+context interface by embedding videos into a panorama. We build a spatio-temporal index and tools for fast exploration of the space and time of the video collection. We demonstrate the flexibility of our representation with interfaces for desktop and mobile flat displays, and for a spherical display with joypad and tablet controllers. In a user study of spatio-temporal localization, our video-collections+context system yields significant improvements in accuracy and completion time on visual search tasks compared to existing systems. We measure usability with the System Usability Scale (SUS) and task-specific questionnaires, and find that our system scores higher.
{"title":"Video collections in panoramic contexts","authors":"J. Tompkin, Fabrizio Pece, R. Shah, S. Izadi, J. Kautz, C. Theobalt","doi":"10.1145/2501988.2502013","DOIUrl":"https://doi.org/10.1145/2501988.2502013","url":null,"abstract":"Video collections of places show contrasts and changes in our world, but current interfaces to video collections make it hard for users to explore these changes. Recent state-of-the-art interfaces attempt to solve this problem for 'outside->in' collections, but cannot connect 'inside->out' collections of the same place which do not visually overlap. We extend the focus+context paradigm to create a video-collections+context interface by embedding videos into a panorama. We build a spatio-temporal index and tools for fast exploration of the space and time of the video collection. We demonstrate the flexibility of our representation with interfaces for desktop and mobile flat displays, and for a spherical display with joypad and tablet controllers. We study with users the effect of our video-collection+context system to spatio-temporal localization tasks, and find significant improvements to accuracy and completion time in visual search tasks compared to existing systems. We measure the usability of our interface with System Usability Scale (SUS) and task-specific questionnaires, and find our system scores higher.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"8 13","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132545821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TextTearing: opening white space for digital ink annotation
Dongwook Yoon, Nicholas Chen, François Guimbretière
Having insufficient space for making annotations is a problem that afflicts both paper and digital documents. We introduce the TextTearing technique for in situ expansion of inter-line whitespace and pair it with a lightweight interaction for margin expansion as a way to address this problem. The full system leverages the dynamism of digital documents and employs a bimanual design that combines the precision of pen with the fluidity of touch. Our evaluation found that a simpler unimanual variant of TextTearing was preferred over direct annotation and margin-only expansion. Direct annotation in naturally occurring whitespace was least preferred.
{"title":"TextTearing: opening white space for digital ink annotation","authors":"Dongwook Yoon, Nicholas Chen, François Guimbretière","doi":"10.1145/2501988.2502036","DOIUrl":"https://doi.org/10.1145/2501988.2502036","url":null,"abstract":"Having insufficient space for making annotations is a problem that afflicts both paper and digital documents. We introduce the TextTearing technique for in situ expansion of inter-line whitespace and pair it with a lightweight interaction for margin expansion as a way to address this problem. The full system leverages the dynamism of digital documents and employs a bimanual design that combines the precision of pen with the fluidity of touch. Our evaluation found that a simpler unimanual variant of TextTearing was preferred over direct annotation and margin-only expansion. Direct annotation in naturally occurring whitespace was least preferred.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132755611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transmogrification: casual manipulation of visualizations
J. Brosz, Miguel A. Nacenta, R. Pusch, Sheelagh Carpendale, C. Hurter
A transmogrifier is a novel interface that enables quick, on-the-fly graphic transformations. A region of a graphic can be specified by a shape and transformed into a destination shape with real-time visual feedback. Both origin and destination shapes can be circles, quadrilaterals, or arbitrary shapes defined through touch. Transmogrifiers are flexible, fast, and simple to create, inviting use in casual InfoVis scenarios: they open the door to alternative ways of exploring and displaying existing visualizations (e.g., rectifying routes or rivers in maps) and enable free-form prototyping of new visualizations (e.g., lenses).
{"title":"Transmogrification: causal manipulation of visualizations","authors":"J. Brosz, Miguel A. Nacenta, R. Pusch, Sheelagh Carpendale, C. Hurter","doi":"10.1145/2501988.2502046","DOIUrl":"https://doi.org/10.1145/2501988.2502046","url":null,"abstract":"A transmogrifier is a novel interface that enables quick, on-the-fly graphic transformations. A region of a graphic can be specified by a shape and transformed into a destination shape with real-time, visual feedback. Both origin and destination shapes can be circles, quadrilaterals or arbitrary shapes defined through touch. Transmogrifiers are flexible, fast and simple to create and invite use in casual InfoVis scenarios, opening the door to alternative ways of exploring and displaying existing visualizations (e.g., rectifying routes or rivers in maps), and enabling free-form prototyping of new visualizations (e.g., lenses).","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121910376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Crowd & creativitiy","authors":"Bjoern Hartmann","doi":"10.1145/3254702","DOIUrl":"https://doi.org/10.1145/3254702","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126943570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Hardware","authors":"J. Rekimoto","doi":"10.1145/3254699","DOIUrl":"https://doi.org/10.1145/3254699","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132774146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A tongue training system for children with Down syndrome
Masato Miyauchi, Takashi Kimura, T. Nojima
Children with Down syndrome have a variety of symptoms, including speech and swallowing disorders. Tongue training is thought to be beneficial for improving these symptoms. However, engaging children with Down syndrome in such training is not easy, because tongue training can be an unpleasant experience for them. Moreover, with no technology to support the training, teachers and family members must work hard to keep children engaged. In this research, we develop an interactive tongue training system for children with Down syndrome based on the SITA (Simple Interface for Tongue motion Acquisition) system. We describe our preliminary evaluations of SITA in detail and present the results of user tests.
{"title":"A tongue training system for children with down syndrome","authors":"Masato Miyauchi, Takashi Kimura, T. Nojima","doi":"10.1145/2501988.2502055","DOIUrl":"https://doi.org/10.1145/2501988.2502055","url":null,"abstract":"Children with Down syndrome have a variety of symptoms including speech and swallowing disorders. To improve these symptoms, tongue training is thought to be beneficial. However, inducing children with Down syndrome to do such training is not easy because tongue training can be an unpleasant experience for children. In addition, with no supporting technology for such training, teachers and families around such children must make efforts to induce them to undergo the training. In this research, we develop an interactive tongue training system especially for children with Down syndrome using SITA (Simple Interface for Tongue motion Acquisition) system. In this paper, we describe in detail our preliminary evaluations of SITA, and present the results of user tests.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131023053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}