SenSkin: adapting skin as a soft interface
Masa Ogata, Yuta Sugiura, Yasutoshi Makino, M. Inami, M. Imai
DOI: 10.1145/2501988.2502039
We present a sensing technology and input method that uses skin deformation, estimated through a thin band-type device attached to the human body whose appearance seems socially acceptable in daily life. An input interface usually requires feedback, and SenSkin provides tactile feedback that lets users know which part of the skin they are touching in order to issue commands. Having found an acceptable area before beginning the input operation, the user can continue to input commands without receiving explicit feedback. We developed an experimental device with two armbands that senses three-dimensional pressure applied to the skin. Sensing tangential force on uncovered skin without haptic obstacles has not previously been achieved. SenSkin is also novel in that it quantitatively measures the tangential force applied to the skin of, for example, the forearm or fingers. Infrared (IR) reflective sensors are used because their durability and low cost make them suitable for everyday human sensing. The multiple sensors on the two armbands allow both the tangential and the normal force applied to the skin to be sensed. Input commands are learned and recognized using a Support Vector Machine (SVM). Finally, we show an application in which this input method is implemented.
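The recognition step described above (an SVM over multi-sensor force readings) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sensor count, command set, and training data are hypothetical placeholders.

```python
# Minimal sketch of SVM-based command recognition from armband sensor
# readings (illustrative only; sensor layout, commands, and data are
# hypothetical, not SenSkin's actual configuration).
import numpy as np
from sklearn.svm import SVC

N_SENSORS = 8                        # hypothetical: IR sensors across two bands
COMMANDS = ["tap", "pull", "push"]   # hypothetical command set

# Each training sample is one vector of IR reflective sensor readings
# captured while the user performs a labeled skin-deformation command.
X_train = np.random.rand(300, N_SENSORS)        # placeholder sensor data
y_train = np.random.choice(COMMANDS, size=300)  # placeholder labels

clf = SVC(kernel="rbf")   # RBF kernel is a common default choice
clf.fit(X_train, y_train)

# At run time, classify the current frame of sensor readings into a command.
frame = np.random.rand(1, N_SENSORS)
print(clf.predict(frame)[0])
```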
{"title":"SenSkin: adapting skin as a soft interface","authors":"Masa Ogata, Yuta Sugiura, Yasutoshi Makino, M. Inami, M. Imai","doi":"10.1145/2501988.2502039","DOIUrl":"https://doi.org/10.1145/2501988.2502039","url":null,"abstract":"We present a sensing technology and input method that uses skin deformation estimated through a thin band-type device attached to the human body, the appearance of which seems socially acceptable in daily life. An input interface usually requires feedback. SenSkin provides tactile feedback that enables users to know which part of the skin they are touching in order to issue commands. The user, having found an acceptable area before beginning the input operation, can continue to input commands without receiving explicit feedback. We developed an experimental device with two armbands to sense three-dimensional pressure applied to the skin. Sensing tangential force on uncovered skin without haptic obstacles has not previously been achieved. SenSkin is also novel in that quantitative tangential force applied to the skin, such as that of the forearm or fingers, is measured. An infrared (IR) reflective sensor is used since its durability and inexpensiveness make it suitable for everyday human sensing purposes. The multiple sensors located on the two armbands allow the tangential and normal force applied to the skin dimension to be sensed. The input command is learned and recognized using a Support Vector Machine (SVM). Finally, we show an application in which this input method is implemented.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126741733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fiberio: a touchscreen that senses fingerprints
Christian Holz, Patrick Baudisch
DOI: 10.1145/2501988.2502021

We present Fiberio, a rear-projected multitouch table that identifies users biometrically, based on their fingerprints, during each touch interaction. Fiberio accomplishes this using a new type of screen material: a large fiber optic plate. The plate diffuses light on transmission, allowing it to act as a projection surface. At the same time, it reflects light specularly, which produces the contrast required for fingerprint sensing. In addition to offering all the functionality known from traditional diffused-illumination systems, Fiberio is the first interactive tabletop system that authenticates users during touch interaction, unobtrusively and securely, using the biometric features of fingerprints. This eliminates the need for users to carry identification tokens.
{"title":"Fiberio: a touchscreen that senses fingerprints","authors":"Christian Holz, Patrick Baudisch","doi":"10.1145/2501988.2502021","DOIUrl":"https://doi.org/10.1145/2501988.2502021","url":null,"abstract":"We present Fiberio, a rear-projected multitouch table that identifies users biometrically based on their fingerprints during each touch interaction. Fiberio accomplishes this using a new type of screen material: a large fiber optic plate. The plate diffuses light on transmission, thereby allowing it to act as projection surface. At the same time, the plate reflects light specularly, which produces the contrast required for fingerprint sensing. In addition to offering all the functionality known from traditional diffused illumination systems, Fiberio is the first interactive tabletop system that authenticates users during touch interaction-unobtrusively and securely using the biometric features of fingerprints, which eliminates the need for users to carry any identification tokens.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133516424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: GUI","authors":"Wilmot Li","doi":"10.1145/3254705","DOIUrl":"https://doi.org/10.1145/3254705","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130584953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UltraHaptics: multi-point mid-air haptic feedback for touch surfaces
Thomas Carter, S. A. Seah, Benjamin Long, B. Drinkwater, S. Subramanian
DOI: 10.1145/2501988.2502018
We introduce UltraHaptics, a system designed to provide multi-point haptic feedback above an interactive surface. UltraHaptics employs focused ultrasound to project discrete points of haptic feedback through the display and directly onto users' unadorned hands. We investigate the desirable properties of an acoustically transparent display and demonstrate that the system can create multiple localised points of feedback in mid-air. Through psychophysical experiments, we show that feedback points with different tactile properties can be identified at smaller separations. We also show that, with training, users can distinguish between different vibration frequencies of non-contact points. Finally, we explore a number of exciting new interaction possibilities that UltraHaptics provides.
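To make the "focused ultrasound" idea concrete, here is a textbook phased-array sketch: each transducer is driven with a phase offset that cancels its propagation delay to a chosen focal point, so all waves arrive there in phase. The array geometry, drive frequency, and focal point are assumptions for illustration, not details taken from the paper.

```python
# Phased-array focusing sketch (standard acoustics, not the UltraHaptics
# implementation; the 16x16 grid, 40 kHz frequency, and focus are assumptions).
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # Hz; 40 kHz is typical for airborne ultrasound
WAVELENGTH = SPEED_OF_SOUND / FREQ

# Hypothetical 16x16 transducer grid in the z=0 plane with 10 mm pitch.
pitch = 0.01
xs, ys = np.meshgrid(np.arange(16) * pitch, np.arange(16) * pitch)
transducers = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)

def focus_phases(focal_point):
    """Per-transducer emission phase so all waves arrive in phase at the focus."""
    dists = np.linalg.norm(transducers - np.asarray(focal_point), axis=1)
    # Advance each emitter by its propagation phase k*d so arrivals align.
    return (-2 * np.pi * dists / WAVELENGTH) % (2 * np.pi)

phases = focus_phases([0.08, 0.08, 0.20])  # focus ~20 cm above the array centre
```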
{"title":"UltraHaptics: multi-point mid-air haptic feedback for touch surfaces","authors":"Thomas Carter, S. A. Seah, Benjamin Long, B. Drinkwater, S. Subramanian","doi":"10.1145/2501988.2502018","DOIUrl":"https://doi.org/10.1145/2501988.2502018","url":null,"abstract":"We introduce UltraHaptics, a system designed to provide multi-point haptic feedback above an interactive surface. UltraHaptics employs focused ultrasound to project discrete points of haptic feedback through the display and directly on to users' unadorned hands. We investigate the desirable properties of an acoustically transparent display and demonstrate that the system is capable of creating multiple localised points of feedback in mid-air. Through psychophysical experiments we show that feedback points with different tactile properties can be identified at smaller separations. We also show that users are able to distinguish between different vibration frequencies of non-contact points with training. Finally, we explore a number of exciting new interaction possibilities that UltraHaptics provides.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"09 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116621153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DigiTaps: eyes-free number entry on touchscreens with minimal audio feedback
Shiri Azenkot, Cynthia L. Bennett, R. Ladner
DOI: 10.1145/2501988.2502056

Eyes-free input usually relies on audio feedback that can be difficult to hear in noisy environments. We present DigiTaps, an eyes-free number entry method for touchscreen devices that requires little auditory attention. To enter a digit, users tap or swipe anywhere on the screen with one, two, or three fingers. The 10 digits are encoded by combinations of these gestures that relate to the digits' semantics. For example, the digit 2 is input with a 2-finger tap. We conducted a longitudinal evaluation with 16 people and found that DigiTaps with no audio feedback was faster but less accurate than with audio feedback after every input. Throughout the study, participants entered numbers with no audio feedback at an average rate of 0.87 characters per second, with an uncorrected error rate of 5.63%.
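The gesture-to-digit encoding can be pictured as a small lookup table. Note that the abstract specifies only one mapping (digit 2 = 2-finger tap); every other entry below is a made-up placeholder, not DigiTaps' actual scheme.

```python
# Illustrative decoder for gesture-encoded digits. Only the ("tap", 2) -> 2
# mapping comes from the abstract; the rest are invented placeholders.
from typing import Optional

GESTURE_TO_DIGIT = {
    ("tap", 1): 1,     # placeholder
    ("tap", 2): 2,     # from the abstract: digit 2 = 2-finger tap
    ("tap", 3): 3,     # placeholder
    ("swipe", 1): 4,   # placeholder
    ("swipe", 2): 5,   # placeholder
}

def decode(gesture: str, fingers: int) -> Optional[int]:
    """Map a (gesture, finger-count) touch event to a digit, or None."""
    return GESTURE_TO_DIGIT.get((gesture, fingers))

assert decode("tap", 2) == 2
```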
{"title":"DigiTaps: eyes-free number entry on touchscreens with minimal audio feedback","authors":"Shiri Azenkot, Cynthia L. Bennett, R. Ladner","doi":"10.1145/2501988.2502056","DOIUrl":"https://doi.org/10.1145/2501988.2502056","url":null,"abstract":"Eyes-free input usually relies on audio feedback that can be difficult to hear in noisy environments. We present DigiTaps, an eyes-free number entry method for touchscreen devices that requires little auditory attention. To enter a digit, users tap or swipe anywhere on the screen with one, two, or three fingers. The 10 digits are encoded by combinations of these gestures that relate to the digits' semantics. For example, the digit 2 is input with a 2-finger tap. We conducted a longitudinal evaluation with 16 people and found that DigiTaps with no audio feedback was faster but less accurate than with audio feedback after every input. Throughout the study, participants entered numbers with no audio feedback at an average rate of 0.87 characters per second, with an uncorrected error rate of 5.63%.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127630812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A mixed-initiative tool for designing level progressions in games
Eric Butler, Adam M. Smith, Yun-En Liu, Zoran Popovic
DOI: 10.1145/2501988.2502011
Creating game content requires balancing design considerations at multiple scales: each level requires effort and iteration to produce, and broad-scale constraints, such as the order in which game concepts are introduced, must be respected. Game designers currently create informal plans for how the game's levels will fit together, but they rarely keep these plans up to date when levels change during iteration and testing. This leads to violations of constraints and makes changing the high-level plans expensive. To address these problems, we explore the creation of mixed-initiative game-progression authoring tools that explicitly model broad-scale design considerations. These tools let the designer specify constraints on progressions and keep the plan synchronized when levels are edited. This enables the designer to move between broad- and narrow-scale editing and allows problems caused by edits to levels to be detected automatically. We further leverage advances in procedural content generation to help the designer rapidly explore and test game progressions. We present a prototype implementation of such a tool for our actively developed educational game, Refraction. We also describe how this system could be extended to other games and domains, specifically math problem sets and interactive programming tutorials.
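As a small illustration of the kind of broad-scale constraint such a tool can model, e.g. "every game concept is introduced before a level uses it", here is a minimal checker. The level data and concept names are invented for illustration; this is a sketch of the idea, not the paper's tool.

```python
# Sketch of one broad-scale progression constraint: a concept must be
# introduced by some earlier level before any level uses it.
def violations(progression):
    """Yield (level, concept) pairs where a concept is used before introduction."""
    introduced = set()
    for level in progression:
        for concept in level["uses"]:
            if concept not in introduced:
                yield level["name"], concept
        introduced.update(level["introduces"])

# Hypothetical Refraction-style levels and concepts.
levels = [
    {"name": "1-1", "introduces": {"splitting"}, "uses": set()},
    {"name": "1-2", "introduces": {"bending"}, "uses": {"splitting", "adding"}},
]
print(list(violations(levels)))  # [('1-2', 'adding')]: 'adding' never introduced
```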
{"title":"A mixed-initiative tool for designing level progressions in games","authors":"Eric Butler, Adam M. Smith, Yun-En Liu, Zoran Popovic","doi":"10.1145/2501988.2502011","DOIUrl":"https://doi.org/10.1145/2501988.2502011","url":null,"abstract":"Creating game content requires balancing design considerations at multiple scales: each level requires effort and iteration to produce, and broad-scale constraints such as the order in which game concepts are introduced must be respected. Game designers currently create informal plans for how the game's levels will fit together, but they rarely keep these plans up-to-date when levels change during iteration and testing. This leads to violations of constraints and makes changing the high-level plans expensive. To address these problems, we explore the creation of mixed-initiative game progression authoring tools which explicitly model broad-scale design considerations. These tools let the designer specify constraints on progressions, and keep the plan synchronized when levels are edited. This enables the designer to move between broad and narrow-scale editing and allows for automatic detection of problems caused by edits to levels. We further leverage advances in procedural content generation to help the designer rapidly explore and test game progressions. We present a prototype implementation of such a tool for our actively-developed educational game, Refraction. We also describe how this system could be extended for use in other games and domains, specifically for the domains of math problem sets and interactive programming tutorials.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"362 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126029125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving structured data entry on mobile devices
K. Chang, B. Myers, Gene M. Cahill, Soumya Simanta, E. Morris, G. Lewis
DOI: 10.1145/2501988.2502043
Structure makes data more useful, but also makes data entry more cumbersome. Studies have found that this is especially true on mobile devices, as mobile users often reject structured personal information management tools because the structure is too restrictive and makes entering data slower. To overcome these problems, we introduce a new data entry technique that lets users create customized structured data in an unstructured manner. We use a novel notepad-like editing interface with built-in data detectors that allow users to specify structured data implicitly and reuse the structures when desired. To minimize the amount of typing, it provides intelligent, context-sensitive autocomplete suggestions using personal and public databases that contain candidate information to be entered. We implemented these mechanisms in an example application called Listpad. Our evaluation shows that people using Listpad create customized structured data 16% faster than with a conventional mobile database tool; the speedup rises to 42% when the fields can be autocompleted.
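A minimal sketch of the autocomplete idea follows, assuming a hypothetical suggest helper and candidate list; Listpad's actual context-sensitive ranking over personal and public databases is necessarily more involved.

```python
# Prefix-based autocomplete over a candidate database (illustrative only;
# the function name, candidates, and ranking are assumptions).
def suggest(prefix, candidates, limit=5):
    """Return up to `limit` candidates whose text starts with the typed prefix."""
    p = prefix.lower()
    return [c for c in candidates if c.lower().startswith(p)][:limit]

contacts = ["Alice Chang", "Alan Myers", "Bob Morris"]  # placeholder database
print(suggest("al", contacts))  # ['Alice Chang', 'Alan Myers']
```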
{"title":"Improving structured data entry on mobile devices","authors":"K. Chang, B. Myers, Gene M. Cahill, Soumya Simanta, E. Morris, G. Lewis","doi":"10.1145/2501988.2502043","DOIUrl":"https://doi.org/10.1145/2501988.2502043","url":null,"abstract":"Structure makes data more useful, but also makes data entry more cumbersome. Studies have found that this is especially true on mobile devices, as mobile users often reject structured personal information management tools because the structure is too restrictive and makes entering data slower. To overcome these problems, we introduce a new data entry technique that lets users create customized structured data in an unstructured manner. We use a novel notepad-like editing interface with built-in data detectors that allow users to specify structured data implicitly and reuse the structures when desired. To minimize the amount of typing, it provides intelligent, context-sensitive autocomplete suggestions using personal and public databases that contain candidate information to be entered. We implemented these mechanisms in an example application called Listpad. Our evaluation shows that people using Listpad create customized structured data 16% faster than using a conventional mobile database tool. The speed further increases to 42% when the fields can be autocompleted.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130863249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pursuit calibration: making gaze calibration less tedious and more flexible
Ken Pfeuffer, Mélodie Vidal, J. Turner, A. Bulling, Hans-Werner Gellersen
DOI: 10.1145/2501988.2501998
Eye gaze is a compelling interaction modality but requires user calibration before interaction can commence. State-of-the-art procedures require the user to fixate on a succession of calibration markers, a task often experienced as difficult and tedious. We present pursuit calibration, a novel approach that, unlike existing methods, can detect whether the user is attending to a calibration target. It achieves this by using moving targets and correlating eye movement with the target trajectory, implicitly exploiting smooth pursuit eye movement. Calibration data is then sampled only while the user is attending to the target. Because it can detect user attention, pursuit calibration can be performed implicitly, which enables more flexible designs of the calibration task. We demonstrate this in application examples and user studies, and show that pursuit calibration is tolerant to interruption, blends naturally with applications, and can calibrate users without their awareness.
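The attention test behind this idea can be sketched as a simple correlation check: compare the gaze trace with the target's trajectory over a sliding window and only sample calibration data while they correlate strongly. The correlation threshold, window length, and target motion below are assumptions, not the paper's parameters.

```python
# Sketch of pursuit detection: gaze is considered to follow the target when
# gaze and target trajectories correlate on both axes (threshold assumed).
import numpy as np

def pursuing(gaze_xy, target_xy, threshold=0.8):
    """True if the gaze trace correlates with the target trajectory on x and y."""
    rx = np.corrcoef(gaze_xy[:, 0], target_xy[:, 0])[0, 1]
    ry = np.corrcoef(gaze_xy[:, 1], target_xy[:, 1])[0, 1]
    return min(rx, ry) > threshold

t = np.linspace(0, 2 * np.pi, 120)                       # 2 s window at 60 Hz
target = np.stack([np.cos(t), np.sin(t)], axis=1)        # circular target motion
gaze = target + np.random.normal(0, 0.05, target.shape)  # noisy smooth pursuit
if pursuing(gaze, target):
    print("user attending: sample gaze/target pairs for calibration")
```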
{"title":"Pursuit calibration: making gaze calibration less tedious and more flexible","authors":"Ken Pfeuffer, Mélodie Vidal, J. Turner, A. Bulling, Hans-Werner Gellersen","doi":"10.1145/2501988.2501998","DOIUrl":"https://doi.org/10.1145/2501988.2501998","url":null,"abstract":"Eye gaze is a compelling interaction modality but requires user calibration before interaction can commence. State of the art procedures require the user to fixate on a succession of calibration markers, a task that is often experienced as difficult and tedious. We present pursuit calibration, a novel approach that, unlike existing methods, is able to detect the user's attention to a calibration target. This is achieved by using moving targets, and correlation of eye movement and target trajectory, implicitly exploiting smooth pursuit eye movement. Data for calibration is then only sampled when the user is attending to the target. Because of its ability to detect user attention, pursuit calibration can be performed implicitly, which enables more flexible designs of the calibration task. We demonstrate this in application examples and user studies, and show that pursuit calibration is tolerant to interruption, can blend naturally with applications and is able to calibrate users without their awareness.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132221513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
YouMove: enhancing movement training with an augmented reality mirror
Fraser Anderson, Tovi Grossman, Justin Matejka, G. Fitzmaurice
DOI: 10.1145/2501988.2502045
YouMove is a novel system that allows users to record and learn physical movement sequences. The recording system is designed to be simple, allowing anyone to create and share training content. The training system uses the recorded data to train the user via a large-scale augmented reality mirror, guiding the user through a series of stages that gradually reduce reliance on guidance and feedback. This paper discusses the design and implementation of YouMove and its interactive mirror. We also present a user study in which YouMove was shown to improve learning and short-term retention by a factor of 2 compared to a traditional video demonstration.
{"title":"YouMove: enhancing movement training with an augmented reality mirror","authors":"Fraser Anderson, Tovi Grossman, Justin Matejka, G. Fitzmaurice","doi":"10.1145/2501988.2502045","DOIUrl":"https://doi.org/10.1145/2501988.2502045","url":null,"abstract":"YouMove is a novel system that allows users to record and learn physical movement sequences. The recording system is designed to be simple, allowing anyone to create and share training content. The training system uses recorded data to train the user using a large-scale augmented reality mirror. The system trains the user through a series of stages that gradually reduce the user's reliance on guidance and feedback. This paper discusses the design and implementation of YouMove and its interactive mirror. We also present a user study in which YouMove was shown to improve learning and short-term retention by a factor of 2 compared to a traditional video demonstration.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131286893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proceedings of the 26th annual ACM symposium on User interface software and technology
S. Izadi, A. Quigley, I. Poupyrev, T. Igarashi
DOI: 10.1145/2501988

It is our pleasure to welcome you to the 26th Annual ACM Symposium on User Interface Software and Technology (UIST 2013), held October 8-11 in the historic town and University of St Andrews, Scotland, United Kingdom.

UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from many areas, including web and graphical interfaces, new input and output devices, information visualization, sensing technologies, interactive displays, tabletop and tangible computing, interaction techniques, augmented and virtual reality, ubiquitous computing, and computer supported cooperative work. The single-track program and intimate size make UIST 2013 an ideal place to exchange results at the cutting edge of user interface research, to meet friends and colleagues, and to forge future collaborations.

We received a record 317 paper submissions from more than 30 countries. After a thorough review process, the program committee accepted 62 papers (19.5%). Each anonymous submission was first reviewed by three external reviewers, and meta-reviews were provided by two program committee members. If any of the five reviewers deemed a submission to pass the rejection threshold, we asked the authors to submit a short rebuttal addressing the reviewers' concerns. The program committee met in person in Pittsburgh, PA, on May 30-31, 2013, to select the papers for the conference. Submissions were accepted only after the authors provided a final revision addressing the committee's comments.

In addition to the presentations of accepted papers, this year's program includes a keynote by Raffaello D'Andrea (ETH Zurich) on feedback control systems for autonomous machines. A great line-up of posters, demos, the ninth annual Doctoral Symposium, and the fifth annual Student Innovation Contest (this year focusing on programmable water pumps called PumpSpark) completes the program. We hope you enjoy all aspects of the UIST 2013 program, that you enjoy our wonderful venues, and that your discussions and interactions prove fruitful.
{"title":"Proceedings of the 26th annual ACM symposium on User interface software and technology","authors":"S. Izadi, A. Quigley, I. Poupyrev, T. Igarashi","doi":"10.1145/2501988","DOIUrl":"https://doi.org/10.1145/2501988","url":null,"abstract":"It is our pleasure to welcome you to the 26th Annual ACM Symposium on User Interface Software and Technology (UIST) 2013, held from October 8-11th, in the historic town and University of St Andrews, Scotland, United Kingdom. \u0000 \u0000UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from many areas, including web and graphical interfaces, new input and output devices, information visualization, sensing technologies, interactive displays, tabletop and tangible computing, interaction techniques, augmented and virtual reality, ubiquitous computing, and computer supported cooperative work. The single-track program and intimate size, makes UIST 2013 an ideal place to exchange results at the cutting edge of user interfaces research, to meet friends and colleagues, and to forge future collaborations. \u0000 \u0000We received a record 317 paper submissions from more than 30 countries. After a thorough review process, the program committee accepted 62 papers (19.5%). Each anonymous submission was first reviewed by three external reviewers, and meta-reviews were provided by two program committee members. If any of the five reviewers deemed a submission to pass a rejection threshold we asked the authors to submit a short rebuttal addressing the reviewers' concerns. The program committee met in person in Pittsburgh, PA, on May 30-31, 2013, to select the papers for the conference. Submissions were finally accepted only after the authors provided a final revision addressing the committee's comments. \u0000 \u0000In addition to the presentations of accepted papers, this year's program includes a keynote by Raffaello D'Andrea (ETH Zurich) on feedback control systems for autonomous machines. A great line up of posters, demos, (the ninth) annual Doctoral Symposium, and (the fifth) annual Student Innovation Contest (this year focusing on programmable water pumps called Pumpspark) complete the program. We hope you enjoy all aspects of the UIST 2013 program, and that you get to enjoy our wonderful venues and that your discussions and interactions prove fruitful.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132002393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}