Prefab layers and prefab annotations: extensible pixel-based interpretation of graphical interfaces
M. Dixon, A. C. Nied, J. Fogarty
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2647412
Pixel-based methods have the potential to fundamentally change how we build graphical interfaces, but remain difficult to implement. We introduce a new toolkit for pixel-based enhancements, focused on two areas of support. Prefab Layers helps developers write interpretation logic that can be composed, reused, and shared to manage the multi-faceted nature of pixel-based interpretation. Prefab Annotations supports robustly annotating interface elements with the metadata needed to enable runtime enhancements. Together, these help developers overcome subtle but critical dependencies between code and data. We validate our toolkit with (1) demonstrative applications and (2) a lab study comparing how developers build an enhancement using our toolkit versus state-of-the-art methods. Our toolkit addresses core challenges developers face when building pixel-based enhancements, potentially opening up pixel-based systems to broader adoption.

3D-Board: a whole-body remote collaborative whiteboard
Jakob Zillner, Christoph Rhemann, S. Izadi, M. Haller
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2647393
This paper presents 3D-Board, a digital whiteboard capable of capturing life-sized virtual embodiments of geographically distributed users. When using large-scale screens for remote collaboration, awareness of the distributed users' gestures and actions is of particular importance. Our work adds to the literature on remote collaborative workspaces: it facilitates intuitive remote collaboration on large-scale interactive whiteboards by preserving awareness of the full-body pose and gestures of the remote collaborator. By blending the front-facing 3D embodiment of a remote collaborator with the shared workspace, an illusion is created as if the observer were looking through the transparent whiteboard into the remote user's room. The system was tested and verified in a usability assessment, showing that 3D-Board significantly improves the effectiveness of remote collaboration on a large interactive surface.

FlatFitFab: interactive modeling with planar sections
James McCrae, Nobuyuki Umetani, Karan Singh
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2647388
We present a comprehensive system to author planar section structures, common in art and engineering. A study on how planar section assemblies are imagined and drawn guides our design principles: planar sections are best drawn in situ, with little foreshortening, orthogonal to intersecting planar sections, and exhibiting regularities between planes and contours. We capture these principles with a novel drawing workflow where a single fluid user stroke specifies a 3D plane and its contour in relation to existing planar sections. Regularity is supported by defining a vocabulary of procedural operations for intersecting planar sections. We exploit planar structure properties to provide real-time visual feedback on physically simulated stresses, and geometric verification that the structure is stable, connected, and can be assembled. This feedback is validated by real-world fabrication and testing. As evaluation, we report on over 50 subjects who all used our system with minimal instruction to create unique models.

Video digests: a browsable, skimmable format for informational lecture videos
Amy Pavel, Colorado Reed, Bjoern Hartmann, Maneesh Agrawala
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2647400
Increasingly, authors are publishing long informational talks, lectures, and distance-learning videos online. However, it is difficult to browse and skim the content of such videos using current timeline-based video players. Video digests are a new format for informational videos that afford browsing and skimming by segmenting videos into a chapter/section structure and providing short text summaries and thumbnails for each section. Viewers can navigate by reading the summaries and clicking on sections to access the corresponding point in the video. We present a set of tools to help authors create such digests using transcript-based interactions. With our tools, authors can manually create a video digest from scratch, or they can automatically generate a digest by applying a combination of algorithmic and crowdsourcing techniques and then manually refine it as needed. Feedback from first-time users suggests that our transcript-based authoring tools and automated techniques greatly facilitate video digest creation. In an evaluative crowdsourced study we find that given a short viewing time, video digests support browsing and skimming better than timeline-based or transcript-based video players.

InterTwine: creating interapplication information scent to support coordinated use of software
Adam Fourney, B. Lafreniere, Parmit K. Chilana, Michael A. Terry
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2647420
Users often make continued and sustained use of online resources to complement use of a desktop application. For example, users may reference online tutorials to recall how to perform a particular task. While often used in a coordinated fashion, the browser and desktop application provide separate, independent mechanisms for helping users find and re-find task-relevant information. In this paper, we describe InterTwine, a system that links information in the web browser with relevant elements in the desktop application to create interapplication information scent. This explicit link produces a shared interapplication history to assist in re-finding information in both applications. As an example, InterTwine marks all menu items in the desktop application that are currently mentioned in the front-most web page. This paper introduces the notion of interapplication information scent, demonstrates the concept in InterTwine, and describes results from a formative study suggesting the utility of the concept.

Programming by manipulation for layout
Thibaud Hottelier, R. Bodík, Kimiko Ryokai
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2647378
We present Programming by Manipulation, a new programming methodology for specifying the layout of data visualizations, targeted at non-programmers. We address the two central sources of bugs that arise when programming with constraints: ambiguities and conflicts (inconsistencies). We rule out conflicts by design and exploit ambiguity to explore possible layout designs. Our users design layouts by highlighting undesirable aspects of a current design, effectively breaking spurious constraints and introducing ambiguity by giving some elements freedom to move or resize. Subsequently, the tool indicates how the ambiguity can be removed by computing how the free elements can be fixed with the available constraints. To support this workflow, our tool computes the ambiguity and summarizes it visually. We evaluate our work with two user studies demonstrating that both non-programmers and programmers can effectively use our prototype. Our results suggest that our tool is five times more productive than direct programming with constraints.

Humane representation of thought: a trail map for the 21st century
Bret Victor
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2642920
New representations of thought -- written language, mathematical notation, information graphics, etc. -- have been responsible for some of the most significant leaps in the progress of civilization, by expanding humanity's collectively-thinkable territory. But at debilitating cost. These representations, having been invented for static media such as paper, tap into a small subset of human capabilities and neglect the rest. Knowledge work means sitting at a desk, interpreting and manipulating symbols. The human body is reduced to an eye staring at tiny rectangles and fingers on a pen or keyboard. Like any severely unbalanced way of living, this is crippling to mind and body. But less obviously, and more importantly, it is enormously wasteful of the vast human potential. Human beings naturally have many powerful modes of thinking and understanding. Most are incompatible with static media. In a culture that has contorted itself around the limitations of marks on paper, these modes are undeveloped, unrecognized, or scorned. We are now seeing the start of a dynamic medium. To a large extent, people today are using this medium merely to emulate and extend static representations from the era of paper, and to further constrain the ways in which the human body can interact with external representations of thought. But the dynamic medium offers the opportunity to deliberately invent a humane and empowering form of knowledge work. We can design dynamic representations which draw on the entire range of human capabilities -- all senses, all forms of movement, all forms of understanding -- instead of straining a few and atrophying the rest. This talk suggests how each of the human activities in which thought is externalized (conversing, presenting, reading, writing, etc.) can be redesigned around such representations.

RichReview: blending ink, speech, and gesture to support collaborative document review
Dongwook Yoon, Nicholas Chen, François Guimbretière, A. Sellen
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2647390
This paper introduces a novel document annotation system that aims to enable the kinds of rich communication that usually only occur in face-to-face meetings. Our system, RichReview, lets users create annotations on top of digital documents using three main modalities: freeform inking, voice for narration, and deictic gestures in support of voice. RichReview uses novel visual representations and time-synchronization between modalities to simplify annotation access and navigation. Moreover, RichReview's versatile support for multi-modal annotations enables users to mix and interweave different modalities in threaded conversations. A formative evaluation demonstrates early promise for the system, finding support for voice, pointing, and the combination of both to be especially valuable. In addition, initial findings point to the ways in which both content and social context affect modality choice.

Kitty: sketching dynamic and interactive illustrations
Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, G. Fitzmaurice
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2647375
We present Kitty, a sketch-based tool for authoring dynamic and interactive illustrations. Artists can sketch animated drawings and textures to convey living phenomena, and specify the functional relationships between entities to characterize the dynamic behavior of systems and environments. An underlying graph model, customizable through sketching, captures the functional relationships between the visual, spatial, temporal, or quantitative parameters of its entities. As the viewer interacts with the resulting dynamic interactive illustration, the parameters of the drawing change accordingly, depicting the dynamics and chain of causal effects within a scene. The generality of this framework makes our tool applicable for a variety of purposes, including technical illustrations, scientific explanation, infographics, medical illustrations, children's e-books, cartoon strips, and beyond. A user study demonstrates the ease of use, variety of applications, artistic expressiveness, and creative possibilities of our tool.

Loupe: a handheld near-eye display
Kent Lyons, S. Kim, Shigeyuki Seko, David H. Nguyen, Audrey Desjardins, Mélodie Vidal, D. Dobbelstein, Jeremy Rubin
In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014). https://doi.org/10.1145/2642918.2647361
Loupe is a novel interactive device with a near-eye virtual display similar to that of head-up display glasses, but in a handheld form factor. We present our hardware implementation and discuss our user interface that leverages Loupe's unique combination of properties. In particular, we present our input capabilities, spatial metaphor, opportunities for using the round aspect of Loupe, and our use of focal depth. We demonstrate how those capabilities come together in an example application designed to allow quick access to information feeds.