Julie Heiser, Doantam Phan, Maneesh Agrawala, B. Tversky, P. Hanrahan
Designing effective instructions for everyday products is challenging. One reason is that designers lack a set of design principles for producing visually comprehensible and accessible instructions. We describe an approach for identifying such design principles through experiments investigating the production, preference, and comprehension of assembly instructions for furniture. We instantiate these principles into an algorithm that automatically generates assembly instructions. Finally, we perform a user study comparing our computer-generated instructions to factory-provided and highly rated hand-designed instructions. Our results indicate that the computer-generated instructions informed by our cognitive design principles significantly reduce assembly time, by an average of 35%, and errors by 50%. Details of the experimental methodology and the implementation of the automated system are described.
{"title":"Identification and validation of cognitive design principles for automated generation of assembly instructions","authors":"Julie Heiser, Doantam Phan, Maneesh Agrawala, B. Tversky, P. Hanrahan","doi":"10.1145/989863.989917","DOIUrl":"https://doi.org/10.1145/989863.989917","url":null,"abstract":"Designing effective instructions for everyday products is challenging. One reason is that designers lack a set of design principles for producing visually comprehensible and accessible instructions. We describe an approach for identifying such design principles through experiments investigating the production, preference, and comprehension of assembly instructions for furniture. We instantiate these principles into an algorithm that automatically generates assembly instructions. Finally, we perform a user study comparing our computer-generated instructions to factory-provided and highly rated hand-designed instructions. Our results indicate that the computer-generated instructions informed by our cognitive design principles significantly reduce assembly time an average of 35% and error by 50%. Details of the experimental methodology and the implementation of the automated system are described.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129067876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Word-processing software usually displays only the paragraphs of text immediately adjacent to the cursor position. Generally this is appropriate, for example when composing a single paragraph. However, when reviewing a document or working on its layout, it is necessary to see the current text in the context of the document as a whole. This can be done by scrolling or zooming, but in doing so, focus is easily lost and hard to regain. We have developed a system called DeepDocument, which uses a two-layered LCD display in which both the focussed and the document-wide views are presented simultaneously. The overview is shown on the rear display and the focussed view on the front, maintaining full screen size for each. The physical separation of the layers takes advantage of human depth perception, allowing users to perceive the views independently without having to redirect their gaze. DeepDocument has been written as an extension to Microsoft Word™.
{"title":"DeepDocument: use of a multi-layered display to provide context awareness in text editing","authors":"M. Masoodian, Sam McKoy, Bill Rogers, David Ware","doi":"10.1145/989863.989902","DOIUrl":"https://doi.org/10.1145/989863.989902","url":null,"abstract":"Word Processing software usually only displays paragraphs of text immediately adjacent to the cursor position. Generally this is appropriate, for example when composing a single paragraph. However, when reviewing or working on the layout of a document it is necessary to establish awareness of current text in the context of the document as a whole. This can be done by scrolling or zooming, but when doing so, focus is easily lost and hard to regain.We have developed a system called DeepDocument using a two-layered LCD display in which both focussed and document-wide views are presented simultaneously. The overview is shown on the rear display and the focussed view on the front, maintaining full screen size for each. The physical separation of the layers takes advantage of human depth perception capabilities to allow users to perceive the views independently without having to redirect their gaze. DeepDocument has been written as an extension to Microsoft Word™.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116327521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we propose ValueCharts, a set of visualizations and interactive techniques intended to support decision-makers in inspecting linear models of preferences and evaluation. Linear models are popular decision-making tools for individuals, groups and organizations. In Decision Analysis, they help the decision-maker analyze preferential choices under conflicting objectives. In Economics and the Social Sciences, similar models are devised to rank entities according to an evaluative index of interest. The fundamental goal of building models expressing preferences and evaluations is to help the decision-maker organize all the information relevant to a decision into a structure that can be effectively analyzed. However, as models and their domain of application grow in complexity, model analysis can become a very challenging task. We claim that ValueCharts will make the inspection and application of these models more natural and effective. We support our claim by showing how ValueCharts effectively enable a set of basic tasks that we argue are at the core of analyzing and understanding linear models of preferences and evaluation.
{"title":"ValueCharts: analyzing linear models expressing preferences and evaluations","authors":"G. Carenini, J. Loyd","doi":"10.1145/989863.989885","DOIUrl":"https://doi.org/10.1145/989863.989885","url":null,"abstract":"In this paper we propose ValueCharts, a set of visualizations and interactive techniques intended to support decision-makers in inspecting linear models of preferences and evaluation. Linear models are popular decision-making tools for individuals, groups and organizations. In Decision Analysis, they help the decision-maker analyze preferential choices under conflicting objectives. In Economics and the Social Sciences, similar models are devised to rank entities according to an evaluative index of interest. The fundamental goal of building models expressing preferences and evaluations is to help the decision-maker organize all the information relevant to a decision into a structure that can be effectively analyzed. However, as models and their domain of application grow in complexity, model analysis can become a very challenging task. We claim that ValueCharts will make the inspection and application of these models more natural and effective. We support our claim by showing how ValueCharts effectively enable a set of basic tasks that we argue are at the core of analyzing and understanding linear models of preferences and evaluation.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115417885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we propose a structure-based clustering technique that transforms a given graph into a specific double-tree structure called a multi-level outline tree. Each meta-node of the tree, representing a subset of nodes, is itself hierarchically clustered, so a meta-node can be considered the root of a tree of included clusters. The main novelty of our approach is that it takes the user's focus into account during clustering, providing views from different perspectives. Multi-level outline trees are computed in linear time and are easy to explore. We believe our technique is well suited to investigating graphs such as Web graphs or citation graphs.
{"title":"Focus dependent multi-level graph clustering","authors":"François Boutin, Mountaz Hascoët","doi":"10.1145/989863.989888","DOIUrl":"https://doi.org/10.1145/989863.989888","url":null,"abstract":"In this paper we propose a structure-based clustering technique that transforms a given graph into a specific double tree structure called multi-level outline tree. Each meta-node of the tree - that represents a subset of nodes - is itself hierarchically clustered. So, a meta-node is considered as a tree root of included clusters.The main originality of our approach is to account for the user focus in the clustering process to provide views from different perspectives. Multi-level outline trees are computed in linear time and easy to explore. We think that our technique is well suited to investigate various graphs like Web graphs or citation graphs.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126870773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigation support for both physical space and information spaces addresses fundamental information needs of mobile users in many application scenarios, including the classic shopping visit to a town centre. A particular research objective in the mobile domain is therefore to explore, showcase, and test the interplay of physical navigation with navigation in an information space that, metaphorically speaking, is superimposed on the physical space. We have developed a demonstrator that couples a spatial navigation aid, in the form of a 2D interactive map viewer, with other information services, such as an interactive web directory that provides information about shops and restaurants and their product ranges. The work has raised a number of interesting questions, such as how to align interactions performed in the navigation aid with meaningful actions in a coupled twin application and, vice versa, how to reflect navigation in an information space in the aligned spatial navigation aid.
{"title":"Aligning information browsing and exploration methods with a spatial navigation aid for mobile city visitors","authors":"T. Rist, Stephan Baldes, Patrick Brandmeier","doi":"10.1145/989863.989900","DOIUrl":"https://doi.org/10.1145/989863.989900","url":null,"abstract":"Navigation support concerning both physical space as well as information spaces address fundamental information needs of mobile users in many application scenarios including the classical shopping visit in the town centre. Therefore it is a particular research objective in the mobile domain to explore, showcase, and test the interplay of physical navigation with navigation in an information space that, metaphorically speaking, superimposes the physical space. We have developed a demonstrator that couples a spatial navigation aid in the form of a 2D interactive map viewer with other information services, such as an interactive web directory service that provides information about shops and restaurants and their product palettes. The research has raised a number of interesting questions, such as of how to align interactions performed in the navigation aid with meaningful actions in a coupled twin application, and vice versa, how to reflect navigation in an information space in the aligned spatial navigation aid.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115088211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Spence, M. Witkowski, Catherine Fawcett, B. Craft, O. Bruijn
Rapid Serial Visual Presentation (RSVP) is a technique in which images are presented sequentially in the time domain, offering an alternative to the conventional concurrent display of images in the space domain. Such an alternative offers potential advantages where display area is at a premium. However, notwithstanding the flexibility to employ either or both domains for presentation, little is known about which alternative suits which user task. As a consequence, there is a pressing need to provide guidance for the interaction designer faced with these alternatives. We investigated the task of identifying the presence or absence of a previously viewed image within a collection of images, a requirement of many real activities. In experiments with subjects, the collection of images was presented in three modes: (1) a 'slide show' RSVP mode; (2) a concurrent, static display ('static' mode); and (3) a 'mixed' mode. Each mode used the same display area and the same total presentation time, together regarded as the primary resources available to the interaction designer. For each presentation mode we identified error profiles and subject preferences, and eye-gaze studies revealed distinctive differences between the three presentation modes.
{"title":"Image presentation in space and time: errors, preferences and eye-gaze activity","authors":"R. Spence, M. Witkowski, Catherine Fawcett, B. Craft, O. Bruijn","doi":"10.1145/989863.989884","DOIUrl":"https://doi.org/10.1145/989863.989884","url":null,"abstract":"Rapid Serial Visual Presentation (RSVP) is a technique that allows images to be presented sequentially in the time-domain, thereby offering an alternative to the conventional concurrent display of images in the space domain. Such an alternative offers potential advantages where display area is at a premium. However, notwithstanding the flexibility to employ either or both domains for presentation purposes, little is known about the alternatives suited to specific tasks undertaken by a user. As a consequence there is a pressing need to provide guidance for the interaction designer faced with these alternatives.We investigated the task of identifying the presence or absence of a previously viewed image within a collection of images, a requirement of many real activities. In experiments with subjects, the collection of images was presented in three modes (1) 'slide show' RSVP mode; (2) concurrently and statically -- 'static mode'; and (3) a 'mixed' mode. Each mode employed the same display area and the same total presentation time, together regarded as primary resources available to the interaction designer. For each presentation mode, the outcome identified error profiles and subject preferences. Eye-gaze studies detected distinctive differences between the three presentation modes.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130674264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding complex 3D virtual models can be difficult, especially when the model has interior components that are not initially visible and is accompanied by ancillary text. We describe new techniques for the interactive exploration of 3D models. Specifically, in addition to traditional viewing operations, we present new text extrusion techniques combined with techniques that create an interactive explosion diagram. In our approach, scrollable text annotations associated with the various parts of the model can be revealed dynamically, either in part or in full, by moving the mouse cursor within annotation trigger areas. Strong visual connections between model parts and the associated text aid comprehension. Furthermore, the model parts can be separated to create interactive explosion diagrams: using a 3D probe, occluding objects can be interactively moved apart and then returned to their initial locations. Displayed annotations remain readable despite model manipulations. Hence, our techniques provide textual context within the spatial context of the 3D model.
{"title":"Integrating expanding annotations with a 3D explosion probe","authors":"Henry Sonnet, Sheelagh Carpendale, T. Strothotte","doi":"10.1145/989863.989871","DOIUrl":"https://doi.org/10.1145/989863.989871","url":null,"abstract":"Understanding complex 3D virtual models can be difficult, especially when the model has interior components not initially visible and ancillary text. We describe new techniques for the interactive exploration of 3D models. Specifically, in addition to traditional viewing operations, we present new text extrusion techniques combined with techniques that create an interactive explosion diagram. In our approach, scrollable text annotations that are associated with the various parts of the model can be revealed dynamically, either in part or in full, by moving the mouse cursor within annotation trigger areas. Strong visual connections between model parts and the associated text are included in order to aid comprehension. Furthermore, the model parts can be separated, creating interactive explosion diagrams. Using a 3D probe, occluding objects can be interactively moved apart and then returned to their initial locations. Displayed annotations are kept readable despite model manipulations. Hence, our techniques provide textual context within the spatial context of the 3D model.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134254483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the context of innovative Airborne Early Warning and Control (AEW&C) platform capabilities, we are building an environment that supports the generation of information tailored to operators' tasks. The challenge is to improve how information delivery to operators is managed, providing them with high-value information on their displays while avoiding noise and clutter. To this end, we enhance the operator's graphical interface with information delivery mechanisms that support the maintenance of situation awareness and improve efficiency by proactively delivering task-relevant information.
{"title":"Task-sensitive user interfaces: grounding information provision within the context of the user's activity","authors":"N. Colineau, Andrew Lampert, Cécile Paris","doi":"10.1145/989863.989899","DOIUrl":"https://doi.org/10.1145/989863.989899","url":null,"abstract":"In the context of innovative Airborne Early Warning and Control (AEW&C) platform capabilities, we are building an environment that can support the generation of information tailored to operators' tasks. The challenging issues here are to improve the methods for managing information delivery to the operators, and thus provide them with high-value information on their display whilst avoiding noise and clutter. To this end, we enhance the operator's graphical interface with information delivery mechanisms that support maintenance of situation awareness and improving efficiency. We do this by proactively delivering task-relevant information.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"376 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123497152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fishnet is a web browser that always displays web pages in their entirety, regardless of their size. It accomplishes this with a fisheye view, i.e. by showing a focus region at readable scale while spatially compressing page content above and below that region. Fishnet offers search-term highlighting and ensures that highlighted terms remain readable by using "popouts", allowing users to visually scan search results across the entire page without scrolling. The contribution of this paper is twofold. First, we present Fishnet as a novel way of viewing highlighted search results and discuss the design space. Second, we present a user study that helps practitioners determine which visualization technique (fisheye view, overview, or regular linear view) to pick for which type of visual search scenario.
{"title":"Fishnet, a fisheye web browser with search term popouts: a comparative evaluation with overview and linear view","authors":"Patrick Baudisch, Bongshin Lee, Libby Hanna","doi":"10.1145/989863.989883","DOIUrl":"https://doi.org/10.1145/989863.989883","url":null,"abstract":"Fishnet is a web browser that always displays web pages in their entirety, independent of their size. Fishnet accomplishes this by using a fisheye view, i.e. by showing a focus region at readable scale while spatially compressing page content above and below that region. Fishnet offers search term highlighting, and assures that those terms are readable by using \"popouts\". This allows users to visually scan search results within the entire page without scrolling.The scope of this paper is twofold. First, we present fishnet as a novel way of viewing the results of highlighted search and we discuss the design space. Second, we present a user study that helps practitioners determine which visualization technique--- fisheye view, overview, or regular linear view---to pick for which type of visual search scenario.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121017832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complex hypermedia structures can be difficult to author and maintain, especially when the usual hierarchical representation cannot capture important relations. We propose a graph-based direct-manipulation interface that uses multiple focus+context techniques to avoid display clutter and information overload. A semantic fisheye lens based on hierarchical clustering allows the user to work on high-level abstractions of the structure. Navigation through the resulting graph is animated to avoid loss of orientation, with a force-directed algorithm generating successive layouts. Multiple views can be generated over the same data, each with independent settings for filtering, clustering, and degree of zoom. While these techniques are all well known in the literature, it is their combination and application to hypermedia authoring that constitutes a powerful tool for the development of next-generation hyperspaces. A generic framework, CLOVER, and two specific applications for existing hypermedia systems have been implemented.
{"title":"A graph-based interface to complex hypermedia structure visualization","authors":"Manuel Freire, P. Rodríguez","doi":"10.1145/989863.989887","DOIUrl":"https://doi.org/10.1145/989863.989887","url":null,"abstract":"Complex hypermedia structures can be difficult to author and maintain, especially when the usual hierarchic representation cannot capture important relations. We propose a graph-based direct manipulation interface that uses multiple focus+context techniques to avoid display clutter and information overload. A semantical fisheye lens based on hierarchical clustering allows the user to work on high-level abstracts of the structure. Navigation through the resulting graph is animated in order to avoid loss of orientation, with a force-directed algorithm in charge of generating successive layouts. Multiple views can be generated over the same data, each with independent settings for filtering, clustering and degree of zoom.While these techniques are all well-known in the literature, it is their combination and application to the field of hypermedia authoring that constitutes a powerful tool for the development of next-generation hyperspaces.A generic framework, CLOVER, and two specific applications for existing hypermedia systems have been implemented.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"24 Suppl 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121216904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}