As the field of information visualization matures, the tools and ideas described in our research publications are reaching users. Reports of usability studies and controlled experiments are helpful for understanding the potential and limitations of our tools, but we need to consider other evaluation approaches that take into account the long, exploratory nature of users' tasks, the value of potential discoveries, and the benefits of overall awareness. We need better metrics and benchmark repositories to compare tools, and we should also seek reports of successful adoption and demonstrated utility.
C. Plaisant. "The challenge of information visualization evaluation." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989880
We describe the results of empirical investigations exploring whether moving graph diagrams improves the comprehension of their structure. The investigations involved subjects playing a game that required understanding the structure of a number of graphs; a game was chosen as the task to motivate subjects to explore the graphs. The results show that movement can be beneficial when there is node-node or node-edge occlusion in the graph diagram, but can have a detrimental effect when there is no occlusion, particularly if the diagram is small. We believe the positive result should generalise to other graph exploration tasks, and that graph movement is likely to be useful as an additional graph exploration tool.
J. Bovey, Florence Benoy, P. Rodgers. "Using games to investigate movement for graph comprehension." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989872
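The occlusion condition that distinguishes the two outcomes is easy to operationalise. As a minimal sketch — assuming circular nodes of equal radius, which the abstract does not specify — node-node occlusion reduces to a pairwise overlap test:

```python
import math

def nodes_occlude(a, b, radius):
    """Two equal-radius circular nodes occlude each other when their
    centres are closer than the sum of the radii."""
    (ax, ay), (bx, by) = a, b
    return math.hypot(ax - bx, ay - by) < 2 * radius

def diagram_has_occlusion(positions, radius):
    """Check every node pair; any overlap means the static diagram is
    occluded and (per the study) movement may help."""
    pts = list(positions)
    return any(
        nodes_occlude(pts[i], pts[j], radius)
        for i in range(len(pts))
        for j in range(i + 1, len(pts))
    )
```

A node-edge variant would test point-to-segment distance the same way.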
The abundance of data available nowadays fosters the need for tools and methodologies that help users extract significant information. Visual data mining moves in this direction, combining data mining algorithms and methodologies with information visualization techniques. The demand for visual and interactive analysis tools is particularly pressing in the Association Rules context, where the user often has to analyze hundreds of rules in order to grasp valuable knowledge. This paper presents a visual strategy that addresses this drawback by exploiting graph-based techniques and parallel coordinates to visualize the results of association rules mining algorithms. Combining the two approaches makes it possible both to get an overview of the association structure hidden in the data and to investigate a specific set of user-selected rules in greater depth.
D. Bruzzese, P. Buono. "Combining visual techniques for Association Rules exploration." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989930
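The rules being visualized carry the standard support and confidence measures, and the graph-based overview links antecedent items to consequent items. A minimal sketch of that underlying data structure (item names and thresholds are illustrative, not from the paper):

```python
from collections import defaultdict

# A rule "A, B -> C" as (antecedent, consequent, support, confidence).
rules = [
    ({"bread", "butter"}, {"milk"}, 0.12, 0.80),
    ({"bread"}, {"butter"}, 0.20, 0.65),
    ({"milk"}, {"bread"}, 0.15, 0.55),
]

def rule_graph(rules, min_confidence=0.0):
    """Adjacency list linking each antecedent item to each consequent
    item; edge weight is the rule's confidence. Filtering by a
    confidence threshold mirrors interactively selecting a rule subset
    for closer inspection."""
    graph = defaultdict(list)
    for antecedent, consequent, support, confidence in rules:
        if confidence < min_confidence:
            continue
        for a in sorted(antecedent):
            for c in sorted(consequent):
                graph[a].append((c, confidence))
    return dict(graph)
```

The parallel-coordinates view would then plot each surviving rule as a polyline across axes for support, confidence, and rule length.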
C. Ardito, M. De Marsico, R. Lanzilotti, S. Levialdi, T. Roselli, Veronica Rossano, Manuela Tersigni
The new challenge for designers and HCI researchers is to develop software tools for effective e-learning. Learner-Centered Design (LCD) provides guidelines for making new learning domains accessible in an educationally productive manner. The new "vehicle" for education raises a number of new issues. Effective e-learning systems should include sophisticated and advanced functions, yet their interface should hide this complexity, providing an easy and flexible interaction suited to capturing students' interest. In particular, personalization and integration of learning paths and communication media should be provided. It is first necessary to distinguish between attributes of platforms (containers) and of the educational modules a platform provides (contents). In both cases, it is hard to go deeply into pedagogical issues of the provided knowledge content. This work is a first step towards identifying specific usability attributes for e-learning systems, capturing the peculiar features of this kind of application. We report on a preliminary user study involving a group of e-students, observed during their interaction with an e-learning system in a real situation. We then propose to adapt the so-called SUE (Systematic Usability Evaluation) inspection to the e-learning domain, providing evaluation patterns able to guide inspectors' activities in the evaluation of an e-learning tool.
C. Ardito, M. De Marsico, R. Lanzilotti, S. Levialdi, T. Roselli, Veronica Rossano, Manuela Tersigni. "Usability of E-learning tools." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989873
This paper extends previous work on focus+context visualizations of tree-structured data, introducing an efficient, space-constrained, multi-focal tree layout algorithm ("TreeBlock") and techniques at both the system and interactive levels for dealing with scale. These contributions are realized in a new version of the Degree-Of-Interest Tree browser, supporting real-time interactive visualization and exploration of data sets containing on the order of a million nodes.
Jeffrey Heer, S. Card. "DOITrees revisited: scalable, space-constrained visualization of hierarchical data." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989941
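Degree-Of-Interest Trees build on Furnas's degree-of-interest formulation, DOI(x | focus) = API(x) − D(x, focus). A small sketch under the common assumptions that a-priori interest (API) is minus the node's depth and D is tree distance — the paper's multi-focal layout generalises beyond this:

```python
from collections import deque

def depths(tree, root):
    """Depth of every node in a parent -> children mapping."""
    d, queue = {root: 0}, deque([root])
    while queue:
        n = queue.popleft()
        for c in tree.get(n, []):
            d[c] = d[n] + 1
            queue.append(c)
    return d

def degree_of_interest(node, focus, tree, root):
    """Furnas-style DOI: a-priori interest (here, minus depth) minus
    the tree distance from the focus. High-DOI nodes get screen space;
    low-DOI subtrees are elided or compressed."""
    d = depths(tree, root)
    parents = {c: p for p, cs in tree.items() for c in cs}

    def root_path(n):
        chain = [n]
        while chain[-1] != root:
            chain.append(parents[chain[-1]])
        return chain

    # Tree distance via the deepest common ancestor of the two paths.
    common = set(root_path(node)) & set(root_path(focus))
    lca_depth = max(d[x] for x in common)
    dist = (d[node] - lca_depth) + (d[focus] - lca_depth)
    return -d[node] - dist
```

The layout then allocates space top-down, pruning subtrees whose DOI falls below a threshold — which is what keeps million-node data sets tractable.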
I present an approach to designing decision support systems. The approach is to dissect a decision from both a normative and a cognitive perspective, and then to design a diagram that helps bridge the gap between the math and the mind. The resulting diagram is ultimately implemented as a visual interface in a support system. I apply the approach to two prototypical problems in "Command and Control" and highlight two practical principles that were used to guide the interface designs. One principle is that the system's interface should be informative, i.e., it should show users the underlying reasons for algorithmic results. The other principle is that the system's interface should be interactive, i.e., it should let users see and set the inputs that affect outputs. I discuss how interfaces designed by these principles can help users understand system recommendations and overcome system limitations.
Kevin J. Burns. "Painting pictures to augment advice." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989921
G. Robertson, E. Horvitz, M. Czerwinski, Patrick Baudisch, D. Hutchings, B. Meyers, Daniel C. Robbins, Greg Smith
Our studies have shown that as displays become larger, users leave more windows open for easy multitasking. A larger number of windows, however, may increase the time that users spend arranging and switching between tasks. We present Scalable Fabric, a task management system designed to address problems with the proliferation of open windows on the PC desktop. Scalable Fabric couples window management with a flexible visual representation to provide a focus-plus-context solution to desktop complexity. Users interact with windows in a central focus region of the display in a normal manner, but when a user moves a window into the periphery, it shrinks in size, getting smaller as it nears the edge of the display. The window "minimize" action is redefined to return the window to its preferred location in the periphery, allowing windows to remain visible when not in use. Windows in the periphery may be grouped together into named tasks, and task switching is accomplished with a single mouse click. The spatial arrangement of tasks leverages human spatial memory to make task switching easier. We review the evolution of Scalable Fabric over three design iterations, including discussion of results from two user studies that were performed to compare the experience with Scalable Fabric to that of the Microsoft Windows XP TaskBar.
G. Robertson, E. Horvitz, M. Czerwinski, Patrick Baudisch, D. Hutchings, B. Meyers, Daniel C. Robbins, Greg Smith. "Scalable Fabric: flexible task management." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989874
Eyesight and speech are two channels that humans naturally use to communicate with each other. However, existing eye-tracking and speech-recognition techniques are still far from perfect. This work explored how to integrate two (or more) error-prone sources of information about users' selection of objects in a visual interface. The implemented system integrated a commercial speech recognition system with gaze tracking in order to improve recognition results. In addition, we employed a new measure of the rate of mutual disambiguation for the multimodal system and conducted an experimental evaluation.
Qiaohui Zhang, K. Go, A. Imamiya, Xiaoyang Mao. "Robust object-identification from inaccurate recognition-based inputs." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989905
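One simple reading of fusing two error-prone recognisers, and of a mutual-disambiguation rate, can be sketched as follows. The paper's exact fusion rule and measure may differ; the data and the fallback behaviour here are invented for illustration:

```python
def fuse(speech_nbest, gaze_candidates):
    """Pick the highest-ranked speech hypothesis that is also plausible
    under gaze; fall back to the speech top choice if none agree."""
    for hypothesis, score in speech_nbest:
        if hypothesis in gaze_candidates:
            return hypothesis
    return speech_nbest[0][0]

def mutual_disambiguation_rate(trials):
    """Fraction of trials in which fusion corrected an error the speech
    recogniser alone would have made. Each trial is
    (speech n-best list, gaze candidate set, ground truth)."""
    corrected = 0
    for speech_nbest, gaze_candidates, truth in trials:
        alone = speech_nbest[0][0]
        fused = fuse(speech_nbest, gaze_candidates)
        if alone != truth and fused == truth:
            corrected += 1
    return corrected / len(trials)
```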
In this paper we present a novel integrated 3D editing environment that combines recent advances in various fields of computer graphics, such as shape modelling, video-based human-computer interaction, force feedback, and VR fine-manipulation techniques. This integration allows us to create a compelling new form of 3D object creation and manipulation that preserves the metaphors designers, artists, and painters have become accustomed to in their day-to-day practice. Our system comprises a novel augmented reality workbench and enables users to simultaneously perform natural fine pose determination of the edited object with one hand and model or paint the object with the other. The hardware setup features a non-intrusive, video-based hand-tracking subsystem, see-through glasses, and a six-degree-of-freedom 3D input device. The possibilities delivered by our AR workbench enable us to implement traditional and recent editing metaphors in an immersive, fully three-dimensional environment, as well as to develop novel approaches to 3D object interaction.
G. Bendels, F. Kahlesz, R. Klein. "Towards the next generation of 3D content creation." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989912
We present Collaborative Annotations on Visualizations (CAV), a system for annotating visual data in remote and collocated environments. Our system consists of a network framework and a client application built for tablet PCs. CAV is designed to support the collection and sharing of annotations through mobile devices connected to visualization servers. We have developed a working system prototype based on tablet PCs that supports digital ink, voice, and text annotation, and illustrates our approach in a variety of application domains, including biology, chemistry, and telemedicine. We have created an XML-based open standard that supports access to a variety of client devices by publishing visualizations (data and annotations) as streams of images. CAV's primary goal is to enhance scientific discovery by supporting collaboration in the context of data visualizations.
Sean E. Ellis, D. Groth. "A collaborative annotation system for data visualization." Proceedings of the working conference on Advanced visual interfaces, 2004-05-25. DOI: 10.1145/989863.989938
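The abstract does not publish the XML schema, but a hypothetical sketch of what one annotation message in such a stream-oriented protocol might look like (all element and attribute names are invented):

```python
import xml.etree.ElementTree as ET

def annotation_message(session, author, kind, payload):
    """Serialise one annotation event as it might travel between a
    tablet client and a visualization server. The schema here is an
    illustrative guess, not CAV's published standard."""
    msg = ET.Element("annotation", session=session, author=author, kind=kind)
    ET.SubElement(msg, "payload").text = payload
    return ET.tostring(msg, encoding="unicode")
```

A server would broadcast such messages to every client viewing the same visualization stream, alongside the image frames themselves.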