Summarizing large multidimensional datasets is a challenging task, often requiring extensive investigation by a user to identify overall trends and important exceptions to them. While many visualization tools help a user produce a single summary of the data at a time, they require the user to explore the dataset manually. Our idea is to have the computer perform an exhaustive search and inform the user about where further investigation is warranted. Our algorithm takes a large, multidimensional dataset as input, along with a specification of the user's goals, and produces a concise summary that can be clearly visualized in bar graphs or line graphs. We demonstrate our techniques in a sample prototype for summarizing information stored in spreadsheet databases.
{"title":"Interactive data summarization: an example application","authors":"N. Lesh, M. Mitzenmacher","doi":"10.1145/989863.989892","DOIUrl":"https://doi.org/10.1145/989863.989892","url":null,"abstract":"Summarizing large multidimensional datasets is a challenging task, often requiring extensive investigation by a user to identify overall trends and important exceptions to them. While many visualization tools help a user produce a single summary of the data at a time, they require the user to explore the dataset manually. Our idea is to have the computer perform an exhaustive search and inform the user about where further investigation is warranted. Our algorithm takes a large, multidimensional dataset as input, along with a specification of the user's goals, and produces a concise summary that can be clearly visualized in bar graphs or linegraphs. We demonstrate our techniques in a sample prototype for summarizing information stored in spreadsheet databases.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128314994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
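The abstract above does not give implementation details, but the core idea — exhaustively scanning every dimension of a dataset for overall trends and flagging exceptions worth further investigation — can be sketched roughly as follows. The dataset layout, the per-dimension mean as the "trend", and the deviation threshold are all illustrative assumptions, not the authors' actual algorithm:

```python
# Illustrative sketch only: exhaustively scan each dimension of a
# multidimensional dataset, report the trend (mean) per dimension value,
# and flag values whose mean deviates sharply from the global mean.
from collections import defaultdict

def summarize(rows, dimensions, measure):
    """rows: list of dicts; dimensions: keys to group by; measure: numeric key."""
    overall = sum(r[measure] for r in rows) / len(rows)
    summary = {}
    for dim in dimensions:
        groups = defaultdict(list)
        for r in rows:
            groups[r[dim]].append(r[measure])
        means = {v: sum(xs) / len(xs) for v, xs in groups.items()}
        # Exceptions: group means far from the overall mean (threshold assumed).
        exceptions = {v: m for v, m in means.items()
                      if abs(m - overall) > 0.5 * overall}
        summary[dim] = {"means": means, "exceptions": exceptions}
    return summary

rows = [
    {"region": "north", "year": 2003, "sales": 10},
    {"region": "north", "year": 2004, "sales": 12},
    {"region": "south", "year": 2003, "sales": 40},
    {"region": "south", "year": 2004, "sales": 38},
]
result = summarize(rows, ["region", "year"], "sales")
```

Here the "region" dimension would surface as interesting (both values deviate strongly from the global mean), while "year" would not — exactly the kind of per-dimension verdict that lends itself to a bar graph or line graph per the abstract.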
We describe a graphical toolkit for debugging timing problems in hardware design. The toolkit was developed as a part of the graphical user interface for a static timing analysis tool PrimeTime from Synopsys Inc. A static timing analysis tool identifies critical logical paths with timing violations in a circuit design without simulating the design, thereby dramatically shortening the time required for timing closure. The toolkit's visual organization of multiple graphical views of timing data helps the user manage the complexity of the data and the debugging process.
{"title":"Task oriented visual interface for debugging timing problems in hardware design","authors":"Donna Nakano, Erric Solomon","doi":"10.1145/989863.989932","DOIUrl":"https://doi.org/10.1145/989863.989932","url":null,"abstract":"We describe a graphical toolkit for debugging timing problems in hardware design. The toolkit was developed as a part of the graphical user interface for a static timing analysis tool PrimeTime from Synopsys Inc. A static timing analysis tool identifies critical logical paths with timing violations in a circuit design without simulating the design, thereby dramatically shortening the time required for timing closure. The toolkit's visual organization of multiple graphical views of timing data helps the user manage the complexity of the data and the debugging process.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128169972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Satellite-positioning applications, location-based information services and communication infrastructures, integrated in Infomobility systems, have had a great influence on the development of new applications in various fields. INStANT, "INfomobility Services for SafeTy-critical Application on Land and Sea based on the use of iNtegrated GNSS Terminals for needs of OLYMPIC cities", is a Pilot Project co-funded by the European Commission and the GALILEO Joint Undertaking; it aims to provide a scalable, dynamically re-configurable system for Infomobility services. The main innovation of the project is the design of an info-mobile architecture that allows scalability and dynamic modes of operation to achieve robustness, service continuity and usability in specific contexts (e.g. Emergency Services). A demanding task of the project was the design of an innovative platform for the user terminal, based on the integration of advanced software components capable of geo-positioning, mobile communications, visualization and mapping. Modeling dynamic user interfaces, based on prescriptions written in XUL, the XML-based User Interface Language, is the target of our study. The resulting solution, capable of running on mobile devices such as Pocket PCs and Tablet PCs, offers flexibility, dynamic response to context and processes, usability and on-demand features.
{"title":"Extensible interfaces for mobile devices in an advanced platform for infomobility services","authors":"L. Mazzucchelli, M. Pace","doi":"10.1145/989863.989949","DOIUrl":"https://doi.org/10.1145/989863.989949","url":null,"abstract":"Satellite position based applications, location based information services and communication infrastructures, integrated in Infomobility systems, have had a great influence on the development of new applications in various fields. INStANT, \"INfomobility Services for SafeTy-critical Application on Land and Sea based on the use of iNtegrated GNSS Terminals for needs of OLYMPIC cities\", is a Pilot Project co-funded by European Commission and GALILEO Joint Undertaking; it aims to provide a scalable and dynamic re-configurable system for Infomobility services. The main innovations of the project are the design of an info-mobile architecture that allows scalability and dynamic mode of operations to achieve robustness, service continuity and usability in specific contexts (e.g. Emergency Services). A demanding task of the project was the design of an innovative platform for the user terminal, based on the integration of advanced software components capable of geo-positioning, mobile communications, visualization and mapping. Modeling dynamic user interfaces, based on prescriptions written in XUL - \"XML-based User Interface Language\", is the target of our study. The resulting solution, capable to run on mobile devices such as Pocket and Tablet PCs, shows features like flexibility, dynamic response to context and processes, usability and on-demand features.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126771522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduced detail-on-demand video as a simple type of hypervideo that allows users to watch short video segments and to follow hyperlinks to see additional detail. Such video lets users quickly access desired information without having to view the entire contents linearly. A challenge for presenting this type of video is to provide users with the appropriate affordances to understand the hypervideo structure and to navigate it effectively. Another challenge is to give authors tools that allow them to create good detail-on-demand video. Guided by user feedback, we iterated designs for a detail-on-demand video player. We also conducted two user studies to gain insight into people's understanding of hypervideo and to improve the user interface. We found that the interface design was tightly coupled to understanding hypervideo structure and that different designs greatly affected what parts of the video people accessed. The studies also suggested new guidelines for hypervideo authoring.
{"title":"Designing affordances for the navigation of detail-on-demand hypervideo","authors":"Andreas Girgensohn, L. Wilcox, F. Shipman, S. Bly","doi":"10.1145/989863.989913","DOIUrl":"https://doi.org/10.1145/989863.989913","url":null,"abstract":"We introduced detail-on-demand video as a simple type of hypervideo that allows users to watch short video segments and to follow hyperlinks to see additional detail. Such video lets users quickly access desired information without having to view the entire contents linearly. A challenge for presenting this type of video is to provide users with the appropriate affordances to understand the hypervideo structure and to navigate it effectively. Another challenge is to give authors tools that allow them to create good detail-on-demand video. Guided by user feedback, we iterated designs for a detail-on-demand video player. We also conducted two user studies to gain insight into people's understanding of hypervideo and to improve the user interface. We found that the interface design was tightly coupled to understanding hypervideo structure and that different designs greatly affected what parts of the video people accessed. The studies also suggested new guidelines for hypervideo authoring.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125731717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article describes ICON (Input Configurator), an input management system that enables interactive applications to achieve a high level of input adaptability. We define input adaptability as the ability of an interactive application to exploit alternative input devices effectively and to offer users a way of adapting input interaction to suit their needs. We describe several examples of interaction techniques that are hard or impossible to implement with regular GUI toolkits, yet were implemented using ICON with little or no support from the application.
{"title":"The Input Configurator toolkit: towards high input adaptability in interactive applications","authors":"Pierre Dragicevic, Jean-Daniel Fekete","doi":"10.1145/989863.989904","DOIUrl":"https://doi.org/10.1145/989863.989904","url":null,"abstract":"This article describes ICON (Input Configurator), an input management system that enables interactive applications to achieve a high level of input adaptability. We define input adaptability as the ability of an interactive application to exploit alternative input devices effectively and offer users a way of adapting input interaction to suit their needs. We describe several examples of interaction techniques implemented using ICON with little or no support from applications that are hard or impossible to implement using regular GUI toolkits.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115636122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper discusses the user interface design for the NITE WorkBench for Windows (NWB), which enables annotation and analysis of fully natural interactive communicative behaviour between humans and between humans and systems. The system enables users to perceive voice and video data and control their presentation when performing multi-level, cross-level and cross-modality annotation, information visualisation for data coding and analysis, information retrieval, and data exploitation.
{"title":"A visual interface for a multimodal interactivity annotation tool: design issues and implementation solutions","authors":"M. Kolodnytsky, N. Bernsen, L. Dybkjær","doi":"10.1145/989863.989937","DOIUrl":"https://doi.org/10.1145/989863.989937","url":null,"abstract":"This paper discusses the user interface design for the NITE WorkBench for Windows (NWB) which enables annotation and analysis of full natural interactive communicative behaviour between humans and between humans and systems. The system enables users to perceive voice and video data and control its presentation when performing multi-level, cross-level and cross-modality annotation, information visualisation for data coding and analysis, information retrieval, and data exploitation.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122462418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
N. Bernsen, Marcela Charfuelan, A. Corradini, L. Dybkjær, Thomas Hansen, Svend Kiilerich, M. Kolodnytsky, Dmytro Kupkin, M. Mehta
This paper describes the implemented first prototype of a domain-oriented, conversational edutainment system which allows users to interact via speech and 2D gesture input with life-like animated fairy-tale author Hans Christian Andersen.
{"title":"First prototype of conversational H.C. Andersen","authors":"N. Bernsen, Marcela Charfuelan, A. Corradini, L. Dybkjær, Thomas Hansen, Svend Kiilerich, M. Kolodnytsky, Dmytro Kupkin, M. Mehta","doi":"10.1145/989863.989951","DOIUrl":"https://doi.org/10.1145/989863.989951","url":null,"abstract":"This paper describes the implemented first prototype of a domain-oriented, conversational edutainment system which allows users to interact via speech and 2D gesture input with life-like animated fairy-tale author Hans Christian Andersen.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128111145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer Science introductory courses are known to be difficult for students. Kaasboll [1] reports that drop-out or failure rates vary from 25% to 80% worldwide. The explanation is related to the very nature of programming: "programming is having a task done by a computer" [2]. We can notice three internal difficulties in this definition:
• The task itself: how do we define it, and specify it?
• The abstraction process: in order to "have it done by" the computer, students need to create a static model covering each task behavior.
• The "cognitive gap": it is difficult for novice programmers to model the computer, and its "mindset", which is required to express the task model in a computer-readable way. The poor usability of programming languages increases this difficulty.
The lack of interactivity in the editing-running-debugging loop is often cited as an important aggravating factor for these difficulties. In the mid-seventies, Smith [3] introduced another programming paradigm with Pygmalion: Programming by Examples (PbE), where algorithms are not described abstractly, but are demonstrated through concrete examples. This approach has several advantages for novices. It allows them to work concretely, and to express the solution in their own way of thinking, instead of having to embrace a computer-centered mindset. The programming process becomes interactive, and as PbE languages are "animated" languages, no translation from the dynamic process to any static representation is required. In this paper we investigate both the novice programmer and existing PbE languages, to show how visual and example-based paradigms can be used to improve the teaching of programming. Based on this study, we present elements of a new Example-based Programming environment, called Melba, designed to help novice programmers learn to program.
{"title":"Example-based programming: a pertinent visual approach for learning to program","authors":"Nicolas Guibert, P. Girard, L. Guittet","doi":"10.1145/989863.989924","DOIUrl":"https://doi.org/10.1145/989863.989924","url":null,"abstract":"Computer Science introductory courses are known to be difficult for students. Kaasboll [1] reports that drop-out or failure rates vary from 25 to 80 % world-wide. The explanation is related to the very nature of programming: \"programming is having a task done by a computer\" [2]. We can notice three internal difficulties in this definition:• The task itself. How do we define it, and specify it?• The abstraction process. In order to \"have it done by...\" students need to create a static model covering each task behavior.• The \"cognitive gap\". It is difficult for novice programmers to model the computer, and its \"mindset\", which is required to express the task model in a computer-readable way. The bad usability of programming languages increases this difficulty.The lack of interactivity in the editing-running-debugging loop is often pointed as an important aggravating factor for these difficulties. In the mid-seventies, Smith [3] introduced with Pygmalion another programming paradigm: Programming by Examples, where algorithms are not described abstractly, but are demonstrated through concrete examples. This approach involves several advantages for novices. It allows them to work concretely, and to express the solution in their own way of thinking, instead of having to embrace a computer-centered mindset. The programming process becomes interactive, and as PbE languages are \"animated\" languages, no translation from the dynamic process to any static representation is required.In this paper we investigate both the novice programmer and existing PbE languages, to show how visual and example-based paradigms can be used to improve programming teaching. We give some elements of a new Example-based Programming environment, called Melba, based on this study, which has been designed to help novice programmers learning to program.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124810329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
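The Melba environment itself is not described in code in this abstract, but the core idea of Programming by Examples — inferring a program from concrete input/output demonstrations instead of an abstract description — can be sketched as a search over a small space of candidate operations. The candidate set below is an illustrative assumption, not anything from the paper:

```python
# Illustrative Programming-by-Examples sketch (not Melba itself): given
# concrete (input, output) examples demonstrated by the user, search a
# small space of candidate unary programs and return the first one that
# is consistent with every example.
CANDIDATES = {
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
    "increment": lambda x: x + 1,
}

def infer_program(examples):
    """examples: list of (input, expected_output) pairs."""
    for name, fn in CANDIDATES.items():
        if all(fn(i) == o for i, o in examples):
            return name
    return None  # no candidate explains all the demonstrations

# The novice 'demonstrates' the task with examples instead of writing code:
program = infer_program([(2, 4), (3, 9)])  # only 'square' fits both
```

The point of the sketch is the interaction style the abstract describes: the learner supplies concrete behaviours, and the system, not the learner, bridges the "cognitive gap" to an executable static model.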
OPAL (Online PArtner Lens) is an application designed to match project requirements with suitable teams and individuals; as part of its matching process it features an evaluation mechanism designed to elicit measures of trust between potential partners. We describe a matrix-style visualisation that displays these hierarchically structured assessments between sets of OPAL users to allow them to select potential partners. The main feature of the matrix visualisation is that it lets users assess the context of a specific assessment: when the user examines information in the matrix, the visualisation not only reveals simple related statistics for the two users concerned, but also overlays summaries of related assessor and candidate evaluations as compact, ordered 'value bars'. This enables the user to better decide whether a given assessment is in line with what would be expected from an assessor's and candidate's history, or whether it indicates a specifically localised interplay between the two users. Other features include a simple focus+context effect that can reveal the tree-like structure and details of assessments, and filtering of assessments by their position in the matrix or by particular assessment attributes.
{"title":"Exploring and examining assessment data via a matrix visualisation","authors":"Martin Graham, J. Kennedy","doi":"10.1145/989863.989886","DOIUrl":"https://doi.org/10.1145/989863.989886","url":null,"abstract":"OPAL (Online PArtner Lens) is an application designed to match project requirements with suitable teams and individuals, and as part of its matching process features an evaluation mechanism designed to elicit measures of trust between potential partners. We describe a matrix-style visualisation that displays these hierarchically structured assessments between sets of OPAL users to allow them to select potential partners. The main feature of the matrix visualisation is the ability for users to assess the context of a specific assessment as the visualisation not only reveals simple related statistics for the two users concerned, but also overlays summaries of related assessor and candidate evaluations as compact and ordered 'value bars' when the user examines information in the matrix. This enables the user to better decide whether a given assessment is in line with what would be expected from an assessor's and candidate's history, or whether it indicates a specifically localised interplay between the two users. Other features include a simple focus+context effect that can reveal the tree-like structure and details of assessments, and filtering assessments by their position in the matrix or by particular assessment attributes.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134407995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CommonGIS is an evolving software system for exploratory analysis of spatial data. It includes a multitude of tools that apply to different data types and help an analyst find answers to a variety of questions. CommonGIS has recently been extended to support exploration of spatio-temporal data, i.e. temporally variant data referring to spatial locations. The set of new tools includes animated thematic maps, map series, value flow maps, time graphs, and dynamic transformations of the data. We demonstrate the use of the new tools by considering different analytical questions arising in the course of analysing thematic spatio-temporal data.
{"title":"Interactive visual tools to explore spatio-temporal variation","authors":"N. Andrienko, G. Andrienko","doi":"10.1145/989863.989940","DOIUrl":"https://doi.org/10.1145/989863.989940","url":null,"abstract":"CommonGIS is a developing software system for exploratory analysis of spatial data. It includes a multitude of tools applicable to different data types and helping an analyst to find answers to a variety of questions. CommonGIS has been recently extended to support exploration of spatio-temporal data, i.e. temporally variant data referring to spatial locations. The set of new tools includes animated thematic maps, map series, value flow maps, time graphs, and dynamic transformations of the data. We demonstrate the use of the new tools by considering different analytical questions arising in the course of analysis of thematic spatio-temporal data.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132667636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}