Interactive data summarization: an example application
N. Lesh, M. Mitzenmacher
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989892

Summarizing large multidimensional datasets is a challenging task, often requiring extensive investigation by a user to identify overall trends and the important exceptions to them. While many visualization tools help a user produce a single summary of the data at a time, they require the user to explore the dataset manually. Our idea is to have the computer perform an exhaustive search and inform the user where further investigation is warranted. Our algorithm takes a large, multidimensional dataset as input, along with a specification of the user's goals, and produces a concise summary that can be clearly visualized in bar graphs or line graphs. We demonstrate our techniques in a sample prototype for summarizing information stored in spreadsheet databases.
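A minimal sketch of the idea above, under invented assumptions: scan every categorical dimension of a small table, summarize a numeric measure per group, and report the groups that deviate sharply from the overall trend. The function name, the sample data, and the relative-deviation threshold are all illustrative, not the paper's actual algorithm.

```python
from statistics import mean

def summarize_exceptions(rows, dimensions, measure, threshold=0.4):
    overall = mean(r[measure] for r in rows)
    report = {}
    for dim in dimensions:                      # exhaustive scan of dimensions
        groups = {}
        for r in rows:
            groups.setdefault(r[dim], []).append(r[measure])
        # flag groups whose mean deviates from the overall mean by more
        # than `threshold` (relative) -- candidates for closer inspection
        exceptions = {g: mean(vs) for g, vs in groups.items()
                      if abs(mean(vs) - overall) > threshold * abs(overall)}
        report[dim] = {"overall": overall, "exceptions": exceptions}
    return report

rows = [{"region": "N", "quarter": "Q1", "sales": 100},
        {"region": "N", "quarter": "Q2", "sales": 110},
        {"region": "S", "quarter": "Q1", "sales": 30},
        {"region": "S", "quarter": "Q2", "sales": 40}]
report = summarize_exceptions(rows, ["region", "quarter"], "sales")
```

On this toy table the scan flags both regions as exceptions (their means sit far from the overall mean) while the quarters track the trend, which is exactly the kind of "trend plus exceptions" summary a user would otherwise find by hand.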
Task oriented visual interface for debugging timing problems in hardware design
Donna Nakano, Erric Solomon
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989932

We describe a graphical toolkit for debugging timing problems in hardware design. The toolkit was developed as part of the graphical user interface for PrimeTime, a static timing analysis tool from Synopsys, Inc. A static timing analysis tool identifies critical logic paths with timing violations in a circuit design without simulating the design, thereby dramatically shortening the time required for timing closure. The toolkit's visual organization of multiple graphical views of timing data helps the user manage the complexity of the data and of the debugging process.
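As background for the abstract's central claim, the core of static timing analysis can be sketched as a longest-path computation over the circuit's gate graph, found by relaxation rather than by simulating input vectors. The tiny netlist and unit delays below are invented for illustration; real tools like PrimeTime also model clocks, setup/hold constraints, and interconnect delay.

```python
def critical_path(gates, delays):
    """gates maps each gate to its fanin list; delays gives per-gate delay."""
    arrival, best_pred = {}, {}

    def arr(g):
        if g not in arrival:
            fanin = gates.get(g, [])
            if not fanin:                          # primary input
                arrival[g], best_pred[g] = delays[g], None
            else:                                  # latest-arriving fanin dominates
                p = max(fanin, key=arr)
                arrival[g] = arr(p) + delays[g]
                best_pred[g] = p
        return arrival[g]

    end = max(gates, key=arr)                      # latest endpoint is critical
    path, g = [], end
    while g is not None:                           # walk predecessors back
        path.append(g)
        g = best_pred[g]
    return path[::-1], arrival[end]

gates = {"in1": [], "in2": [], "and1": ["in1", "in2"],
         "or1": ["and1", "in2"], "out": ["or1"]}
delays = {"in1": 0, "in2": 0, "and1": 2, "or1": 1, "out": 1}
path, delay = critical_path(gates, delays)
```

Here the path through `and1` and `or1` dominates with a total delay of 4; a timing debugger's job is then to present many such paths, and the slack on each, in navigable views.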
Extensible interfaces for mobile devices in an advanced platform for infomobility services
L. Mazzucchelli, M. Pace
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989949

Satellite-positioning applications, location-based information services, and communication infrastructures, integrated in infomobility systems, have had a great influence on the development of new applications in various fields. INStANT ("INfomobility Services for SafeTy-critical Application on Land and Sea based on the use of iNtegrated GNSS Terminals for needs of OLYMPIC cities") is a pilot project co-funded by the European Commission and the GALILEO Joint Undertaking; it aims to provide a scalable, dynamically reconfigurable system for infomobility services. The main innovation of the project is the design of an infomobility architecture whose scalability and dynamic modes of operation achieve robustness, service continuity, and usability in specific contexts (e.g., emergency services). A demanding task of the project was the design of an innovative platform for the user terminal, based on the integration of advanced software components for geo-positioning, mobile communications, visualization, and mapping. The target of our study is modeling dynamic user interfaces based on prescriptions written in XUL, the XML-based User Interface Language. The resulting solution, which runs on mobile devices such as Pocket and Tablet PCs, offers flexibility, dynamic response to context and processes, usability, and on-demand features.
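To make the XUL-based approach concrete, here is a simplified illustration of building a UI dynamically from an XUL-style XML prescription. Real XUL has a much richer vocabulary and is rendered by a dedicated engine; the two-element snippet, the attribute choices, and the `Widget` class below are invented, not the project's actual platform.

```python
import xml.etree.ElementTree as ET

XUL = """
<window title="demo">
  <button label="Locate"/>
  <button label="Send alert"/>
</window>
"""

class Widget:
    def __init__(self, kind, attrs, children):
        self.kind, self.attrs, self.children = kind, attrs, children

def build(node):
    # recursively turn each XML element into an abstract widget object;
    # a real renderer would map kinds to native controls at this point
    return Widget(node.tag, dict(node.attrib), [build(c) for c in node])

root = build(ET.fromstring(XUL))
labels = [child.attrs["label"] for child in root.children]
```

Because the interface is data, not code, a terminal can swap prescriptions at runtime, which is one way to obtain the dynamic, context-driven behavior the abstract describes.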
Designing affordances for the navigation of detail-on-demand hypervideo
Andreas Girgensohn, L. Wilcox, F. Shipman, S. Bly
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989913

We introduced detail-on-demand video as a simple type of hypervideo that allows users to watch short video segments and to follow hyperlinks to see additional detail. Such video lets users quickly access desired information without having to view the entire contents linearly. A challenge for presenting this type of video is to provide users with the appropriate affordances to understand the hypervideo structure and to navigate it effectively. Another challenge is to give authors tools that allow them to create good detail-on-demand video. Guided by user feedback, we iterated designs for a detail-on-demand video player. We also conducted two user studies to gain insight into people's understanding of hypervideo and to improve the user interface. We found that the interface design was tightly coupled to understanding hypervideo structure and that different designs greatly affected what parts of the video people accessed. The studies also suggested new guidelines for hypervideo authoring.
The Input Configurator toolkit: towards high input adaptability in interactive applications
Pierre Dragicevic, Jean-Daniel Fekete
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989904

This article describes ICON (Input Configurator), an input management system that enables interactive applications to achieve a high level of input adaptability. We define input adaptability as the ability of an interactive application to exploit alternative input devices effectively and to offer users a way of adapting input interaction to suit their needs. We describe several interaction techniques that are hard or impossible to implement with regular GUI toolkits but were implemented with ICON with little or no support from the applications themselves.
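A loose illustration (emphatically not ICON's actual API) of the input-configuration idea: device events flow through small interchangeable adapter modules into application actions, so an action can be rebound to a different device or transformation without touching application code. Every class and name here is invented.

```python
class Adapter:
    """Transforms each incoming event and forwards it downstream."""
    def __init__(self, transform):
        self.transform = transform
        self.sink = None

    def connect(self, sink):
        self.sink = sink
        return sink

    def send(self, event):
        out = self.transform(event)
        if self.sink is not None:
            self.sink.send(out)
        return out

class Action:
    """An application-side endpoint that records what it receives."""
    def __init__(self):
        self.received = []

    def send(self, event):
        self.received.append(event)

# map mouse-wheel deltas to zoom factors; the constants are arbitrary
wheel_to_zoom = Adapter(lambda delta: 1.0 + 0.1 * delta)
zoom = Action()
wheel_to_zoom.connect(zoom)
wheel_to_zoom.send(2)   # wheel up by 2 notches
# rebinding: the same zoom Action could instead be fed by, say, a
# keyboard adapter, with no change to the application itself
```

The point of such a dataflow is that adaptability lives in the configuration (which adapters are wired to which actions), not in the application, which only ever sees `Action.send`.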
A visual interface for a multimodal interactivity annotation tool: design issues and implementation solutions
M. Kolodnytsky, N. Bernsen, L. Dybkjær
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989937

This paper discusses the user interface design for the NITE WorkBench for Windows (NWB), which enables annotation and analysis of fully natural interactive communicative behaviour between humans and between humans and systems. The system enables users to perceive voice and video data and control its presentation when performing multi-level, cross-level, and cross-modality annotation, information visualisation for data coding and analysis, information retrieval, and data exploitation.
First prototype of conversational H.C. Andersen
N. Bernsen, Marcela Charfuelan, A. Corradini, L. Dybkjær, Thomas Hansen, Svend Kiilerich, M. Kolodnytsky, Dmytro Kupkin, M. Mehta
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989951

This paper describes the first implemented prototype of a domain-oriented, conversational edutainment system that allows users to interact via speech and 2D gesture input with the life-like, animated fairy-tale author Hans Christian Andersen.
Sim-U-Sketch: a sketch-based interface for SimuLink
L. Kara, T. Stahovich
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989923

Sim-U-Sketch is an experimental sketch-based interface we developed for Matlab®'s Simulink® software package. With this tool, users can construct functional Simulink models simply by drawing sketches on a computer screen. To support iterative design, Sim-U-Sketch allows users to interact with their sketches in real time to modify existing objects and add new ones. The system is equipped with a domain-independent, trainable symbol recognizer that can learn new symbols from single prototype examples. This makes our system easily extensible and customizable to new domains and unique drawing styles.
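A minimal nearest-prototype sketch of the kind of trainable recognizer the abstract describes: each symbol class is registered from a single example stroke, and an unknown stroke is matched to the closest prototype after resampling and normalization. All function and class names are invented, and the real system's features surely differ.

```python
import math

def resample(points, n=32):
    # re-sample the stroke to n points, interpolating by index position
    # (a simplification of arc-length resampling)
    out = []
    for k in range(n):
        t = k * (len(points) - 1) / (n - 1)
        i, f = int(t), t - int(t)
        x0, y0 = points[i]
        x1, y1 = points[min(i + 1, len(points) - 1)]
        out.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return out

def normalize(points, n=32):
    # translate to the centroid and scale to a unit box so position
    # and size do not affect matching
    pts = resample(points, n)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    pts = [(x - cx, y - cy) for x, y in pts]
    scale = max(max(abs(x), abs(y)) for x, y in pts) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

class PrototypeRecognizer:
    def __init__(self):
        self.prototypes = {}

    def train(self, label, stroke):
        # a single example is enough to register a new symbol class
        self.prototypes[label] = normalize(stroke)

    def recognize(self, stroke):
        pts = normalize(stroke)
        return min(self.prototypes,
                   key=lambda lbl: sum(math.dist(a, b)
                                       for a, b in zip(pts, self.prototypes[lbl])))

rec = PrototypeRecognizer()
rec.train("line", [(0, 0), (1, 1)])
rec.train("vee", [(0, 1), (0.5, 0), (1, 1)])
guess = rec.recognize([(0.05, 0.0), (1.0, 0.95)])  # a slightly noisy diagonal
```

Because adding a class is just storing one normalized stroke, this style of recognizer is trivially extensible to new symbols and drawing styles, which is the property the abstract emphasizes.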
Visualization of music performance as an aid to listener's comprehension
Rumi Hiraga, N. Matsuda
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989878

We present a new method for visualizing musical expression, focusing on its three major elements: tempo change, dynamics change, and articulation. We represent tempo change as a horizontal interval delimited by vertical lines, while dynamics change and articulation within the interval are represented by the height and width of a bar, respectively. We then group local expressions into several clusters by k-means clustering on the values of these elements. The resulting groups reflect the emotional expression of a performance as shaped by its rhythmic and melodic structure, and each group determines the gray scale of its graphical components. We ran a pilot experiment to test the effectiveness of our method using two matching tasks and a questionnaire. In the first task we used the same section of music played in two different interpretations, while in the second task two different sections of a performance were used. The results seem to support the present approach, although there is still room for improvement in reflecting the subtleties of performance.
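An illustrative sketch of the clustering step described above: each local expression is a (tempo change, dynamics change, articulation) triple, grouped by k-means. The feature values, the choice of k, and the deterministic first-k seeding are invented for this example, not taken from the paper.

```python
from statistics import mean

def kmeans(points, k, iters=20):
    dim = len(points[0])
    centroids = list(points[:k])            # deterministic seed: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # assignment step: each point joins its nearest centroid
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # update step: each centroid moves to its cluster's mean
        centroids = [tuple(mean(q[d] for q in cl) for d in range(dim))
                     if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

expressions = [(0.10, 0.80, 0.90), (0.12, 0.75, 0.85),   # loud, legato notes
               (0.50, 0.20, 0.30), (0.55, 0.25, 0.35)]   # quiet, detached notes
centroids, clusters = kmeans(expressions, k=2)
```

On this toy data the two expressive characters separate cleanly into two clusters; in the visualization, each cluster would then be assigned its own gray level.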
Dealing with geographic continuous fields: the way to a visual GIS environment
R. Laurini, L. Paolino, M. Sebillo, G. Tortora, G. Vitiello
Proceedings of the working conference on Advanced visual interfaces, May 25, 2004. doi:10.1145/989863.989920

Recently, much attention has been devoted to the management of continuous fields, which describe geographic phenomena such as temperature, electromagnetism, and pressure. While discrete objects are distinguished by their dimensions and can be associated with points, lines, or areas, continuous phenomena are measurable at any point of their domain and are distinguished by what varies and how smoothly. Thus, when dealing with continuous fields, a basic requirement is the users' ability to capture features of a scenario by selecting an area of interest and handling the phenomena involved. The aim of our research is to provide GIS users with a visual environment where they can manage both continuous fields and discrete objects by posing spatial queries that capture the heterogeneous nature of phenomena. In particular, in this paper we propose Phenomena, a visual query language that provides users with a uniform style of interaction with a world conceptually modeled as a composition of continuous fields and discrete objects. The intuitiveness of the underlying operators and of the query formulation process is ensured by the choice of suitable metaphors and by the adoption of the direct manipulation paradigm. A prototype of a visual environment running Phenomena has been realized, which allows users to query experimental data following a SQL-like SELECT-FROM-WHERE scheme.
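A hedged sketch of how a SELECT-FROM-WHERE query mixing a continuous field and discrete objects might be evaluated: the field is approximated by scattered samples with nearest-sample lookup, and objects are filtered by the field's value at their location. The data, names, and the nearest-sample approximation are all invented for illustration; Phenomena itself is a visual language, not a Python API.

```python
def field_value(samples, x, y):
    # nearest-sample approximation of a continuous field; a real GIS
    # would interpolate (e.g. bilinearly) rather than snap to a sample
    nearest = min(samples, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
    return samples[nearest]

# "SELECT name FROM stations WHERE temperature(location) > 25"
temperature = {(0, 0): 20.0, (0, 10): 22.0, (10, 0): 28.0, (10, 10): 30.0}
stations = [{"name": "A", "loc": (1, 1)},
            {"name": "B", "loc": (9, 9)},
            {"name": "C", "loc": (9, 1)}]

hot = [s["name"] for s in stations
       if field_value(temperature, *s["loc"]) > 25]
```

The heterogeneity the abstract mentions shows up in the WHERE clause: the predicate evaluates a field (defined everywhere) at the locations of discrete objects (defined only at points), and a uniform query language must make that mixture feel natural.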