Title: Sim-U-Sketch: a sketch-based interface for Simulink
Authors: L. Kara, T. Stahovich
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989923
Abstract: Sim-U-Sketch is an experimental sketch-based interface we developed for Matlab®'s Simulink® software package. With this tool, users can construct functional Simulink models simply by drawing sketches on a computer screen. To support iterative design, Sim-U-Sketch allows users to interact with their sketches in real time to modify existing objects and add new ones. The system is equipped with a domain-independent, trainable symbol recognizer that can learn new symbols from single prototype examples. This makes our system easily extensible and customizable to new domains and unique drawing styles.
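The paper does not spell out the recognizer's internals in this abstract. As an illustration only, the one-prototype-per-class idea can be sketched as a nearest-neighbor template matcher; the class names, the normalization scheme, and the distance measure below are our assumptions, not the authors' method:

```python
import math

def normalize(points):
    """Translate a stroke to its centroid and scale it to a unit box,
    so position and size do not affect matching."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / span, (y - cy) / span) for x, y in points]

def distance(a, b):
    """Mean point-to-point distance between two equal-length strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

class OneShotRecognizer:
    """Learns each symbol class from a single prototype example."""
    def __init__(self):
        self.prototypes = {}

    def train(self, label, points):
        self.prototypes[label] = normalize(points)

    def classify(self, points):
        query = normalize(points)
        return min(self.prototypes,
                   key=lambda lbl: distance(self.prototypes[lbl], query))
```

A real recognizer would also resample strokes to a fixed point count and handle rotation; in this sketch the compared strokes must already have equal lengths.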
Title: Quantum web fields and molecular meanderings: visualising web visitations
Authors: Geoffrey P. Ellis, A. Dix
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989895
Abstract: This paper describes two visualisation algorithms that give an impression of current activity on a web site. Both focus on giving a sense of the trail of individual visitors within the web space and showing their navigation paths. Past web activity is used to produce a spatial mapping of pages, which results in highly traversed page links lying close together in the 2D visualisation space. Pages visited by typical individual visitors thus form intelligible paths when plotted in the visualisation space. Both techniques attempt to enhance user awareness and experience, but they differ in their balance between utility and aesthetics.
Title: Visualization of music performance as an aid to listener's comprehension
Authors: Rumi Hiraga, N. Matsuda
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989878
Abstract: We present a new method for visualizing musical expression, with a special focus on three major elements: tempo change, dynamics change, and articulation. We represent tempo change as a horizontal interval delimited by vertical lines, while dynamics change and articulation within the interval are represented by the height and width of a bar, respectively. We then group local expressions into several groups by k-means clustering based on the values of these elements. The resulting groups represent the emotional expression in a performance, as shaped by its rhythmic and melodic structure, and determine the gray scale of the graphical components. We ran a pilot experiment to test the effectiveness of our method using two matching tasks and a questionnaire. In the first task we used the same section of music played in two different interpretations, while in the second task two different sections of a performance were used. The results of the test seem to support the present approach, although there is still room for improvement in reflecting the subtleties of performance.
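The clustering step the abstract describes is standard k-means over per-segment feature vectors. A minimal sketch, assuming each segment is reduced to a (tempo change, dynamics change, articulation) tuple; the feature values and seed below are illustrative, not the authors' data:

```python
import math
import random

def kmeans(vectors, k, iterations=50):
    """Plain k-means over per-segment feature vectors such as
    (tempo change, dynamics change, articulation)."""
    random.seed(1)  # deterministic start for the sketch
    centers = random.sample(vectors, k)
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda i: math.dist(v, centers[i]))
            groups[nearest].append(v)
        # Move each center to its group's mean; keep it if the group emptied.
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

Each resulting group could then be assigned a gray level, as in the paper's visualization.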
Title: How users interact with biodiversity information using TaxonTree
Authors: Bongshin Lee, C. Parr, D. Campbell, B. Bederson
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989918
Abstract: Biodiversity databases have recently become widely available to the public and to other researchers. To retrieve information from these resources, users must understand the underlying data schemas even though they often are not content experts. Many other domains share this problem. We developed an interface, TaxonTree, to visualize the taxonomic hierarchy of animal names. We applied integrated searching and browsing so that users need not have complete knowledge either of appropriate keywords or of the organization of the data. Our qualitative user study of TaxonTree in an undergraduate course is the first to describe usage patterns in the biodiversity domain. We found that tree-based interaction and visualization aided users' understanding of the data. Most users approached biodiversity data by browsing, using common, general knowledge rather than the scientific keyword expertise necessary to search using traditional interfaces. Users with different levels of interest in the domain had different interaction preferences.
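The integrated search-and-browse idea can be sketched as: match a query against common or scientific names, then return the full path from the root so the hit appears in its tree context. The taxa, table layout, and function name below are hypothetical, not TaxonTree's data model:

```python
# Hypothetical taxonomy: scientific name -> (common name, parent).
TAXA = {
    "Animalia": ("animals", None),
    "Chordata": ("chordates", "Animalia"),
    "Mammalia": ("mammals", "Chordata"),
    "Canis lupus": ("gray wolf", "Mammalia"),
}

def find_path(query):
    """Integrated search: match a common OR scientific name, then return
    the browsable root-to-node path so the result shows in context."""
    q = query.lower()
    for sci, (common, _) in TAXA.items():
        if q in (sci.lower(), common.lower()):
            path, node = [], sci
            while node is not None:
                path.append(node)
                node = TAXA[node][1]
            return list(reversed(path))
    return []
```

Returning the whole path, rather than the node alone, is what lets a browsing user with only common-knowledge terms still see where a result sits in the hierarchy.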
Title: A visual tool for tracing users' behavior in Virtual Environments
Authors: L. Chittaro, Lucio Ieronutti
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989868
Abstract: Although some guidelines (e.g., based on architectural principles) have been proposed for designing Virtual Environments (VEs), several usability problems can be identified only by studying the behavior of real users in VEs. This paper proposes a tool, called VU-Flow, that automatically records usage data of VEs and then visualizes it in formats that make it easy for the VE designer to visually detect peculiar user behaviors and thus better understand the effects of her design choices. In particular, the visualizations concern: i) the detailed paths followed by single users or groups of users in the VE, ii) areas of maximum (or minimum) user flow, iii) the parts of the environment seen most (or least) by users, and iv) detailed replay of users' visits. We show examples of how these visualizations allow one to visually detect useful information such as users' interests, navigation problems, and visiting styles. Although this paper describes how VU-Flow can be used in the context of VEs, the tool can also be applied to the study of users of location-aware mobile devices in physical environments.
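The "areas of maximum (or minimum) flow" visualization reduces to binning recorded positions into grid cells and counting. A minimal sketch of that reduction, with a made-up trail rather than real VU-Flow data:

```python
def flow_map(positions, cell=1.0):
    """Bin a recorded trail of (x, y) positions into grid cells;
    high counts mark areas of maximum flow, low counts minimum flow."""
    counts = {}
    for x, y in positions:
        key = (int(x // cell), int(y // cell))
        counts[key] = counts.get(key, 0) + 1
    return counts

# A tiny recorded trail: three samples near the origin, one far away.
trail = [(0.2, 0.3), (0.7, 0.1), (0.5, 0.9), (3.1, 3.2)]
cells = flow_map(trail)
```

Rendering the counts as color intensity over the VE floor plan would give the flow view the abstract describes.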
Title: Dealing with geographic continuous fields: the way to a visual GIS environment
Authors: R. Laurini, L. Paolino, M. Sebillo, G. Tortora, G. Vitiello
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989920
Abstract: Recently, much attention has been devoted to the management of continuous fields, which describe geographic phenomena such as temperature, electromagnetism and pressure. While objects are distinguished by their dimensions and can be associated with points, lines, or areas, such phenomena are measurable at any point of their domain, distinguished by what varies and how smoothly. Thus, when dealing with continuous fields, a basic requirement is the user's ability to capture features of a scenario by selecting an area of interest and handling the phenomena involved. The aim of our research is to provide GIS users with a visual environment where they can manage both continuous fields and discrete objects by posing spatial queries that capture the heterogeneous nature of phenomena. In particular, in this paper we propose a visual query language, Phenomena, which provides users with a uniform style of interaction with the world, conceptually modeled as a composition of continuous fields and discrete objects. The intuitiveness of the underlying operators, as well as of the query formulation process, is ensured by the choice of suitable metaphors and by the adoption of the direct manipulation paradigm. A prototype of a visual environment running Phenomena has been realized, which allows users to query experimental data following a SQL-like SELECT-FROM-WHERE scheme.
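The abstract does not give Phenomena's actual operators or syntax. As a rough analogue of a SELECT-FROM-WHERE over a continuous field, one can sample the field on a grid covering the area of interest and filter the samples; everything below (names, the grid scheme, the example field) is our illustration:

```python
def select(attrs, field, where, area, step=1.0):
    """Sample a continuous field over a grid covering the area of
    interest, keeping samples that satisfy the WHERE predicate --
    a rough analogue of SELECT-FROM-WHERE applied to a field."""
    x0, y0, x1, y1 = area
    rows = []
    y = y0
    while y <= y1:
        x = x0
        while x <= x1:
            value = field(x, y)
            if where(value):
                rows.append(dict(zip(attrs, (x, y, value))))
            x += step
        y += step
    return rows

# Example: a temperature field that warms toward the east.
temperature = lambda x, y: 20.0 + 0.5 * x
hot = select(("x", "y", "temp"), temperature,
             lambda t: t > 21.0, (0, 0, 4, 4))
```

Discrete objects would enter as ordinary rows alongside such sampled fields, which is the heterogeneity the language is designed to handle uniformly.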
Title: Visualizing programs with Jeliot 3
Authors: Andrés Moreno, Niko Myller, E. Sutinen, M. Ben-Ari
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989928
Abstract: We present a program visualization tool called Jeliot 3 that is designed to help novice students learn procedural and object-oriented programming. The key feature of Jeliot is the fully or semi-automatic visualization of data and control flow. The development process of Jeliot has been research-oriented, meaning that each version has had its own research agenda arising from the design of the previous version and its empirical evaluations. In this process, the user interface and visualization have evolved to better suit the targeted audience, which in the case of Jeliot 3 is novice programmers. In this paper we explain the model behind the system and introduce the features of the user interface and the visualization engine. Moreover, we have developed an intermediate language that decouples the interpretation of the program from its visualization. This has led to a modular design that permits both internal and external extensibility.
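The decoupling the abstract describes, an interpreter that emits events in an intermediate language which any visualizer can consume, can be sketched as follows. The event names and toy instruction set are invented for illustration and are not Jeliot's actual intermediate language:

```python
def interpret(program):
    """Toy interpreter for (op, args) tuples that yields abstract
    visualization events instead of drawing anything itself."""
    env = {}
    for op, *args in program:
        if op == "assign":
            name, value = args
            env[name] = value
            yield ("AssignEvent", name, value)
        elif op == "add":
            name, a, b = args
            env[name] = env[a] + env[b]
            yield ("BinaryOpEvent", name, env[name])

class TextVisualizer:
    """One possible consumer; a graphical engine could replace it
    without touching the interpreter."""
    def consume(self, events):
        return [f"{kind}: {' '.join(map(str, rest))}"
                for kind, *rest in events]

program = [("assign", "x", 2), ("assign", "y", 3), ("add", "z", "x", "y")]
log = TextVisualizer().consume(interpret(program))
```

Because the interpreter only yields events, external tools can subscribe to the same stream, which is the kind of external extensibility the modular design permits.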
Title: Perceiving awareness information through 3D representations
Authors: Fabrizio Nunnari, C. Simone
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989947
Abstract: The paper describes a framework supporting the creation of 3D user interfaces to visualize awareness information about the cooperation context of distributed actors. The paper discusses the motivations behind the framework and illustrates ThreeDmap, an editor allowing the creation and customization of 3D interfaces supporting the perception of awareness information.
Title: Scene-Driver: reusing broadcast animation content for engaging, narratively coherent games
Authors: A. Wolff, P. Mulholland, Z. Zdráhal
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989876
Abstract: Scene-Driver is a software toolkit for the reuse of broadcast animation content to provide new engaging experiences for children. It has been developed and tested using content from the children's television series "Tiny Planets". Scene-Driver can be used to produce variations on a domino-like game. When playing, the child selects from a set of tiles that depict, for example, characters from the series. The child manipulates the direction of a story in the Tiny Planets world by their choice of tile. The successful selection of a tile will result in a scene from the show being played. A scene is defined as a section from an episode which has certain self-contained narrative elements such as conflict introduction, conflict resolution or comedic event. A scene-supervisor uses these descriptions to ensure that, as well as having all the properties prescribed by the child's choice of tile, the scenes are presented in a coherent order according to certain plot and directorial principles. Inter-scene continuity is provided in the form of transition scenes which depict the departure and arrival of relevant characters between one scene and the next. Preliminary evaluations have demonstrated the potential of Scene-Driver to produce engaging and usable games based on broadcast content for young children.
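The scene-supervisor's job, pick a scene matching the chosen tile and insert a transition when the cast changes, can be sketched as below. The scene records, field names, and character names are hypothetical, not the toolkit's actual scene descriptions:

```python
# Hypothetical scene records: each carries the characters present and a
# narrative role, echoing the paper's self-contained scene descriptions.
SCENES = [
    {"id": "s1", "characters": {"Bing"}, "role": "conflict introduction"},
    {"id": "s2", "characters": {"Bing", "Bong"}, "role": "conflict resolution"},
]

def next_scene(tile_role, current):
    """Pick a scene matching the child's tile; when the cast changes
    between scenes, schedule a transition scene first for continuity."""
    for scene in SCENES:
        if scene["role"] == tile_role:
            plan = []
            if current and scene["characters"] != current["characters"]:
                plan.append(("transition", current["id"], scene["id"]))
            plan.append(("play", scene["id"]))
            return plan
    return []
```

A fuller supervisor would also enforce plot ordering (e.g., no resolution before an introduction), which this sketch omits.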
Title: Modelling internet based applications for designing multi-device adaptive interfaces
Authors: E. Bertini, G. Santucci
Venue: Proceedings of the working conference on Advanced visual interfaces, 2004-05-25
DOI: https://doi.org/10.1145/989863.989906
Abstract: The widespread adoption of mobile devices in the consumer market has raised a number of new issues in the design of internet applications and their user interfaces. In particular, applications need to adapt their interaction modalities to different portable devices. In this paper we address the problem of defining models and techniques for designing internet-based applications that automatically adapt to different mobile devices. First, we define a formal model that allows for specifying the interaction in a way that is abstract enough to be decoupled from the presentation layer, which is to be adapted to different contexts. The model is mainly based on the idea of describing the user interaction in terms of elementary actions. Then, we provide a formal device characterization showing how to effectively implement the AIUs in a multi-device context.
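The core idea, an abstract interaction unit whose concrete presentation is chosen from a device characterization, can be sketched as below. The device-profile keys, widget choices, and function name are our assumptions, not the paper's formal model:

```python
def render_choice(label, options, device):
    """Render an abstract 'single choice' interaction unit; the device
    profile, not the interaction model, decides the concrete widget."""
    if device["screen"] == "large":
        # Plenty of room: show every option at once.
        lines = [f"{label}:"] + [f"  ( ) {o}" for o in options]
    else:
        # Small screens page through the options instead.
        per_page = device.get("rows", 3)
        pages = [options[i:i + per_page]
                 for i in range(0, len(options), per_page)]
        lines = ([f"{label} (page 1/{len(pages)}):"]
                 + [f"  {o}" for o in pages[0]])
    return "\n".join(lines)

options = ["Rome", "Oslo", "Kyoto", "Lima"]
desktop = render_choice("Destination", options, {"screen": "large"})
phone = render_choice("Destination", options, {"screen": "small", "rows": 2})
```

The same abstract "choose one" action thus yields different presentations per device, which is the decoupling the model is after.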