CircleView: a new approach for visualizing time-related multidimensional data sets
D. Keim, Jörn Schneidewind, Mike Sips. DOI: https://doi.org/10.1145/989863.989891
Proceedings of the Working Conference on Advanced Visual Interfaces (AVI 2004), May 25, 2004.

This paper introduces Circle View, a new approach for visualizing multidimensional time-referenced data sets. The Circle View technique combines hierarchical visualization techniques, such as treemaps [6], with circular layout techniques such as pie charts and Circle Segments [2]. Its main goal is to compare continuous data whose characteristics change over time, in order to identify patterns, exceptions, and similarities in the data. To achieve this goal, Circle View provides an intuitive, easy-to-understand visualization interface that enables the user to acquire the needed information quickly. This is an important feature for rapidly changing visualizations driven by time-related data streams. Circle View visualizes characteristics as they change over time, allowing the user to observe changes in the data. Additionally, it provides user interaction and drill-down mechanisms on demand for effective exploratory data analysis, and it supports the exploration of correlations and exceptions in the data through similarity and ordering algorithms.
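The abstract describes a circular layout that assigns data attributes to segments and shows change over time within each segment. The following is a minimal sketch of such a layout computation, not the paper's algorithm: it assumes equal-angle segments per attribute and concentric, equal-depth time slices (oldest at the center), both of which are illustrative assumptions.

```python
from typing import Dict, List, Tuple

def circle_view_layout(
    series: Dict[str, List[float]],
    radius: float = 1.0,
) -> List[Tuple[str, int, float, float, float, float, float]]:
    """Compute cells for a CircleView-style display.

    Each attribute gets one equal-angle segment of the circle; each
    segment is subdivided radially into one slice per time step.
    Returns (attribute, time_index, start_deg, end_deg, r_inner,
    r_outer, value) tuples.
    """
    cells = []
    attrs = list(series)
    seg_angle = 360.0 / len(attrs)          # equal angular share per attribute
    for i, attr in enumerate(attrs):
        values = series[attr]
        slice_depth = radius / len(values)  # equal radial share per time step
        for t, v in enumerate(values):
            # Assumption in this sketch: oldest value at the center,
            # newest at the perimeter.
            cells.append((attr, t,
                          i * seg_angle, (i + 1) * seg_angle,
                          t * slice_depth, (t + 1) * slice_depth, v))
    return cells
```

Each returned cell would then be drawn as an annulus sector whose color encodes the value, yielding one ring of cells per time step.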
An intelligent and adaptive virtual environment and its application in distance learning
Cássia Trojahn dos Santos, F. Osório. DOI: https://doi.org/10.1145/989863.989925

This paper presents an intelligent and adaptive virtual environment whose structure and presentation are customized according to users' interests and preferences (represented in a user model) and in accordance with the insertion and removal of content in the environment. An automatic content categorization process creates content models, which are used in the spatial organization of the content in the environment. An intelligent agent assists users during navigation in the environment and in the retrieval of relevant information. To validate our proposal, a prototype of a distance learning environment, used to make educational content available, was developed.
MediaBrowser: reclaiming the shoebox
S. Drucker, C. Wong, A. Roseway, Steve Glenner, Steven De Mar. DOI: https://doi.org/10.1145/989863.989944

Applying personal keywords to images and video clips makes it possible to organize and retrieve them, as well as to automatically create thematically related slideshows. MediaBrowser is a system designed to help users create annotations by uniting a careful choice of interface elements, an elegant and pleasing design, smooth motion and animation, and a few simple tools that are predictable and consistent. The result is a friendly, usable tool for turning shoeboxes of old photos into labeled collections that can be easily browsed, shared, and enjoyed.
Sketch-based retrieval of ClipArt drawings
Manuel J. Fonseca, B. Barroso, Pedro Ribeiro, J. Jorge. DOI: https://doi.org/10.1145/989863.989943

A great many vector drawings are now available for people to integrate into documents. These come in a variety of formats, such as Corel, PostScript, CGM, WMF and, more recently, SVG. Typically, such ClipArt drawings are archived and accessed by category (e.g. food, shapes, transportation). However, finding a drawing among hundreds of thousands is not an easy task. While text-driven attempts at classifying image data have recently been supplemented with query-by-image-content techniques, these were developed for bitmap data and cannot handle vectorial information. In this paper we present an approach to indexing and retrieving vector drawings by content from large datasets. Our prototype can already handle databases with thousands of drawings using commodity hardware. Furthermore, preliminary usability assessments show promising results and suggest good acceptance of sketching as a query mechanism by users.
Stitching: pen gestures that span multiple displays
K. Hinckley, Gonzalo A. Ramos, François Guimbretière, Patrick Baudisch, Marc A. Smith. DOI: https://doi.org/10.1145/989863.989866

Stitching is a new interaction technique that allows users to combine pen-operated mobile devices with wireless networking by using pen gestures that span multiple displays. To stitch, a user starts moving the pen on one screen, crosses over the bezel, and finishes the stroke on the screen of a nearby device. Properties of each portion of the pen stroke are observed by the participating devices, synchronized via wireless network communication, and recognized as a unitary act performed by one user, thus binding together the devices. We identify the general requirements of stitching and describe a prototype photo sharing application that uses stitching to allow users to copy images from one tablet to another that is nearby, expand an image across multiple screens, establish a persistent shared workspace, or use one tablet to present images that a user selects from another tablet. We also discuss design issues that arise from proxemics, that is, the sociological implications of users collaborating in close quarters.
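The core of the technique is deciding whether a stroke leaving one screen and a stroke entering another belong to a single continuous gesture. The sketch below illustrates one plausible matching rule; the time and position thresholds, and the assumption that devices face each other edge-to-edge, are illustrative, not the paper's parameters.

```python
from dataclasses import dataclass

@dataclass
class StrokeEdgeEvent:
    """The moment a pen stroke crosses a screen edge on one device."""
    device_id: str
    timestamp_ms: float   # when the pen crossed the bezel
    edge: str             # "left", "right", "top", or "bottom"
    offset: float         # normalized 0..1 position along that edge

OPPOSITE = {"left": "right", "right": "left", "top": "bottom", "bottom": "top"}

def is_stitch(exit_ev: StrokeEdgeEvent, entry_ev: StrokeEdgeEvent,
              max_gap_ms: float = 500.0, max_offset_delta: float = 0.2) -> bool:
    """Decide whether an exit event on one device and an entry event on
    another plausibly belong to one continuous pen stroke."""
    if exit_ev.device_id == entry_ev.device_id:
        return False                          # a stitch spans two devices
    gap = entry_ev.timestamp_ms - exit_ev.timestamp_ms
    if not (0 <= gap <= max_gap_ms):
        return False                          # entry must closely follow exit
    if entry_ev.edge != OPPOSITE[exit_ev.edge]:
        return False                          # e.g. exit right bezel, enter left
    # The stroke should re-enter at roughly the same point along the edge.
    return abs(exit_ev.offset - entry_ev.offset) <= max_offset_delta
```

In a real system each device would broadcast its edge events over the wireless network, and any pair passing this test would trigger the device-binding step.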
Tangible interfaces in virtual environments for industrial design
R. Amicis, G. Conti, M. Fiorentino. DOI: https://doi.org/10.1145/989863.989908

In the fields of industrial design and car manufacturing, the creation of 3D curves plays a fundamental role in the design process: it improves the visual appeal of artifacts, enhances ergonomics, and increases a product's commercial competitiveness through differentiation. When flexibility and intuition are to be privileged, it is fundamental to achieve natural, intuitive, and mathematically correct creation and modification of surfaces. The scientific aim of this research is the development of an innovative metaphor for modeling 3D curves that preserves the natural expertise of the designer. The major contribution of this paper is the system's capability to create and modify curves naturally, without mathematical artifices, within the limits set by the use of Bézier curves. The proposed metaphor combines the benefits of two established techniques, Digital Tape Drawing and the Eraser Pen, which allow real-time modification of the curve. The integrated adoption of tangible interfaces and innovative mathematical tools, combined with a semi-immersive environment and lightweight interaction devices, delivers intuitive curve creation for free-form modeling within the virtual scene. The paper describes the details of the developed algorithm and highlights its strengths during the styling phase.
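The system works within the limits of Bézier curves; as background, the standard way to evaluate such a curve is de Casteljau's algorithm, sketched below. This is the textbook evaluation scheme the representation builds on, not the paper's own modeling algorithm.

```python
from typing import List, Tuple

Point = Tuple[float, float, float]

def de_casteljau(control_points: List[Point], t: float) -> Point:
    """Evaluate a Bézier curve at parameter t in [0, 1] by repeatedly
    linearly interpolating adjacent points of the control polygon
    (de Casteljau's algorithm), which is numerically stable."""
    pts = list(control_points)
    while len(pts) > 1:
        # One interpolation pass shrinks the polygon by one point.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

For a cubic curve (four control points), three interpolation passes reduce the polygon to the single point on the curve at parameter t.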
Exploratory visualization using bracketing
Jonathan C. Roberts. DOI: https://doi.org/10.1145/989863.989893

There are many tools that provide the user with an abundance of sliders, buttons, and options to change; such tools are popular in exploratory visualization. As the user changes the parameters, the display dynamically updates and responds to the changes made. These multiparameter systems can be difficult to use, as the user is often unaware of the outcome of an action before it occurs. Specifically, it may be unclear whether to increase or decrease a parameter value to get a desired result. Multiple-view systems can help, as the user can try out various scenarios and compare the results side by side, although if unrestricted the user may be swamped by numerous and often unnecessary views. In this paper we present the novel idea of 'bracketing', where a principal view is supported by two additional views from slightly different parameterizations. The idea is inspired by exposure bracketing in photography. This provides a middle ground: it offers a way to see adjacent parameterizations, while allowing yet restraining multiple views. Moreover, we demonstrate how bracketing can be exploited in many applications and used in various ways (within the parameter, visual, and temporal domains).
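In the parameter domain, bracketing amounts to rendering the principal view together with one under- and one over-parameterized neighbor. A minimal sketch of that idea, with the step size, range clamping, and the `render` callback all as illustrative assumptions:

```python
from typing import Callable, List, Tuple

def bracket(value: float, delta: float,
            lo: float = float("-inf"),
            hi: float = float("inf")) -> Tuple[float, float, float]:
    """Return the under-, principal, and over-parameterization for one
    parameter, clamped to its valid range."""
    def clamp(v: float) -> float:
        return max(lo, min(hi, v))
    return (clamp(value - delta), value, clamp(value + delta))

def bracketed_views(render: Callable[[float], object],
                    value: float, delta: float) -> List[object]:
    """Render the principal view flanked by its two bracket views."""
    return [render(p) for p in bracket(value, delta)]
```

For example, an isosurface threshold of 0.5 bracketed by ±0.1 yields three side-by-side views at 0.4, 0.5, and 0.6, showing the user the outcome of moving the slider in either direction before committing.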
A domain model-driven approach for producing user interfaces to multi-platform information systems
Julien Stocq, J. Vanderdonckt. DOI: https://doi.org/10.1145/989863.989934

User interfaces to information systems can be considered systematic, as they consist of two types of tasks performed on the classes of a domain model: basic tasks performed on one class at a time (such as insert, delete, modify, sort, list, and print) and complex tasks performed on parts or the whole of one or several classes (e.g., tasks involving various attributes of different classes, with constraints and relationships established between them). This paper presents how a wizard tool can produce user interfaces for such tasks through a model-driven approach based on the domain model of the information system. The process consists of seven steps: database selection, data source selection, building the opening procedure, data source selection for control widgets, building the closing procedure, setting the size of the widgets, and laying them out. The wizard generates code for Visual Basic and eMbedded Visual Basic, thus supporting both stationary and mobile tasks simultaneously while maintaining consistency.
Multi-projectors and implicit interaction in persuasive public displays
P. Dietz, R. Raskar, S. Booth, J. Baar, K. Wittenburg, Brian Knep. DOI: https://doi.org/10.1145/989863.989898

Recent advances in computer video projection open up new possibilities for real-time interactive, persuasive displays. A display can now continuously adapt to a viewer so as to maximize its effectiveness. However, by the very nature of persuasion, these displays must be both immersive and subtle. We have been working on technologies that support this application, including multi-projector and implicit interaction techniques. These technologies have been used to create a series of interactive persuasive displays, which we describe.
3Book: a 3D electronic smart book
S. Card, Lichan Hong, J. Mackinlay, Ed H. Chi. DOI: https://doi.org/10.1145/989863.989915

This paper describes the 3Book, a 3D interactive visualization of a codex book as a component for various digital library and sensemaking systems. The 3Book is designed to hold large books and to support sensemaking operations by readers. It includes methods in which automatic semantic analysis of the book's content is used to dynamically tailor access.