The navigation support provided by the user interfaces of Virtual Environments (VEs) is often inadequate and tends to be overly complex, especially in the case of large-scale VEs. In this paper, we propose a novel navigation aid that aims to allow users to easily locate objects and places inside large-scale VEs. The aid exploits 3D arrows that point toward the objects and places the user is interested in. We illustrate and discuss the experimental evaluation we carried out to assess the usefulness of the proposed solution, contrasting it with more traditional 2D navigation aids. In particular, we compared subjects' performance in four conditions that differed in the type of navigation aid provided: three conditions employed, respectively, the proposed "3D arrows" aid, an aid based on 2D arrows, and a 2D aid based on a radar metaphor; the fourth was a control condition with no navigation aids available.
{"title":"3D location-pointing as a navigation aid in Virtual Environments","authors":"L. Chittaro, Stefano Burigat","doi":"10.1145/989863.989910","DOIUrl":"https://doi.org/10.1145/989863.989910","url":null,"abstract":"The navigation support provided by user interfaces of Virtual Environments (VEs) is often inadequate and tends to be overly complex, especially in the case of large-scale VEs. In this paper, we propose a novel navigation aid that aims at allowing users to easily locate objects and places inside large-scale VEs. The aid exploits 3D arrows to point towards the objects and places the user is interested in. We illustrate and discuss the experimental evaluation we carried out to assess the usefulness of the proposed solution, contrasting it with more traditional 2D navigation aids. In particular, we compared subjects' performance in 4 conditions which differ for the type of provided navigation aid: three conditions employed respectively the proposed \"3D arrows\" aid, an aid based on 2D arrows, and a 2D aid based on a radar metaphor; the fourth condition was a control condition with no navigation aids available.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114615007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a group recommender system for vacations that helps group members who are not able to communicate synchronously to specify their preferences collaboratively and to arrive at an agreement about an overall solution. The system's design includes two innovations in visual user interfaces: 1. An interface for collaborative preference specification offers various ways in which one group member can view and perhaps copy the previously specified preferences of other users. This interface has been found to further mutual understanding and agreement. The same interface is used by the system to display recommended solutions and to visualize the extent to which a solution satisfies the preferences of the various group members. 2. In a novel application of animated characters, each character serves as a representative of a group member who is not currently available for communication. By responding with speech, facial expressions, and gesture to proposed solutions, a representative conveys to the current real user some key aspects of the corresponding real group member's responses to a proposed solution. Taken together, these two aspects of the interface provide complementary and partly redundant means by which a group member can achieve awareness of the preferences and responses of other group members: an abstract, complete, graphical representation and a concrete, selective, human-like representation. By allowing users to choose flexibly which representation they will attend to under what circumstances, we aim to move beyond the usual debates about the relative merits of these two general types of representation.
{"title":"Two methods for enhancing mutual awareness in a group recommender system","authors":"A. Jameson, Stephan Baldes, Thomas Kleinbauer","doi":"10.1145/989863.989948","DOIUrl":"https://doi.org/10.1145/989863.989948","url":null,"abstract":"We present a group recommender system for vacations that helps group members who are not able to communicate synchronously to specify their preferences collaboratively and to arrive at an agreement about an overall solution. The system's design includes two innovations in visual user interfaces: 1. An interface for collaborative preference specification offers various ways in which one group member can view and perhaps copy the previously specified preferences of other users. This interface has been found to further mutual understanding and agreement. The same interface is used by the system to display recommended solutions and to visualize the extent to which a solution satisfies the preferences of the various group members. 2. In a novel application of animated characters, each character serves as a representative of a group member who is not currently available for communication. By responding with speech, facial expressions, and gesture to proposed solutions, a representative conveys to the current real user some key aspects of the corresponding real group member's responses to a proposed solution. Taken together, these two aspects of the interface provide complementary and partly redundant means by which a group member can achieve awareness of the preferences and responses of other group members: an abstract, complete, graphical representation and a concrete, selective, human-like representation. 
By allowing users to choose flexibly which representation they will attend to under what circumstances, we aim to move beyond the usual debates about the relative merits of these two general types of representation.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124664624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software development is prone to time-consuming and expensive errors. Finding and correcting errors in a program (debugging) is usually done by executing the program with different inputs and examining its intermediate and/or final results (testing). The tools that are currently available for debugging (debuggers) do not fully make use of several potentially useful visualisation and interaction techniques. This article presents a prototype debugging tool, MVT (Matrix Visual Tester), based on a new interactive graphical software testing methodology called visual testing. A programmer can use a visual testing tool to examine and manipulate a running program and its data structures. The tool combines aspects of visual algorithm simulation, high-level data visualisation and visual debugging, and allows easier testing, debugging and understanding of software.
{"title":"MVT: a system for visual testing of software","authors":"Jan Lönnberg, A. Korhonen, L. Malmi","doi":"10.1145/989863.989931","DOIUrl":"https://doi.org/10.1145/989863.989931","url":null,"abstract":"Software development is prone to time-consuming and expensive errors. Finding and correcting errors in a program (debugging) is usually done by executing the program with different inputs and examining its intermediate and/or final results (testing). The tools that are currently available for debugging (debuggers) do not fully make use of several potentially useful visualisation and interaction techniques.This article presents a prototype debugging tool (MVT--Matrix Visual Tester) based on a new interactive graphical software testing methodology called visual testing. A programmer can use a visual testing tool to examine and manipulate a running program and its data structures. The tool combines aspects of visual algorithm simulation, high-level data visualisation and visual debugging, and allows easier testing, debugging and understanding of software.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121056421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The authors of this paper have previously proposed Treecube, a visualization tool for browsing 3D multimedia data. In this paper, they also propose interactive interfaces for efficiently browsing 3D multimedia data with it. Treecube can be regarded as a 3D extension of the treemap, a visualization tool for hierarchical information proposed by Ben Shneiderman et al. in 1992. Several layout algorithms exist for treemaps: slice-and-dice, ordered treemap, strip treemap, and so on; quantum treemap, a quantized version of these layout algorithms, also exists. The authors implemented three layout algorithms, i.e., the slice-and-dice, ordered, and strip treecube algorithms, as well as their quantized versions. In practice, sophisticated interfaces are necessary for efficiently browsing 3D multimedia data, and this paper also proposes such interfaces. The authors implemented five main interface functionalities: (1) a "cutting plane" that addresses the occlusion problem by hiding nodes located in front of the plane, making it easy to see the nodes inside; (2) control of node frames, i.e., their brightness and thickness, for easily understanding the hierarchical structure of the nodes; (3) standard operations for translating and rotating the eye position and for zooming in and out; (4) particular operations for extracting the node the user focuses on and for moving backward and forward when browsing such nodes; and (5) a function that assigns color information to any node property, because color is the most important of the visual display properties.
{"title":"Interactive interfaces of Treecube for browsing 3D multimedia data","authors":"Yoichi Tanaka, Y. Okada, K. Niijima","doi":"10.1145/989863.989914","DOIUrl":"https://doi.org/10.1145/989863.989914","url":null,"abstract":"The authors of this paper have already proposed Treecube which is a visualization tool for browsing 3D multimedia data. In this paper, the authors also propose its interactive interfaces for efficiently browsing 3D multimedia data. Treecube is regarded as a 3D extension of treemap, which is a visualization tool for hierarchical information proposed by Ben Shneiderman et al. in 1992. For treemap, there are several layout algorithms: slice-and-dice, ordered treemap, strip treemap and so on. Furthermore, quantum treemap exists. It means a quantization version of these treemap layout algorithms. The authors implemented mainly three layout algorithms, i.e., slice-and-dice, ordered and strip treecube algorithm, and implemented their quantization version. Practically sophisticated interfaces are necessary for efficiently browsing 3D multimedia data. In this paper, the authors also propose such interfaces. The authors implemented mainly five interface functionalities for the following operations. (1) \"Cutting plane\" concept to solve the occlusion problem, i.e., nodes located before the plane are hidden to make it easy to see inside nodes. (2) The control of node frames, i.e., their brightness and thickness, for easily understanding the hierarchical structure of nodes. (3) Standard operations for the translation and the rotation of an eye position, and for the zoom in/out. (4) Particular operations for the extraction of the user focus node and for the backward/forward for browsing such node. 
The authors also implemented (5) a function to assign color information to any node properties because color is the most important factor of the visual display properties.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126180386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
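The slice-and-dice layout that Treecube extends to 3D can be illustrated in its classic 2D form. A minimal sketch, using a hypothetical dict-based tree schema (not the paper's data model):

```python
def slice_and_dice(node, x, y, w, h, depth=0):
    """Classic 2D slice-and-dice treemap layout: alternate the split
    axis at each depth, giving each child a strip proportional to its
    size.  `node` is {"size": float, "children": [...]} (a hypothetical
    schema).  Returns a list of (node, (x, y, w, h)) pairs."""
    rects = [(node, (x, y, w, h))]
    children = node.get("children", [])
    if not children:
        return rects
    total = sum(c["size"] for c in children)
    offset = 0.0
    for c in children:
        frac = c["size"] / total
        if depth % 2 == 0:   # even depth: split along the x axis
            rects += slice_and_dice(c, x + offset * w, y, frac * w, h, depth + 1)
        else:                # odd depth: split along the y axis
            rects += slice_and_dice(c, x, y + offset * h, w, frac * h, depth + 1)
        offset += frac
    return rects
```

The 3D treecube version would cycle through three axes instead of two, partitioning a box rather than a rectangle.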
Although the power of personal computers has increased 1000-fold over the past 20 years, user interfaces remain essentially the same. Innovations in HCI research, particularly novel interaction techniques, are rarely incorporated into products. In this paper I argue that the only way to significantly improve user interfaces is to shift the research focus from designing interfaces to designing interaction. This requires powerful interaction models, a better understanding of both the sensory-motor details of interaction and a broader view of interaction in the context of use. It also requires novel interaction architectures that address reinterpretability, resilience and scalability.
{"title":"Designing interaction, not interfaces","authors":"M. Beaudouin-Lafon","doi":"10.1145/989863.989865","DOIUrl":"https://doi.org/10.1145/989863.989865","url":null,"abstract":"Although the power of personal computers has increased 1000-fold over the past 20 years, user interfaces remain essentially the same. Innovations in HCI research, particularly novel interaction techniques, are rarely incorporated into products. In this paper I argue that the only way to significantly improve user interfaces is to shift the research focus from designing interfaces to designing interaction. This requires powerful interaction models, a better understanding of both the sensory-motor details of interaction and a broader view of interaction in the context of use. It also requires novel interaction architectures that address reinterpretability, resilience and scalability.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128093978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As technologies in the areas of storage, connectivity, and displays rapidly evolve and business development points toward the experience economy, the vision of Ambient Intelligence positions human needs at the center of technology development. Using a special research instrument called HomeLab, scenarios of Ambient Intelligence are implemented and tested. As two examples of bringing real user experiences into the digital home through display technology, research on creating the feeling of immersion and the feeling of being connected is discussed. Results from this work indicate that visual displays can indeed be used beyond simple information rendering and can actually play an important role in creating user experiences.
{"title":"Ambient intelligence: visualizing the future","authors":"B. D. Ruyter, E. Aarts","doi":"10.1145/989863.989897","DOIUrl":"https://doi.org/10.1145/989863.989897","url":null,"abstract":"As technologies in the area of storage, connectivity and displays are rapidly evolving and business development is pointing to the direction of the experience economy, the vision of Ambient Intelligence is positioning the human needs central to technology development. Equipped with a special research instrument called HomeLab, scenarios of Ambient Intelligence are implemented and tested. As two examples of bringing real user experiences through display technology into the digital home, research on creating the feeling of immersion and the feeling of being connected, are discussed. Results from this work indicate that visual displays can indeed be used beyond simple information rendering but can actually play an important role in creating user experiences.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128455030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Hutchings, Greg Smith, B. Meyers, M. Czerwinski, G. Robertson
The continuing trend toward greater processing power, larger storage, and, in particular, increased display surface through the use of multiple monitors supports increased multi-tasking by the computer user. The concomitant increase in desktop complexity has the potential to push the overhead of window management to frustrating and counterproductive new levels. It is difficult to adequately design for multiple monitor systems without understanding how multiple monitor users differ from, or are similar to, single monitor users. Therefore, we deployed a tool to a group of single monitor and multiple monitor users to log window management activity. Analysis of the data collected from this tool revealed that usage of interaction components may change with an increase in the number of monitors, and window visibility can be a useful measure of user display space management activity, especially for multiple monitor users. The results from this analysis begin to fill a gap in research about real-world window management practices.
{"title":"Display space usage and window management operation comparisons between single monitor and multiple monitor users","authors":"D. Hutchings, Greg Smith, B. Meyers, M. Czerwinski, G. Robertson","doi":"10.1145/989863.989867","DOIUrl":"https://doi.org/10.1145/989863.989867","url":null,"abstract":"The continuing trend toward greater processing power, larger storage, and in particular increased display surface by using multiple monitor supports increased multi-tasking by the computer user. The concomitant increase in desktop complexity has the potential to push the overhead of window management to frustrating and counterproductive new levels. It is difficult to adequately design for multiple monitor systems without understanding how multiple monitor users differ from, or are similar to, single monitor users. Therefore, we deployed a tool to a group of single monitor and multiple monitor users to log window management activity. Analysis of the data collected from this tool revealed that usage of interaction components may change with an increase in number of monitors, and window visibility can be a useful measure of user display space management activity, especially for multiple monitor users. The results from this analysis begin to fill a gap in research about real-world window management practices.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130655856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce the concept of the Temporal Thumbnail, used to quickly convey information about the amount of time spent viewing specific areas of a virtual 3D model. Temporal Thumbnails allow for large amounts of time-based information collected from model viewing sessions to be rapidly visualized by collapsing the time dimension onto the space of the model, creating a characteristic impression of the overall interaction. We describe three techniques that implement the Temporal Thumbnail concept and present a study comparing these techniques to more traditional video and storyboard representations. The results suggest that Temporal Thumbnails have potential as an effective technique for quickly analyzing large amounts of viewing data. Practical and theoretical issues for visualization and representation are also discussed.
{"title":"Temporal Thumbnails: rapid visualization of time-based viewing data","authors":"M. Tsang, N. Morris, Ravin Balakrishnan","doi":"10.1145/989863.989890","DOIUrl":"https://doi.org/10.1145/989863.989890","url":null,"abstract":"We introduce the concept of the Temporal Thumbnail, used to quickly convey information about the amount of time spent viewing specific areas of a virtual 3D model. Temporal Thumbnails allow for large amounts of time-based information collected from model viewing sessions to be rapidly visualized by collapsing the time dimension onto the space of the model, creating a characteristic impression of the overall interaction. We describe three techniques that implement the Temporal Thumbnail concept and present a study comparing these techniques to more traditional video and storyboard representations. The results suggest that Temporal Thumbnails have potential as an effective technique for quickly analyzing large amounts of viewing data. Practical and theoretical issues for visualization and representation are also discussed.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134147213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conventional interfaces for visualisation of time-based media support access to sequential data in a linear fashion. We present two visualisation interfaces for a mobile application that supports non-linear, structured browsing of multimedia recordings by exploiting certain features of concurrent multimedia streams. The system is built on a content mapping framework which automatically creates links between text and audio data by establishing "temporal neighbourhoods". It illustrates how non-linear browsing may be particularly valuable for devices with limited screen real-estate.
{"title":"A mobile system for non-linear access to time-based data","authors":"S. Luz, M. Masoodian","doi":"10.1145/989863.989950","DOIUrl":"https://doi.org/10.1145/989863.989950","url":null,"abstract":"Conventional interfaces for visualisation of time-based media support access to sequential data in a linear fashion. We present two visualisation interfaces for a mobile application that supports non-linear, structured browsing of multimedia recordings by exploiting certain features of concurrent multimedia streams. The system is built on a content mapping framework which automatically creates links between text and audio data by establishing \"temporal neighbourhoods\". It illustrates how non-linear browsing may be particularly valuable for devices with limited screen real-estate.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134172796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a navigation-oriented interaction paradigm, such as desktop, mixed, and augmented virtual reality, recognizing user needs is a valuable improvement, provided that the system is able to correctly anticipate the user's actions. Methodologies for adapting both navigation and content allow the user to interact with a customized version of the 3D world, lessening the cognitive load needed to accomplish tasks such as finding places and objects and acting on virtual devices. This work discusses adaptivity of interaction in 3D environments, obtained through the coordinated use of three approaches: structured design of the interaction space, distinction between a base world layer and an interactive experience layer, and user monitoring to infer interaction patterns. Identification of such recurring patterns is used to anticipate users' actions when approaching the places and objects of each experience class. An agent-based architecture is proposed, and a simple application related to consumer e-business is analyzed.
{"title":"Observing and adapting user behavior in navigational 3D interfaces","authors":"A. Celentano, Fabio Pittarello","doi":"10.1145/989863.989911","DOIUrl":"https://doi.org/10.1145/989863.989911","url":null,"abstract":"In a navigation-oriented interaction paradigm, such as desktop, mixed and augmented virtual reality, recognizing the user needs is a valuable improvement, provided that the system is able to correctly anticipate the user actions. Methodologies for adapting both navigation and content allow the user to interact with a customized version of the 3D world, lessening the cognitive load needed for accomplishing tasks such as finding places and objects, and acting on virtual devices.This work discusses adaptivity of interaction in 3D environments, obtained through the coordinated use of three approaches: structured design of the interaction space, distinction between a base world layer and an interactive experience layer, and user monitoring in order to infer interaction patterns. Identification of such recurring patterns is used for anticipating users actions in approaching places and objects of each experience class. An agent based architecture is proposed, and a simple application related to consumer e-business is analyzed.","PeriodicalId":215861,"journal":{"name":"Proceedings of the working conference on Advanced visual interfaces","volume":"22 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120867726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}