Dina Goren-Bar, Yuval Shahar, M. Galperin-Aizenberg, David Boaz, Gil Tahan
KNAVE-II is an intelligent interface to a distributed web-based architecture that enables users (e.g., physicians) to query, visualize and explore clinical time-oriented databases. Based on prior studies, we have defined a set of requirements for the provision of a service for interactive exploration of time-oriented clinical data. The main requirements include the visualization, interactive exploration and explanation of both raw data and multiple levels of concepts abstracted from these data; the exploration of clinical data at different levels of temporal granularity along both absolute (calendar-based) and relative (clinically meaningful) time-lines; the exploration and dynamic visualization of the effects of simulated hypothetical modifications of raw data on the derived concepts; and the provision of generic services (such as statistics, documentation, and fast search and retrieval of clinically significant concepts, amongst others). KNAVE-II has been implemented and is currently being evaluated by expert clinicians in several medical domains, such as oncology, that involve the monitoring of chronic patients.
Dina Goren-Bar, Yuval Shahar, M. Galperin-Aizenberg, David Boaz, Gil Tahan. "KNAVE II: the definition and implementation of an intelligent tool for visualization and exploration of time-oriented clinical data." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989889.
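The temporal abstractions that KNAVE-II visualizes turn raw time-stamped measurements into interval-based concepts. The sketch below illustrates the general idea only, not the paper's actual mechanism: the `classify_hgb` thresholds and the gap-merging rule are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Interval:
    start: datetime
    end: datetime
    state: str

def abstract_states(samples, classify, max_gap=timedelta(days=7)):
    """Map raw (timestamp, value) samples to interval-based state
    abstractions, merging consecutive samples that share a state and
    lie within max_gap of each other."""
    intervals = []
    for ts, value in sorted(samples):
        state = classify(value)
        if (intervals
                and intervals[-1].state == state
                and ts - intervals[-1].end <= max_gap):
            intervals[-1].end = ts          # extend the current interval
        else:
            intervals.append(Interval(ts, ts, state))
    return intervals

# Hypothetical classification for a lab value (thresholds are illustrative).
def classify_hgb(value):
    return "LOW" if value < 12.0 else "NORMAL"

samples = [
    (datetime(2004, 1, 1), 11.0),
    (datetime(2004, 1, 5), 11.5),
    (datetime(2004, 1, 20), 13.2),
]
print(abstract_states(samples, classify_hgb))
```

The two LOW samples merge into one interval; the later NORMAL sample starts a new one, which is the kind of derived concept a user could then explore at different temporal granularities.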
Systems that recommend items to a group of two or more users raise a number of challenging issues that are so far only partly understood. This paper identifies four of these issues and points out that they have been dealt with to only a limited extent in the group recommender systems that have been developed so far. The issues are especially important in settings where group members specify their preferences explicitly and where they are not able to engage in face-to-face interaction. We illustrate some of the solutions discussed with reference to the TRAVEL DECISION FORUM prototype. The issues concern (a) the design of suitable preference elicitation and aggregation methods, in particular nonmanipulable aggregation mechanisms; and (b) ways of making members aware of each other's preferences and motivational orientations, such as the use of animated representatives of group members.
A. Jameson. "More than the sum of its members: challenges for group recommender systems." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989869.
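The contrast between aggregation methods can be made concrete with a small sketch. The two strategies below, averaging and least misery, are standard group-recommendation baselines rather than necessarily those of the TRAVEL DECISION FORUM prototype, and the members and ratings are invented:

```python
def aggregate_average(prefs):
    """Average ratings across members: maximizes overall satisfaction,
    but a strong minority dislike can be outvoted."""
    items = next(iter(prefs.values()))
    return {item: sum(r[item] for r in prefs.values()) / len(prefs)
            for item in items}

def aggregate_least_misery(prefs):
    """Score each item by its worst rating: no member ends up strongly
    dissatisfied, which also blunts the payoff of exaggerated ratings."""
    items = next(iter(prefs.values()))
    return {item: min(r[item] for r in prefs.values())
            for item in items}

prefs = {
    "anna":  {"rome": 5, "oslo": 3},
    "bernd": {"rome": 1, "oslo": 4},
    "clara": {"rome": 5, "oslo": 3},
}
avg = aggregate_average(prefs)          # rome: 3.67, oslo: 3.33
lm = aggregate_least_misery(prefs)      # rome: 1,    oslo: 3
print(max(avg, key=avg.get))            # rome
print(max(lm, key=lm.get))              # oslo
```

The same preferences yield different group choices, which is exactly why the design of the aggregation mechanism, and its resistance to manipulation, matters.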
In this paper we present our experience in building a visual file manager, VENNFS2, that offers users an adaptive interface for accessing files. Our file manager was originally designed to overcome some of the limitations of hierarchical file systems, since it allows users to categorize files in such a way that a file may belong to multiple categories at once. Based on the history of the files that were opened and modified by the user, VENNFS2 graphically presents the user with a small set of candidates for the next file to be modified. Preliminary tests, which yielded some interesting insights, are also reported.
R. D. Chiara, U. Erra, V. Scarano. "A visual adaptive interface to file systems." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989926.
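The abstract does not specify VENNFS2's prediction model; a minimal plausible sketch of "suggest the next file from access history" is an order-1 transition count over consecutive accesses (the class and file names below are invented for illustration):

```python
from collections import Counter, defaultdict

class NextFilePredictor:
    """Order-1 model over a file-access history: count which file tends
    to follow which, and suggest the k most likely successors of the
    most recently accessed file."""
    def __init__(self):
        self.follows = defaultdict(Counter)
        self.last = None

    def record(self, path):
        if self.last is not None:
            self.follows[self.last][path] += 1
        self.last = path

    def suggest(self, k=3):
        if self.last is None:
            return []
        return [p for p, _ in self.follows[self.last].most_common(k)]

p = NextFilePredictor()
for f in ["notes.txt", "todo.txt", "notes.txt", "todo.txt", "notes.txt"]:
    p.record(f)
print(p.suggest())   # after "notes.txt", the history points to "todo.txt"
```

A visual file manager could surface the few top-ranked candidates graphically, which is the behaviour the abstract describes.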
A. Cavalluzzi, B. D. Carolis, S. Pizzutilo, G. Cozzolongo
In this paper, we present the first results of research aimed at developing an intelligent agent able to interact with users in public spaces through a touch screen or a personal device. The agent's communication is adapted to the situation at both the content and presentation levels, by generating an appropriate combination of verbal and non-verbal agent behaviours.
A. Cavalluzzi, B. D. Carolis, S. Pizzutilo, G. Cozzolongo. "Interacting with embodied agents in public environments." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989903.
Paolo Bottoni, Roberta Civica, S. Levialdi, Laura Orso, Emanuele Panizzi, Rosa Trinchese
Digital annotation of multimedia documents adds information to a document (e.g. a web page) or parts of it (a multimedia object such as an image or a video stream contained in the document). Digital annotations can be kept private or shared among different users over the internet, allowing discussion and cooperative work. We study the possibility of annotating multimedia documents with objects which are themselves multimedia in nature. Annotations can refer to whole documents or single portions thereof, as usual, but also to multi-objects, i.e. groups of objects contained in a single document. We designed and developed a new digital annotation system organized in a client-server architecture, where the client is a plug-in for a standard web browser and the servers are repositories of annotations to which different clients can log in. Annotations can be retrieved and filtered, and one can choose different annotation servers for a document. We present a platform-independent design for such a system, and illustrate a specific implementation based on Microsoft Internet Explorer on the client side and JSP/MySQL on the server side.
Paolo Bottoni, Roberta Civica, S. Levialdi, Laura Orso, Emanuele Panizzi, Rosa Trinchese. "MADCOW: a multimedia digital annotation system." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989870.
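The data shapes involved — an annotation targeting one object or a multi-object group, stored in a server-side repository that clients query and filter — can be sketched as follows. The class names and selector strings are invented for illustration and do not reflect MADCOW's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """An annotation attaches multimedia content to one or more target
    objects of a document: the whole page, a portion such as an image
    or video segment, or a group of objects (a "multi-object")."""
    author: str
    doc_url: str
    targets: list   # selectors for the annotated objects; len > 1 = multi-object
    body: dict      # the annotation content, itself possibly multimedia

class AnnotationServer:
    """Minimal in-memory stand-in for an annotation repository that
    browser-plug-in clients post annotations to and query with filters."""
    def __init__(self):
        self._store = []

    def post(self, ann):
        self._store.append(ann)

    def query(self, doc_url, author=None):
        return [a for a in self._store
                if a.doc_url == doc_url
                and (author is None or a.author == author)]

srv = AnnotationServer()
srv.post(Annotation("alice", "http://example.org/page",
                    ["img#1", "video#2"], {"type": "text", "content": "compare these"}))
srv.post(Annotation("bob", "http://example.org/page",
                    ["p#3"], {"type": "text", "content": "unclear paragraph"}))
print(len(srv.query("http://example.org/page")))          # both annotations
print(len(srv.query("http://example.org/page", "alice"))) # filtered by author
```

Because the repository is addressed only through `post` and `query`, a client could point at different annotation servers for the same document, mirroring the architecture the abstract describes.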
T. D. Mascio, Marco Francesconi, D. Frigioni, L. Tarantino
This paper presents a system supporting the tuning and evaluation of a Content-Based Image Retrieval (CBIR) engine for vector images, through a graphical interface providing query-by-sketch and query-by-example interaction with query results, as well as analysis of result quality. Vector images are first modelled as an inertial system and then associated with descriptors representing visual features invariant to affine transformation. To support the requirements of different application domains, the engine offers a variety of moment sets as well as different metrics for similarity computation. The graphical interface offers tools that help in the selection of the criteria and parameters necessary to tune the system to a specific application domain.
T. D. Mascio, Marco Francesconi, D. Frigioni, L. Tarantino. "Tuning a CBIR system for vector images: the interface support." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989942.
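The descriptor-plus-metric pipeline can be made concrete with a simplified sketch: a second-order moment descriptor of a shape's vertices, normalized here for translation and uniform scale only (the paper's inertial-model descriptors achieve full affine invariance, which this sketch does not), compared under two interchangeable metrics in the spirit of the engine's selectable similarity measures:

```python
import math

def shape_descriptor(points):
    """Normalized second-order moments of a vector shape's vertices.
    Dividing by the moment trace cancels uniform scaling; subtracting
    the centroid cancels translation."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    trace = mu20 + mu02
    return (mu20 / trace, mu11 / trace, mu02 / trace)

def euclidean(d1, d2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def manhattan(d1, d2):
    return sum(abs(a - b) for a, b in zip(d1, d2))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
big_square = [(10, 10), (13, 10), (13, 13), (10, 13)]   # scaled and translated
rectangle = [(0, 0), (4, 0), (4, 1), (0, 1)]            # different proportions

print(euclidean(shape_descriptor(square), shape_descriptor(big_square)))  # ~0
print(euclidean(shape_descriptor(square), shape_descriptor(rectangle)))   # > 0
```

Tuning the system to a domain then amounts to choosing which moment set and which metric best separate shapes that domain experts consider distinct.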
We describe a novel approach to algorithm concretization that extends the current mode of software visualization from computer screens to the real world. The method combines hands-on robotics and traditional algorithm visualization techniques to help diverse learners comprehend the basic idea of a given algorithm. From this point of view, the robots interpret an algorithm, while their internal program and external appearance determine the role they play in it. This gives us the possibility to bring algorithms into the real physical world, where students can even touch the data structures during execution. In the first version, we have concentrated on a few sorting algorithms as a proof of concept. Moreover, we have carried out an evaluation with 13-to-15-year-old students who used the concretization to gain insight into one sorting algorithm. The preliminary results indicate that the tool can enhance learning. Our aim now is to build an environment that supports both visualizations and robotics-based concretizations of algorithms at the same time.
Javier López, Niko Myller, E. Sutinen. "Sorting out sorting through concretization with robotics." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989929.
This paper presents improvements carried out to enhance the visual interaction of computer users in existing communication systems. It includes the use of augmented reality techniques and the modification of a method for user model reconstruction according to the particular requirements of such applications. The expected outcome is to prepare the ground for the further development of multi-user interfaces, videoconferencing and collaborative workspaces. The aim of our research is to replace the standard computer interface components with equipment used in augmented reality and so immerse the user in an augmented environment. Such an approach allows the user to position virtual objects in his or her workspace. One technique for precise virtual-object pose estimation widely used in augmented reality applications is to employ special tracking markers. Traditionally, communication systems of the videoconference type represent a remote user with a sprite (plain, billboard-like) model. The lack of realistic appearance when the participant is displayed as a sprite model can be eliminated by its artificial reconstruction. The method gains depth information from knowledge of human anatomy and hence is able to create an artificial relief model of the remote user.
V. Beran. "Augmented multi-user communication system." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989907.
Many applications require comparison between alternative scenarios; most support it poorly. A subjunctive interface supports comparison through its facilities for parallel setup, viewing and control of scenarios. To evaluate the usability and benefits of these facilities, we ran experiments in which subjects used both a simple and a subjunctive interface to make comparisons in a census data set. In the first experiment, subjects reported higher satisfaction and lower workload with the subjunctive interface, and relied less on interim marks on paper. Subjects also used fewer interface actions. However, we found no reduction in task completion time, mainly because some subjects encountered problems in using the facilities for setting up and controlling scenarios. Based on a detailed analysis of subjects' actions we redesigned the subjunctive interface to alleviate frequent problems, such as accidentally adjusting only one scenario when the intention was to adjust them all. At the end of a second, five-session experiment, users of this redesigned interface completed tasks 27% more quickly than with the simple interface.
A. Lunzer, K. Hornbæk. "Usability studies on a visualisation for parallel display and control of alternative scenarios." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989882.
Increasingly, rich and dynamic content and abundant links are making Web pages visually cluttered and widening the accessibility divide for the disabled and people with impairments. The adaptations approach of transforming Web pages has enabled users with diverse abilities to access a Web page. However, the challenge remains for these users to work with a Web page, particularly among people with minimal Web experience and cognitive limitations. We propose that scaffolding can allow users to learn certain skills that help them function online with greater autonomy. In the case of visually cluttered Web pages, several accessibility scaffoldings were created to enable users to learn where core content begins, how text flows in a part of a Web page, and what the overall structure of a Web page is. These scaffoldings expose the elements, pathways, and organization of a Web page that enable users to interpret and grasp the structure of a Web page. We present the concept of an accessibility scaffolding, the designs of the scaffoldings for visually cluttered pages, and user feedback from people who work with our target end-users.
Alison Lee. "Scaffolding visually cluttered web pages to facilitate accessibility." Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '04), May 25, 2004. DOI: 10.1145/989863.989875.