Providing intelligent help across applications in dynamic user and environment contexts
Ashwin Ramachandran, R. Young. Proceedings of the 10th International Conference on Intelligent User Interfaces (IUI 2005). doi:10.1145/1040830.1040893

The problem of providing help for complex application interfaces has been a source of interest for a number of research efforts. As the computational power of computers increases, typical applications grow not only in functionality but also in the degree of interaction with the computational environment in which they reside. This paper describes an ongoing project to design an Intelligent Help System (IHS) that provides context sensitivity not only through its modeling of application states but also through its modeling of the interaction between applications, and between an application and the environment in which it resides.
Context-based similar words detection and its application in specialized search engines
H. Al-Mubaid, Ping Chen. IUI 2005. doi:10.1145/1040830.1040890

This paper presents a new context-based method for the automatic detection and extraction of similar and related words from texts. Finding similar words is an important task for many NLP applications, including anaphora resolution, document retrieval, text segmentation, and text summarization. Here we use word similarity to improve search quality for search engines in general and specific domains. Our method is based on rules for extracting the words in the neighborhood of a target word, then connecting these with the surroundings of other occurrences of the same word in the training text corpus. This is ongoing work and is still under extensive testing; the preliminary results, however, are promising and encourage further work in this direction.
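The core idea in the abstract above, that words are related when they occur in similar neighborhoods, can be illustrated with a minimal sketch. The paper's actual method is rule-based; this sketch substitutes the simplest distributional stand-in (context-window count vectors compared by cosine similarity), and the toy corpus and window size are illustrative assumptions, not the authors' setup.

```python
from collections import Counter
from math import sqrt

def context_vectors(tokens, window=2):
    """Map each word to a bag of the words seen within +/-window positions
    of its occurrences across the corpus."""
    vecs = {}
    for i, w in enumerate(tokens):
        ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus in which "dog" and "fox" share identical surroundings.
tokens = ("the quick dog ran home the quick fox ran home "
          "a slow dog walked home a slow fox walked home").split()
vecs = context_vectors(tokens)
sim_related = cosine(vecs["dog"], vecs["fox"])     # high: shared contexts
sim_unrelated = cosine(vecs["dog"], vecs["home"])  # lower: different contexts
```

Words with interchangeable neighborhoods score near 1.0, which is the signal a specialized search engine could use to expand a query with related terms.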
Building intelligent shopping assistants using individual consumer models
Chad M. Cumby, A. Fano, R. Ghani, Marko Krema. IUI 2005. doi:10.1145/1040830.1040915

This paper describes an Intelligent Shopping Assistant, designed for a shopping-cart-mounted tablet PC, that enables individual interactions with customers. We use machine learning algorithms to predict a shopping list for the customer's current trip and present this list on the device. As customers navigate the store, personalized promotions are presented using consumer models derived from each individual's loyalty card data. For shopping assistant devices to be effective, we believe they must be powered by algorithms that are tuned to individual customers and can make accurate predictions about an individual's actions. We formally frame shopping list prediction as a classification problem, describe the algorithms and methodology behind our system, and show that shopping list prediction can be done with high levels of accuracy, precision, and recall. Beyond shopping list prediction, we briefly introduce other aspects of the project, such as the use of consumer models to select appropriate promotional tactics, and the development of promotion-planning simulation tools that let retailers plan personalized promotions delivered through such a shopping assistant.
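The abstract above frames shopping list prediction as a per-item classification problem: for each product, decide whether it belongs on the list for the current trip. A minimal sketch of that framing follows; the trivial purchase-rate threshold used here is a stand-in for the learned classifiers the paper describes, and the trip data and threshold are invented for illustration.

```python
def predict_shopping_list(past_trips, threshold=0.5):
    """Per-product binary prediction: put an item on the predicted list
    when its purchase rate over past trips exceeds `threshold`.
    A stand-in classifier; the paper trains one per item on richer features."""
    counts = {}
    for trip in past_trips:
        for item in set(trip):
            counts[item] = counts.get(item, 0) + 1
    n = len(past_trips)
    return sorted(item for item, c in counts.items() if c / n > threshold)

# Hypothetical loyalty-card history for one customer.
trips = [
    ["milk", "bread", "eggs"],
    ["milk", "bread", "coffee"],
    ["milk", "apples"],
    ["milk", "bread", "eggs"],
]
predicted = predict_shopping_list(trips)
# milk (4/4) and bread (3/4) exceed the 0.5 rate; eggs (2/4) does not.
```

Because each item is an independent binary decision, the accuracy, precision, and recall figures the paper reports can be computed per item and aggregated across trips.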
HMM-based efficient sketch recognition
T. M. Sezgin, Randall Davis. IUI 2005. doi:10.1145/1040830.1040899

Current sketch recognition systems treat sketches as images or a collection of strokes, rather than viewing sketching as an interactive and incremental process. We show how viewing sketching as an interactive process allows us to recognize sketches using Hidden Markov Models. We report results of a user study indicating that in certain domains people draw objects using consistent stroke orderings. We show how this consistency, when present, can be used to perform sketch recognition efficiently. This novel approach enables us to have polynomial time algorithms for sketch recognition and segmentation, unlike conventional methods with exponential complexity.
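The polynomial-time claim above rests on standard HMM decoding: if strokes arrive in a consistent order, Viterbi recovers the most likely labeling in O(T·N²) for T strokes and N states instead of enumerating labelings. The sketch below is a textbook Viterbi implementation; the two-state "body"/"wheel" model and the "line"/"circle" stroke primitives are invented for illustration and are not the paper's trained models.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence,
    computed in O(len(obs) * len(states)**2) time."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        prev, cur = V[-1], {}
        for s in states:
            # Best predecessor for state s at this time step.
            prob, path = max(
                (prev[p][0] * trans_p[p][s] * emit_p[s][o], prev[p][1] + [s])
                for p in states
            )
            cur[s] = (prob, path)
        V.append(cur)
    return max(V[-1].values())[1]

# Hypothetical two-part object: bodies are drawn with lines, wheels with circles.
states = ["body", "wheel"]
start_p = {"body": 0.8, "wheel": 0.2}
trans_p = {"body": {"body": 0.5, "wheel": 0.5},
           "wheel": {"body": 0.3, "wheel": 0.7}}
emit_p = {"body": {"line": 0.9, "circle": 0.1},
          "wheel": {"line": 0.2, "circle": 0.8}}
labels = viterbi(["line", "line", "circle", "circle"],
                 states, start_p, trans_p, emit_p)
# → ["body", "body", "wheel", "wheel"]
```

The stroke-ordering consistency the user study reports is what justifies modeling a sketch as a single left-to-right observation sequence in the first place.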
Active preference learning for personalized calendar scheduling assistance
M. Gervasio, Michael D. Moffitt, M. Pollack, Joseph M. Taylor, Tomás E. Uribe. IUI 2005. doi:10.1145/1040830.1040857

We present PLIANT, a learning system that supports adaptive assistance in an open calendaring system. PLIANT learns user preferences from the feedback that naturally occurs during interactive scheduling. It contributes a novel application of active learning in a domain where the choice of candidate schedules to present to the user must balance usefulness to the learning module with immediate benefit to the user. Our experimental results provide evidence of PLIANT's ability to learn user preferences under various conditions and reveal the tradeoffs made by the different active learning selection strategies.
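The tradeoff the PLIANT abstract describes, between a candidate schedule's informativeness to the learner and its immediate benefit to the user, can be sketched as a scored selection rule. The linear blend with a mixing weight `alpha`, and the example utility and uncertainty numbers, are illustrative assumptions, not PLIANT's actual selection strategies.

```python
def select_candidate(candidates, predicted_utility, uncertainty, alpha=0.5):
    """Pick the candidate schedule maximizing a blend of immediate user
    benefit and informativeness to the learner. alpha=0 is pure
    exploitation; alpha=1 is pure active learning (uncertainty sampling)."""
    def score(c):
        return (1 - alpha) * predicted_utility[c] + alpha * uncertainty[c]
    return max(candidates, key=score)

# Hypothetical scores: schedule A looks good but teaches the model little;
# schedule B is uncertain, so user feedback on it is informative.
utility = {"sched_a": 0.9, "sched_b": 0.4}
model_uncertainty = {"sched_a": 0.1, "sched_b": 0.9}
exploit = select_candidate(["sched_a", "sched_b"], utility, model_uncertainty, alpha=0.0)
explore = select_candidate(["sched_a", "sched_b"], utility, model_uncertainty, alpha=1.0)
```

Sweeping `alpha` between these extremes is one way to expose the tradeoffs among selection strategies that the paper's experiments examine.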
A framework for designing intelligent task-oriented augmented reality user interfaces
L. Bonanni, C. Lee, T. Selker. IUI 2005. doi:10.1145/1040830.1040913

A task-oriented space can benefit from an augmented reality interface that layers existing tools and surfaces with useful information to make cooking easier, safer, and more efficient. To serve experienced users as well as novices, augmented reality interfaces need to adapt modalities to the user's expertise and allow for multiple ways to perform tasks. We present a framework for designing an intelligent user interface that informs and choreographs multiple tasks in a single space according to a model of tasks and users. A residential kitchen has been outfitted with systems that gather data from tools and surfaces and project multi-modal interfaces back onto the tools and surfaces themselves. Based on user evaluations of this augmented reality kitchen, we propose a system that tailors information modalities to the spatial and temporal qualities of the task and to the user's expertise, location, and progress. The intelligent augmented reality user interface choreographs multiple tasks in the same space at the same time.
Interaction with embodied conversational agents
Lewis Johnson. IUI 2005. doi:10.1145/1040830.1040841

Embodied Conversational Agents (ECAs) are computer-controlled synthetic characters that can engage in dialog with users. This tutorial will present an overview of techniques and methods relating to the design, construction, and evaluation of ECAs that interact appropriately with users. It will introduce the major technologies for controlling ECA behavior. It will then consider the problem of how to design a successful interactive interface that incorporates ECAs. Finally, it will discuss how to evaluate ECA-enhanced interfaces, including evaluation methods and factors that can influence the evaluation.
Interfaces for networked media exploration and collaborative annotation
Preetha Appan, B. Shevade, H. Sundaram, David Birchfield. IUI 2005. doi:10.1145/1040830.1040860

In this paper, we present our efforts toward creating interfaces for networked media exploration and collaborative annotation. The problem is important because online social networks are emerging as conduits for the exchange of everyday experiences, yet these networks do not currently provide media-rich communication environments. Our approach has two parts: collaborative annotation and a media exploration framework. Collaborative annotation takes place through a web-based interface, which provides each user with personalized recommendations based on media features and a common-sense inference toolkit. We develop three media exploration interfaces that allow for two-way interaction among the participants: (a) spatio-temporal evolution, (b) event cones, and (c) viewpoint-centric interaction. We also analyze user activity to determine important people and events for each user, and develop subtle visual interface cues for activity feedback. Preliminary user studies indicate that the system performs well and is well liked by users.
Beyond personalization: the next stage of recommender systems research
M. V. Setten, S. McNee, J. Konstan. IUI 2005. doi:10.1145/1040830.1040839

This workshop intends to bring recommender systems researchers and practitioners together in order to discuss the current state of recommender systems research, both on existing and emerging research topics, and to determine how research in this area should proceed. We are at a pivotal point in recommender systems research where researchers are both looking inward at what recommender systems are and looking outward at where recommender systems can be applied, and the implications of applying them out 'in the wild.' This creates a unique opportunity to both reassess the current state of research and the directions research is taking in the near and long term.
TaskTracer: a desktop environment to support multi-tasking knowledge workers
Anton N. Dragunov, Thomas G. Dietterich, Kevin Johnsrude, Matthew R. McLaughlin, Lida Li, Jonathan L. Herlocker. IUI 2005. doi:10.1145/1040830.1040855

This paper reports on TaskTracer, a software system being designed to help highly multitasking knowledge workers rapidly locate, discover, and reuse the processes they used to successfully complete past tasks. The system monitors users' interaction with the computer, collects detailed records of their activities and the resources they access, associates (automatically or with the user's assistance) each interaction event with a particular task, and enables users to access records of past activities and quickly restore task contexts. We present a novel Publisher-Subscriber architecture for collecting and processing user activity data, describe several different user interfaces tried with TaskTracer, and discuss the possibility of applying machine learning techniques to recognize and predict users' tasks.
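The Publisher-Subscriber architecture the TaskTracer abstract mentions decouples the components that observe desktop events from the components that consume them (loggers, task associators, predictors). A minimal sketch of such an event bus follows; the event type, payload shape, and handler names are invented for illustration and are not TaskTracer's actual interfaces.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish-subscribe hub: consumers register per event type,
    so publishers never need to know who is listening."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subs[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver to every handler registered for this event type, in order.
        for handler in self._subs[event_type]:
            handler(payload)

bus = EventBus()
log = []
# Two independent consumers of the same activity stream: an activity
# logger and a task associator (both hypothetical stand-ins).
bus.subscribe("file.opened", lambda e: log.append(("logger", e["path"])))
bus.subscribe("file.opened", lambda e: log.append(("task_assoc", e["path"])))
bus.publish("file.opened", {"path": "report.doc"})
```

The decoupling is what lets new consumers, such as a machine-learned task predictor, be attached to the activity stream without touching the event sources.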