Proceedings of the 20th International Conference on Intelligent User Interfaces Companion (IUI '15 Companion), March 29 – April 1, 2015, Atlanta, GA, USA.

Title: Extended Virtual Presence of Therapists through Home Service Robots
Authors: Hee-Tae Jung
DOI: https://doi.org/10.1145/2732158.2732170
Title: OfficeHours: A System for Student Supervisor Matching through Reinforcement Learning
Authors: Yuan Gao, K. Ilves, D. Glowacka
DOI: https://doi.org/10.1145/2732158.2732189

Abstract: We describe OfficeHours, a recommender system that assists students in finding potential supervisors for their dissertation projects. OfficeHours is an interactive recommender system that combines reinforcement learning techniques with a novel interface that assists students in formulating their queries and allows them to actively direct their search. Students can directly manipulate document features (keywords) extracted from scientific articles written by faculty members to indicate their interests, and reinforcement learning is used to model the student's interests by allowing the system to trade off between exploration and exploitation. The goal of the system is to give students the opportunity to search more effectively for possible project supervisors in situations where they may have difficulty formulating a query, or where faculty members' websites offer little information about their research interests.
Title: Visual Text Analytics for Asynchronous Online Conversations
Authors: Enamul Hoque
DOI: https://doi.org/10.1145/2732158.2732160

Abstract: In the last decade, online conversations have grown exponentially thanks to the rise of social media. Analyzing and gaining insights from such conversations can be quite challenging for a user, especially when the discussions become very long. In my doctoral research, I investigate how to integrate Information Visualization with Natural Language Processing techniques to better support the user's task of exploring and analyzing conversations. For this purpose, I consider the following approaches: applying design study methodology from InfoVis to uncover data and task abstractions; applying NLP methods to extract the identified data in support of those tasks; and incorporating human feedback into the text analysis process when the extracted data is noisy or does not match the user's mental model and current tasks. Through a set of design studies, I aim to evaluate the effectiveness of these approaches.
Title: MindMiner: Quantifying Entity Similarity via Interactive Distance Metric Learning
Authors: Xiangmin Fan, Youming Liu, Nan Cao, Jason I. Hong, Jingtao Wang
DOI: https://doi.org/10.1145/2732158.2732173

Abstract: We present MindMiner, a mixed-initiative interface for capturing subjective similarity measurements via a combination of new interaction techniques and machine learning algorithms. MindMiner collects qualitative, hard-to-express similarity measurements from users via active polling with uncertainty and example-based visual constraint creation. MindMiner also formulates human prior knowledge into a set of inequalities and learns a quantitative similarity distance metric via convex optimization. In a 12-participant peer-review understanding task, we found that MindMiner was easy to learn and use, and could capture users' implicit knowledge about writing performance and cluster target entities into groups that matched subjects' mental models.
Title: From "Overview" to "Detail": An Exploration of Contextual Transparency for Public Transparent Interfaces
Authors: Heesun Kim, Bo Kyung Huh, S. Im, H. Joung, G. Kwon, Ji-Hyung Park
DOI: https://doi.org/10.1145/2732158.2732186

Abstract: This study explores the contextual transparency of information presented on public transparent user interfaces while maintaining adequate legibility. To address this issue, we investigate the relationship between information and transparency in a shop context. In this paper, we present an experiment that examines the effects of transparency level, user proximity, and information type on legibility with a public transparent information system. We report significant effects on performance and legibility, and the results indicate that the appropriate contextual transparency depends on the user's proximity and on whether the user focuses on the information or on the environment. In addition, transparency levels of 50% and below (the 25% and 50% levels) suit closer proximity, while 50% transparency offers a more harmonious view in a distant context. We also discuss the implications of these results for the usability of public transparent user interfaces, along with design recommendations.
Title: A Model for Data-Driven Sonification Using Soundscapes
Authors: KatieAnna Wolf, Genna Gliner, R. Fiebrink
DOI: https://doi.org/10.1145/2732158.2732188

Abstract: A sonification is a rendering of audio in response to data, used in instances where visual representations of data are impossible, difficult, or unwanted. Designing sonifications often requires knowledge in multiple areas as well as an understanding of how end users will use the system. This makes sonification an ideal candidate for end-user development, where the user plays a role in creating the design. We present a model for sonification that utilizes user-specified examples and data to generate cross-domain mappings from data to sound. As a novel contribution, we utilize soundscapes (acoustic scenes) for these user-selected examples to define a structure for the sonification. We demonstrate a proof of concept of our model using sound examples and discuss how we plan to build on this work in the future.
Title: A Revisit to The Identification of Contexts in Recommender Systems
Authors: Yong Zheng
DOI: https://doi.org/10.1145/2732158.2732167

Abstract: In contrast to traditional recommender systems (RS), context-aware recommender systems (CARS) emerged to adapt to users' preferences in various contextual situations. Over the years, different context-aware recommendation algorithms have been developed, demonstrating the effectiveness of CARS. However, the field has yet to agree on a definition of context: researchers may incorporate a diverse set of variables (e.g., user profiles or item features), which blurs the line between content-based RS and context-based RS and raises the problem of context identification in CARS. In this paper, we revisit the definition of context in recommender systems and propose a context identification framework to clarify the preliminary selection of contextual variables, which may further assist the interpretation of contextual effects in RS.
Title: Robot Companions and Smartpens for Improved Social Communication of Dementia Patients
Authors: Alexander Prange, Indra Praveen Sandrala, Markus Weber, Daniel Sonntag
DOI: https://doi.org/10.1145/2732158.2732174

Abstract: In this demo paper we describe how a digital pen and a humanoid robot companion can improve the social communication of dementia patients. We propose the use of NAO, a humanoid robot, as a companion to the dementia patient, continuously monitoring his or her activities and providing cognitive assistance in daily life situations. For example, patients can communicate with NAO through natural language via the speech dialogue functionality we integrated. Most importantly, to improve communication (i.e., sending digital messages such as texts and emails), we propose the use of a smartpen: patients write messages on normal paper printed with an invisible dot pattern, which triggers handwriting and sketch recognition in real time. The smartpen application is embedded into the human-robot speech dialogue.
Title: Know your Surroundings with an Interactive Map
Authors: S. Dey
DOI: https://doi.org/10.1145/2732158.2732169

Abstract: The advancement of mobile technology has inspired research communities to achieve centimeter-level accuracy in indoor positioning systems [2]. But to get the best out of it, we need assistive navigation applications that not only help us reach a destination quickly but also make us familiar with the surroundings. To address this concern, we propose a two-stage approach that helps pedestrians navigate an indoor location while enhancing their spatial awareness. In the first stage, we will conduct a behavioral user study to identify prominent behavioral patterns during different navigational challenges. Once the behavioral state model is prepared, we will analyze the sensor data in multiple dimensions and build a dynamic sensor state model. This model will enable us to map the behavioral state model to the sensor states and draw a direct one-to-one relation between the two.
Title: An Interactive Pedestrian Environment Simulator for Cognitive Monitoring and Evaluation
Authors: J. Orlosky, Markus Weber, Yecheng Gu, Daniel Sonntag, Sergey Sosnovsky
DOI: https://doi.org/10.1145/2732158.2732175

Abstract: Recent advances in virtual and augmented reality have led to the development of a number of simulations for different applications. In particular, simulations for monitoring, evaluation, training, and education have started to emerge in the consumer market due to the availability and affordability of immersive display technology. In this work, we introduce a virtual reality environment that provides an immersive traffic simulation designed to observe behavior and to monitor relevant skills and abilities of pedestrians who may be at risk, such as elderly persons with cognitive impairments. The system provides basic reactive functionality, such as displaying navigation instructions and notifying the user of dangerous obstacles during navigation tasks. Methods for interaction using hand and arm gestures are also implemented to allow users to explore the environment in a more natural manner.