K. Verbert, Denis Parra, Peter Brusilovsky, E. Duval
Research on recommender systems has traditionally focused on the development of algorithms to improve the accuracy of recommendations. So far, little research has been done to enable user interaction with such systems as a basis for exploration and control by end users. In this paper, we present our research on the use of information visualization techniques to interact with recommender systems. We investigated how information visualization can improve user understanding of the typically black-box rationale behind recommendations, in order to increase their perceived relevance and meaning and to support exploration and user involvement in the recommendation process. Our study was performed using TalkExplorer, an interactive visualization tool developed for attendees of academic conferences. The results of user studies performed at two conferences yielded interesting insights for enhancing user interfaces that integrate recommendation technology. More specifically, effectiveness and probability of item selection both increase when users are able to explore and interrelate multiple entities, i.e. items bookmarked by users, recommendations, and tags.
"Visualizing recommendations to support exploration, transparency and controllability." IUI: International Conference on Intelligent User Interfaces, 2013-03-19. DOI: 10.1145/2449396.2449442
We present a study exploring upper-body 3D spatial interaction metaphors for control and communication with Unmanned Aerial Vehicles (UAVs) such as the Parrot AR Drone. We discuss the design and implementation of five interaction techniques using the Microsoft Kinect, based on metaphors inspired by UAVs, to support the variety of flying operations a UAV can perform. The techniques include a first-person interaction metaphor, where a user takes a pose like a winged aircraft; a game controller metaphor, where a user's hands mimic the control movements of console joysticks; "proxy" manipulation, where the user imagines manipulating the UAV as if it were in their grasp; and a pointing metaphor, in which the user assumes the identity of a monarch and commands the UAV as such. We examine qualitative metrics such as perceived intuitiveness, usability and satisfaction, among others. Our results indicate that novice users prefer certain 3D spatial techniques over the smartphone application bundled with the AR Drone. We also discuss trade-offs in the technique design metrics based on results from our study.
Kevin P. Pfeil, S. Koh, J. Laviola. "Exploring 3d gesture metaphors for interaction with unmanned aerial vehicles." IUI: International Conference on Intelligent User Interfaces, 2013-03-19. DOI: 10.1145/2449396.2449429
F. Putze, Jutta Hild, Rainer Kärgel, C. Herff, Alexandra Redmann, J. Beyerer, Tanja Schultz
In expert video analysis, the selection of certain events in a continuous video stream is a frequently occurring operation, e.g., in surveillance applications. Due to the dynamic and rich visual input, the constantly high attention demanded, and the hand-eye coordination required for mouse interaction, this is a very demanding and exhausting task. Hence, relevant events might be missed. We propose to use eye tracking and electroencephalography (EEG) as additional input modalities for event selection. From eye tracking, we derive the spatial location of a perceived event, and from patterns in the EEG signal we derive its temporal location within the video stream. This reduces the amount of active user input required in the selection process, and thus has the potential to reduce the user's workload. In this paper, we describe the methods employed for the two localization processes and introduce the scenario developed to investigate the feasibility of this approach. Finally, we present and discuss results on the accuracy and speed of the method and investigate how the modalities interact.
"Locating user attention using eye tracking and EEG for spatio-temporal event selection." IUI: International Conference on Intelligent User Interfaces, 2013-03-19. DOI: 10.1145/2449396.2449415
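The fusion described in this abstract — a spatial cue from gaze and a temporal cue from EEG — can be illustrated with a minimal sketch. All names, data structures and thresholds here are hypothetical simplifications; the paper's actual EEG pattern-detection pipeline is far more involved.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float  # time within the video stream (seconds)
    x: float  # horizontal on-screen position (pixels)
    y: float  # vertical on-screen position (pixels)

def select_event(candidates, gaze, eeg_trigger_t,
                 max_dist=100.0, max_lag=0.5):
    """Pick the candidate event nearest the gaze point (spatial cue)
    among those close to the EEG-detected trigger time (temporal cue)."""
    gx, gy = gaze
    # Temporal filter: keep events near the EEG-derived timestamp.
    near = [e for e in candidates if abs(e.t - eeg_trigger_t) <= max_lag]
    # Spatial choice: nearest to the gaze fixation, within a radius.
    best = None
    for e in near:
        d = ((e.x - gx) ** 2 + (e.y - gy) ** 2) ** 0.5
        if d <= max_dist and (best is None or d < best[0]):
            best = (d, e)
    return best[1] if best else None
```

The point of the sketch is the division of labor: the EEG channel narrows the search in time, after which gaze disambiguates between spatially separated candidates, so neither modality alone has to carry the full selection.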
Over the past few years, Augmented Reality (AR) has become widely popular in the form of smartphone applications; however, most smartphone-based AR applications are limited in user interaction and do not support gesture-based direct manipulation of the augmented scene. In this paper, we introduce a new AR interaction methodology, employing users' hands and fingers to interact with the virtual (and possibly physical) objects that appear on the mobile phone screen. The goal of this project was to support different types of interaction (selection, transformation, and fine-grained control of an input value) while keeping the hand-detection methodology as simple as possible to maintain good performance on smartphones. We evaluated our methods in user studies, collecting task performance data and user impressions about this direct way of interacting with augmented scenes through mobile phones.
Wendy H. Chun, Tobias Höllerer. "Real-time hand interaction for augmented reality on mobile phones." IUI: International Conference on Intelligent User Interfaces, 2013-03-19. DOI: 10.1145/2449396.2449435
F. Bakalov, Marie-Jean Meurs, B. König-Ries, Bahar Sateli, R. Witte, G. Butler, A. Tsang
Personalization is nowadays a commodity across a broad spectrum of computer systems. Examples range from online shops recommending products based on a user's previous purchases to web search engines ranking hits based on the user's browsing history. The aim of such adaptive behavior is to help users find relevant content more easily and quickly. However, this behavior also has negative aspects. Adaptive systems have been criticized for violating the usability principles of direct-manipulation systems, namely controllability, predictability, transparency, and unobtrusiveness. In this paper, we propose an approach to controlling adaptive behavior in recommender systems. It allows users to get an overview of personalization effects, view the user profile used for personalization, and adjust the profile and personalization effects to their needs and preferences. We present this approach using the example of a personalized portal for biochemical literature, whose users are biochemists, biologists and genomicists. We also report on a user study evaluating the impact of controllable personalization on the usefulness, usability, user satisfaction, transparency, and trustworthiness of personalized systems.
"An approach to controlling user models and personalization effects in recommender systems." IUI: International Conference on Intelligent User Interfaces, 2013-03-19. DOI: 10.1145/2449396.2449405
The advent of real-time traffic streaming offers users the opportunity to visualise current traffic conditions and congestion information. However, real-time information highlighting the underlying reasons for tail-backs remains largely unexplored. Broken traffic lights, an accident, a large concert, or road-works reveal important information for citizens and traffic operators alike. Providing such information in real time requires intelligent mechanisms and user interfaces in order to (i) harness heterogeneous data sources (volume, velocity, variety, veracity) and (ii) make the derived knowledge consumable, so that users can visualize traffic conditions and congestion information and make better routing decisions while travelling. This work focuses on surfacing relevant information and explaining the underlying reasons behind traffic conditions. To this end, static data from event providers and planned road works, together with dynamically emerging events such as traffic accidents, localized weather conditions or unplanned obstructions captured through social media, are fused to provide users with real-time feedback highlighting the causes of traffic congestion.
E. Daly, F. Lécué, V. Bicer. "Westland row why so slow?: fusing social media and linked data sources for understanding real-time traffic conditions." IUI: International Conference on Intelligent User Interfaces, 2013-03-19. DOI: 10.1145/2449396.2449423
This paper explores techniques for visualising display changes in multi-display environments. We present four subtle gaze-dependent techniques for visualising change on unattended displays called FreezeFrame, PixMap, WindowMap and Aura. To enable the techniques to be directly deployed to workstations, we also present a system that automatically identifies the user's eyes using computer vision and a set of web cameras mounted on the displays. An evaluation confirms this system can detect which display the user is attending to with high accuracy. We studied the efficacy of the visualisation techniques in a five-day case study with a working professional. This individual used our system eight hours per day for five consecutive days. The results of the study show that the participant found the system and the techniques useful, subtle, calm and non-intrusive. We conclude by discussing the challenges in evaluating intelligent subtle interaction techniques using traditional experimental paradigms.
Jakub Dostal, P. Kristensson, A. Quigley. "Subtle gaze-dependent techniques for visualising display changes in multi-display environments." IUI: International Conference on Intelligent User Interfaces, 2013-03-19. DOI: 10.1145/2449396.2449416
CloudPrimer is a tablet-based interactive reading primer that aims to foster early literacy skills and shared parent-child reading through user-targeted discussion topic suggestions. The tablet application records discussions between parents and children as they read a story and leverages this information, in combination with a commonsense knowledge base, to develop discussion topic models. The long-term goal of the project is to use such models to provide context-sensitive discussion topic suggestions to parents during the shared reading activity, in order to enhance the interactive experience and foster parental engagement in literacy education. In this paper, we present a novel approach to using commonsense reasoning to effectively model topics of discussion in unstructured dialog. We introduce a metric for localizing concepts that the users are interested in at a given moment in the dialog and extract a time sequence of words of interest. We then present algorithms for topic modeling and refinement that leverage semantic knowledge acquired from ConceptNet, a commonsense knowledge base. We evaluate the performance of our algorithms using transcriptions of audio recordings of parent-child pairs interacting with a tablet application, and compare the output of our algorithms to human-generated topics. Our results show that the words of interest and discussion topics selected by our algorithm closely match those identified by human readers.
Adrian Boteanu, S. Chernova. "Modeling discussion topics in interactions with a tablet reading primer." IUI: International Conference on Intelligent User Interfaces, 2013-03-19. DOI: 10.1145/2449396.2449409
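As a toy illustration of the pipeline this abstract sketches (extract recurring words of interest, then score candidate topics against a commonsense knowledge base), the following uses a tiny hand-made relatedness table in place of ConceptNet. The function names, the recurrence-based interest metric, and the counting-based scoring rule are all hypothetical simplifications, not the paper's algorithms.

```python
from collections import Counter

# Stand-in for ConceptNet: pairs of concepts the knowledge base links.
# (Hypothetical data; the paper queries the real ConceptNet.)
RELATED = {
    ("dog", "animal"), ("cat", "animal"), ("bone", "dog"),
    ("farm", "animal"), ("tractor", "farm"),
}

def related(a, b):
    """Symmetric lookup in the toy relatedness table."""
    return a == b or (a, b) in RELATED or (b, a) in RELATED

def words_of_interest(utterances, min_count=2):
    """Localize concepts the speakers dwell on: words that recur
    across the dialog window."""
    counts = Counter(w for u in utterances for w in u.lower().split())
    return [w for w, c in counts.items() if c >= min_count]

def score_topic(topic, interest_words):
    """Rate a candidate topic by how many words of interest the
    knowledge base links it to."""
    return sum(1 for w in interest_words if related(topic, w))

def best_topic(candidate_topics, utterances):
    words = words_of_interest(utterances)
    return max(candidate_topics, key=lambda t: score_topic(t, words))
```

For example, over the utterances "the dog sees a cat", "the dog wants a bone", "a cat runs", the recurring words include "dog" and "cat", and the topic "animal" outscores "farm" because the table links it to both.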
Sven Buschbeck, A. Jameson, T. Schneeberger, R. Woll
Intelligent technologies have been used in various ways to support more effective representation and processing of media and documents in terms of the events that they refer to. This demo presents some innovations that have been introduced in a web-based interface to a repository of media and documents that are organized in terms of hierarchically structured events.
"A web-based user interface for interaction with hierarchically structured events." IUI: International Conference on Intelligent User Interfaces, 2012-02-14. DOI: 10.1145/2166966.2167038
In this paper we analyze pointing techniques for simple remote control of nearby and distant objects in an outdoor environment, using a mobile phone. In an experiment we determine the accuracy of pointing at targets from a few meters to a few hundred meters away, either by focusing the phone's camera on a target or holding the phone at waist level in the direction of the target. We describe a simulated network application in which users can activate and control one or more responsive objects using either interaction technique.
YangLei Zhao, Arpan Chakraborty, Kyung Wha Hong, Shishir Kakaraddi, R. Amant. "Pointing at responsive objects outdoors." IUI: International Conference on Intelligent User Interfaces, 2012-02-14. DOI: 10.1145/2166966.2167018
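One generic way to realize the waist-level pointing condition described in the last abstract is to compare the phone's compass heading against the great-circle bearing from the phone's GPS fix to each candidate object. This is a geometric sketch under assumed sensor inputs, not the authors' implementation, and the tolerance value is an arbitrary assumption.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the phone to the target, in degrees
    clockwise from north (standard forward-azimuth formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def is_pointing_at(heading_deg, lat1, lon1, lat2, lon2, tol_deg=10.0):
    """True if the phone's compass heading falls within tol_deg of the
    bearing to the target, wrapping correctly around 360 degrees."""
    diff = abs(heading_deg - bearing_deg(lat1, lon1, lat2, lon2)) % 360.0
    return min(diff, 360.0 - diff) <= tol_deg
```

A plausible design note: distant targets subtend a smaller angular error for the same lateral offset, so a fixed angular tolerance effectively demands less absolute precision the farther away the target is, consistent with pointing accuracy being studied across distances from a few meters to a few hundred meters.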