USHER: An Intelligent Tour Companion
Shubham Toshniwal, Parikshit Sharma, Saurabh Srivastava, Richa Sehgal
DOI: 10.1145/2732158.2732187
Proceedings of the 20th International Conference on Intelligent User Interfaces Companion, March 29, 2015

Abstract: Audio guides have been the prevalent mode of information delivery in public spaces such as museums and art galleries. These devices are programmed to render static information about the collections and artworks on display and require human input to operate. Their inability to deliver contextual messages automatically and their lack of interactivity are major hurdles to a rich and seamless user experience. Ubiquitous smartphones can be leveraged to create pervasive audio guides that provide a rich, personalized experience. In this paper, we present the design and implementation of "Usher", an intelligent tour companion. Usher provides three distinct advantages over traditional audio guides. First, it uses smartphone sensors to infer user context, such as physical location, locomotive state, and orientation, and delivers relevant information accordingly. Second, it provides an interface to a cognitive question-answering (QA) service that answers inquisitive users' contextual queries. Finally, it notifies users when any of their social media friends are in the vicinity. By seamlessly tracking user context to provide rich semantic information and answering contextual queries, Usher can substantially enhance the user experience in a museum.
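The context-fusion step the Usher abstract describes can be pictured with a minimal sketch. This is not the authors' implementation; all names, thresholds, and the exhibit map are hypothetical, and it assumes a coarse zone label (e.g. from Wi-Fi or BLE beacons), a walking speed from the accelerometer, and a compass heading are already available.

```python
# Hypothetical sketch of sensor-based context inference: decide which
# exhibit (if any) a visitor is attending to, given zone, speed, heading.

def infer_context(zone, speed_mps, heading_deg, exhibits):
    """Pick the exhibit the visitor is likely looking at.

    zone        -- coarse indoor-location label (e.g. "hall-a")
    speed_mps   -- walking speed estimated from the accelerometer
    heading_deg -- compass orientation of the phone
    exhibits    -- {name: (zone, bearing_deg)} map of artworks
    """
    if speed_mps > 0.5:            # visitor is walking between works: stay silent
        return None
    candidates = [
        # angular distance between phone heading and exhibit bearing
        (abs((bearing - heading_deg + 180) % 360 - 180), name)
        for name, (z, bearing) in exhibits.items()
        if z == zone
    ]
    # choose the exhibit whose bearing best matches the phone's heading
    return min(candidates)[1] if candidates else None
```

A real system would smooth the sensor streams and handle localization error, but the shape of the decision (location narrows the candidates, motion gates delivery, orientation picks the target) is the same.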
Multimodal Interactive Machine Learning for User Understanding
Xuan Guo
DOI: 10.1145/2732158.2732166

Abstract: Designing intelligent computer interfaces requires human intelligence, which can be captured through multimodal sensors during human-computer interactions. These data modalities may involve users' language, vision, and body signals, which shed light on different aspects of human cognition and behavior. I propose to integrate multimodal data to understand users more effectively during interactions. Since users' manipulation of big data (e.g., texts, images, videos) through interfaces can be computationally intensive, an interactive machine learning framework will be constructed in an unsupervised manner.
Perceptive Home Energy Interfaces: Navigating the Dynamic Household Structure
Germaine Irwin
DOI: 10.1145/2732158.2732163

Abstract: Much discussion has taken place regarding environmental sustainability, fossil fuels, and other efforts to reverse the trend of global climate change. Unfortunately, individuals often choose the path of least resistance when making home energy decisions. It is therefore imperative to consider all of the underlying causes that influence home energy consumption in order to build a more perceptive interface that addresses the variety of household occupants' needs. This research explores the dynamic nature of household occupancy, individual comfort, and the situational variables that impact home energy consumption, with the goal of discovering critical design factors for building novel home energy system interfaces.
User-Interfaces for Incremental Recipient and Response Time Predictions in Asynchronous Messaging
Connor Hamlet, Daniel Korn, Nikhil Prasad, Volodymyr Siedlecki, Eliezer Encarnacion, Jacob W. Bartel, P. Dewan
DOI: 10.1145/2732158.2732172

Abstract: We have created a set of both existing and novel predictive user interfaces for exchanging messages in asynchronous collaborative systems such as email and internet communities. These interfaces support predictions of tags, hierarchical recipients, and message response times. The predictions are made incrementally, as messages are composed, and are offered to both senders and receivers of messages. The user interfaces are implemented by a test-bed that also supports experiments to evaluate them: it can automate the actions of the collaborators with whom a subject exchanges messages, replay user actions, and gather and display effort and correctness metrics for the predictions. Collaborator actions and predictions are specified using a declarative mechanism. A video demonstration of this work is available at http://youtu.be/NJt9Rfqb1ko.
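To make "incremental recipient prediction" concrete, here is a toy sketch of one common approach (not the authors' algorithm): given the recipients entered so far, rank historical co-recipients by how often they completed a past recipient group containing that prefix. The function and variable names are invented for illustration.

```python
# Toy incremental recipient prediction: rank likely next recipients from
# the sets of recipients used on past messages.
from collections import Counter

def predict_recipients(history, entered, k=3):
    """history -- iterable of past recipient sets
    entered -- recipients already typed into the current message
    Returns up to k suggested additional recipients, most likely first."""
    entered = set(entered)
    scores = Counter()
    for group in history:
        group = set(group)
        if entered <= group:              # past message covers the current prefix
            for r in group - entered:     # score its remaining recipients
                scores[r] += 1
    return [r for r, _ in scores.most_common(k)]
```

Each time the sender adds a recipient, the prediction is simply re-run with the longer prefix, which is what makes the interface incremental.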
Framework for Realizing a Free-Target Eye-tracking System
Daiki Sakai, Michiya Yamamoto, Takashi Nagamatsu
DOI: 10.1145/2732158.2732184

Abstract: Various eye-trackers have recently become commercially available, and research continues toward higher-specification eye-tracking systems. In particular, studies have shown that conventional eye-trackers are rather inflexible in layout: the cameras, the light sources, and the user's position are fixed, and only a predefined plane can serve as the target of eye tracking. In this study, we propose a new framework that we call a Free-Target Eye-tracking System, which consists of eye-tracking hardware and a hardware layout solver. We developed a prototype of the hardware layout solver and demonstrated its effectiveness.
Assisting End Users in the Design of Sonification Systems
KatieAnna Wolf
DOI: 10.1145/2732158.2732165

Abstract: In my dissertation I plan to explore the design of digital systems and how we can support users in the design process. Specifically, I focus on the design of sonifications: representations of data using sound. Creating the algorithm that maps data to sound is not an easy task, as there is much to consider: an individual's aesthetic preferences, the multiple dimensions of sound, the complexity of the data to be represented, and previously developed theories for how to convey information using sound. This makes it an ideal domain for end-user development and data-driven design creation.
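The "algorithm that maps data to sound" can be as simple as a linear parameter mapping, one of many mappings a design-support tool like the one proposed here would let end users tweak. The sketch below is illustrative only (the note range and rounding policy are arbitrary choices, not from the paper).

```python
# Minimal sonification mapping: scale data values linearly into a MIDI
# pitch range, so larger values sound higher.

def map_to_pitch(values, low=48, high=84):
    """Map each data value into MIDI notes [low, high] (C3..C6 here)."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1          # avoid divide-by-zero on flat data
    return [round(low + (v - vmin) / span * (high - low)) for v in values]
```

An end-user design tool would expose exactly these degrees of freedom (range, scale shape, which sound dimension receives the data) as adjustable parameters rather than code.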
Hairware: Conductive Hair Extensions as a Capacitive Touch Input Device
K. Vega, Márcio Cunha, H. Fuks
DOI: 10.1145/2732158.2732176

Abstract: Our aim is to use our own bodies as an interactive platform, moving away from traditional wearable devices worn on clothing and accessories, where gestures are conspicuous and evoke a cyborg-like appearance. We follow the Beauty Technology paradigm, which uses the body's surface as an interactive platform by integrating technology into beauty products applied directly to the skin, fingernails, and hair. We propose Hairware, a Beauty Technology prototype that connects chemically metalized hair extensions to a microcontroller, turning them into an input device for triggering different objects. Hairware acts as a capacitive touch sensor that detects touch variations on the hair and uses machine learning algorithms to recognize the user's intention. In this way, we add new functionality to hair extensions, producing a seamless device that recognizes auto-contact behaviors that no observer would identify. This work presents the design of Hairware's hardware and software implementation. In this demo, we show Hairware acting as a controller for smartphones and computers.
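The sensing idea behind Hairware, classifying a capacitive time series into touch gestures, can be sketched with a simple duration-based rule. This stands in for the paper's machine-learning classifier; the threshold and window values are hypothetical.

```python
# Illustrative touch classifier: a short run of above-threshold capacitance
# readings is a "tap", a long run is a "stroke" along the hair.

def classify_touch(samples, threshold=100, tap_max_len=3):
    """samples -- capacitance readings taken at a fixed sampling rate."""
    run = max_run = 0
    for s in samples:
        run = run + 1 if s > threshold else 0   # length of current contact
        max_run = max(max_run, run)
    if max_run == 0:
        return "none"
    return "tap" if max_run <= tap_max_len else "stroke"
```

The actual system learns these boundaries from data rather than hard-coding them, which is what lets it separate deliberate triggers from everyday auto-contact behaviors.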
AmbLEDs: Context-Aware I/O for AAL Systems
Márcio Cunha
DOI: 10.1145/2732158.2732168

Abstract: Ambient Assisted Living (AAL) applications aim to allow elderly, sick, and disabled people to stay safely at home while being collaboratively assisted by their family, friends, and medical staff. AAL combined with the Internet of Things (IoT) introduces a new healthcare connectivity paradigm that interconnects mobile apps and sensors, allowing constant monitoring of the patient. In this thesis proposal we present AmbLEDs, an ambient light sensing system that hides technology in light fixtures, as an alternative to spreading sensors that are perceived as invasive, such as cameras, microphones, microcontrollers, tags, or wearables. The goal is to create a crowdware, ubiquitous, context-aware interface for recognizing, reporting, and alerting on home environmental changes and human activities, in support of continuous proactive care.
Mechanix: A Sketch-Based Educational Interface
Trevor Nelligan, Seth Polsley, Jaideep Ray, Michael Helms, J. Linsey, T. Hammond
DOI: 10.1145/2732158.2732194

Abstract: At the university level, high enrollment numbers can be overwhelming for professors and teaching assistants to manage. Grading assignments and tests for hundreds of students is time consuming and has led to a push for software-based learning in large university classes. Unfortunately, traditional quantitative question-and-answer mechanisms are often insufficient for STEM courses, which focus on problem-solving techniques rather than simply finding the "right" answers. Working through problems by hand can be important for memory retention, so for software learning systems to be effective in STEM courses, they should be able to intelligently understand students' sketches. Mechanix is a sketch-based system that allows students to step through problems designed by their instructors, with personalized feedback and optimized interface controls. Optimizations such as color-coding, menu bar simplification, and tool consolidation are recent improvements to Mechanix that further its aim of engaging and motivating students in learning.
Automatic Generation and Insertion of Assessment Items in Online Video Courses
Amrith Krishna, Plaban Kumar Bhowmick, K. Ghosh, Archana Sahu, Subhayan Roy
DOI: 10.1145/2732158.2732183

Abstract: In this paper, we propose a prototype system for the automatic generation and insertion of assessment items in online video courses. The proposed system analyzes the text transcript of a requested video lecture and suggests self-assessment items at runtime through automatic discourse segmentation and question generation. To deal with the problem of generating questions from noisy transcriptions, the system relies on semantically similar Wikipedia text segments. We base our study on a popular video lecture portal, the National Programme on Technology Enhanced Learning (NPTEL); however, the system can be adapted to other portals as well.
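The backoff-to-Wikipedia idea can be illustrated with a simplified sketch: instead of generating a question directly from a noisy transcript sentence, find the most lexically similar sentence in a clean reference text and blank out a content word to form a fill-in-the-blank item. This uses plain token-overlap (Jaccard) similarity as a stand-in for the paper's semantic similarity, and every name below is hypothetical.

```python
# Simplified cloze-question generation: back off from a noisy transcript
# sentence to its closest clean reference sentence, then blank a keyword.

def jaccard(a, b):
    """Token-overlap similarity between two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def make_cloze(noisy_sentence, reference_sentences, keyword):
    """Pick the cleanest paraphrase of the noisy sentence, blank the keyword."""
    best = max(reference_sentences, key=lambda s: jaccard(noisy_sentence, s))
    return best.replace(keyword, "_____")
```

Generating from the clean paraphrase rather than the transcript itself is what shields the question text from recognition errors.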