Although the exploration of variations is a key part of interface design, current processes for creating variations are mostly manual. We present Scout, a system that helps designers rapidly explore many variations through mixed-initiative interaction with high-level constraints and design feedback. Past constraint-based layout systems use low-level spatial constraints and mostly produce only a single design. Scout advances beyond these systems by introducing high-level constraints based on design concepts (e.g., emphasis). In Scout, we formalize several high-level constraints into corresponding low-level spatial constraints, enabling many designs to be generated rapidly through constraint solving and program synthesis.
{"title":"Scout: Mixed-Initiative Exploration of Design Variations through High-Level Design Constraints","authors":"Amanda Swearngin, Amy J. Ko, J. Fogarty","doi":"10.1145/3266037.3271626","DOIUrl":"https://doi.org/10.1145/3266037.3271626","url":null,"abstract":"Although the exploration of variations is a key part of interface design, current processes for creating variations are mostly manual. We present Scout, a system that helps designers explore many variations rapidly through mixed-initiative interaction with high-level constraints and design feedback. Past constraint-based layout systems use low-level spatial constraints and mostly produce only a single design. Scout advances upon these systems by introducing high-level constraints based on design concepts (e.g. emphasis). With Scout, we have formalized several high-level constraints into their corresponding low-level spatial constraints to enable rapidly generating many designs through constraint solving and program synthesis.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127107488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite advances in machine learning and deep neural networks, there is still a huge gap between machine and human image understanding. One of the causes is the annotation process used to label training images. In most image categorization tasks, some image categories are fundamentally ambiguous, and the underlying class probability ranges from clearly obvious cases to ambiguous ones. However, current machine learning systems and applications usually rely on discrete annotation processes, so the training labels do not reflect this ambiguity. To address this issue, we propose a new image annotation framework in which labeling incorporates human gaze behavior. In this framework, gaze behavior is used to predict image labeling difficulty, and the image classifier is then trained with sample weights defined by the predicted difficulty. We demonstrate our approach's effectiveness on four-class image classification tasks.
{"title":"Gaze-guided Image Classification for Reflecting Perceptual Class Ambiguity","authors":"Tatsuya Ishibashi, Yusuke Sugano, Y. Matsushita","doi":"10.1145/3266037.3266090","DOIUrl":"https://doi.org/10.1145/3266037.3266090","url":null,"abstract":"Despite advances in machine learning and deep neural networks, there is still a huge gap between machine and human image understanding. One of the causes is the annotation process used to label training images. In most image categorization tasks, there is a fundamental ambiguity between some image categories and the underlying class probability differs from very obvious cases to ambiguous ones. However, current machine learning systems and applications usually work with discrete annotation processes and the training labels do not reflect this ambiguity. To address this issue, we propose an new image annotation framework where labeling incorporates human gaze behavior. In this framework, gaze behavior is used to predict image labeling difficulty. The image classifier is then trained with sample weights defined by the predicted difficulty. We demonstrate our approach's effectiveness on four-class image classification tasks.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126990918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Symptoms of progressing dementia, such as memory loss, impaired executive function, and decreasing motivation, can gradually undermine instrumental activities of daily living (IADLs) such as cooking. Assistive technologies in the form of augmented reality (AR) have previously been applied to support cognitively impaired users during IADLs. In most cases, instructions were provided locally via projection or a head-mounted display (HMD) but lacked an incentive mechanism and the flexibility to support a broad range of use cases. To provide users and therapists with a holistic solution, we propose cARe, a framework that therapists can easily adapt to various use cases without any programming knowledge. Users are then guided through manual processes with localized visual and auditory cues rendered by an HMD. Our ongoing user study indicates that users are more comfortable and successful when cooking with cARe than with a printed recipe, which promises more dignified and autonomous living for dementia patients.
{"title":"cARe: An Augmented Reality Support System for Dementia Patients","authors":"Dennis Wolf, Daniel Besserer, K. Sejunaite, M. Riepe, E. Rukzio","doi":"10.1145/3266037.3266095","DOIUrl":"https://doi.org/10.1145/3266037.3266095","url":null,"abstract":"Symptoms of progressing dementia like memory loss, impaired executive function and decreasing motivation can gradually undermine instrumental activities of daily living (IADL) such as cooking. Assisting technologies in form of augmented reality (AR) have previously been applied to support cognitively impaired users during IADLs. In most cases, instructions were provided locally via projection or a head-mounted display (HMD) but lacked an incentive mechanism and the flexibility to support a broad range of use-cases. To provide users and therapists with a holistic solution, we propose cARe, a framework that can be easily adapted by therapists to various use-cases without any programming knowledge. Users are then guided through manual processes with localized visual and auditory cues that are rendered by an HMD. Our ongoing user study indicates that users are more comfortable and successful in cooking with cARe as compared to a printed recipe, which promises a more dignified and autonomous living for dementia patients.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130494149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing serious games requires paying close attention to the target users. One important application of serious games is the design of games for older adults with dementia. Interfaces and activities in games designed for this group should account for both the cognitive and physical limitations of these users, which can be challenging. We address these challenges by drawing on the advantages of new head-mounted display virtual reality (HMD-VR) technology and the knowledge of experts. The results of a preliminary three-week exercise study involving participants with dementia show that our design approach succeeds in creating an engaging environment and can involve participants in the game.
{"title":"Game Design for Users with Constraint: Exergame for Older Adults with Cognitive Impairment","authors":"Mahzar Eisapour, Shi Cao, J. Boger","doi":"10.1145/3266037.3266124","DOIUrl":"https://doi.org/10.1145/3266037.3266124","url":null,"abstract":"In order to design serious games, attention needs to be paid to the target users. One important application of serious games is the design of games for older adults with dementia. Interfaces and activities in games designed for this group of users should be conducted by considering both the cognitive and physical limitations of these people, which may be challenging. We overcome these challenges by using the advantages of new head mounted display virtual reality (HMD-VR) technology and the knowledge of experts. The results of a preliminary three-week exercise involving participants with dementia shows that our design approach has been successful in achieving an interesting environment and could engage participants in the game.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"73 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124788954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Communication between screens and cameras has attracted attention as a ubiquitous information source, motivated by the widespread use of smartphones and the increase of public advertising and information screens. We propose embedding matrix barcodes into images shown on displays by utilizing imperceptible color vibration. This approach maintains the visual experience because the barcodes are imperceptible, and it can be implemented on almost any display and camera, allowing the technology to be pervasive. In fact, the color vibration can be generated by ordinary 60 Hz LCDs and captured by 120 fps smartphone cameras. To illustrate the technology's capabilities, we present scenarios of potential practical applications.
{"title":"Screen-Camera Communication via Matrix Barcode Utilizing Imperceptible Color Vibration","authors":"S. Abe, T. Hiraki, S. Fukushima, T. Naemura","doi":"10.1145/3266037.3271638","DOIUrl":"https://doi.org/10.1145/3266037.3271638","url":null,"abstract":"Communication between screens and cameras has attracted attention as a ubiquitous information source, motivated by the widespread use of smartphones and the increase of public advertising and information screens. We propose embedding matrix barcodes into images projected on displays by utilizing imperceptible color vibration. This approach maintains the visual experience as the barcodes are imperceptible and can be implemented on almost any display and camera for the technology to be pervasive. In fact, the color vibration can be generated by ordinary 60 Hz LCDs and captured by 120 fps smartphone cameras. To illustrate the technology capabilities, we present scenarios of potential practical applications.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115099659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shape-changing interfaces match forms and haptics with functions and bring affordances to devices. I believe that shape-changing interfaces will be increasingly available to end-users in the future. To increase acceptance of shape-changing interfaces by end-users, we need to provide designers with design criteria and a framework closely grounded in their current skills and needs. We also need to provide them with prototyping tools that enable quick assessment of ideas in the physical world. In this paper, I introduce the three threads of my Ph.D. research toward providing these design tools. First, I advance existing shape-changing interface taxonomies to broaden the design vocabulary and systematize the design framework, based on a classification of everyday objects. Second, I conduct a study with end-users to derive interaction techniques and design guidelines for shape-changing interfaces from their current practice. Lastly, I develop a physical prototyping tool for shape-changing interfaces, based on well-known Lego-like bricks, to shorten prototyping iterations.
{"title":"Fostering Design Process of Shape-Changing Interfaces","authors":"Hyunyoung Kim","doi":"10.1145/3266037.3266131","DOIUrl":"https://doi.org/10.1145/3266037.3266131","url":null,"abstract":"Shape-changing interfaces match forms and haptics with functions and bring affordances to devices. I believe that shape-changing interfaces will be increasingly available to end-users in the future. To increase acceptance of shape-changing interfaces by end-users, we need to provide designers with design criteria and framework closely grounded on their current skills and needs. Also, we need to provide them with prototyping tools to enable quick assessment of ideas in the physical world. In this paper, I introduce the three threads of my Ph.D. research in the direction of providing the design tools. First, I advance existing shape-changing interface taxonomies to broaden design vocabulary and systemize design framework, based on the classification of everyday objects. Second, I conduct a study with end-users to suggest interaction techniques and design guidelines for shape-changing interfaces from their current practice. Lastly, I develop a physical prototyping tool for shape-changing interfaces to shorten prototyping iterations based on well-known Lego-like bricks.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":" 16","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113950749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Patients who wait a long time for medical services become more physically and psychologically anxious than people waiting for general services. Since children feel more anxiety and fear in a hospital, it is important to reduce their perceived waiting time by diverting their awareness of time and dispersing their attention. We present D-Aquarium, a computer-based digital aquarium that provides psychological stability to pediatric patients and reduces their perceived waiting time, using distractions to alleviate their anxiety and interfere with their perception of time.
{"title":"D-Aquarium: A Digital Aquarium to Reduce Perceived Waiting Time at Children's Hospital","authors":"Jooyoung Son, Suzi Choi, Jun-Dong Cho","doi":"10.1145/3266037.3266117","DOIUrl":"https://doi.org/10.1145/3266037.3266117","url":null,"abstract":"Patients waiting for long to use medical services become more physically and psychologically anxious than do people waiting to use general services. Since children feel more anxiety and fear in a hospital, it is necessary to reduce their perceived waiting time by disturbing their awareness of time and dispersing their attention. We present the D-Aquarium, a computer-based digital aquarium that provides psychological stability to pediatric patients and reduces their perceived waiting time by using distractions to alleviate their psychological anxiety and interfere with their perception of time.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125282190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce I/O Braid, an interactive textile cord with embedded sensing and visual feedback. I/O Braid senses proximity, touch, and twist through a spiraling, repeating braiding topology of touch matrices. This sensing topology is uniquely scalable, requiring only a few sensing lines to cover the whole length of a cord. The same topology allows us to embed fiber optic strands to integrate co-located visual feedback. We provide an overview of the enabling braiding techniques, design considerations, and approaches to gesture detection. These allow us to derive a set of interaction techniques, which we demonstrate with different form factors and capabilities. Our applications illustrate how I/O Braid can invisibly augment everyday objects, such as touch-sensitive headphones and interactive drawstrings on garments, while enabling discoverability and feedback through embedded light sources.
{"title":"I/O Braid: Scalable Touch-Sensitive Lighted Cords Using Spiraling, Repeating Sensing Textiles and Fiber Optics","authors":"A. Olwal, Jon Moeller, Greg Priest-Dorman, Thad Starner, B. Carroll","doi":"10.1145/3266037.3271651","DOIUrl":"https://doi.org/10.1145/3266037.3271651","url":null,"abstract":"We introduce I/O Braid, an interactive textile cord with embedded sensing and visual feedback. I/O Braid senses proximity, touch, and twist through a spiraling, repeating braiding topology of touch matrices. This sensing topology is uniquely scalable, requiring only a few sensing lines to cover the whole length of a cord. The same topology allows us to embed fiber optic strands to integrate co-located visual feedback. We provide an overview of the enabling braiding techniques, design considerations, and approaches to gesture detection. These allow us to derive a set of interaction techniques, which we demonstrate with different form factors and capabilities. Our applications illustrate how I/O Braid can invisibly augment everyday objects, such as touch-sensitive headphones and interactive drawstrings on garments, while enabling discoverability and feedback through embedded light sources.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122930499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we introduce the Immersive Bubble Chart, a visualization for hierarchical datasets presented in a virtual reality (VR) world. Users are immersed in the visualization and interact with the bubbles using gestures, with a view to overcoming some limitations that 2D visualizations face due to the capabilities and interaction affordances of conventional devices. Technological advances in VR make it possible to design malleable and extensible representations and more natural and engaging interactions. Using the Oculus Touch controllers, users can grab and move the bubbles, throw them away, or bump two of them together to create a cluster. We have tested the Immersive Bubble Chart with hierarchical clusters of semantically related terms generated from Twitter.
{"title":"The Immersive Bubble Chart: a Semantic and Virtual Reality Visualization for Big Data","authors":"T. Onorati, P. Díaz, Telmo Zarraonandia, I. Aedo","doi":"10.1145/3266037.3271642","DOIUrl":"https://doi.org/10.1145/3266037.3271642","url":null,"abstract":"In this paper, we introduce the Immersive Bubble Chart, a visualization for hierarchical datasets presented in a virtual reality (VR) world. Users get immersed into the visualization and interact with the bubbles using gestures with a view to overcoming some limitations of 2D visualizations due to the capabilities and interaction affordances of the devices. The technological advances in VR give the possibility to design malleable and extensible representations and more natural and engaging interactions. Using the Oculus Touch controllers, the users can grab and move the bubbles, throw them away or bump two of them for creating a cluster. We have tested the Immersive Bubble Chart with the hierarchical clusters of semantically related terms generated from Twitter.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132136259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning new motor skills is a challenge that people are constantly confronted with (e.g., learning a new kind of sport). In our work, we investigate to what extent the learning process for a motor sequence can be optimized with the help of Augmented Reality as a technical assistant. We propose an approach that divides the problem into three tasks: (1) tracking the necessary movements, (2) creating a model that calculates possible deviations, and (3) implementing a visual feedback system. To evaluate our approach, we implemented the idea using infrared depth sensors and an Augmented Reality head-mounted display (HoloLens). Our results show that the system can provide effective assistance for achieving the correct height of a throw with one ball. Furthermore, it provides a basis for supporting a complete juggling sequence.
{"title":"Juggling 4.0: Learning Complex Motor Skills with Augmented Reality Through the Example of Juggling","authors":"Benjamin Meyer, Pascal Gruppe, Bastian Cornelsen, Tim Claudius Stratmann, Uwe Gruenefeld, Susanne CJ Boll","doi":"10.1145/3266037.3266099","DOIUrl":"https://doi.org/10.1145/3266037.3266099","url":null,"abstract":"Learning new motor skills is a problem that people are constantly confronted with (e.g. to learn a new kind of sport). In our work, we investigate to which extent the learning process of a motor sequence can be optimized with the help of Augmented Reality as a technical assistant. Therefore, we propose an approach that divides the problem into three tasks: (1) the tracking of the necessary movements, (2) the creation of a model that calculates possible deviations and (3) the implementation of a visual feedback system. To evaluate our approach, we implemented the idea by using infrared depth sensors and an Augmented Reality head-mounted device (HoloLens). Our results show that the system can give an efficient assistance for the correct height of a throw with one ball. Furthermore, it provides a basis for the support of a complete juggling sequence.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131799850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}