In HCI there have been attempts to create expressive computing by exploring emotional engagement through a wide range of shape-changing interfaces, including experimental art, fashion, architecture, and furniture. These approaches range from hedonic designs, which showcase novel ideas and materials, to more informative design spaces, which let users interact with a visual interface intended to shape their behavior. Bridging these explorations, I present Spiky Starfish, a computational wearable bag that draws on the "felt experience with technology" to influence users' bodily interaction tactilely and visually. It is designed to explore how an expressive shape-changing interface can create a disruptive interaction when people's habitual behavior is triggered.
Young Suk Lee. "Spiky Starfish: Exploring 'Felt Technology' Through a Shape Changing Wearable Bag." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2690878.
We present a modular tangible user interface system and corresponding actuators for creating music with everyday objects. Users create percussive patterns by controlling algorithmic parameters, or by directly playing the interface. Various mechanical solutions allow users to investigate physical objects as sound sources. A standalone physical interface and an associated graphical programming environment enable different levels of user engagement and hardware/software transparency. We discuss a tool space between open and closed design concepts, as well as the physical and software design of the Kitsch-Instrument itself. We also describe recent interactions with the interface at a public event and discuss future plans.
J. Harriman, Michael Theodore, M. Gross. "The Kitsch-Instrument: Hackable Robotic Music." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2680593.
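The abstract above mentions creating percussive patterns by controlling algorithmic parameters. The paper does not specify which algorithm the Kitsch-Instrument uses; as an illustration only, here is a minimal sketch of one common parametric approach to percussion, the Euclidean-style even distribution of a given number of hits over a given number of steps (function name and signature are hypothetical):

```python
def euclidean_pattern(pulses, steps):
    """Distribute `pulses` hits as evenly as possible across `steps` slots.

    Returns a list of 0/1 values, where 1 marks a percussive hit.
    Uses a simple accumulator ("bucket") variant of the Bjorklund idea.
    """
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)   # hit
        else:
            pattern.append(0)   # rest
    return pattern

# Two knobs (pulses, steps) yield a family of rhythms:
print(euclidean_pattern(3, 8))  # a rotation of the classic tresillo
print(euclidean_pattern(4, 8))  # evenly spaced four-on-the-floor feel
```

Exposing `pulses` and `steps` as physical controls is one plausible reading of "controlling algorithmic parameters," though the actual system may differ.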
Dyslexic children have great difficulty in learning to read. While research in HCI suggests that tangible user interfaces (TUIs) have the potential to support children learning to read, few studies have explored how to help dyslexic children learn to read. Even fewer studies have specifically investigated the design space of texture cues in TUIs for supporting learning to read. In this paper, we present Tactile Letters, a multimodal tangible tabletop with texture cues developed to support English letter-sound correspondence learning for dyslexic children aged 5-6 years. This prototype is used as a research instrument to investigate the role of texture cues in a multimodal TUI in alphabetic learning. We discuss the current knowledge gap, the theoretical foundations that informed our core design strategy, and the subsequent design decisions we made while developing Tactile Letters.
Min Fan, A. Antle. "Tactile Letters: A Tangible Tabletop with Texture Cues Supporting Alphabetic Learning for Dyslexic Children." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2688806.
T. Hogan, Dylan Goveas, Rebecca Noonan, Luke Twomey
In this paper we present TaraScope, a multimodal installation that enables student groups participating in workshops at a space observatory in Ireland to remotely manipulate, and capture images from, a robotic telescope situated in San Francisco, California. The project was developed as part of an international initiative between Blackrock Castle Observatory (BCO), Ireland and Chabot Space & Science Center, California, with the aim of connecting the two locations while also stimulating interest in astronomy, science, technology, engineering, and math. We describe the design rationale and implementation of the installation, which centers on creating an inviting, apprehendable, inexpensive, and engaging system that supports inquiry-led learning and group interactions. Furthermore, we present several initial observations on the user experience of the system, gathered through a series of evaluations conducted at the Observatory.
T. Hogan, Dylan Goveas, Rebecca Noonan, Luke Twomey. "TaraScope: Controlling Remote Telescopes Through Tangible Interaction." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2680562.
This half-day hands-on studio will teach attendees how to design and develop effective interfaces for head-mounted and wrist-worn wearable computers through the application of user-centered design principles. Attendees will gain the knowledge and tools needed to rapidly develop prototype applications, and will also complete a hands-on design task. They will learn good design guidelines for wearable systems and how to apply those guidelines. A variety of tools will be used that do not require any hardware or software experience, many of which are free and/or open source. Attendees will also be provided with material that they can use to continue their learning after the studio is over.
M. Billinghurst, D. Busse. "Rapid Prototyping for Wearables: Concept Design and Development for Head- and Wrist-Mounted Wearables (Smart Watches and Google Glass)." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2683592.
Fine art and design students face a novel set of challenges when asked to realize their creative vision with code-based projects. In this paper, we discuss the development of a system of tangible tiles and integrated software code factory that is intended to help students build bridges of understanding between proposed interactive and networked experiences and the required computer syntax, software libraries and hardware that animate those proposals. Our tiles help students see relationships between artistic concept and programmatic code by giving them intuitive tools that can be directly manipulated.
David Bouchard, Steve Daniels. "Tiles that Talk: Tangible Templates for Networked Objects." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2680607.
The purpose of our studio is to inspire attendees to think about the most important issues in designing wearable technology products that provide meaningful experiences. During the studio, attendees will discuss the best approaches to designing wearable tech products that can convey both meaningful information and emotions. Attendees will also build, in a team setting, a prototype for a wearable technology product that solves a real problem. We will encourage attendees to think beyond well-known wearable technology categories such as smart watches, fitness bands, and smart glasses, and instead envision products that fit our bodies in newly imagined ways, creating new product categories altogether. The agenda will include a discussion of Dieter Rams's "10 Principles of Good Design" and case studies that show the importance of invisible, unobtrusive design in wearable technology. The majority of the studio will be hands-on, with attendees designing and prototyping their own conceptual wearable tech product. The studio will be led by Billie Whitehouse, the co-founder and design director of Wearable Experiments. Since the company's takeoff in 2013, Billie has led multiple innovative wearable technology projects, such as Fundawear and Alert Shirt, which have won a Cannes Lion and a Clio Sports Award respectively. Billie recently made Business Insider's 2014 Silicon Alley 100 list. With a strong background in fashion and design, Billie's goal is to put more intelligence into the clothes we wear every day.
B. Whitehouse, Stanley He. "Designing for Humans in a Digital Age: Building Wearable Technology to Convey Information and Emotions." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2683591.
Joyce Ma, Lisa Sindorf, Isaac Liao, Jennifer Frazier
We describe a study comparing the behavior of museum visitors at an interactive exhibit that used physical versus virtual objects to explore a large scientific dataset. The exhibit visualized the distribution of phytoplankton in the world's oceans on a multi-touch table. In one version, visitors used physical rings to look at the type and proportion of phytoplankton in different areas of the oceans, and in the other version they used virtual rings. The findings suggest that the physical rings better afforded touching and manipulations, which were prerequisites to further exploration, and attracted more groups, thereby providing opportunities for people to talk and share. However, the comparison did not detect any measurable differences in the thoroughness of visitors' interactions, the questions they asked, or on-topic talk with others at the exhibit. These results should help museum professionals and interaction designers better weigh the costs and benefits of tangible user interfaces.
Joyce Ma, Lisa Sindorf, Isaac Liao, Jennifer Frazier. "Using a Tangible Versus a Multi-touch Graphical User Interface to Support Data Exploration at a Museum Exhibit." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2680555.
We present a gaze-based installation enabling two participants to experience cycles of a relationship "destroyed" by tunnel vision. The installation comprises a holographic projection screen and two eye trackers. The trackers allow us to identify gaze positions and the onset of eye contact. The holographic screen makes it possible to block the view when an image is projected onto it and to see through it when nothing is projected. Two participants sitting on either side of the screen encounter a series of episodes of artificial tunnel vision and blindness.
Mon-Chu Chen, Bongkeum Jeong, Víctor Rivera. "Relationship Tunnel Vision: Altered Social Interaction Using Eye-Tracking." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2690868.
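The installation above hinges on detecting the onset of eye contact from two gaze streams. The paper does not describe its detection logic, so the following is only a minimal sketch under the assumption that eye contact means both participants' gaze samples simultaneously fall within a region around the other's face; all names (`eye_contact_onsets`, `gaze_hit`) are hypothetical:

```python
import math

def gaze_hit(gaze_xy, target_xy, radius):
    """True when a gaze sample falls within `radius` of a target region."""
    return math.dist(gaze_xy, target_xy) <= radius

def eye_contact_onsets(gaze_a, gaze_b, face_a, face_b, radius=0.05):
    """Return sample indices where mutual gaze (eye contact) begins.

    gaze_a / gaze_b: per-frame (x, y) samples from the two eye trackers.
    face_a / face_b: assumed screen positions of each participant's face.
    An onset is a frame where mutual gaze holds but did not hold before.
    """
    onsets = []
    previous = False
    for i, (a, b) in enumerate(zip(gaze_a, gaze_b)):
        mutual = gaze_hit(a, face_b, radius) and gaze_hit(b, face_a, radius)
        if mutual and not previous:
            onsets.append(i)
        previous = mutual
    return onsets
```

A real system would likely add temporal smoothing (a minimum dwell time) so that single-frame tracker noise does not trigger a spurious onset.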
We introduce TIMMi, a textile input device for mobile interactions. TIMMi is worn on the index finger to provide a multimodal sensing input metaphor. The prototype is fabricated on a single layer of textile on which conductive silicone rubber is painted and conductive threads are stitched. The sensing area comprises three equally spaced dots and a separate wide line. Strain and pressure values are extracted from the line and the three dots, respectively, via voltage dividers. Regression analysis is performed to model the relationship between sensing values and finger pressure and bending. Multi-level thresholding is applied to capture different levels of finger bending and pressure. A temporal position-tracking algorithm is implemented to capture the swipe gesture. In this preliminary study, we demonstrate TIMMi as a finger-worn input device with two applications: controlling a music player and interacting with smartglasses.
S. Yoon, Ke Huo, V. P. Nguyen, K. Ramani. "TIMMi: Finger-worn Textile Input Device with Multimodal Sensing in Mobile Interaction." In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15), 2015. DOI: 10.1145/2677199.2680560.
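The TIMMi abstract describes two processing steps: multi-level thresholding of the sensor readings, and temporal tracking of which dot is touched to recognize a swipe. The paper's actual thresholds and algorithm are not given; the sketch below only illustrates the general idea, with hypothetical names and values throughout:

```python
def quantize(value, thresholds):
    """Map a raw sensor reading to a discrete level via multi-level thresholds.

    With thresholds [t1, t2, t3], readings fall into levels 0..3.
    """
    level = 0
    for t in thresholds:
        if value >= t:
            level += 1
    return level

def detect_swipe(active_dots, min_span=3):
    """Infer a swipe from the temporal order of dot activations.

    active_dots: sequence of dot indices (0, 1, 2) touched over time.
    Returns 'right', 'left', or None if no monotonic sweep is seen.
    """
    # Collapse consecutive repeats so only transitions between dots remain.
    path = [d for i, d in enumerate(active_dots) if i == 0 or d != active_dots[i - 1]]
    if len(path) < min_span:
        return None
    if path == sorted(path):
        return "right"
    if path == sorted(path, reverse=True):
        return "left"
    return None

# Example: a finger sliding across dots 0 -> 1 -> 2 registers a right swipe.
print(detect_swipe([0, 0, 1, 1, 2]))  # right
print(quantize(0.5, [0.2, 0.4, 0.8]))  # level 2 of 3
```

The real device presumably derives `active_dots` from the per-dot pressure levels after quantization, but that mapping is an assumption here.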