Spatial augmented reality and tangible interaction enrich the standard computer I/O space. Systems based on such modalities offer new user experiences and open up interesting perspectives in various fields. On the other hand, such systems tend to live outside the standard desktop paradigm and, as a consequence, they do not benefit from the richness and versatility of desktop environments. In this work, we propose to join together physical visualization and tangible interaction within a standard desktop environment. We introduce the concept of Tangible Viewport, an on-screen window that creates a dynamic link between augmented objects and computer screens, allowing a screen-based cursor to move onto the object in a seamless manner. We describe an implementation of this concept and explore the interaction space around it. A preliminary evaluation shows the metaphor is transparent to the users while providing the benefits of tangibility.
{"title":"Tangible Viewports: Getting Out of Flatland in Desktop Environments","authors":"Renaud Gervais, J. Roo, M. Hachet","doi":"10.1145/2839462.2839468","DOIUrl":"https://doi.org/10.1145/2839462.2839468","url":null,"abstract":"Spatial augmented reality and tangible interaction enrich the standard computer I/O space. Systems based on such modalities offer new user experiences and open up interesting perspectives in various fields. On the other hand, such systems tend to live outside the standard desktop paradigm and, as a consequence, they do not benefit from the richness and versatility of desktop environments. In this work, we propose to join together physical visualization and tangible interaction within a standard desktop environment. We introduce the concept of Tangible Viewport, an on-screen window that creates a dynamic link between augmented objects and computer screens, allowing a screen-based cursor to move onto the object in a seamless manner. We describe an implementation of this concept and explore the interaction space around it. A preliminary evaluation shows the metaphor is transparent to the users while providing the benefits of tangibility.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122193588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Innovations in material, information, and communication technology enable the application of efficient light sources and shape-changing techniques in our environment and in the everyday objects of the future. Based on this trend, we propose Sculpting Light Systems (SLS), which combine shape change with light to provide multi-modal displays of light and shape, and which support the tangible manipulation of light. To gain a preliminary understanding of how to design a manipulable SLS that can seamlessly merge into the user's context by means of information decoration and tangible interaction, we discuss a framework and two design issues that explore the design space of SLS. In addition, two design cases are developed to demonstrate our framework and to further investigate the potential and challenges of designing SLS with respect to these design issues.
{"title":"Designing Sculpting Light Systems for Information Decoration","authors":"Jiang Wu, H. V. Essen, Berry Eggen","doi":"10.1145/2839462.2856547","DOIUrl":"https://doi.org/10.1145/2839462.2856547","url":null,"abstract":"Innovations in material, information, and communication technology enable the application of efficient light sources and shape-changing techniques in our environment and in the everyday objects of the future. Based on this trend, we propose Sculpting Light Systems (SLS), which combine shape change with light to provide multi-modal displays of light and shape, and which support the tangible manipulation of light. To gain a preliminary understanding of how to design a manipulable SLS that can seamlessly merge into the user's context by means of information decoration and tangible interaction, we discuss a framework and two design issues that explore the design space of SLS. In addition, two design cases are developed to demonstrate our framework and to further investigate the potential and challenges of designing SLS with respect to these design issues.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125586140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Arif, R. Manshaei, Sean DeLong, Brien East, M. Kyan, Ali Mazalek
We present Sparse Tangibles, a tabletop and active tangible-based framework to support cross-platform, collaborative gene network exploration using a Web interface. It uses smartwatches as active tangibles to allow query construction on- and off-the-table. We expand their interaction vocabulary using inertial sensors and a custom case. We also introduce a new metric for measuring the "confidence level" of protein and genetic interactions. Three expert biologists evaluated the system and found it fun, useful, easy to use, and ideal for collaborative explorations.
{"title":"Sparse Tangibles: Collaborative Exploration of Gene Networks using Active Tangibles and Interactive Tabletops","authors":"A. Arif, R. Manshaei, Sean DeLong, Brien East, M. Kyan, Ali Mazalek","doi":"10.1145/2839462.2839500","DOIUrl":"https://doi.org/10.1145/2839462.2839500","url":null,"abstract":"We present Sparse Tangibles, a tabletop and active tangible-based framework to support cross-platform, collaborative gene network exploration using a Web interface. It uses smartwatches as active tangibles to allow query construction on- and off-the-table. We expand their interaction vocabulary using inertial sensors and a custom case. We also introduce a new metric for measuring the \"confidence level\" of protein and genetic interactions. Three expert biologists evaluated the system and found it fun, useful, easy to use, and ideal for collaborative explorations.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"27 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132834293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we introduce Click, a physical coding platform that utilizes smart devices as component pieces. Click encourages group learning of coding by turning individual smart devices into code blocks. These code blocks can then be connected to form programs. By contributing more personal devices to the code chain, users are able to increase program complexity. Because code blocks in Click have both a physical and a virtual component, we designed virtual interactions that encourage physical manipulation of the devices. Finally, we show example programs that can be built on the system. Code chains are able to use built-in and installed smart device hardware and software as inputs and outputs; as a result, the power of the resulting programs grows in line with advances in smart device technology.
{"title":"Click: Using Smart Devices For Physical Collaborative Coding Education","authors":"D. Lo, Austin S. Lee","doi":"10.1145/2839462.2856522","DOIUrl":"https://doi.org/10.1145/2839462.2856522","url":null,"abstract":"In this paper we introduce Click, a physical coding platform that utilizes smart devices as component pieces. Click encourages group learning of coding by turning individual smart devices into code blocks. These code blocks can then be connected to form programs. By contributing more personal devices to the code chain, users are able to increase program complexity. Because code blocks in Click have both a physical and a virtual component, we designed virtual interactions that encourage physical manipulation of the devices. Finally, we show example programs that can be built on the system. Code chains are able to use built-in and installed smart device hardware and software as inputs and outputs; as a result, the power of the resulting programs grows in line with advances in smart device technology.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"272 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122769601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Oren Zuckerman, Tamar Gal, T. Keren-Capelovitch, T. Krasovsky, Ayelet Gal-Oz, P. Weiss
The design of tangible and embedded assistive technologies poses unique challenges. We describe the challenges we encountered during the design of "DataSpoon", explain how we overcame them, and suggest design guidelines. DataSpoon is an instrumented spoon that monitors movement kinematics during self-feeding. Children with motor disorders often encounter difficulty mastering self-feeding. In order to treat them effectively, professional caregivers need to assess their movement kinematics. Currently, assessment is performed through observations and questionnaires. DataSpoon adds sensor-based data to this process. A validation study showed that data obtained from DataSpoon and from a 6-camera 3D motion capture system were similar. Our experience yielded three design guidelines: needs of both caregivers and children should be considered; distractions to direct caregiver-child interaction should be minimized; familiar-looking devices may alleviate concerns associated with unfamiliar technology.
{"title":"DataSpoon: Overcoming Design Challenges in Tangible and Embedded Assistive Technologies","authors":"Oren Zuckerman, Tamar Gal, T. Keren-Capelovitch, T. Krasovsky, Ayelet Gal-Oz, P. Weiss","doi":"10.1145/2839462.2839505","DOIUrl":"https://doi.org/10.1145/2839462.2839505","url":null,"abstract":"The design of tangible and embedded assistive technologies poses unique challenges. We describe the challenges we encountered during the design of \"DataSpoon\", explain how we overcame them, and suggest design guidelines. DataSpoon is an instrumented spoon that monitors movement kinematics during self-feeding. Children with motor disorders often encounter difficulty mastering self-feeding. In order to treat them effectively, professional caregivers need to assess their movement kinematics. Currently, assessment is performed through observations and questionnaires. DataSpoon adds sensor-based data to this process. A validation study showed that data obtained from DataSpoon and from a 6-camera 3D motion capture system were similar. Our experience yielded three design guidelines: needs of both caregivers and children should be considered; distractions to direct caregiver-child interaction should be minimized; familiar-looking devices may alleviate concerns associated with unfamiliar technology.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116546101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Qiu, S. A. Anas, Hirotaka Osawa, G.W.M. Rauterberg, Jun Hu
Gaze and eye contact are frequently used among sighted people in social situations. Gaze is considered a predictor of attention and engagement between interlocutors in conversations. However, gaze signals from sighted people are not accessible to a blind person in face-to-face communication. In this paper, we present a functional work-in-progress prototype, E-Gaze glasses, an assistive device based on an eye-tracking system. E-Gaze simulates natural gaze for blind people, in particular establishing "eye contact" between blind and sighted people to enhance their engagement in face-to-face conversations. The gaze behavior is designed based on a turn-taking model, which maps the appropriate gaze behavior to the flow of the interlocutors' conversation.
{"title":"E-Gaze Glasses: Simulating Natural Gazes for Blind People","authors":"S. Qiu, S. A. Anas, Hirotaka Osawa, G.W.M. Rauterberg, Jun Hu","doi":"10.1145/2839462.2856518","DOIUrl":"https://doi.org/10.1145/2839462.2856518","url":null,"abstract":"Gaze and eye contact are frequently used among sighted people in social situations. Gaze is considered a predictor of attention and engagement between interlocutors in conversations. However, gaze signals from sighted people are not accessible to a blind person in face-to-face communication. In this paper, we present a functional work-in-progress prototype, E-Gaze glasses, an assistive device based on an eye-tracking system. E-Gaze simulates natural gaze for blind people, in particular establishing \"eye contact\" between blind and sighted people to enhance their engagement in face-to-face conversations. The gaze behavior is designed based on a turn-taking model, which maps the appropriate gaze behavior to the flow of the interlocutors' conversation.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115326919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Konno, Richi Owaki, Yoshito Onishi, Ryo Kanda, Sheep, Akiko Takeshita, T. Nishi, Naoko Shiomi, Kyle McDonald, Satoru Higa, M. Shimizu, Yosuke Sakai, Y. Kakehi, Kazuhiro Jo, Yoko Ando, Kazunao Abe, Takayuki Ito
"Dividual Plays Experimental Lab" is an extract from the dance piece "Dividual Plays". Dividual Plays was produced as the first research outcome of "Reactor for Awareness in Motion [RAM]", a research project we have been involved in since 2010 (http://ram.ycam.jp/en/). Dividual Plays Experimental Lab consists of the essential elements of Dividual Plays: virtual environments for dance ("scenes"), a programming toolkit (the "RAM Dance Toolkit"), and a motion capture system ("MOTIONER"). With these systems, the lab allows visitors to explore and create their own body movements that correspond to the experience of the dancers in Dividual Plays.
{"title":"Dividual Plays Experimental Lab: An installation derived from Dividual Plays","authors":"K. Konno, Richi Owaki, Yoshito Onishi, Ryo Kanda, Sheep, Akiko Takeshita, T. Nishi, Naoko Shiomi, Kyle McDonald, Satoru Higa, M. Shimizu, Yosuke Sakai, Y. Kakehi, Kazuhiro Jo, Yoko Ando, Kazunao Abe, Takayuki Ito","doi":"10.1145/2839462.2856346","DOIUrl":"https://doi.org/10.1145/2839462.2856346","url":null,"abstract":"\"Dividual Plays Experimental Lab\" is an extract from the dance piece \"Dividual Plays\". Dividual Plays was produced as the first research outcome of \"Reactor for Awareness in Motion [RAM]\", a research project we have been involved in since 2010 (http://ram.ycam.jp/en/). Dividual Plays Experimental Lab consists of the essential elements of Dividual Plays: virtual environments for dance (\"scenes\"), a programming toolkit (the \"RAM Dance Toolkit\"), and a motion capture system (\"MOTIONER\"). With these systems, the lab allows visitors to explore and create their own body movements that correspond to the experience of the dancers in Dividual Plays.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115387387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Siyan Zhao, Zachary Schwemler, Adam Fritz, A. Israr
Our hands-on studio will explore how to create meaningful haptic interactions that engage different areas of the body. Participants will gain an understanding of apparent tactile motion, an illusion of movement between two areas on the body, and apply this knowledge toward generating their own haptic experiences. We will introduce participants to Stereo Haptics, a toolkit used to quickly generate haptic sensations through audio platforms with off-the-shelf hardware and open-source software. The studio begins with an introduction to haptics, haptic technology, and the illusions it can help create. Next, participants will begin experimenting with Stereo Haptics and using the toolkit to create dynamic haptic interactions. In the final section, participants will work in groups to design haptic solutions for real-life scenarios. By the end of the studio, participants will have a good understanding of tactile illusions, how to create them, and how they can be applied to enrich tangible and embodied interaction using simple stereo-sound technologies.
{"title":"Stereo Haptics: Designing Haptic Interactions using Audio Tools","authors":"Siyan Zhao, Zachary Schwemler, Adam Fritz, A. Israr","doi":"10.1145/2839462.2854120","DOIUrl":"https://doi.org/10.1145/2839462.2854120","url":null,"abstract":"Our hands-on studio will explore how to create meaningful haptic interactions that engage different areas of the body. Participants will gain an understanding of apparent tactile motion, an illusion of movement between two areas on the body, and apply this knowledge toward generating their own haptic experiences. We will introduce participants to Stereo Haptics, a toolkit used to quickly generate haptic sensations through audio platforms with off-the-shelf hardware and open-source software. The studio begins with an introduction to haptics, haptic technology, and the illusions it can help create. Next, participants will begin experimenting with Stereo Haptics and using the toolkit to create dynamic haptic interactions. In the final section, participants will work in groups to design haptic solutions for real-life scenarios. By the end of the studio, participants will have a good understanding of tactile illusions, how to create them, and how they can be applied to enrich tangible and embodied interaction using simple stereo-sound technologies.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114732163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Makerspaces of various models are forming all around the world. We present a model and case study of the Maketec, a public drop-in makerspace for children, run by teens. The Maketec model is designed to promote making and socializing opportunities for girls and boys of ages 9-14. It is based on three underlying principles: (1) "Low Floor/Wide Walls": construction kits and digital fabrication technologies that allow kids to invent and create with no prior knowledge or expertise; (2) "Unstructured Learning": no formal instructors, teens serve as mentors for kids, and promote a culture of self-driven learning through projects; and (3) "A Makerspace as a Third Place": the Maketec is free and managed by kids for kids in an effort to form a unique community of young makers. We report on interviews with four recurring visitors, and discuss our insights around the three principles and the proposed model.
{"title":"Maketec: A Makerspace as a Third Place for Children","authors":"David Bar-El, Oren Zuckerman","doi":"10.1145/2839462.2856556","DOIUrl":"https://doi.org/10.1145/2839462.2856556","url":null,"abstract":"Makerspaces of various models are forming all around the world. We present a model and case study of the Maketec, a public drop-in makerspace for children, run by teens. The Maketec model is designed to promote making and socializing opportunities for girls and boys of ages 9-14. It is based on three underlying principles: (1) \"Low Floor/Wide Walls\": construction kits and digital fabrication technologies that allow kids to invent and create with no prior knowledge or expertise; (2) \"Unstructured Learning\": no formal instructors, teens serve as mentors for kids, and promote a culture of self-driven learning through projects; and (3) \"A Makerspace as a Third Place\": the Maketec is free and managed by kids for kids in an effort to form a unique community of young makers. We report on interviews with four recurring visitors, and discuss our insights around the three principles and the proposed model.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131069207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information visualisation is the transformation of abstract data into visual, interactive representations. In this paper we present InfoPhys, a device that enables the direct, tangible manipulation of visualisations. InfoPhys makes use of a force-feedback pointing device to simulate haptic feedback while the user explores visualisations projected on top of the device. We present a use case illustrating the trends in ten years of TEI proceedings and how InfoPhys allows users to feel and manipulate these trends. The technical and software aspects of our prototype are presented, and promising improvements and avenues for future work opened up by InfoPhys are then discussed.
{"title":"InfoPhys: Direct Manipulation of Information Visualisation through a Force-Feedback Pointing Device","authors":"Christian Frisson, Bruno Dumas","doi":"10.1145/2839462.2856545","DOIUrl":"https://doi.org/10.1145/2839462.2856545","url":null,"abstract":"Information visualisation is the transformation of abstract data into visual, interactive representations. In this paper we present InfoPhys, a device that enables the direct, tangible manipulation of visualisations. InfoPhys makes use of a force-feedback pointing device to simulate haptic feedback while the user explores visualisations projected on top of the device. We present a use case illustrating the trends in ten years of TEI proceedings and how InfoPhys allows users to feel and manipulate these trends. The technical and software aspects of our prototype are presented, and promising improvements and avenues for future work opened up by InfoPhys are then discussed.","PeriodicalId":422083,"journal":{"name":"Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130039945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}