Science, Technology, Engineering, and Mathematics (STEM) education has received a major push in recent years. However, current methods of teaching STEM concepts often leave little room for creative, open-ended student expression. While some toys on the market try to address these issues, they often fail to realize their learning affordances to their full potential. Our system, MagneTracks, is a multi-component educational toolkit that lets users engage in creative, exploratory, and open-ended learning of Newtonian physics. MagneTracks consists of dynamic, tangible, magnetic tracks that attach to a vertical whiteboard, a computer-based tracking program integrated into the NetLogo platform, and curricular challenge activity cards. MagneTracks focuses specifically on teaching physics concepts but can also be used to teach other STEM fields. Initial user observations have shown positive learning outcomes and high engagement.
{"title":"MagneTracks: a tangible constructionist toolkit for Newtonian physics","authors":"Andrea Miller, Claire Rosenbaum, Paulo Blikstein","doi":"10.1145/2148131.2148185","DOIUrl":"https://doi.org/10.1145/2148131.2148185","url":null,"abstract":"Science, Technology, Engineering, and Mathematics (STEM) education has received a major push in recent years. However, current methods of teaching STEM concepts often leave little room for creative, open-ended student expression. While some toys on the market try to address these issues, they often fail to realize their learning affordances to their full potential. Our system, MagneTracks, is a multi-component educational toolkit that lets users engage in creative, exploratory, and open-ended learning of Newtonian physics. MagneTracks consists of dynamic, tangible, magnetic tracks that attach to a vertical whiteboard, a computer-based tracking program integrated into the NetLogo platform, and curricular challenge activity cards. MagneTracks focuses specifically on teaching physics concepts but can also be used to teach other STEM fields. Initial user observations have shown positive learning outcomes and high engagement.","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"296 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133606717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we extend the field of organic user interfaces and introduce the Splash Controller. The main concept of a Splash Controller is that a user interacts with computing technology by manipulation of water in some kind of receptacle. To this end we highlight the possibilities of Splash Controllers, specifically as game controllers. Next, we specify a simple and robust technology for the detection of water. In order to demonstrate the feasibility of a Splash Controller, we additionally present the design and development of one specific Splash Controller prototype.
{"title":"Splash controllers: game controllers involving the uncareful manipulation of water","authors":"L. Geurts, V. Abeele","doi":"10.1145/2148131.2148170","DOIUrl":"https://doi.org/10.1145/2148131.2148170","url":null,"abstract":"In this paper we extend the field of organic user interfaces and introduce the Splash Controller. The main concept of a Splash Controller is that a user interacts with computing technology by manipulation of water in some kind of receptacle. To this end we highlight the possibilities of Splash Controllers, specifically as game controllers. Next, we specify a simple and robust technology for the detection of water. In order to demonstrate the feasibility of a Splash Controller, we additionally present the design and development of one specific Splash Controller prototype.","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131848493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chenwei Chiang, Shu-Chuan Chiu, Anak Agung Gede Dharma, Kiyoshi Tomimatsu
In this paper we describe a new concept for using a mobile device or Personal Digital Assistant (PDA) for musical composition. We design a new interface that combines the ease of use of a pencil with the portability and customizability of a mobile device. Our proposed kit draws on the affordances of paper computing to make the experience accessible to novice users. By exploiting the principle of electrical conductivity and simple signal processing, we have developed a functional prototype ("Birds on Paper") that enables users to compose their own music. The kit consists of four main elements: a pencil, a bird-shaped sensor, a hub connector, and a mobile device or PDA. The pencil is applied to a piece of paper, which serves as the main medium for visualizing the musical composition. Touching the graphite surface of a drawing triggers audio feedback in the form of musical notes. Notes are generated based on the thickness and length of the pencil strokes, enabling users to compose music intuitively according to their preferences. In addition to describing the kit, we discuss the concept behind the design and possible user scenarios.
{"title":"Birds on paper: an alternative interface to compose music by utilizing sketch drawing and mobile device","authors":"Chenwei Chiang, Shu-Chuan Chiu, Anak Agung Gede Dharma, Kiyoshi Tomimatsu","doi":"10.1145/2148131.2148175","DOIUrl":"https://doi.org/10.1145/2148131.2148175","url":null,"abstract":"In this paper we describe a new concept for using a mobile device or Personal Digital Assistant (PDA) for musical composition. We design a new interface that combines the ease of use of a pencil with the portability and customizability of a mobile device. Our proposed kit draws on the affordances of paper computing to make the experience accessible to novice users. By exploiting the principle of electrical conductivity and simple signal processing, we have developed a functional prototype (\"Birds on Paper\") that enables users to compose their own music. The kit consists of four main elements: a pencil, a bird-shaped sensor, a hub connector, and a mobile device or PDA. The pencil is applied to a piece of paper, which serves as the main medium for visualizing the musical composition. Touching the graphite surface of a drawing triggers audio feedback in the form of musical notes. Notes are generated based on the thickness and length of the pencil strokes, enabling users to compose music intuitively according to their preferences. In addition to describing the kit, we discuss the concept behind the design and possible user scenarios.","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134461371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
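The note-generation idea in the Birds on Paper abstract (stroke properties sensed via graphite conductivity, mapped to musical notes) can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the resistance range, the MIDI note range, and the linear mapping are all assumptions. A thicker or denser graphite stroke has lower electrical resistance, so a resistance reading is a rough proxy for stroke thickness.

```python
# Hypothetical sketch (not the published implementation): map the
# measured resistance of a graphite pencil stroke to a MIDI note.
# Thicker/denser graphite lowers resistance; the range endpoints and
# the note range (C3..C5) are assumed values for illustration only.

def resistance_to_midi(resistance_ohms,
                       r_min=1_000.0, r_max=100_000.0,
                       note_low=48, note_high=72):
    """Clamp the reading into [r_min, r_max], then map it linearly
    onto the MIDI note range [note_low, note_high]."""
    r = min(max(resistance_ohms, r_min), r_max)
    frac = (r - r_min) / (r_max - r_min)
    return round(note_low + frac * (note_high - note_low))

# Lowest resistance (thick stroke) -> lowest note; midpoint -> middle C.
print(resistance_to_midi(1_000))    # 48 (C3)
print(resistance_to_midi(50_500))   # 60 (middle C)
print(resistance_to_midi(100_000))  # 72 (C5)
```

Out-of-range readings (an open circuit, for instance) are clamped rather than rejected, which keeps a live musical interface from producing wild jumps in pitch.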
{"title":"Session details: Graduate student consortium","authors":"Caroline Hummels, Amon Millner, Orit Shaer","doi":"10.1145/3256405","DOIUrl":"https://doi.org/10.1145/3256405","url":null,"abstract":"","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"211 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132692204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Samosky, Douglas A. Nelson, Bo Wang, R. Bregman, A. Hosmer, B. Mikulis, R. A. Weaver
BodyExplorerAR is a system designed to enhance a learner's ability to explore anatomy, physiology, and clinical interventions through naturalistic interaction with an augmented-reality-enhanced full-body mannequin simulator. We are developing a platform that integrates projective AR and multi-modal sensor inputs. A user can use an IR pen to open, resize, and move viewports that provide windows into the body and display dynamic anatomy. The user can point to an organ and display additional information such as graphs of physiological parameters or heart sounds. Custom sensing systems support natural interactions with common medical devices such as syringes, breathing tubes, and catheters. A user can open a window displaying the beating heart in situ, display an electrocardiogram (ECG), then inject drugs and see and hear the changes in heart rate. Our goal is an engaging experience that empowers a learner to create customized, media-rich explorations revealing the internal consequences of external actions.
{"title":"BodyExplorerAR: enhancing a mannequin medical simulator with sensing and projective augmented reality for exploring dynamic anatomy and physiology","authors":"J. Samosky, Douglas A. Nelson, Bo Wang, R. Bregman, A. Hosmer, B. Mikulis, R. A. Weaver","doi":"10.1145/2148131.2148187","DOIUrl":"https://doi.org/10.1145/2148131.2148187","url":null,"abstract":"BodyExplorerAR is a system designed to enhance a learner's ability to explore anatomy, physiology, and clinical interventions through naturalistic interaction with an augmented-reality-enhanced full-body mannequin simulator. We are developing a platform that integrates projective AR and multi-modal sensor inputs. A user can use an IR pen to open, resize, and move viewports that provide windows into the body and display dynamic anatomy. The user can point to an organ and display additional information such as graphs of physiological parameters or heart sounds. Custom sensing systems support natural interactions with common medical devices such as syringes, breathing tubes, and catheters. A user can open a window displaying the beating heart in situ, display an electrocardiogram (ECG), then inject drugs and see and hear the changes in heart rate. Our goal is an engaging experience that empowers a learner to create customized, media-rich explorations revealing the internal consequences of external actions.","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132806542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amy Wibowo, Daisuke Sakamoto, J. Mitani, T. Igarashi
This paper introduces DressUp, a computerized system for designing dresses with 3D input using the form of the human body as a guide. It consists of a body-sized physical mannequin, a screen, and tangible prop tools for drawing in 3D on and around the mannequin. As the user draws, he/she modifies or creates pieces of digital cloth, which are displayed on a model of the mannequin on the screen. We explore the capacity of our 3D input tools to create a variety of dresses. We also describe observations gained from users designing actual physical garments with the system.
{"title":"DressUp: a 3D interface for clothing design with a physical mannequin","authors":"Amy Wibowo, Daisuke Sakamoto, J. Mitani, T. Igarashi","doi":"10.1145/2148131.2148153","DOIUrl":"https://doi.org/10.1145/2148131.2148153","url":null,"abstract":"This paper introduces DressUp, a computerized system for designing dresses with 3D input using the form of the human body as a guide. It consists of a body-sized physical mannequin, a screen, and tangible prop tools for drawing in 3D on and around the mannequin. As the user draws, he/she modifies or creates pieces of digital cloth, which are displayed on a model of the mannequin on the screen. We explore the capacity of our 3D input tools to create a variety of dresses. We also describe observations gained from users designing actual physical garments with the system.","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122020242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Come out and play","authors":"Ali Mazalek","doi":"10.1145/3256399","DOIUrl":"https://doi.org/10.1145/3256399","url":null,"abstract":"","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116476006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andrea Nesbitt, Matthew Rabinovitch, A. Girouard, Roel Vertegaal
The Hum is an immersive art installation filled with hundreds of suspended furred catkins surrounding a cocoon. As visitors enter the space, catkins twitch, shiver and hum. In The Hum, we explore the idea of computers that communicate ephemerally through alterations of room and space.
{"title":"The Hum: interacting with an actuated ambient organism","authors":"Andrea Nesbitt, Matthew Rabinovitch, A. Girouard, Roel Vertegaal","doi":"10.1145/2148131.2148206","DOIUrl":"https://doi.org/10.1145/2148131.2148206","url":null,"abstract":"The Hum is an immersive art installation filled with hundreds of suspended furred catkins surrounding a cocoon. As visitors enter the space, catkins twitch, shiver and hum. In The Hum, we explore the idea of computers that communicate ephemerally through alterations of room and space.","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121557734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive art has emerged as a distinctive genre of media art that relies on digital content to express the artist's message. Situated within this field, this work presents an approach to multimedia storytelling that allows audience members to control separate but overlapping parts of the story's chapters. We believe the system engages its audience with a high level of immersion through its combination of digital computation and tangibility; the tangible system supports a stronger connection to the storytelling than traditional screen-based systems, helping to bridge the gap between the physical world and cyberspace within the field of multimedia storytelling [1]. Consequently, it offers significant potential for shared storytelling among a group, owing to its immersive environment and support for embodied interaction paradigms.
{"title":"The memory of a tree: an interactive storytelling installation","authors":"Hyunjoo Oh, António Gomes","doi":"10.1145/2148131.2148207","DOIUrl":"https://doi.org/10.1145/2148131.2148207","url":null,"abstract":"Interactive art has emerged as a distinctive genre of media art that relies on digital content to express the artist's message. Situated within this field, this work presents an approach to multimedia storytelling that allows audience members to control separate but overlapping parts of the story's chapters. We believe the system engages its audience with a high level of immersion through its combination of digital computation and tangibility; the tangible system supports a stronger connection to the storytelling than traditional screen-based systems, helping to bridge the gap between the physical world and cyberspace within the field of multimedia storytelling [1]. Consequently, it offers significant potential for shared storytelling among a group, owing to its immersive environment and support for embodied interaction paradigms.","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123787788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LSP is a research trajectory exploring the relationship between sound and three-dimensional image by means of laser projection, resulting in live performances and immersive installations. In 1815 Nathaniel Bowditch described a way to produce visual patterns by using one sine wave for the horizontal movement of a point and another sine wave for its vertical movement. The shape of the resulting patterns depends on the frequency and phase relationships of the two sine waves; they are known as Lissajous figures, or Bowditch curves. LSP takes Bowditch's work as a starting point to develop real-time relationships between sound and image. The sine waves used to create the visual shapes can, while within our auditory frequency range, simultaneously be interpreted as audio signals, and therefore define a direct relationship between sound and image. This means that frequency ratios between sounds, detuning, and phase shifts have a direct visual counterpart, and vice versa. Although in theory all sounds can be seen as sums of multiple sine waves, music in general is often too complex to produce interesting visual patterns. The research of LSP therefore focuses on creating, structuring, and composing signals that have both a structural musical quality and a structural, time-based visual quality. Different models of the relationship between sound and image are used throughout the performance. When audio is combined with video projection, the spatial perception of sound is often reduced because the two-dimensional nature of the image interferes with the three-dimensional nature of sound. By using lasers in combination with a medium (e.g. fog) to visualize the light in space, it becomes possible to create a three-dimensional, changing environment that surrounds the audience. The environment challenges the audience to change their perspective continuously, since there are multiple ways of looking at it.
{"title":"LSP","authors":"Edwin van der Heide","doi":"10.1145/2148131.2148138","DOIUrl":"https://doi.org/10.1145/2148131.2148138","url":null,"abstract":"LSP is a research trajectory exploring the relationship between sound and three-dimensional image by means of laser projection, resulting in live performances and immersive installations. In 1815 Nathaniel Bowditch described a way to produce visual patterns by using one sine wave for the horizontal movement of a point and another sine wave for its vertical movement. The shape of the resulting patterns depends on the frequency and phase relationships of the two sine waves; they are known as Lissajous figures, or Bowditch curves. LSP takes Bowditch's work as a starting point to develop real-time relationships between sound and image. The sine waves used to create the visual shapes can, while within our auditory frequency range, simultaneously be interpreted as audio signals, and therefore define a direct relationship between sound and image. This means that frequency ratios between sounds, detuning, and phase shifts have a direct visual counterpart, and vice versa. Although in theory all sounds can be seen as sums of multiple sine waves, music in general is often too complex to produce interesting visual patterns. The research of LSP therefore focuses on creating, structuring, and composing signals that have both a structural musical quality and a structural, time-based visual quality. Different models of the relationship between sound and image are used throughout the performance. When audio is combined with video projection, the spatial perception of sound is often reduced because the two-dimensional nature of the image interferes with the three-dimensional nature of sound. By using lasers in combination with a medium (e.g. fog) to visualize the light in space, it becomes possible to create a three-dimensional, changing environment that surrounds the audience. The environment challenges the audience to change their perspective continuously, since there are multiple ways of looking at it.","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128146131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
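The Lissajous/Bowditch construction described in the LSP abstract (one sine wave driving each axis of a single point) is straightforward to sketch numerically. This is an illustrative reconstruction only; the function name, sampling scheme, and the 2:3 frequency example are assumptions, not details of the LSP system itself.

```python
import math

def lissajous_points(freq_x, freq_y, phase=0.0, n=1000):
    """Sample a Lissajous (Bowditch) curve over t in [0, 2*pi): one
    sine wave drives the horizontal position of a point, a second
    drives the vertical position. The figure's shape depends only on
    the frequency ratio and the phase offset between the two waves."""
    pts = []
    for i in range(n):
        t = 2.0 * math.pi * i / n
        x = math.sin(freq_x * t + phase)  # horizontal sine wave
        y = math.sin(freq_y * t)          # vertical sine wave
        pts.append((x, y))
    return pts

# A 2:3 frequency ratio (a perfect fifth, in audio terms) closes into
# a stable figure; slightly detuning one oscillator makes the figure
# appear to rotate, which is the visual counterpart of a beating sound.
pts = lissajous_points(2, 3)
```

Because the same two signals can be sent to a stereo audio output and to the x/y deflection of a laser projector, the frequency and phase relationships are heard and seen simultaneously, which is the direct sound-image coupling the abstract describes.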