{"title":"Session details: Graduate student consortium","authors":"Caroline Hummels, Amon Millner, Orit Shaer","doi":"10.1145/3256405","DOIUrl":"https://doi.org/10.1145/3256405","url":null,"abstract":"","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"211 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132692204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BodyExplorerAR: enhancing a mannequin medical simulator with sensing and projective augmented reality for exploring dynamic anatomy and physiology
J. Samosky, Douglas A. Nelson, Bo Wang, R. Bregman, A. Hosmer, B. Mikulis, R. A. Weaver
DOI: https://doi.org/10.1145/2148131.2148187

BodyExplorerAR is a system designed to enhance a learner's ability to explore anatomy, physiology and clinical interventions through naturalistic interaction with an augmented-reality-enhanced full-body mannequin simulator. We are developing a platform that integrates projective AR and multi-modal sensor inputs. A user can use an IR pen to open, resize and move viewports that provide windows into the body and display dynamic anatomy. The user can point to an organ to display additional information such as graphs of physiological parameters or heart sounds. Custom sensing systems provide natural interactions with common medical devices such as syringes, breathing tubes and catheters. A user can open a window displaying the beating heart in situ, display an electrocardiogram (ECG), then inject drugs and see and hear changes in heart rate. Our goal is an engaging experience that empowers a learner to create customized, media-rich explorations revealing the internal consequences of external actions.
LSP
Edwin van der Heide
DOI: https://doi.org/10.1145/2148131.2148138

LSP is a research trajectory exploring the relationship between sound and three-dimensional image by means of laser projection, resulting in live performances and immersive installations. In 1815 Nathaniel Bowditch described a way to produce visual patterns by using one sine wave for the horizontal movement of a point and another sine wave for its vertical movement. The shape of the resulting patterns depends on the frequency and phase relationships of the two sine waves; the patterns are known as Lissajous figures, or Bowditch curves. LSP takes Bowditch's work as a starting point for developing real-time relationships between sound and image. The sine waves used to create the visual shapes can, while lying within our auditory frequency range, at the same time be interpreted as audio signals, and therefore define a direct relationship between sound and image. This means that frequency ratios between sounds, de-tuning and phase shifts have a direct visual counterpart, and vice versa. Although in theory every sound can be seen as a sum of sine waves, music in general is too complex to produce interesting visual patterns. The research of LSP therefore focuses on creating, structuring and composing signals that have both a structural musical quality and a structural time-based visual quality. Different models for the relationship between sound and image are used throughout the performance. When audio is combined with video projection, the spatial perception of sound is often reduced because the two-dimensional nature of the image interferes with the three-dimensional nature of sound. By using lasers in combination with a medium (e.g. fog) to visualize the light in space, it becomes possible to create a changing three-dimensional environment that surrounds the audience. The environment challenges the audience to change their perspective continuously, since there are multiple ways of looking at it.
Birds on paper: an alternative interface to compose music by utilizing sketch drawing and mobile device
Chenwei Chiang, Shu-Chuan Chiu, Anak Agung Gede Dharma, Kiyoshi Tomimatsu
DOI: https://doi.org/10.1145/2148131.2148175

In this paper we describe a new concept for utilizing a mobile device or Personal Digital Assistant (PDA) for musical composition. We design a new interface that combines the ease of use of a pencil with the portability and customizability of a mobile device. Our proposed kit builds on the affordances of paper computing to make the experience accessible to novice users. By exploiting electrical conductivity and signal processing, we have developed a functional prototype ("Birds on Paper") that enables users to compose their own music. The kit consists of four main elements: a pencil, a bird-shaped sensor, a hub connector, and a mobile device or PDA. The pencil is applied to a piece of paper, the main medium for visualizing the musical composition. Touching the graphite surface of the drawing triggers audio feedback in the form of musical notes. Notes are generated based on the thickness and length of the pencil strokes, enabling users to compose music intuitively according to their preferences. In addition to describing the kit, we discuss the concept behind the design and possible user scenarios.
Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction
S. Jordà, N. Parés
DOI: https://doi.org/10.1145/2148131

TEI is already in its seventh edition. The conference now has a solid and mature community, which is nonetheless growing every year. We are very happy to host this conference in Barcelona with one of the highest registration counts ever, proving the broad acceptance of the conference and its scientific value. We are also very happy to host it at Universitat Pompeu Fabra (UPF), the youngest university in Barcelona, only 20 years old. We are proud to host TEI'13 at this small university (~13,000 undergraduate and graduate students), which is nonetheless, according to a recent official study by the Universities of Granada and Zaragoza, first in Spain in scientific production per researcher.
The functional aesthetic of folding
M. Gardiner
DOI: https://doi.org/10.1145/2148131.2148133

Folds are everywhere in nature: in our DNA, in leaves, in insect wings, in the mountain-forming forces of tectonic plates. We see folds in art as ancient as origami, and in design in packaging, lighting and surface aesthetics. Contemporary architects and designers have embraced the organic form of the fold, leveraging the complexity of computational and algorithmic design alongside the affordability of automated and programmable engineering processes and the efficiency of transforming flat sheets into three dimensions using only cuts and bends. In the field of tangible interaction, artists, designers and researchers are experimenting with and developing new aesthetics, functions and forms of interaction. They are inspired by the simple but elegant beauty of folded geometry, and by the interaction possibilities latent within the hinged surfaces of folds. Combined with new materials and technologies, this research area opens up the possibility of freeing screen and sensor interfaces from the tyranny of the Euclidean plane of monitors, tablets and flat devices. Artworks such as Oribotics have focused on flexible, foldable, shape-programmed interfaces with mathematically defined geometries through an evolving series of robotic sculptures. The term oribot, literally ori = fold, bot = robot, was originally inspired by the idea of bringing an animation out of the flatness of the screen and into reality: to make programmable folded sculpture combined with motion graphics. This keynote addresses specific know-how and broader topics within the practice-based research of Oribotics, such as producing kinetic folded membranes with longevity, resistance to corruption and low actuation force; applied techno-origami; biomimetics for design solutions; analysis of interaction metaphors; and horizon-edge technologies, materials and ideas for future developments. The future will unfold.
{"title":"Session details: Rock that body","authors":"Elise van den Hoven","doi":"10.1145/3256396","DOIUrl":"https://doi.org/10.1145/3256396","url":null,"abstract":"","PeriodicalId":440364,"journal":{"name":"Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125956811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A nested API structure to simplify cross-device communication
Chih-Sung Wu, Sam Mendenhall, Jayraj Jog, Loring Scotty Hoag, Ali Mazalek
DOI: https://doi.org/10.1145/2148131.2148180

In this paper we present the Responsive Objects, Surfaces, and Spaces (ROSS) API, a tangible toolkit that allows designers and developers to easily build applications for heterogeneous networked devices. We describe the unique nested structure of the ROSS framework that enables cross-platform and cross-device development, and demonstrate its capabilities with several prototype applications.
StaTube: facilitating state management in instant messaging systems
Doris Hausen, Sebastian Boring, Clara Lueling, Simone Rodestock, A. Butz
DOI: https://doi.org/10.1145/2148131.2148191

Instant messaging systems, such as Skype, offer text, audio and video channels for one-on-one and group conversations, both for personal and professional communication. They are commonly used at a distance, i.e., across countries and continents. To avoid disrupting other tasks, they display personal states to signal others when to contact someone and when not. This mechanism, however, heavily relies on users setting their own state correctly. In an online survey with 46 participants we found that neglecting state updates leads to unwanted messages, either because the state is incorrect or because others disrespect it, assuming it to be wrong anyway. We address this situation with the StaTube, a tangible object offering (1) peripheral interaction for setting one's own state and (2) peripheral awareness of selected others' states. In an in-situ evaluation we found first indicators that (1) peripheral interaction fosters more frequent state updates and more accurate state information, and (2) our participants felt more aware of their contacts' states due to the physical ambient representation.
Body-centric interaction with mobile devices
Xiang 'Anthony' Chen
DOI: https://doi.org/10.1145/2148131.2148226

Most current mobile technologies require on-screen operations for interacting with devices' visual contents. However, as a trade-off for mobility, screens usually provide limited space for interactions. To address this problem, I explore Body-Centric Interaction (BCI), a design theme that extends a mobile device's interaction space from screen space to body space. My research methodology follows several steps. First, I use a generative bottom-up method, sketches and proof-of-concept implementations, to frame the breadth of the design space. Second, I populate the space with related work, which also unifies what has been done. Third, as work in progress, I explore the depth of promising BCI methods, with the goal of developing, refining and testing particular mobile interaction techniques.