Dual-Space Drawing is an interface that enables children to express their drawing ideas in both the digital and real worlds. It supports creative and reflective drawing experiences using two layers: a transparent layer and a screen layer. The interface unobtrusively captures a user's drawing movements on the transparent display and then projects those movements on the screen display alongside user-selected multimedia components. Dual-Space Drawing lets users interact with motion graphics on a mirror-like display. In the process of designing self-projected scenes and creating digital content, children can express themselves and embody their ideas. While designing a digital object, a user's response to the object creates a new relationship with it, connected to the user's self-reflection and self-projection. In this way, Dual-Space Drawing integrates the user's drawing activity with expressive interaction.
{"title":"Dual-space drawing: designing an interface to support creative and reflective drawing experiences","authors":"Jee Yeon Hwang, Henry Holtzman, M. Resnick","doi":"10.1145/1979742.1979912","DOIUrl":"https://doi.org/10.1145/1979742.1979912","url":null,"abstract":"Dual-Space Drawing is an interface that enables children to express their drawing ideas in both the digital and real worlds. It supports creative and reflective drawing experiences using two layers: a transparent layer and a screen layer. The interface takes a user's drawing movements on the transparent display unobtrusively and then projects the movements on the screen display while presenting the user-selected multimedia components. Dual-Space Drawing lets users interact with motion graphics on a mirror-like display. In the process of designing the self-projected scenes and creating digital contents, children can express themselves and embody their ideas. While designing a digital object, a user's response to the object creates a new relationship to the object in connection with the user's self-reflectionprojection. In this way, Dual-Space Drawing integrates the user's drawing activity with expressive interaction.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131135954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incorporating social media into the enterprise is a key opportunity, as well as a critical challenge, facing many organizations today. Paramount in decision-making about social media implementation is the question of 'value'. Our research examines the deployment of an online innovation management platform used to run an annual research and development proposal competition over two cycles of usage. Our findings suggest strategies for monitoring and measuring the effectiveness of social media's impact on an existing innovation process within the context of a business strategy.
{"title":"Measuring the effectiveness of social media on an innovation process","authors":"L. J. Holtzblatt, M. Tierney","doi":"10.1145/1979742.1979669","DOIUrl":"https://doi.org/10.1145/1979742.1979669","url":null,"abstract":"Incorporating social media into the Enterprise is a key opportunity as well as critical challenge facing many organizations today. Tantamount in decision-making about social media implementation is the question of 'value'. Our research examines the deployment of an online innovation management platform to execute an annual research and development proposal competition over two cycles of usage. Our findings suggest strategies for monitoring and measuring the effectiveness of social media's impact to an existing innovation process within the context of a business strategy.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131287567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed collaboration has been enhanced in recent years by sophisticated new video conferencing setups such as HP Halo and Cisco Telepresence, which improve the user experience of distributed meetings over traditional video conferencing. The experience they create can be described as one of "blending" distributed physical locations into one shared space. Inspired by this trend, we have been exploring the systematic creation of blended spaces for distributed collaboration through the design of appropriate shared spatial geometries. We present early iterations of our design work: the Blended Interaction Space One prototype, BISi, and the lessons learned from its creation.
{"title":"BISi: a blended interaction space","authors":"J. Paay, J. Kjeldskov, Kenton O'hara","doi":"10.1145/1979742.1979644","DOIUrl":"https://doi.org/10.1145/1979742.1979644","url":null,"abstract":"Distributed collaboration has been enhanced in recent years by sophisticated new video conferencing setups like HP Halo and Cisco Telepresence, improving the user experience of distributed meeting situations over traditional video conferencing. The experience created can be described as one of \"blending\" distributed physical locations into one shared space. Inspired by this trend, we have been exploring the systematic creation of blended spaces for distributed collaboration through the design of appropriate shared spatial geometries. We present early iterations of our design work: the Blended Interaction Space One prototype, BISi, and the lessons learned from its creation.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131302718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Similar sliding gestures may have different meanings when performed with different intensities. Touch screens, however, fail to distinguish those intensities properly because they cannot sense variable pressure. By distinguishing normal and tangential forces, we explore new possibilities for gestures on a touch screen. We have implemented a pressure-sensitive prototype and designed a set of gestures that utilize varying forces. The gestures' feasibility has been tested through a simple experiment. Finally, we discuss the new possibilities of touch interactions that are sensitive to pressure.
{"title":"Force gestures: augmented touch screen gestures using normal and tangential force","authors":"Seongkook Heo, Geehyuk Lee","doi":"10.1145/1979742.1979895","DOIUrl":"https://doi.org/10.1145/1979742.1979895","url":null,"abstract":"Similar sliding gestures may have different meanings when they are performed with changing intensity. Touch screens, however, fail to properly distinguish those intensities due to their inability to sense variable pressures. Enabled by distinguishing normal and tangential forces, we explore new possibilities for gestures on a touch screen. We have implemented a pressure-sensitive prototype and have designed a set of gestures that utilize alterable forces. The gestures' feasibility has been tested through a simple experiment. Finally, we discuss the new possibility of touch interactions that are sensitive to pressure.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128719812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce Communiclay, a modular construction system for tangible kinetic communication of gesture and form over a distance. Users assemble a number of Communiclay nodes into unique configurations, connect their creations to each other's Communiclay creations on a network, and then physically deform one creation to synchronously output those same gestures on the other networked creations. Communiclay builds on trends in tangible interfaces and explores the ways in which future actuated materials can enable a variety of tangible interfaces. We present applications that stem from past research in tangible media, and describe explorations that address ways in which people make meaning of remote communication through gesture and dynamic physical form. Our hypothesis is that current research in programmable matter will eventually converge with UI research; Communiclay demonstrates that we can begin to explore design and social issues with today's technologies.
{"title":"Communiclay: a modular system for tangible telekinetic communication","authors":"Hayes Raffle, Ruibing Wang, K. Seada, H. Ishii","doi":"10.1145/1979742.1979612","DOIUrl":"https://doi.org/10.1145/1979742.1979612","url":null,"abstract":"We introduce Communiclay, a modular construction system for tangible kinetic communication of gesture and form over a distance. Users assemble a number of Communiclay nodes into unique configurations, connect their creations to each others' Communiclay creations on a network, and then physically deform one creation to synchronously output those same gestures on the other networked creations. Communiclay builds on trends in tangible interfaces and explores the ways in which future actuated materials can enable a variety of tangible interfaces. We present applications that stem from past research in tangible media, and describe explorations that address ways in which people make meaning of remote communication through gesture and dynamic physical form. Our hypothesis is that current research in programmable matter will eventually converge with UI research; Communiclay demonstrates that we can begin to explore design and social issues with today's technologies.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121838640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a critical review of eye tracking as a research approach and evaluates its potential for usability testing with pre-school children. We argue that eye-tracking data is useful for assessing web engagement in this age group, but only if triangulated against other usability methods. Recommendations for potential usability methods to use in tandem with eye tracking are presented as part of work in progress within a joint project between the University of Salford (UK) and the British Broadcasting Corporation (BBC) exploring best-fit methodologies for understanding web engagement in young children.
{"title":"How revealing are eye-movements for understanding web engagement in young children","authors":"Stacey Birkett, A. Galpin, S. Cassidy, L. Marrow, S. Norgate","doi":"10.1145/1979742.1979900","DOIUrl":"https://doi.org/10.1145/1979742.1979900","url":null,"abstract":"This paper presents a critical review of eye tracking as a research approach and evaluates its potential for usability testing in pre-school children. We argue that eye-tracking data is useful for assessing web engagement in this age-group, but only if triangulated against other usability methods. Recommendations for potential usability methods to use in tandem with eye-tracking are presented as part of a work in progress within a joint partner project between the University of Salford (UK) and the British Broadcasting Corporation (BBC) exploring best-fit methodologies for understanding web engagement in young children.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"14 19","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120862392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We developed an application to gather text entry speed and accuracy metrics on Android devices. This paper details the features of the application and describes a pilot study to demonstrate its utility. We evaluated and compared three mobile text entry methods: QWERTY typing, handwriting recognition, and shape writing recognition. Handwriting was the slowest and least accurate technique. QWERTY was faster than shape writing, but we found no significant difference in accuracy between the two techniques.
{"title":"Gathering text entry metrics on android devices","authors":"Steven J. Castellucci, I. Mackenzie","doi":"10.1145/1979742.1979799","DOIUrl":"https://doi.org/10.1145/1979742.1979799","url":null,"abstract":"We developed an application to gather text entry speed and accuracy metrics on Android devices. This paper details the features of the application and describes a pilot study to demonstrate its utility. We evaluated and compared three mobile text entry methods: QWERTY typing, handwriting recognition, and shape writing recognition. Handwriting was the slowest and least accurate technique. QWERTY was faster than shape writing, but we found no significant difference in accuracy between the two techniques.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121219678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In previous work, we confirmed that artificial subtle expressions (ASEs) from a robot can convey its internal states to participants accurately and intuitively. In this paper, we investigated whether ASEs from an on-screen artifact can also convey the artifact's internal states to participants, in order to determine whether ASEs are interpreted consistently across different types of artifacts. The results clearly showed that interpretations of the ASEs from the on-screen artifact were consistent with those from the robotic agent.
{"title":"Effects of different types of artifacts on interpretations of artificial subtle expressions (ASEs)","authors":"T. Komatsu, S. Yamada, Kazuki Kobayashi, Kotaro Funakoshi, Mikio Nakano","doi":"10.1145/1979742.1979756","DOIUrl":"https://doi.org/10.1145/1979742.1979756","url":null,"abstract":"So far, we already confirmed that the artificial subtle expressions (ASEs) from a robot could convey its internal states to participants accurately and intuitively. In this paper, we investigated whether the ASEs from an on-screen artifact could also convey the artifact's internal states to participants in order to confirm whether the ASEs can be interpreted consistently for various types of artifacts. The results clearly showed that the ASEs' interpretations from on-screen artifact were consistent with the ones from robotic agent.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122326474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developers increasingly consult online examples and message boards to find solutions to common programming tasks. On the web, finding solutions to debugging problems is harder than searching for working code. Prior research introduced a social recommender system, HelpMeOut, that crowdsources debugging suggestions by presenting fixes to errors that peers have applied in the past. However, HelpMeOut only worked for statically typed, compiled programming languages like Java. We investigate how suggestions can be provided for dynamic, interpreted web development languages. Our primary insight is to instrument test-driven development to collect examples of bug fixes. We present Crowd::Debug, a tool for Ruby programmers that realizes these benefits.
{"title":"Crowdsourcing suggestions to programming problems for dynamic web development languages","authors":"D. Mujumdar, Manuel Kallenbach, Brandon Liu, Bjoern Hartmann","doi":"10.1145/1979742.1979802","DOIUrl":"https://doi.org/10.1145/1979742.1979802","url":null,"abstract":"Developers increasingly consult online examples and message boards to find solutions to common programming tasks. On the web, finding solutions to debugging problems is harder than searching for working code. Prior research introduced a social recommender system, HelpMeOut, that crowdsources debugging suggestions by presenting fixes to errors that peers have applied in the past. However, HelpMeOut only worked for statically typed, compiled programming languages like Java. We investigate how suggestions can be provided for dynamic, interpreted web development languages. Our primary insight is to instrument test-driven development to collect examples of bug fixes. We present Crowd::Debug, a tool for Ruby programmers that realizes these benefits.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125973108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tag clouds are typically presented so that users can actively utilize community-generated metadata to query a collection. This research investigates whether such keyword clouds, and other interactive search metadata, also provide measurable passive support for users who do not directly interact with them. If so, then objective interaction-based measurements may not be the best way to evaluate these kinds of search user interface features. This paper discusses our study design, and the insights provided by a pilot study that led to a series of improvements to that design.
{"title":"Tag clouds and keyword clouds: evaluating zero-interaction benefits","authors":"M. Wilson, Max L. Wilson","doi":"10.1145/1979742.1979913","DOIUrl":"https://doi.org/10.1145/1979742.1979913","url":null,"abstract":"Tag clouds are typically presented so that users can actively utilize community-generated metadata to query a collection. This research investigates whether such keyword clouds, and other interactive search metadata, also provide measureable passive support for users who do not directly interact with them. If so, then objective interaction-based measurements may not be the best way to evaluate these kinds of search user interface features. This paper discusses our study design, and the insights provided by a pilot study that led to a series of improvements to our study design.","PeriodicalId":275462,"journal":{"name":"CHI '11 Extended Abstracts on Human Factors in Computing Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126351829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}