In this paper, we present a study examining how individuals embody emotion within form. Our findings provide a general taxonomy of affective dimensions of shape that is consistent with and extends previous literature. We also show that ordinary people can reasonably construct embodied shapes using affective dimensions, and illustrate that emotion is conveyed through both visual dimensions and tactile manipulations of shape. Participants used three distinct strategies for embodying emotion through shape: depicting the look of the shape itself (visual representation), creating a shape that symbolizes the experience of the intended emotion (metaphor), and evoking the intended emotion in the creator through affective movements and manipulations during construction (motion). This work ties together and extends understanding of emotion and form in HCI subdomains such as tangible embodied interaction, emotional assessment, and user experience evaluation.
{"title":"Motion, Emotion, and Form: Exploring Affective Dimensions of Shape","authors":"Edward F. Melcer, K. Isbister","doi":"10.1145/2851581.2892361","DOIUrl":"https://doi.org/10.1145/2851581.2892361","url":null,"abstract":"In this paper, we present a study examining how individuals embody emotion within form. Our findings provide a general taxonomy of affective dimensions of shape consistent with and extending previous literature. We also show that ordinary people can reasonably construct embodied shapes using affective dimensions, and illustrate that emotion is conveyed through both visual dimensions and tactile manipulations of shape. Participants used three distinct strategies for embodiment of emotion through shape: the look of a shape (visual representation), creation of a shape symbolizing the experience of an intended emotion (metaphor), and by evoking the intended emotion in the creator through affective movements and manipulations during construction (motion). This work ties together and extends understanding around emotion and form in HCI subdomains such as tangible embodied interaction, emotional assessment, and user experience evaluation.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126751093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Manojlovic, K. Gavrilo, J. D. Wit, Vassilis-Javed Khan, P. Markopoulos
Recently, companies and academia have turned to crowdsourcing to stimulate creativity and innovation. Although children's creative contributions are well documented in co-design and co-creation of new products and services, children have not yet been involved in crowdsourcing. In this paper, we use crowdsourcing itself to investigate this gap. To gather a diverse sample of participants, we used CrowdFlower, a crowdsourcing platform, to generate, evaluate, and rank ideas and concepts. Results show that 93% of parents and 80% of non-parents would involve children in crowdsourcing. The concept most valued by the crowd was collaboration between parents and children innovating together for companies, for example publishing companies requesting drawings from children for book illustrations.
{"title":"Exploring the Potential of Children in Crowdsourcing","authors":"S. Manojlovic, K. Gavrilo, J. D. Wit, Vassilis-Javed Khan, P. Markopoulos","doi":"10.1145/2851581.2892312","DOIUrl":"https://doi.org/10.1145/2851581.2892312","url":null,"abstract":"Recently, companies and academia have turned to crowdsourcing to stimulate creativity and innovation. Although children's creative nature has been well documented in the design process in co-creation for new products and/or services, this has not yet extended to crowdsourcing. With this paper, we investigate -- through crowdsourcing -- the gap between children and crowdsourcing. To gather a diverse sample of participants we used CrowdFlower, a crowdsourcing platform, to generate, evaluate and rank ideas and concepts. Results show that 93% of parents and 80% of non-parents would involve children in crowdsourcing. The most valued concept of the crowd was the collaboration between parents and children, who are innovating for companies. This concept involves publishing companies requesting drawings from children for book illustrations.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114953258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ioanna Lykourentzou, Shannon Wang, R. Kraut, Steven W. Dow
Online crowds have the potential to do more complex work in teams, rather than as individuals. However, at such a large scale, team formation can be difficult to coordinate. (How) can we rely on the crowd itself to organize into effective teams? Our research explores a strategy for "team dating", a self-organized crowd team formation approach where workers try out and rate different candidate partners. In two online experiments, we find that team dating affects the way that people select partners and how they evaluate them. We use these results to draw useful conclusions for the future of team dating and its implications for collaborative crowdsourcing.
{"title":"Team Dating: A Self-Organized Team Formation Strategy for Collaborative Crowdsourcing","authors":"Ioanna Lykourentzou, Shannon Wang, R. Kraut, Steven W. Dow","doi":"10.1145/2851581.2892421","DOIUrl":"https://doi.org/10.1145/2851581.2892421","url":null,"abstract":"Online crowds have the potential to do more complex work in teams, rather than as individuals. However, at such a large scale, team formation can be difficult to coordinate. (How) can we rely on the crowd itself to organize into effective teams? Our research explores a strategy for \"team dating\", a self-organized crowd team formation approach where workers try out and rate different candidate partners. In two online experiments, we find that team dating affects the way that people select partners and how they evaluate them. We use these results to draw useful conclusions for the future of team dating and its implications for collaborative crowdsourcing.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115107266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Rice, Shue-Ching Chia, Hong Huei Tay, M. Wan, Liyuan Li, Jamie Ng, Joo-Hwee Lim
In this paper, we report on the evaluation of a remote assistance platform (RAP) designed to enable an expert to remotely assist a field operator. A user study with 16 participants was conducted to evaluate its usability with two assembly tasks that varied in complexity. As part of the assessment, we compared the interaction behavior on our platform with a commercial instant messaging application, which lacked the ability to augment or view video imagery. The results identified differences in completion times between the two conditions. We also examine the use of visual augmentation and provide recommendations for improving the platform.
{"title":"Exploring the Use of Visual Annotations in a Remote Assistance Platform","authors":"M. Rice, Shue-Ching Chia, Hong Huei Tay, M. Wan, Liyuan Li, Jamie Ng, Joo-Hwee Lim","doi":"10.1145/2851581.2892346","DOIUrl":"https://doi.org/10.1145/2851581.2892346","url":null,"abstract":"In this paper, we report on the evaluation of a remote assistance platform (RAP) that is designed to enable an expert to remotely assist a field operator. A user study with 16 participants was conducted to evaluate its usability with two assembly tasks that varied in their complexity. As part of the assessment, we compared the interaction behavior of our platform with a commercial instant messaging application, which lacked the ability to augment or view video imagery. The results identified differences in the completion times between the two conditions, as we examined the use of visual augmentation, including recommendations to improve the platform.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115582585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current work on design fictions has discussed their use for personal reflection, sharing with collaborators, and forming a public "vision", but typically with small numbers of participant readers. We wanted to explore a new way of using design fictions as a tool for discussion with large global audiences via social authoring websites. To achieve this, we wrote a widely read science-fiction novel, I'm a Cyborg's Pet (The Thinking Girl's Guide to Surviving a Robot Apocalypse), on Wattpad, an online social serial-writing website. We found that our readers confounded our initial expectations of dystopian fiction.
{"title":"Resistance is Fertile: Design Fictions in Dystopian Worlds","authors":"N. Dalton, R. Moreau, R. Adams","doi":"10.1145/2851581.2892572","DOIUrl":"https://doi.org/10.1145/2851581.2892572","url":null,"abstract":"Current work on design fiction has discussed their use for personal reflection, sharing with collaborators, forming a public \"vision\" but with small numbers of participant readers. We wanted to explore a new way of using design fictions as a tool for discussion with large global audiences via social authoring web sights. To achieve this, we wrote a highly read, science-fiction novel called I'm a Cyborg's Pet (The Thinking Girl's Guide to Surviving a Robot Apocalypse), on an online, social, serial-writing website called Wattpad. We found our readers confounded our initial expectations of dystopian fiction.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122424268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Majeed Kazemitabaar, Liang He, K. Wang, Chloe Aloimonos, T. Cheng, Jon E. Froehlich
We present ReWear, a modular 'plug-and-play' construction kit for retrofitting existing textiles (e.g., hats, scarves, shirts) with interactive electronic and computational behaviors without sewing or writing code. While a range of well-designed e-textile toolkits exist (e.g., LilyPad), they cater primarily to adults and older children and present a high barrier to entry for some users. ReWear is part of a larger research agenda, called MakerWear, aimed at engaging younger children (ages 4-12) in the creative design, play, and customization of e-textiles/wearables. We discuss our initial ReWear prototype, contrast it with past work, and describe a preliminary evaluation.
{"title":"ReWear: Early Explorations of a Modular Wearable Construction Kit for Young Children","authors":"Majeed Kazemitabaar, Liang He, K. Wang, Chloe Aloimonos, T. Cheng, Jon E. Froehlich","doi":"10.1145/2851581.2892525","DOIUrl":"https://doi.org/10.1145/2851581.2892525","url":null,"abstract":"We present, ReWear, a modular 'plug-and-play' construction kit for retrofitting existing textiles (e.g., hats, scarfs, shirts) with interactive electronic and computational behaviors without sewing or the creation of code. While a range of well-designed e-textile toolkits exist (e.g., LilyPad), they cater primarily to adults and older children and present a high barrier of entry for some users. ReWear is part of a larger research agenda, called MakerWear, that is aimed at engaging younger children (ages 4-12) in the creative design, play, and customization of e-textiles/wearables. We discuss our initial ReWear prototype, contrast it with past work, and describe a preliminary evaluation.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"283 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122867768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
W. Jones, V. Bellotti, Robert G. Capra, J. Dinneen, G. Mark, C. Marshall, Karyn Moffatt, J. Teevan, M. V. Kleek
People are amassing large personal information stores. These stores present rich opportunities for analysis and use in matters of wealth, health, living and legacy. But these stores also bring with them new challenges for managing information across long periods of time. Hence personal information management (PIM) research increasingly must address the long term. For the seventh PIM workshop in a successful series started in 2005, we propose taking a look at personal information with exactly this longitudinal perspective. We expect the workshop to attract a range of people doing research related to PIM, HCI, personal digital archiving, aging, and the design of informational spaces for later life. Attendees will discuss issues related to storing information for the long run, how stored information can benefit a person throughout their lifetime (and into old age), and the legacy of a person's personal information.
{"title":"For Richer, for Poorer, in Sickness or in Health...: The Long-Term Management of Personal Information","authors":"W. Jones, V. Bellotti, Robert G. Capra, J. Dinneen, G. Mark, C. Marshall, Karyn Moffatt, J. Teevan, M. V. Kleek","doi":"10.1145/2851581.2856481","DOIUrl":"https://doi.org/10.1145/2851581.2856481","url":null,"abstract":"People are amassing large personal information stores. These stores present rich opportunities for analysis and use in matters of wealth, health, living and legacy. But these stores also bring with them new challenges for managing information across long periods of time. Hence personal information management (PIM) research increasingly must address the long term. For the seventh PIM workshop in a successful series started in 2005, we propose taking a look at personal information with exactly this longitudinal perspective. We expect the workshop to attract a range of people doing research related to PIM, HCI, personal digital archiving, aging, and the design of informational spaces for later life. Attendees will discuss issues related to storing information for the long run, how stored information can benefit a person throughout their lifetime (and into old age), and the legacy of a person's personal information.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114045192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
N. Hamdan, Marcel Lahaye, Christian Corsten, Jan O. Borchers
This work provides first insights into supporting hierarchical micro-navigation in the physical world in a manner relevant to AR systems. In this paper, we study the performance of two presentation strategies in tasks that involve navigating to an object inside a hierarchy of physical containers within the user's reach. We consider two types of navigation aids: those that provide route knowledge via step-by-step instructions, using simple graphical overlays, and those that provide survey knowledge via map-like overviews, using 3D depth visualizations. We performed a user study using a cardboard mock-up of a spatial display. Our experiment shows that in shallow hierarchies route aids and survey aids perform comparably in terms of navigation time and accuracy. When a target is embedded deeper into a structure, the performance of survey aids is affected negatively, while route aids maintain a consistent performance. Users reported that survey aids helped them understand a container hierarchy, but route aids required less processing time and effort and were thus preferred. However, we found no significant effect of aid type on users' preference. Accordingly, we recommend considering the depth of the task when employing these presentation strategies.
{"title":"Presentation Strategies for Micro-Navigation in the Physical World","authors":"N. Hamdan, Marcel Lahaye, Christian Corsten, Jan O. Borchers","doi":"10.1145/2851581.2892543","DOIUrl":"https://doi.org/10.1145/2851581.2892543","url":null,"abstract":"This work provides first insights into supporting hierarchical micro-navigation in the physical world in a manner relevant to AR systems. In this paper, we study the performance of two presentation strategies in tasks that involve navigating to an object inside a hierarchy of physical containers within the user's reach. We consider two types of navigation aids: Those that provide route knowledge via step-by-step instructions, using simple graphical overlays, and those that provide survey knowledge via map-like overviews, using 3D depth visualizations. We performed a user study using a cardboard mock-up of a spatial display. Our experiment shows that in shallow hierarchies route aids and survey aids perform comparably in terms of navigation time and accuracy. When a target is embedded deeper into a structure, the performance of survey aids is affected negatively, while route aids maintain a consistent performance. Users re- ported that survey aids helped them understand a container hierarchy, but route aids required less processing time and effort, and thus, were more preferred. We found no significant effect of aid type on users' preference. Accordingly, we recommend considering the depth of task when employing these presentation strategies.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122202436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Hayes, M. Barthet, Yongmeng Wu, Leshao Zhang, N. Bryan-Kinns
Our Open Symphony system reimagines the music experience for a digital age, fostering alliances between performer and audience and our digital selves. Open Symphony enables live participatory music performance in which the audience actively engages in the music creation process. This is made possible by state-of-the-art web technologies and data visualisation techniques. Through collaborations with local performers, we will conduct a series of interactive music performances that revolutionize the performance experience for both performers and audiences. The system throws open music-creating possibilities to every participant and is a genuinely novel way to demonstrate the field of Human-Computer Interaction through computer-supported cooperative creation and multimodal music and visual perception.
{"title":"A Participatory Live Music Performance with the Open Symphony System","authors":"K. Hayes, M. Barthet, Yongmeng Wu, Leshao Zhang, N. Bryan-Kinns","doi":"10.1145/2851581.2889471","DOIUrl":"https://doi.org/10.1145/2851581.2889471","url":null,"abstract":"Our Open Symphony system reimagines the music experience for a digital age, fostering alliances between performer and audience and our digital selves. Open Symphony enables live participatory music performance where the audience actively engages in the music creation process. This is made possible by using state-of-the-art web technologies and data visualisation techniques. Through collaborations with local performers we will conduct a series of interactive music performance revolutionizing the performance experience both for performers and audiences. The system throws open music-creating possibilities to every participant and is a genuine novel way to demonstrate the field of Human Computer Interaction through computer-supported cooperative creation and multimodal music and visual perception.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129506992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents the Embodied Encounters Studio (EES), which facilitates live encounters between 2-4 persons, stimulates them to have a hands-on discussion, and records the encounters, collecting data for further analysis. EES was developed especially for the project "Engaging Encounters: sketching futures together", in which I explore potential futures with 100 inspirators around the world. The studio is based on the principle of participatory sensemaking, grounded in embodied and situated interactions in a shared action space. It offers a stage for interaction, several tools to spark the encounter, and recording devices to capture the encounters for data analysis. We are currently developing a more interactive 2.0 version that visualizes data on the spot, thus further enhancing sensemaking.
{"title":"Embodied Encounters Studio: A Tangible Platform for Sensemaking","authors":"Caroline Hummels","doi":"10.1145/2851581.2890272","DOIUrl":"https://doi.org/10.1145/2851581.2890272","url":null,"abstract":"This papers shows the Embodied Encounters Studio (EES) which facilitates live encounters between 2-4 persons while stimulating them to have a hands-on discussion, and which facilitates also the recording of the encounters, thus collecting data for further analysis. EES is especially developed for the project \"Engaging Encounters: sketching futures together\", during which I explore with 100 inspirators all over the world potential futures. The studio is based on the principle of participatory sensemaking grounded in embodied and situated interactions in a shared action space. The studio offers a stage for interaction, several tools to spark the encounter and recording devices to capture the encounters for data analysis. We are currently developing the more interactive 2.0 version that visualizes on spot thus enhancing sensemaking.","PeriodicalId":285547,"journal":{"name":"Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129520311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}