Junnan Yu, Andrea DeVore, Ricarose Roque
Parental mediation literature is mostly situated in the contexts of television, Internet use, video games, and mobile devices, while less is understood about how parents mediate their children’s engagement with education-focused media. We examine parental involvement in young children’s use of creation-oriented educational media, namely coding kits, from a mediation perspective through an interview study. We frame parents’ mediation practices along three dimensions: (1) creative mediation, where parents mediate to support children’s creating and learning with media; (2) preparative mediation, where parents explore and prepare media for children’s engagement; and (3) administrative mediation, where parents administer and regulate their children’s media use. Compared to the classic typology of restrictive, active, and co-using mediation, our proposed framework highlights the varied supportive practices parents adopt to help their children learn and create with media. We further connect our findings to Joint Media Engagement and reflect on implications for parent involvement in the design of children’s creation-oriented media.
"Parental Mediation for Young Children’s Use of Educational Media: A Case Study with Computational Toys and Kits." In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). https://doi.org/10.1145/3411764.3445427
Jussi P. P. Jokinen, Aditya Acharya, M. Uzair, Xinhui Jiang, A. Oulasvirta
Traditionally, touchscreen typing has been studied in terms of motor performance. However, recent research has exposed a decisive role of visual attention being shared between the keyboard and the text area. Strategies for this are known to adapt to the task, design, and user. In this paper, we propose a unifying account of touchscreen typing, regarding it as optimal supervisory control. Under this theory, rules for controlling visuo-motor resources are learned via exploration in pursuit of maximal typing performance. The paper outlines the control problem and explains how visual and motor limitations affect it. We then present a model, implemented via reinforcement learning, that simulates co-ordination of eye and finger movements. Comparison with human data affirms that the model creates realistic finger- and eye-movement patterns and shows human-like adaptation. We demonstrate the model’s utility for interface development in evaluating touchscreen keyboard designs.
{"title":"Touchscreen Typing As Optimal Supervisory Control","authors":"Jussi P. P. Jokinen, Aditya Acharya, M. Uzair, Xinhui Jiang, A. Oulasvirta","doi":"10.1145/3411764.3445483","DOIUrl":"https://doi.org/10.1145/3411764.3445483","url":null,"abstract":"Traditionally, touchscreen typing has been studied in terms of motor performance. However, recent research has exposed a decisive role of visual attention being shared between the keyboard and the text area. Strategies for this are known to adapt to the task, design, and user. In this paper, we propose a unifying account of touchscreen typing, regarding it as optimal supervisory control. Under this theory, rules for controlling visuo-motor resources are learned via exploration in pursuit of maximal typing performance. The paper outlines the control problem and explains how visual and motor limitations affect it. We then present a model, implemented via reinforcement learning, that simulates co-ordination of eye and finger movements. Comparison with human data affirms that the model creates realistic finger- and eye-movement patterns and shows human-like adaptation. We demonstrate the model’s utility for interface development in evaluating touchscreen keyboard designs.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73369134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Chivukula, Aiza Hasib, Ziqing Li, Jingle Chen, Colin M. Gray
HCI and STS researchers have previously described the ethical complexity of practice, drawing together aspects of organizational complexity, design knowledge, and ethical frameworks. Building on this work, we investigate the identity claims and beliefs that impact practitioners’ ability to recognize and act upon ethical concerns in a range of technology-focused disciplines. In this paper, we report results from an interview study with 12 practitioners, identifying and describing their identity claims related to ethical awareness and action. We conducted a critically-focused thematic analysis and identified eight distinct claims representing roles that relate to learning, educating, following policies, feeling a sense of responsibility, and being a member of a profession, a translator, an activist, or a deliberative practitioner. Based on our findings, we demonstrate how these claims foreground building competence in relation to ethical practice. We highlight the dynamic interplay among the claims and point toward implications for identity work in socio-technical contexts.
{"title":"Identity Claims that Underlie Ethical Awareness and Action","authors":"S. Chivukula, Aiza Hasib, Ziqing Li, Jingle Chen, Colin M. Gray","doi":"10.1145/3411764.3445375","DOIUrl":"https://doi.org/10.1145/3411764.3445375","url":null,"abstract":"HCI and STS researchers have previously described the ethical complexity of practice, drawing together aspects of organizational complexity, design knowledge, and ethical frameworks. Building on this work, we investigate the identity claims and beliefs that impact practitioners’ ability to recognize and act upon ethical concerns in a range of technology-focused disciplines. In this paper, we report results from an interview study with 12 practitioners, identifying and describing their identity claims related to ethical awareness and action. We conducted a critically-focused thematic analysis to identify eight distinct claims representing roles relating to learning, educating, following policies, feeling a sense of responsibility, being a member of a profession, a translator, an activist, and deliberative. Based on our findings, we demonstrate how the claims foreground building competence in relation to ethical practice. We highlight the dynamic interplay among these claims and point towards implications for identity work in socio-technical contexts.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73375082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elena Márquez Segura, Laia Turmo Vidal, Annika Wærn, Jared Duval, Luis Parrilla Bel, Ferran Altarriba Bertran
Warm-up games are widespread practices in multiple activities across domains, yet little scholarly work exists on their role in physical training. Here, we study the potential goals and benefits of warm-up games and explore opportunities for technology inclusion by investigating a collection of warm-up games gathered both online, from a survey of warm-up games curated, described, and used by Physical Education teachers, and in person, from ongoing design research conducted as part of a technology-supported circus training course. Further, in the context of the latter, we conducted explorative design interventions, augmenting a range of the warm-up games with wearable technology. Our work surfaces major goals and benefits of warm-up games, which can be broadly classified as preparing participants physically, socially, and mentally. We also show how the inclusion of open-ended technology can support these goals and discuss broader opportunities for technology inclusion in warm-up games.
{"title":"Physical Warm-up Games: Exploring the Potential of Play and Technology Design","authors":"Elena Márquez Segura, Laia Turmo Vidal, Annika Wærn, Jared Duval, Luis Parrilla Bel, Ferran Altarriba Bertran","doi":"10.1145/3411764.3445163","DOIUrl":"https://doi.org/10.1145/3411764.3445163","url":null,"abstract":"Warm-up games are widespread practices in multiple activities across domains, yet little scholarly work can be found about their role in physical training. Here, we study potential goals and benefits of warm-up games, and explore opportunities for technology inclusion through investigating a collection of warm-up games gathered: online, from a survey of online warm-up games curated, described, and used by Physical Education teachers; and in person, from an ongoing design research work as part of a technology-supported circus training course. Further, in the context of the latter, we conducted explorative design interventions, augmenting a range of the warm-up games with wearable technology. Our work surfaces major goals and benefits of warm-up games, which can be broadly classified as preparing participants physically, socially, and mentally. We also show how the inclusion of open-ended technology can support these goals and discuss broader opportunities for technology inclusion in warm-up games.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73899612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arjun Srinivasan, Nikhila Nyapathy, Bongshin Lee, S. Drucker, J. Stasko
Natural language interfaces (NLIs) for data visualization are becoming increasingly popular both in academic research and in commercial software. Yet, there is a lack of empirical understanding of how people specify visualizations through natural language. We conducted an online study (N = 102), showing participants a series of visualizations and asking them to provide utterances they would pose to generate the displayed charts. From the responses, we curated a dataset of 893 utterances and characterized the utterances according to (1) their phrasing (e.g., commands, queries, questions) and (2) the information they contained (e.g., chart types, data aggregations). To help guide future research and development, we contribute this utterance dataset and discuss its applications toward the creation and benchmarking of NLIs for visualization.
{"title":"Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations","authors":"Arjun Srinivasan, Nikhila Nyapathy, Bongshin Lee, S. Drucker, J. Stasko","doi":"10.1145/3411764.3445400","DOIUrl":"https://doi.org/10.1145/3411764.3445400","url":null,"abstract":"Natural language interfaces (NLIs) for data visualization are becoming increasingly popular both in academic research and in commercial software. Yet, there is a lack of empirical understanding of how people specify visualizations through natural language. We conducted an online study (N = 102), showing participants a series of visualizations and asking them to provide utterances they would pose to generate the displayed charts. From the responses, we curated a dataset of 893 utterances and characterized the utterances according to (1) their phrasing (e.g., commands, queries, questions) and (2) the information they contained (e.g., chart types, data aggregations). To help guide future research and development, we contribute this utterance dataset and discuss its applications toward the creation and benchmarking of NLIs for visualization.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"90 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84307823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Felix Huppert, Gerold Hoelzl, M. Kranz
Drone-assisted navigation aids that support the walking activities of visually impaired people are well established in related work, but fine-point object-grasping tasks and object localization in unknown environments still present an open and complex challenge. We present a drone-based interface that provides fine-grained haptic feedback and thus physically guides users in hand-object localization tasks in unknown surroundings. Our research is built around community groups of blind or visually impaired (BVI) people, who provided in-depth insights during the development process and later served as study participants. A pilot study assesses users’ sensitivity to the applied guiding stimulus forces and to different human-drone tether interfacing possibilities. In a comparative follow-up study, we show that our drone-based approach achieves greater accuracy than a current audio-based hand-guiding system and delivers an overall more intuitive and relatable fine-point guiding experience.
"GuideCopter - A Precise Drone-Based Haptic Guidance Interface for Blind or Visually Impaired People." In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). https://doi.org/10.1145/3411764.3445676
Janet X. Chen, F. Vitale, Joanna McGrenere
Digital data has become a key part of everyday life: people manage increasingly large and disparate collections of photos, documents, media, and more. But what happens after death? How can users select and prepare the data they want to leave behind before their eventual death? To explore how to support users, we first ran an ideation workshop to generate design ideas; then, we created a design workbook with 12 speculative concepts that explore diverging approaches and perspectives. We elicited reactions to the concepts from 20 participants (ages 18-81, varied occupations). We found that participants anticipated different types of motivation at different life stages, wished for tools to feel personal and intimate, and preferred individual control over their post-death self-representation. They also found comprehensive data replicas creepy and saw smart assistants as potential aides for suggesting meaningful data. Based on the results, we discuss key directions for designing more personalized and respectful death-preparation tools.
"What Happens After Death? Using a Design Workbook to Understand User Expectations for Preparing their Data." In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). https://doi.org/10.1145/3411764.3445359
Narjes Pourjafarian, Marion Koelle, Bruno Fruchard, Sahar Mavali, Konstantin Klamka, Daniel Groeger, P. Strohmeier, Jürgen Steimle
In traditional body-art, designs are adjusted to the body as they are applied, enabling creative improvisation and exploration. Conventional design and fabrication methods for epidermal interfaces, however, separate these steps. With BodyStylus, we present the first computer-assisted approach for on-body design and fabrication of epidermal interfaces. Inspired by traditional techniques, we propose a hand-held tool that augments freehand inking with digital support: projected in-situ guidance assists in creating valid on-body circuits and aesthetic ornaments that align with the human bodyscape, while proactive switching between inking and non-inking creates error-preventing constraints. We contribute BodyStylus’s design rationale and interaction concept along with an interactive prototype that uses self-sintering conductive ink. Results of two focus group explorations showed that the guidance was more appreciated by artists while the constraints appeared more useful to engineers, and that working on the body inspired critical reflection on the relationship between bodyscape, interaction, and designs.
{"title":"BodyStylus: Freehand On-Body Design and Fabrication of Epidermal Interfaces","authors":"Narjes Pourjafarian, Marion Koelle, Bruno Fruchard, Sahar Mavali, Konstantin Klamka, Daniel Groeger, P. Strohmeier, Jürgen Steimle","doi":"10.1145/3411764.3445475","DOIUrl":"https://doi.org/10.1145/3411764.3445475","url":null,"abstract":"In traditional body-art, designs are adjusted to the body as they are applied, enabling creative improvisation and exploration. Conventional design and fabrication methods of epidermal interfaces, however, separate these steps. With BodyStylus we present the first computer-assisted approach for on-body design and fabrication of epidermal interfaces. Inspired by traditional techniques, we propose a hand-held tool that augments freehand inking with digital support: projected in-situ guidance assists creating valid on-body circuits and aesthetic ornaments that align with the human bodyscape, while pro-active switching between inking and non-inking creates error preventing constraints. We contribute BodyStylus’s design rationale and interaction concept along with an interactive prototype that uses self-sintering conductive ink. Results of two focus group explorations showed that guidance was more appreciated by artists, while constraints appeared more useful to engineers, and that working on the body inspired critical reflection on the relationship between bodyscape, interaction, and designs.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81999466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bowen Kong, Rung-Huei Liang, MengChi Liu, Shu-Hsiang Chang, Hsiu-Chen Tseng, Chian-Huei Ju
The use of design fiction to speculate about imaginative and critical futures has been increasingly recognized in the design research community. Instead of focusing on speculation from a critical position, this paper reports an experiential approach that entangles everyday experiences in the process of speculating. Because science fiction has successfully provided fictional world-building as entanglement material for speculation, we held a workshop to conduct an entanglement experiment between personal photographs and the cyberpunk novel Neuromancer. We built a card deck consisting of 206 quotes from the novel and invited 15 participants to shuffle, draw, and re-compose sentences that best matched their personal photographs. Purposefully selected everyday anchors and sci-fi features in the quotes allowed us to investigate the moment when an everyday photograph encounters a fictional world. We describe the phenomena of imagination and entanglement, explain experiential entanglement, propose a conceptual model of entangled status, and present interpretations and implications for HCI.
"Neuromancer Workshop: Towards Designing Experiential Entanglement with Science Fiction." In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). https://doi.org/10.1145/3411764.3445273
Hyunyoung Kim, Aluna Everitt, Carlos E. Tejada, Men-Lin Zhong, Daniel Ashbrook
Toolkits for shape-changing interfaces (SCIs) enable designers and researchers to easily explore the broad design space of SCIs. However, despite their utility, existing approaches are often limited in the number of shape-change features they can express. This paper introduces MorpheesPlug, a toolkit for creating SCIs that covers seven of the eleven shape-change features identified in the literature. MorpheesPlug comprises (1) a set of six standardized widgets that express the shape-change features with user-definable parameters; (2) software for 3D-modeling the widgets to create 3D-printable pneumatic SCIs; and (3) a hardware platform to control the widgets. To evaluate MorpheesPlug, we carried out ten open-ended interviews with novice and expert designers who were asked to design an SCI using our software. Participants highlighted the ease of use and expressivity of MorpheesPlug.
{"title":"MorpheesPlug: A Toolkit for Prototyping Shape-Changing Interfaces","authors":"Hyunyoung Kim, Aluna Everitt, Carlos E. Tejada, Men-Lin Zhong, Daniel Ashbrook","doi":"10.1145/3411764.3445786","DOIUrl":"https://doi.org/10.1145/3411764.3445786","url":null,"abstract":"Toolkits for shape-changing interfaces (SCIs) enable designers and researchers to easily explore the broad design space of SCIs. However, despite their utility, existing approaches are often limited in the number of shape-change features they can express. This paper introduces MorpheesPlug , a toolkit for creating SCIs that covers seven of the eleven shape-change features identified in the literature. MorpheesPlug is comprised of (1) a set of six standardized widgets that express the shape-change features with user-definable parameters; (2) software for 3D-modeling the widgets to create 3D-printable pneumatic SCIs; and (3) a hardware platform to control the widgets. To evaluate MorpheesPlug we carried out ten open-ended interviews with novice and expert designers who were asked to design a SCI using our software. Participants highlighted the ease of use and expressivity of the MorpheesPlug.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84134029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}