From automated customer support to virtual assistants, conversational agents have transformed everyday interactions, yet despite phenomenal progress, no such agent exists for programming tasks. To understand the design space of such an agent, we prototyped PairBuddy—an interactive pair programming partner—drawing on research from conversational agents, software engineering, education, human-robot interaction, psychology, and artificial intelligence. We iterated on PairBuddy’s design through a series of Wizard-of-Oz studies. Our pilot study of six programmers showed promising results and provided insights toward PairBuddy’s interface design. In our second study, 14 programmers across all skill levels praised PairBuddy. Its active application of soft skills—adaptability, motivation, and social presence—as a navigator increased participants’ confidence and trust, while its technical skills—code contributions, just-in-time feedback, and creativity support—as a driver helped participants realize their own solutions. PairBuddy takes a first step towards an Alexa-like programming partner.
“Designing PairBuddy—A Conversational Agent for Pair Programming,” Peter Robe and S. Kuttal. ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-05-05. DOI: https://doi.org/10.1145/3498326
Haijun Xia, Michael Glueck, M. Annett, Michael Wang, Daniel J. Wigdor
Gestural interaction has evolved from a set of novel interaction techniques developed in research labs into a dominant interaction modality used by millions of users every day. Despite its widespread adoption, designing appropriate gesture vocabularies remains a challenging task for developers and designers. Existing research has largely used Expert-Led, User-Led, or Computationally-Based methodologies to design gesture vocabularies. These methodologies leverage the expertise, experience, and capabilities of experts, users, and systems to fulfill different requirements. In practice, however, none of these methodologies provides designers with a complete, multi-faceted perspective on the many factors that influence the design of gesture vocabularies, largely because a singular set of factors has yet to be established. Additionally, these methodologies do not identify or emphasize the subset of factors that are crucial to consider for a given use case. Therefore, this work reports the findings of an exhaustive literature review that identified 13 factors crucial to gesture vocabulary design, and examines the evaluation methods and interaction techniques commonly associated with each factor. The identified factors also enable a holistic examination of existing gesture design methodologies from a factor-oriented viewpoint, highlighting the strengths and weaknesses of each methodology. This work closes by proposing future research directions: developing an iterative, user-centered, factor-centric gesture design approach, and establishing an evolving ecosystem of factors crucial to gesture design.
“Iteratively Designing Gesture Vocabularies: A Survey and Analysis of Best Practices in the HCI Literature.” ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-05-05. DOI: https://doi.org/10.1145/3503537
F. Putze, Susanne Putze, Merle Sagehorn, C. Micek, E. Solovey
In human-computer interaction (HCI), there has been a push towards open science, but to date, this has not happened consistently for HCI research utilizing brain signals due to unclear guidelines to support reuse and reproduction. To understand existing practices in the field, this paper examines 110 publications, exploring domains, applications, modalities, mental states and processes, and more. This analysis reveals variance in how authors report experiments, which creates challenges to understand, reproduce, and build on that research. It then describes an overarching experiment model that provides a formal structure for reporting HCI research with brain signals, including definitions, terminology, categories, and examples for each aspect. Multiple distinct reporting styles were identified through factor analysis and tied to different types of research. The paper concludes with recommendations and discusses future challenges. This creates actionable items from the abstract model and empirical observations to make HCI research with brain signals more reproducible and reusable.
“Understanding HCI Practices and Challenges of Experiment Reporting with Brain Signals: Towards Reproducibility and Reuse.” ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-03-31. DOI: https://doi.org/10.1145/3490554
S. Wallace, Z. Bylinskii, Jonathan Dobres, Bernard Kerr, Sam Berlow, Rick Treitman, N. Kumawat, Kathleen Arpin, Dave B. Miller, Jeff Huang, B. Sawyer
In our age of ubiquitous digital displays, adults often read in short, opportunistic interludes. In this context of Interlude Reading, we consider whether manipulating font choice can improve adult readers’ reading outcomes. Our studies normalize font size by human perception and use hundreds of crowdsourced participants to provide a foundation for understanding which fonts people prefer and which fonts make them more effective readers. Participants’ reading speeds (measured in words per minute (WPM)) increased by 35% when comparing the fastest and slowest fonts, without affecting reading comprehension. High WPM variability across fonts suggests that one font does not fit all. We provide font recommendations related to higher reading speed and discuss the need for individuation, allowing digital devices to match their readers’ needs in the moment. These recommendations come from one of the most significant online reading efforts to date; to complement them, we release our materials and tools with this article.
“Towards Individuated Reading Experiences: Different Fonts Increase Reading Speed for Different Individuals.” ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-03-31. DOI: https://doi.org/10.1145/3502222
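As a hedged illustration only (the function and all numbers below are invented for exposition, not taken from the study's data), the words-per-minute comparison reported in the abstract reduces to simple arithmetic:

```python
# Sketch of a words-per-minute (WPM) comparison between two fonts.
# The passage length and per-font timings are hypothetical values
# chosen to illustrate a roughly 35% speed difference.

def wpm(word_count, seconds):
    """Reading speed in words per minute."""
    return word_count / (seconds / 60.0)

# Hypothetical seconds taken to read a 300-word passage in each font.
timings = {"SlowerFont": 85.0, "FasterFont": 63.0}

speeds = {font: wpm(300, secs) for font, secs in timings.items()}
fastest = max(speeds.values())
slowest = min(speeds.values())

# Relative gain of the fastest font over the slowest, in percent.
gain = (fastest - slowest) / slowest * 100
```

With these invented timings, `gain` comes out near the 35% figure the abstract reports; in the actual studies, per-participant variability across fonts is the central finding.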
Joanna Bergström, Jarrod Knibbe, Henning Pohl, K. Hornbæk
Sense of control is increasingly used as a measure of quality in human-computer interaction. Control has been investigated mainly at a high level, using subjective questionnaire data, but also at a low level, using objective data on participants’ sense of agency. However, it remains unclear how differences in higher-level experienced control reflect the lower-level sense of control. We study that link in two experiments. In the first, we measure the low-level sense of agency with button, touchpad, and on-skin input. The results show a higher sense of agency with on-skin input. In the second experiment, participants played a simple game controlled with the same three inputs. We find that on-skin input results in both an increased sense and experience of control compared to touchpad input. However, no corresponding difference is found between on-skin and button input, even though the button performed better in the experiment task. These results suggest that other factors of user experience spill over into experienced control, at rates that overcome differences in the sense of control. We discuss the implications of using subjective measures of the sense of control to evaluate qualities of interaction.
“Sense of Agency and User Experience: Is There a Link?” ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-03-31. DOI: https://doi.org/10.1145/3490493
A. Papenmeier, Dagmar Kern, G. Englebienne, C. Seifert
Automated decision-making systems are becoming increasingly powerful as model complexity grows. While strong in prediction accuracy, deep learning models are black boxes by nature, preventing users from making informed judgments about the correctness and fairness of such automated systems. Explanations have been proposed as a general remedy to the black-box problem. However, it remains unclear whether the effects of explanations on user trust generalise across varying accuracy levels. In an online user study with 959 participants, we examined the practical consequences of adding explanations for user trust: we evaluated trust for three explanation types on three classifiers of varying accuracy. We find that the influence of our explanations on trust differs depending on the classifier’s accuracy. Thus, the interplay between trust and explanations is more complex than previously reported. Our findings also reveal discrepancies between self-reported and behavioural trust, showing that the choice of trust measure impacts the results.
“It’s Complicated: The Relationship between User Trust, Model Accuracy and Explanations in AI.” ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-03-31. DOI: https://doi.org/10.1145/3495013
Jingyi Xie, Madison Reddie, Sooyeon Lee, Syed Masum Billah, Zihan Zhou, Chun-Hua Tsai, John M. Carroll
Remote sighted assistance (RSA) is an emerging navigational aid for people with visual impairments (PVI). Using scenario-based design to illustrate our ideas, we developed a prototype showcasing potential applications for computer vision to support RSA interactions. We reviewed the prototype demonstrating real-world navigation scenarios with an RSA expert, and then iteratively refined the prototype based on feedback. We reviewed the refined prototype with 12 RSA professionals to evaluate the desirability and feasibility of the prototyped computer vision concepts. The RSA expert and professionals were engaged by, and reacted insightfully and constructively to the proposed design ideas. We discuss what we learned about key resources, goals, and challenges of the RSA prosthetic practice through our iterative prototype review, as well as implications for the design of RSA systems and the integration of computer vision technologies into RSA.
“Iterative Design and Prototyping of Computer Vision Mediated Remote Sighted Assistance.” ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-03-31. DOI: https://doi.org/10.1145/3501298
Dina Sabie, Cansu Ekmekcioglu, Syed Ishtiaque Ahmed
This article presents a thorough discussion of the trajectories of international migration research in HCI. We begin by reporting our survey findings of 282 HCI-related publications about migration, drawn from nine digital libraries and published between 2010 and 2019, summarizing how this research stream has evolved, the geographies and populations it encompasses, and the methodologies it utilizes. We then augment these findings with data from interviews with 11 skilled researchers who reflect on their working experience in this area. Our analysis reveals how the domain has evolved from the European migrant crisis to a more global agenda of migration and points towards a shifting focus from addressing immediate needs to acknowledging more complex political and emotional aspects of mobility. We also uncover the critical role of academic, local, and international politics in migration research in HCI. We discuss these findings to explore future opportunities in this area and advance HCI research discourse with the marginalized populace.
“A Decade of International Migration Research in HCI: Overview, Challenges, Ethics, Impact, and Future Directions.” ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-03-31. DOI: https://doi.org/10.1145/3490555
Teresa Hirzle, Fabian Fischbach, Julian Karlbauer, Pascal Jansen, Jan Gugenheimer, E. Rukzio, A. Bulling
Digital eye strain (DES), caused by prolonged exposure to digital screens, stresses the visual system and negatively affects users’ well-being and productivity. While DES is well studied for computer displays, its impact on users of virtual reality (VR) head-mounted displays (HMDs) is largely unexplored—even though some of their key properties (e.g., the vergence-accommodation conflict) make VR HMDs particularly prone to it. This work provides the first comprehensive investigation of DES in VR HMDs. We present results from a survey with 68 experienced users to understand DES symptoms in VR HMDs. To help address DES, we investigate eye exercises derived from survey answers, as well as blue light filtering, in three user studies (N = 71). Results demonstrate that eye exercises, but not blue light filtering, can effectively reduce DES. We conclude with an extensive analysis of the user studies and condense our findings into 10 key challenges that guide future work in this emerging research area.
“Understanding, Addressing, and Analysing Digital Eye Strain in Virtual Reality Head-Mounted Displays.” ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-03-31. DOI: https://doi.org/10.1145/3492802
Pavel Karpashevich, Pedro Sanches, Rachael Garrett, Yoav Luft, Kelsey Cotton, Vasiliki Tsaknaki, K. Höök
We report on a soma design process in which we designed a novel shape-changing garment—the Soma Corset. The corset integrates sensing and actuation around the torso in tight interaction loops. The design process revealed how boundaries between the garment and the wearer can become blurred, leading to three flavours of cyborg relations. First, through the lens of the monster, we articulate how the wearer can adopt or reject the garment, resulting in either harmonious or disconcerting experiences of touch. Second, the garment can be experienced as an organic “other”, with its own agency, resulting in uncanny experiences of touch. Third, through mirroring the wearer’s breathing, the garment can be experienced as a twisted version of one’s own body. We suggest that a gradual sensitisation of designers, through soma design and reflection on the emerging human-technology relations, may serve as a pathway for uncovering and articulating novel, machine-like, digital touch experiences.
“Touching Our Breathing through Shape-Change: Monster, Organic Other, or Twisted Mirror.” ACM Transactions on Computer-Human Interaction (TOCHI), published 2022-02-10. DOI: https://doi.org/10.1145/3490498