Using High Frequency Accelerometer and Mouse to Compensate for End-to-end Latency in Indirect Interaction
A. Antoine, Sylvain Malacria, Géry Casiez
End-to-end latency corresponds to the temporal difference between a user input and the corresponding output from a system. It has been shown to degrade user performance in both direct and indirect interaction. While latency can be reduced to some extent, it can also be compensated for in software by predicting the future position of the cursor from previous positions, velocities, and accelerations. In this paper, we propose a hybrid hardware and software prediction technique specifically designed to partially compensate for end-to-end latency in indirect pointing. We combine a computer mouse with a high-frequency accelerometer to predict the future location of the pointer using Euler-based equations. Our method yields more accurate predictions than previously introduced prediction algorithms for direct touch. A controlled experiment also revealed that it can improve target acquisition time in pointing tasks.
DOI: https://doi.org/10.1145/3173574.3174183
Using Co-Design to Examine How Children Conceptualize Intelligent Interfaces
Julia Woodward, Zari McFadden, Nicole Shiver, Amir Ben-hayon, Jason C. Yip, Lisa Anthony
Prior work has shown that intelligent user interfaces (IUIs) that use modalities such as speech, gesture, and writing pose challenges for children due to their developing cognitive and motor skills. Research has focused on improving recognition and accuracy by accommodating children's specific interaction behaviors. Understanding children's expectations of IUIs is also important to decrease the impact of recognition errors that occur. To understand children's conceptual model of IUIs, we completed four consecutive participatory design sessions on designing IUIs with an emphasis on error detection and correction. We found that, while children think of interactive systems in terms of both user input and behavior and system output and behavior, they also propose ideas that require advanced system intelligence, e.g., context and conversation. Our work contributes new understanding of how children conceptualize IUIs and new methods for error detection and correction, and will inform the design of future IUIs for children to improve their experience.
{"title":"Using Co-Design to Examine How Children Conceptualize Intelligent Interfaces","authors":"Julia Woodward, Zari McFadden, Nicole Shiver, Amir Ben-hayon, Jason C. Yip, Lisa Anthony","doi":"10.1145/3173574.3174149","DOIUrl":"https://doi.org/10.1145/3173574.3174149","url":null,"abstract":"Prior work has shown that intelligent user interfaces (IUIs) that use modalities such as speech, gesture, and writing pose challenges for children due to their developing cognitive and motor skills. Research has focused on improving recognition and accuracy by accommodating children's specific interaction behaviors. Understanding children's expectations of IUIs is also important to decrease the impact of recognition errors that occur. To understand children's conceptual model of IUIs, we completed four consecutive participatory design sessions on designing IUIs with an emphasis on error detection and correction. We found that, while children think of interactive systems in terms of both user input and behavior and system output and behavior, they also propose ideas that require advanced system intelligence, e.g., context and conversation. Our work contributes new understanding of how children conceptualize IUIs and new methods for error detection and correction, and will inform the design of future IUIs for children to improve their experience.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"63 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91340412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Information Sharing about Care Recipients by Family Caregivers Impacts Family Communication
Naomi Yamashita, H. Kuzuoka, Takashi Kudo, K. Hirata, E. Aramaki, Kazuki Hattori
Previous research has shown that tracking technologies have the potential to help family caregivers optimize their coping strategies and improve their relationships with care recipients. In this paper, we explore how sharing the tracked data (i.e., caregiving journals and patients' conditions) with other family caregivers affects home care and family communication. Although previous work suggested that family caregivers may benefit from reading the records of others, sharing patients' private information might fuel negative feelings of surveillance and violation of trust for care recipients. To explore this question, we added a sharing feature to the previously developed tracking tool and deployed it for six weeks in the homes of 15 family caregivers who were caring for a depressed family member. Our findings show how the sharing feature attracted the attention of care recipients and helped the family caregivers discuss sensitive issues with care recipients.
{"title":"How Information Sharing about Care Recipients by Family Caregivers Impacts Family Communication","authors":"Naomi Yamashita, H. Kuzuoka, Takashi Kudo, K. Hirata, E. Aramaki, Kazuki Hattori","doi":"10.1145/3173574.3173796","DOIUrl":"https://doi.org/10.1145/3173574.3173796","url":null,"abstract":"Previous research has shown that tracking technologies have the potential to help family caregivers optimize their coping strategies and improve their relationships with care recipients. In this paper, we explore how sharing the tracked data (i.e., caregiving journals and patient's conditions) with other family caregivers affects home care and family communication. Although previous works suggested that family caregivers may benefit from reading the records of others, sharing patients' private information might fuel negative feelings of surveillance and violation of trust for care recipients. To address this research question, we added a sharing feature to the previously developed tracking tool and deployed it for six weeks in the homes of 15 family caregivers who were caring for a depressed family member. Our findings show how the sharing feature attracted the attention of care recipients and helped the family caregivers discuss sensitive issues with care recipients.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"100 6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76667613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Discoverability and Expert Performance in Force-Sensitive Text Selection for Touch Devices with Mode Gauges
Alix Goguey, Sylvain Malacria, C. Gutwin
Text selection on touch devices can be a difficult task for users. Letters and words are often too small to select directly, and the enhanced interaction techniques provided by the OS -- magnifiers, selection handles, and methods for selecting at the character, word, or sentence level -- often lead to as many usability problems as they solve. The introduction of force-sensitive touchscreens has added another enhancement to text selection (using force for different selection modes); however, these modes are difficult to discover and many users continue to struggle with accurate selection. In this paper we report on an investigation of the design of touch-based and force-based text selection mechanisms, and describe two novel text-selection techniques that provide improved discoverability, enhanced visual feedback, and a higher performance ceiling for experienced users. Two evaluations show that one design successfully combined support for novices and experts, was never worse than the standard iOS technique, and was preferred by participants.
DOI: https://doi.org/10.1145/3173574.3174051
Regulating Feelings During Interpersonal Conflicts by Changing Voice Self-perception
J. Costa, Malte F. Jung, M. Czerwinski, François Guimbretière, T. Le, Tanzeem Choudhury
Emotions play a major role in how interpersonal conflicts unfold. Although several strategies and technological approaches have been proposed for emotion regulation, they often require conscious attention and effort, which limits their efficacy in practice. In this paper, we propose a different approach inspired by self-perception theory: noticing that people often react to the perception of their own behavior, we artificially change their perceptions to influence their emotions. We conducted two studies to evaluate the potential of this approach by automatically and subtly altering how people perceive their own voice. In one study, participants who received voice feedback with a calmer tone during relationship conflicts felt less anxious. In the other study, participants who listened to their own voices with a lower pitch during contentious debates felt more powerful. We discuss the implications of our findings and the opportunities for designing automatic and less perceptible emotion regulation systems.
{"title":"Regulating Feelings During Interpersonal Conflicts by Changing Voice Self-perception","authors":"J. Costa, Malte F. Jung, M. Czerwinski, François Guimbretière, T. Le, Tanzeem Choudhury","doi":"10.1145/3173574.3174205","DOIUrl":"https://doi.org/10.1145/3173574.3174205","url":null,"abstract":"Emotions play a major role in how interpersonal conflicts unfold. Although several strategies and technological approaches have been proposed for emotion regulation, they often require conscious attention and effort. This often limits their efficacy in practice. In this paper, we propose a different approach inspired by self-perception theory: noticing that people are often reacting to the perception of their own behavior, we artificially change their perceptions to influence their emotions. We conducted two studies to evaluate the potential of this approach by automatically and subtly altering how people perceive their own voice. In one study, participants that received voice feedback with a calmer tone during relationship conflicts felt less anxious. In the other study, participants who listened to their own voices with a lower pitch during contentious debates felt more powerful. We discuss the implications of our findings and the opportunities for designing automatic and less perceptible emotion regulation systems.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78227686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Benefits and Challenges of Video Calling for Emergency Situations
Carman Neustaedter, Brennan Jones, Kenton O'hara, A. Sellen
In the coming years, emergency calling services in North America will begin to incorporate new modalities for reporting emergencies, including video-based calling. The challenge is that we know little about how video calling systems should be designed and what benefits or challenges video calling might bring. We conducted observations and contextual interviews within three emergency response call centres to investigate these points, focusing on the work practices of call takers and dispatchers. Results show that video calls could provide valuable contextual information about a situation and help to overcome call takers' challenges with information ambiguity, location, deceit, and communication issues. Yet video calls have the potential to introduce issues around control, information overload, and privacy if systems are not designed well. These results point to the need to think about emergency video calling along a continuum of visual modalities, ranging from audio calls accompanied by images or video clips, to one-way video streams, to two-way video streams in which camera control and camera work need to be carefully designed.
{"title":"The Benefits and Challenges of Video Calling for Emergency Situations","authors":"Carman Neustaedter, Brennan Jones, Kenton O'hara, A. Sellen","doi":"10.1145/3173574.3174231","DOIUrl":"https://doi.org/10.1145/3173574.3174231","url":null,"abstract":"In the coming years, emergency calling services in North America will begin to incorporate new modalities for reporting emergencies, including video-based calling. The challenge is that we know little of how video calling systems should be designed and what benefits or challenges video calling might bring. We conducted observations and contextual interviews within three emergency response call centres to investigate these points. We focused on the work practices of call takers and dispatchers. Results show that video calls could provide valuable contextual information about a situation and help to overcome call taker challenges with information ambiguity, location, deceit, and communication issues. Yet video calls have the potential to introduce issues around control, information overload, and privacy if systems are not designed well. These results point to the need to think about emergency video calling along a continuum of visual modalities ranging from audio calls accompanied with images or video clips to one-way video streams to two-way video streams where camera control and camera work need to be carefully designed.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"79 11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75570920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rewire: Interface Design Assistance from Examples
Amanda Swearngin, Mira Dontcheva, Wilmot Li, Joel Brandt, M. Dixon, Amy J. Ko
Interface designers often use screenshot images of example designs as building blocks for new designs. Since images are unstructured and hard to edit, designers typically reconstruct screenshots with vector graphics tools in order to reuse or edit parts of the design. Unfortunately, this reconstruction process is tedious and slow. We present Rewire, an interactive system that helps designers leverage example screenshots. Rewire automatically infers a vector representation of screenshots where each UI component is a separate object with editable shape and style properties. Based on this representation, the system provides three design assistance modes that help designers reuse or redraw components of the example design. The results from our quantitative and user evaluations demonstrate that Rewire can generate accurate vector representations of interface screenshots found in the wild and that design assistance enables users to reconstruct and edit example designs more efficiently compared to a baseline design tool.
{"title":"Rewire: Interface Design Assistance from Examples","authors":"Amanda Swearngin, Mira Dontcheva, Wilmot Li, Joel Brandt, M. Dixon, Amy J. Ko","doi":"10.1145/3173574.3174078","DOIUrl":"https://doi.org/10.1145/3173574.3174078","url":null,"abstract":"Interface designers often use screenshot images of example designs as building blocks for new designs. Since images are unstructured and hard to edit, designers typically reconstruct screenshots with vector graphics tools in order to reuse or edit parts of the design. Unfortunately, this reconstruction process is tedious and slow. We present Rewire, an interactive system that helps designers leverage example screenshots. Rewire automatically infers a vector representation of screenshots where each UI component is a separate object with editable shape and style properties. Based on this representation, the system provides three design assistance modes that help designers reuse or redraw components of the example design. The results from our quantitative and user evaluations demonstrate that Rewire can generate accurate vector representations of interface screenshots found in the wild and that design assistance enables users to reconstruct and edit example designs more efficiently compared to a baseline design tool.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74812782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction
Karthik Mahadevan, Sowmya Somanath, E. Sharlin
Drivers use nonverbal cues such as vehicle speed, eye gaze, and hand gestures to communicate awareness and intent to pedestrians. Conversely, in autonomous vehicles, drivers can be distracted or absent, leaving pedestrians to infer awareness and intent from the vehicle alone. In this paper, we investigate the usefulness of interfaces (beyond vehicle movement) that explicitly communicate awareness and intent of autonomous vehicles to pedestrians, focusing on crosswalk scenarios. We conducted a preliminary study to gain insight into designing interfaces that communicate autonomous vehicle awareness and intent to pedestrians. Based on study outcomes, we developed four prototype interfaces and deployed them in studies involving a Segway and a car. We found that interfaces communicating vehicle awareness and intent (1) can help pedestrians attempting to cross; (2) are not limited to the vehicle and can exist in the environment; and (3) should use a combination of modalities such as visual, auditory, and physical.
DOI: https://doi.org/10.1145/3173574.3174003
Make Yourself at Phone: Reimagining Mobile Interaction Architectures With Emergent Users
Simon Robinson, Jennifer Pearson, Thomas Reitmaier, Shashank Ahire, Matt Jones
We present APPropriate -- a novel mobile design that allows users to temporarily annex any Android device for their own use. APPropriate is a small, cheap storage pod, designed to be easily carried in a pocket or hidden within clothing. Its purpose is simple: to hold a copy of the local content an owner has on their mobile, liberating them from carrying a phone, or allowing them to use another device that provides advantages over their own. Picking up another device when carrying APPropriate transfers all pertinent content to the borrowed device (using local no-cost WiFi from the APPropriate device), transforming it to give the owner the impression of using their own phone. While APPropriate is useful for a wide range of contexts, the design was envisaged through a co-design process with resource-constrained emergent users in three countries. Lab studies and a subsequent deployment on participants' own devices identified key benefits of the approach in these contexts, including for security, resource sharing, and privacy.
{"title":"Make Yourself at Phone: Reimagining Mobile Interaction Architectures With Emergent Users","authors":"Simon Robinson, Jennifer Pearson, Thomas Reitmaier, Shashank Ahire, Matt Jones","doi":"10.1145/3173574.3173981","DOIUrl":"https://doi.org/10.1145/3173574.3173981","url":null,"abstract":"We present APPropriate -- a novel mobile design to allow users to temporarily annex any Android device for their own use. APPropriate is a small, cheap storage pod, designed to be easily carried in a pocket or hidden within clothing. Its purpose is simple: to hold a copy of the local content an owner has on their mobile, liberating them from carrying a phone, or allowing them to use another device that provides advantages over their own. Picking up another device when carrying APPropriate transfers all pertinent content to the borrowed device (using local no-cost WiFi from the APPropriate device), transforming it to give the impression that they are using their own phone. While APPropriate is useful for a wide range of contexts, the design was envisaged through a co-design process with resource-constrained emergent users in three countries. Lab studies and a subsequent deployment on participants' own devices identified key benefits of the approach in these contexts, including for security, resource sharing, and privacy.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"17 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72611579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This Changes Sustainable HCI
Bran Knowles, Oliver Bates, M. Håkansson
More than a decade into Sustainable HCI (SHCI) research, the community is still struggling to converge on a shared understanding of sustainability and HCI's role in addressing it. We think this is largely a positive sign, reflective of maturity; yet the lack of a clear set of aims and metrics for sustainability continues to impede the community's progress, so we seek to articulate a vision around which the community can productively coalesce. Drawing from recent SHCI publications, we identify commonalities that might form the basis of a shared understanding, and we show that this understanding closely aligns with the authoritative conception of a path to a sustainable future proffered by Naomi Klein in her book This Changes Everything. We elaborate a set of contributions that SHCI is already making that can be unified under Klein's narrative, and compare these categories of work to those found in past surveys of the field as evidence of substantive progress in SHCI.
DOI: https://doi.org/10.1145/3173574.3174045