Navigating a space populated by fenceless industrial robots while carrying out other tasks can be stressful, as the worker is unsure of when she is entering a robot's area of influence, which is a hazard zone. Such areas are difficult to estimate, and standing in one can have consequences both for worker safety and for the robot's productivity. We investigate the use of multimodal (auditory and/or visual) head-mounted AR displays that warn about entering hazard zones while the worker performs an independent navigation task. As a first step in this research, we report a design-research study (including a user study) conducted to obtain a visual and an auditory AR display that are subjectively judged to be close to equivalent. The goal is for these designs to serve as the basis for a future modality-comparison study.
{"title":"Audio-visual AR to Improve Awareness of Hazard Zones Around Robots","authors":"Ane San Martín, Johan Kildal","doi":"10.1145/3290607.3312996","DOIUrl":"https://doi.org/10.1145/3290607.3312996","url":null,"abstract":"Navigating a space populated by fenceless industrial robots while carrying out other tasks can be stressful, as the worker is unsure about when she is invading the area of influence of a robot, which is a hazard zone. Such areas are difficult to estimate and standing in one may have consequences for worker safety and for the productivity of the robot. We investigate the use of multimodal (auditory and/or visual) head-mounted AR displays to warn about entering hazard zones while performing an independent navigation task. As a first step in this research, we report a design-research study (including a user study), conducted to obtain a visual and an auditory AR display subjectively judged to approach equivalence. The goal is that these designs can serve as the basis for a future modality comparison study.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"115 16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126368705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Michelle S. Lam, Grace B. Young, Catherine Y. Xu, Ranjay Krishna, Michael S. Bernstein
There is a significant gap between the high-level, semantic manner in which we reason about image edits and the low-level, pixel-oriented way in which we execute these edits. While existing image-editing tools provide a great deal of flexibility for professionals, they can be disorienting to novice editors because of the gap between a user's goals and the unfamiliar operations needed to actualize them. We present Eevee, an image-editing system that empowers users to transform images by specifying intents in terms of high-level themes. Based on a provided theme and an understanding of the objects and relationships in the original image, we introduce an optimization function that balances semantic plausibility, visual plausibility, and theme relevance to surface possible image edits. A formative evaluation finds that we are able to guide users to meet their goals while helping them to explore novel, creative ideas for their image edit.
{"title":"Eevee","authors":"Michelle S. Lam, Grace B. Young, Catherine Y. Xu, Ranjay Krishna, Michael S. Bernstein","doi":"10.1145/3290607.3312929","DOIUrl":"https://doi.org/10.1145/3290607.3312929","url":null,"abstract":"There is a significant gap between the high-level, semantic manner in which we reason about image edits and the low-level, pixel-oriented way in which we execute these edits. While existing image-editing tools provide a great deal of flexibility for professionals, they can be disorienting to novice editors because of the gap between a user's goals and the unfamiliar operations needed to actualize them. We present Eevee, an image-editing system that empowers users to transform images by specifying intents in terms of high-level themes. Based on a provided theme and an understanding of the objects and relationships in the original image, we introduce an optimization function that balances semantic plausibility, visual plausibility, and theme relevance to surface possible image edits. A formative evaluation finds that we are able to guide users to meet their goals while helping them to explore novel, creative ideas for their image edit.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126432762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
George Hadjidemetriou, Marios Belk, C. Fidas, A. Pitsillides
We present HoloPass, a mixed reality application for the HoloLens wearable device that allows users to perform user authentication tasks through gesture-based interaction. In particular, this paper reports an implementation of picture passwords for mixed reality environments and highlights the development procedure, lessons learned from common design and development issues, and how those issues were addressed. It further reports a between-subjects study (N=30) comparing the usability, security, and likeability of picture passwords in mixed reality vs. traditional desktop contexts, aiming to investigate the viability of picture passwords as an alternative user authentication approach for mixed reality. Initial results are promising, and this work can inform and drive future implementations of picture passwords in mixed reality.
{"title":"Picture Passwords in Mixed Reality: Implementation and Evaluation","authors":"George Hadjidemetriou, Marios Belk, C. Fidas, A. Pitsillides","doi":"10.1145/3290607.3313076","DOIUrl":"https://doi.org/10.1145/3290607.3313076","url":null,"abstract":"We present HoloPass, a mixed reality application for the HoloLens wearable device, which allows users to perform user authentication tasks through gesture-based interaction. In particular, this paper reports the implementation of picture passwords for mixed reality environments, and highlights the development procedure, lessons learned from common design and development issues, and how they were addressed. It further reports a between-subjects study (N=30) which compared usability, security, and likeability aspects of picture passwords in mixed reality vs. traditional desktop contexts aiming to investigate and reason on the viability of picture passwords as an alternative user authentication approach for mixed reality. This work can be of value for enhancing and driving future implementations of picture passwords in mixed reality since initial results are promising towards following such a research line.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125588924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
My research introduces expressive biosignals as a novel social cue to improve interpersonal communication. Expressive biosignals are sensed physiological data revealed between people to provide a deeper understanding of each other's psychological states. My prior work has shown the potential for these cues to provide authentic and validating emotional expression, while fostering awareness and social connection between people. In my proposed research, I expand on this work by exploring how social responses to biosignals can benefit communication through empathy-building and social support. This work will scope the design space for expressive biosignals and inform future interventions for a variety of social contexts, including interpersonal relationships and mental health.
{"title":"Expressive Biosignals: Authentic Social Cues for Social Connection","authors":"Fannie Liu","doi":"10.1145/3290607.3299081","DOIUrl":"https://doi.org/10.1145/3290607.3299081","url":null,"abstract":"My research introduces expressive biosignals as a novel social cue to improve interpersonal communication. Expressive biosignals are sensed physiological data revealed between people to provide a deeper understanding of each other's psychological states. My prior work has shown the potential for these cues to provide authentic and validating emotional expression, while fostering awareness and social connection between people. In my proposed research, I expand on this work by exploring how social responses to biosignals can benefit communication through empathy-building and social support. This work will scope the design space for expressive biosignals and inform future interventions for a variety of social contexts, including interpersonal relationships and mental health.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115855679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
André Zenner, Sören Klingner, David Liebemann, Akhmajon Makhsadov, A. Krüger
In many domains, real-world processes are traditionally communicated to users through abstract graph-based models like event-driven process chains (EPCs), typically presented as 2D diagrams on paper or on desktop monitors. We propose an alternative interface for exploring EPCs, called immersive process models, which aims to transform the exploration of EPCs into a multisensory virtual reality journey. To make EPC exploration more enjoyable, interactive, and memorable, we propose a concept that spatializes EPCs by mapping traditional 2D graphs to 3D virtual environments. EPC graph nodes are represented by room-scale floating platforms that users explore through natural walking. Our concept additionally enables users to experience important node types and the information flow through passive haptic interactions. Complementarily, gamification aspects aim to support the communication of logical dependencies within the explored processes. This paper presents the concept of immersive process models and discusses future research directions.
{"title":"Immersive Process Models","authors":"André Zenner, Sören Klingner, David Liebemann, Akhmajon Makhsadov, A. Krüger","doi":"10.1145/3290607.3312866","DOIUrl":"https://doi.org/10.1145/3290607.3312866","url":null,"abstract":"In many domains, real-world processes are traditionally communicated to users through abstract graph-based models like event-driven process chains (EPCs), i.e. 2D representations on paper or desktop monitors. We propose an alternative interface to explore EPCs, called immersive process models, which aims to transform the exploration of EPCs into a multisensory virtual reality journey. To make EPC exploration more enjoyable, interactive and memorable, we propose a concept that spatializes EPCs by mapping traditional 2D graphs to 3D virtual environments. EPC graph nodes are represented by room-scale floating platforms and explored by users through natural walking. Our concept additionally enables users to experience important node types and the information flow through passive haptic interactions. Complementarily, gamification aspects aim to support the communication of logical dependencies within explored processes. This paper presents the concept of immersive process models and discusses future research directions.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"70 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115907962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yoonjeong Cha, Jincheul Jang, Younghyun Hong, M. Yi
A growing number of conversational agents are being embedded into larger systems such as smart homes. However, little attention has been paid to user interactions with conversational agents in the multi-device collaboration context (MDCC), where multiple devices are connected to accomplish a common mission. The objective of this study is to identify the roles of conversational agents in the MDCC. Toward this goal, we conducted semi-structured interviews with nine participants who are heavy users of smart speakers connected with home IoT devices. We collected 107 rules (usage instances) and asked about the benefits and limitations of using those rules. Our thematic analysis found that, while smart speakers perform the role of voice controller in the single-device context, their role extends to automation hub, reporter, and companion in the MDCC. Based on the findings, we provide design implications for smart speakers in the MDCC.
{"title":"\"Jack-of-All-Trades\": A Thematic Analysis of Conversational Agents in Multi-Device Collaboration Contexts","authors":"Yoonjeong Cha, Jincheul Jang, Younghyun Hong, M. Yi","doi":"10.1145/3290607.3313045","DOIUrl":"https://doi.org/10.1145/3290607.3313045","url":null,"abstract":"A growing number of conversational agents are being embedded into larger systems such as smart homes. However, little attention has been paid to the user interactions with conversational agents in the multi-device collaboration context (MDCC), where a multiple number of devices are connected to accomplish a common mission. The objective of this study is to identify the roles of conversational agents in the MDCC. Toward this goal, we conducted semi-structured interviews with nine participants who are heavy users of smart speakers connected with home IoT devices. We collected 107 rules (usage instances) and asked benefits and limitations of using those rules. Our thematic analysis has found that, while the smart speakers perform the role of voice controller in the single device context, their role extended to automation hub, reporter, and companion in the MDCC. Based on the findings, we provide design implications for smart speakers in the MDCC.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132020537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lisa A. Thomas, Elizabeth Sillence, Vicki Elsey, E. Simpson, L. Moody
Today, new mothers are experiencing parenthood differently. Digital resources can provide a wealth of information, present opportunities for socialising, and even assist in tracking a baby's development. However, women are often juggling the role of motherhood with other commitments, such as work. The aim of this workshop is to understand the digital support needs and practices during parenthood from the perspective of employed mothers. We are interested in exploring the ways that women utilise the technologies which have been designed to support mothers, and specifically, the importance of work-life balance and the various roles that mothers play. There is a need to better understand and identify which technologies are being used to support working women through their motherhood journey, and ensure a healthy transition to support women's changing identities.
{"title":"Technology to Mediate Role Conflict in Motherhood","authors":"Lisa A. Thomas, Elizabeth Sillence, Vicki Elsey, E. Simpson, L. Moody","doi":"10.1145/3290607.3299024","DOIUrl":"https://doi.org/10.1145/3290607.3299024","url":null,"abstract":"Today, new mothers are experiencing parenthood differently. Digital resources can provide a wealth of information, present opportunities for socialising, and even assist in tracking a baby's development. However, women are often juggling the role of motherhood with other commitments, such as work. The aim of this workshop is to understand the digital support needs and practices during parenthood from the perspective of employed mothers. We are interested in exploring the ways that women utilise the technologies which have been designed to support mothers, and specifically, the importance of work-life balance and the various roles that mothers play. There is a need to better understand and identify which technologies are being used to support working women through their motherhood journey, and ensure a healthy transition to support women's changing identities.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132029528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sarah Suleri, Vinoth Pandian Sermuga Pandian, Svetlana Shishkovets, Matthias Jarke
Prototyping involves the evolution of an idea through various stages of design until it reaches a certain level of maturity. These design stages include low-, medium-, and high-fidelity prototypes. A workload analysis of prototyping using NASA-TLX showed an increase in workload, specifically in frustration, temporal demand, and effort, and a decline in perceived performance, as participants progressed from low to high fidelity. Upon reviewing numerous commercial and academic tools that directly or indirectly support software prototyping in one aspect or another, we identified the need for a comprehensive solution that supports the entire software prototyping process. In this paper, we introduce Eve, a prototyping workbench that enables users to sketch their concept as a low-fidelity prototype. It generates the subsequent medium- and high-fidelity prototypes by means of UI element detection and code generation. We evaluated Eve using the System Usability Scale (SUS) with 15 UI/UX designers; the results indicate good usability and high learnability (SUS score: 78.5). In future work, we aim to study the impact of Eve on the subjective workload experienced by users during software prototyping.
{"title":"Eve","authors":"Sarah Suleri, Vinoth Pandian Sermuga Pandian, Svetlana Shishkovets, Matthias Jarke","doi":"10.1145/3290607.3312994","DOIUrl":"https://doi.org/10.1145/3290607.3312994","url":null,"abstract":"Prototyping involves the evolution of an idea into various stages of design until it reaches a certain level of maturity. These design stages include low, medium and high fidelity prototypes. Workload analysis of prototyping using NASA-TLX showed an increase in workload specifically in frustration, temporal demand, effort, and decline in performance as the participants progressed from low to high fidelity. Upon reviewing numerous commercial and academic tools that directly or indirectly support software prototyping in one aspect or another, we identified a need for a comprehensive solution to support the entire software prototyping process. In this paper, we introduce Eve, a prototyping workbench that enables the users to sketch their concept as low fidelity prototype. It generates the consequent medium and high fidelity prototypes by means of UI element detection and code generation. We evaluated Eve using SUS with 15 UI/UX designers; the results depict good usability and high learnability (Usability score: 78.5). In future, we aim to study the impact of Eve on subjective workload experienced by users during software prototyping.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130009730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Eichmann, Franco Solleza, Junjay Tan, Nesime Tatbul, S. Zdonik
Millions of time-based data streams (a.k.a. time series) are being recorded every day in a wide range of industrial and scientific domains, from healthcare and finance to autonomous driving. Detecting anomalous behavior in such streams has become a common analysis task for which data scientists employ complex machine learning models. Analyzing the behavior and performance of these models is a challenge in its own right. While traditional accuracy metrics (e.g., precision/recall) are often used in practice to measure and compare the performance of different anomaly detectors, such statistics alone are insufficient to characterize and compare the algorithms in a systematic, human-interpretable way. In this extended abstract, we present Metro-Viz, a visual analysis tool to help data scientists and domain experts reason about commonalities and differences among anomaly detectors, and to identify their strengths and weaknesses.
{"title":"Metro-Viz: Black-Box Analysis of Time Series Anomaly Detectors","authors":"P. Eichmann, Franco Solleza, Junjay Tan, Nesime Tatbul, S. Zdonik","doi":"10.1145/3290607.3312912","DOIUrl":"https://doi.org/10.1145/3290607.3312912","url":null,"abstract":"Millions of time-based data streams (a.k.a., time series) are being recorded every day in a wide-range of industrial and scientific domains, from healthcare and finance to autonomous driving. Detecting anomalous behavior in such streams has become a common analysis task for which data scientists employ complex machine learning models. Analyzing the behavior and performance of these models is a challenge on its own. While traditional accuracy metrics (e.g., precision/recall) are often used in practice to measure and compare the performance of different anomaly detectors, such statistics alone are insufficient to characterize and compare the algorithms in a systematic, human-interpretable way. In this extended abstract, we present Metro-Viz, a visual analysis tool to help data scientists and domain experts reason about commonalities and differences among anomaly detectors, and to identify their strengths and weaknesses.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130286676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sebastian Zepf, Monique Dittrich, Javier Hernández, Alexander Schmitt
Monitoring the emotions of drivers can play a critical role in reducing road accidents and enabling novel driver-car interactions. To help understand the possibilities, this work systematically studies the in-road triggers that may lead to different emotional states. In particular, we monitored the experience of 33 drivers during 50 minutes of naturalistic driving each. From a total of 531 voice self-reports, we identified four main groups of emotional triggers based on their originating source, with those related to human-machine interaction and navigation most commonly eliciting negative emotions. Based on the findings, this work provides recommendations for potential future emotion-enabled interventions.
{"title":"Towards Empathetic Car Interfaces: Emotional Triggers while Driving","authors":"Sebastian Zepf, Monique Dittrich, Javier Hernández, Alexander Schmitt","doi":"10.1145/3290607.3312883","DOIUrl":"https://doi.org/10.1145/3290607.3312883","url":null,"abstract":"Monitoring the emotions of drivers can play a critical role to reduce road accidents and enable novel driver-car interactions. To help understand the possibilities, this work systematically studies the in-road triggers that may lead to different emotional states. In particular, we monitored the experience of 33 drivers during 50 minutes of naturalistic driving each. With a total of 531 voice self-reports, we identified four main groups of emotional triggers based on their originating source, being those related to human-machine interaction and navigation the ones that more commonly elicited negative emotions. Based on the findings, this work provides some recommendations for potential future emotion-enabled interventions.","PeriodicalId":389485,"journal":{"name":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134114599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}