Samuel Kernan Freire, E. Niforatos, Z. Rusák, D. Aschenbrenner, A. Bozzon
Maintaining a complex system, such as a modern production line, is a knowledge-intensive task. Many firms use maintenance reports as a decision support tool. However, reports are often of poor quality and tedious to compile. A Conversational User Interface (CUI) could streamline the reporting process by validating the user’s input, eliciting more valuable information, and reducing the time needed. In this paper, we use a Technology Probe to explore the potential of a CUI to create instructional maintenance reports. We conducted a between-groups study (N = 24) in which participants had to replace the inner tube of a bicycle tire. One group documented the procedure using a CUI while replacing the inner tube, whereas the other group compiled a paper report afterward. The CUI was enacted by a researcher according to a set of rules. Our results indicate that using a CUI for maintenance reports saves a significant amount of time, is no more cognitively demanding than writing a report, and results in maintenance reports of higher quality.
"A Conversational User Interface for Instructional Maintenance Reports." Samuel Kernan Freire, E. Niforatos, Z. Rusák, D. Aschenbrenner, A. Bozzon. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3544516
As conversational user interfaces (CUIs) are increasingly integrated into daily life, ethical and societal concerns about integrated content filtering algorithms emerge. In addressing these concerns, it is essential to know how aware and knowledgeable society is of the algorithms it encounters using CUIs and the extent to which this impacts the attitude towards these technologies. In this survey study, we made a first attempt to measure and compare participants’ algorithm awareness of chatbots and voice assistants. Further, we assessed the effect of algorithm literacy on the attitude towards CUIs and possible interaction effects with technology acceptance. Lastly, we compared previous and future usage purposes for chatbots and voice assistants. We found higher algorithm awareness for voice assistants than for chatbots. No correlation between algorithm literacy and attitude towards either chatbots or voice assistants was found. An additional personal-level factor, technology acceptance, did not affect this relationship. The results show that participants preferred using voice assistants for task completion and social purposes over chatbots, while getting information was equally preferred between chatbots and voice assistants. Considering its societal relevance, we want to encourage more research on algorithmic awareness and understanding in the field of CUI and its cognitive and behavioral effects.
"Do we know and do we care? Algorithms and Attitude towards Conversational User Interfaces: Comparing Chatbots and Voice Assistants." Sara Irma Parnell, Stefan Klein, Franziska Gaiser. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3544517
Voice user interfaces (VUIs) are currently experiencing rapid growth as commercial devices like Google Home, Amazon Echo, and Apple HomePod are adopted by users. However, due to the pace of this growth, the tech industry has had to adapt quickly and vigorously to keep up with demand. As a result, we currently have a limited understanding of the landscape of VUI design in industry, including the multitude of practices and tools in use. We also have a limited understanding of the barriers VUI designers still face. To address these knowledge gaps, we conducted a large-scale online survey to explore the design practices employed by VUI industry designers on the job, as well as the barriers and needs of VUI designers. We found that, despite the availability of a wide range of guidelines, textbooks, tools, and other resources, there are significant gaps in the adoption of these tools within industry VUI design, and that designers rely on their previous experience developing GUIs when designing VUIs. Based on our survey findings, we provide recommendations for how the HCI community may direct research efforts toward developing tools that assist designers in overcoming existing barriers and building usable and adoptable VUIs.
"“Voice-First Interfaces in a GUI-First Design World”: Barriers and Opportunities to Supporting VUI Designers On-the-Job." Christine Murad, Cosmin Munteanu. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3543842
Irene Lopatovska, Olivia Turpin, Jessika Davis, E. Connell, Christopher M. Denney, Hilda Fournier, Archana Ravi, Jihye Yoon, Eesha Parasnis
Adolescence is a period of intense transformation that requires social interactions and emotional support. When such support is not available, technology can offer a solution. We conducted a participatory design study with the aim of producing content recommendations for a supportive conversational agent (CA) for adolescents. Twenty teens between the ages of 12 and 18 were invited to converse about issues they were experiencing and offer conversational support to each other. Analysis of participants’ conversations revealed that stress was the most frequent problem participants would seek support for. Participant responses to each other's problems offered both cognitive and emotional support, including advice on changing one's behavior, seeking help from others, prioritizing one's wellbeing, and statements of unconditional emotional support, among others. The findings indicate that many of the observed conversational solutions can be programmed in a supportive CA to appeal to a large group of adolescents.
"Capturing Teens’ Voice in Designing Supportive Agents." Irene Lopatovska, Olivia Turpin, Jessika Davis, E. Connell, Christopher M. Denney, Hilda Fournier, Archana Ravi, Jihye Yoon, Eesha Parasnis. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3543838
In a real-world shopping scenario, users can express natural-language feedback when communicating with a shopping assistant, stating their satisfaction positively with “I like” or negatively with “I dislike” according to the quality of the recommended fashion products. A multimodal conversational recommender system (using text and images in particular) aims to replicate this process by eliciting the dynamic preferences of users from their natural-language feedback and updating the visual recommendations so as to satisfy the users’ current needs through multi-turn interactions. However, the impact of positive and negative natural-language feedback on the effectiveness of multimodal conversational recommendation has not yet been fully explored. Since there are no datasets of conversational recommendation with both positive and negative natural-language feedback, existing research on multimodal conversational recommendation has imposed several constraints on users’ natural-language expressions (i.e. either only describing their preferred attributes as positive feedback, or rejecting undesired recommendations without any natural-language critiques) to simplify the multimodal conversational recommendation task. To explore this setting further, we investigate how effectively recent multimodal conversational recommendation models incorporate users’ preferences over time from both positive and negative natural-language feedback corresponding to the visual recommendations. We also propose an approach to generate both positive and negative natural-language critiques of the recommendations within an existing user simulator. Following previous work, we train and evaluate two existing conversational recommendation models using the user simulator with positive and negative feedback as a surrogate for real human users.
Extensive experiments conducted on a well-known fashion dataset demonstrate that positive natural-language feedback is more informative relating to the users’ preferences in comparison to negative natural-language feedback.
"Multimodal Conversational Fashion Recommendation with Positive and Negative Natural-Language Feedback." Yaxiong Wu, C. Macdonald, I. Ounis. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3543837
Chatbots are increasingly used to replace human interviewers and survey forms for soliciting information from users. This paper presents two studies that investigate how the formality of a chatbot’s conversational style can affect the likelihood of users engaging with and disclosing sensitive information to a chatbot. In our first study, we show that the domain and sensitivity of the information being requested impact users’ preferred conversational style. Specifically, when users were asked to disclose sensitive health information, they perceived a formal style as more competent and appropriate. In our second study, we investigate the health domain further by analysing the quality of user utterances as users talk to a chatbot about their dental flossing. We found that users who do not floss every day gave higher quality responses when talking to a formal chatbot. These findings can help designers choose a chatbot’s language formality for their given use case.
"Does Chatbot Language Formality Affect Users’ Self-Disclosure?" Samuel Rhys Cox, Wei Tsang Ooi. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3543831
Nicolas Wagner, Matthias Kraus, Tibor Tonn, W. Minker
The increasing capabilities of chatbots will make them more and more important in the near future. In this paper, we introduce a conversational system which connects a chatbot with a mainstream messaging service in a multi-user scenario. While there are already numerous options for single-user bots, ready-to-use systems for multiple users and group chats are scarce. This work thus aims to gain insight into how such a group chatbot should behave during a multi-turn conversation. For this, we implemented and evaluated four different moderation strategies in an everyday use case: the planning and negotiation of a joint appointment. In our subsequent user study with 40 participants, we investigated how the different strategies were perceived and what influence they had on the acceptance, usability, and efficiency of the system. Our evaluation results show that users’ perceptions of the innovation and inventiveness of the bot were influenced by the moderation strategies.
"Comparing Moderation Strategies in Group Chats with Multi-User Chatbots." Nicolas Wagner, Matthias Kraus, Tibor Tonn, W. Minker. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3544527
In what ways could the future of emotional bonds between humans and conversational AI change us? To explore this question in a multi-faceted manner, designers, engineers, and philosophers, in separate focus groups, were given a design fiction probe: a story of a chatbot’s disappearance from a person’s life. Though articulated in discipline-specific ways, participants expressed similar concerns and hopes: 1) caring for a machine could teach people to emotionally care for themselves and others; 2) the boundary between human and non-human emotions may become blurred when people project their own emotions onto AI, e.g., experiencing a bot’s “breakdown” as one’s own; and 3) people may then intertwine their identities with AI through emotions. We consider the ethical ramifications of socially constructed emotions between humans and conversational agents.
"Where is Vincent? Expanding our emotional selves with AI." Minha Lee, L. Frank, Y. D. Kort, W. Ijsselsteijn. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3543835
Chatbots have become commonplace: they can provide customer support, take orders, collect feedback, and even provide (mental) health support. Despite this diversity, the opportunities for designing chatbots for more complex decision-making tasks remain largely underexplored. Bearing this in mind leads us to ask: how can chatbots be embedded into software tools used for complex decision-making, and designed to scaffold and probe human cognition? The goal of our research was to explore possible uses of such “probing bots”. The domain we examined was stock investment, where many complex decisions need to be made. In our study, different types of investors interacted with a prototype, which we called “ProberBot”, and subsequently took part in in-depth interviews. They generally found that our ProberBot was effective at supporting their thinking, although when this is desirable depends on the type of task and activity. We discuss these and other findings, as well as design considerations for developing ProberBots for similar types of decision-making tasks.
"Extending Chatbots to Probe Users: Enhancing Complex Decision-Making Through Probing Conversations." Leon Reicherts, Gun-Woo Park, Y. Rogers. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3543832
When designing a natural-language interaction for a social robot, it is not enough to design the conversation itself: the success of a human-robot interaction can also be significantly affected by seemingly small factors such as a robot’s physical appearance and non-verbal behaviour. In this paper, we deploy an identical chatbot system onto two different robots, Furhat and Pepper, and compare users’ subjective responses to conversations with both robots to get a clear measure of the impact of robot appearance on a social robot when the interaction context is held constant. The results of the study were varied: Furhat was considered to display emotions better and to be more intelligent and trustworthy than Pepper, while both robots were seen as equally friendly. No significant differences were found in the likeability and comfort categories.
"A Study on Human Interactions With Robots Based on Their Appearance and Behaviour." Zuzanna Janeczko, M. Foster. Proceedings of the 4th Conference on Conversational User Interfaces, 2022-07-26. DOI: 10.1145/3543829.3544523