Effects of Wording and Gendered Voices on Acceptability of Voice Assistants in Future Autonomous Vehicles
Iris Jestin, J. Fischer, Maria Jose Galvez Trigo, D. Large, G. Burnett
Voice assistants in future autonomous vehicles may play a major role in supporting the driver during transfers of control with the vehicle (handover and handback). However, little is known about the effects of different qualities of the voice assistant on its perceived acceptability, and thus its potential to support the driver’s trust in the vehicle. A desktop study was carried out with 18 participants, investigating the effects of three gendered voices and different wordings of prompts during handover and handback driving scenarios on measures of acceptability. Participants rated prompts by the voice assistant in nine different driving scenarios, using 5-point Likert-style items in during- and post-study questionnaires, followed by a short interview. A commanding/formally worded prompt was rated higher on most of the desirable measures of acceptability than an informally worded prompt. The ‘Matthew’ voice was perceived as less artificial and more desirable than the ‘Joanna’ voice and the gender-ambiguous ‘Jordan’ voice; however, we caution against interpreting these results as indicative of a general gender preference, and instead discuss them to shed light on the complex socio-phonetic nature of the voices (including gender) and wording of voice assistants, and the need for careful consideration when designing them. These results offer insights for more careful design of the voice and wording of voice assistants in future autonomous vehicles.
{"title":"Effects of Wording and Gendered Voices on Acceptability of Voice Assistants in Future Autonomous Vehicles","authors":"Iris Jestin, J. Fischer, Maria Jose Galvez Trigo, D. Large, G. Burnett","doi":"10.1145/3543829.3543836","DOIUrl":"https://doi.org/10.1145/3543829.3543836","url":null,"abstract":"Voice assistants in future autonomous vehicles may play a major role in supporting the driver during periods of a transfer of control with the vehicle (handover and handback). However, little is known about the effects of different qualities of the voice assistant on its perceived acceptability, and thus its potential to support the driver’s trust in the vehicle. A desktop study was carried out with 18 participants, investigating the effects of three gendered voices and different wording of prompts during handover and handback driving scenarios on measures of acceptability. Participants rated prompts by the voice assistant in nine different driving scenarios, using 5-point Likert style items in a during and post-study questionnaire as well as a short interview at the end. A commanding/formally worded prompt was rated higher on most of the desirable measures of acceptability as compared to an informally worded prompt. The ‘Matthew’ voice used was perceived to be less artificial and more desirable than the ‘Joanna’ voice and the gender-ambiguous ‘Jordan’ voice; however, we caution against interpreting these results as indicative of a general preference of gender, and instead discuss our results to throw light on the complex socio-phonetic nature of voices (including gender) and wording of voice assistants, and the need for careful consideration while designing the same. Results gained facilitate the drawing of insights needed to take better care when designing the voice and wording for voice assistants in future autonomous vehicles.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116992606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CANDY: a framework to design Conversational AgeNts for Domestic sustainabilitY
Mathyas Giudici, Pietro Crovari, F. Garzotto
In the 2020s, countries worldwide are called on to take action on global issues, as defined in the Sustainable Development Goals (SDGs). In our research, we are interested in exploring how Conversational Agents (CAs) can be used to pursue these goals, particularly in domestic spaces, where CAs are becoming increasingly popular. As a preliminary step in this research, we organized a focus group with seven participants aimed at i) investigating the potential of CAs integrated with digital devices to promote more sustainable behavior at home, and ii) eliciting the requirements on conversational interaction that such CAs should meet for this purpose. From the experience and findings of the focus group, we distilled a conceptual framework called CANDY, which highlights the core design dimensions of Conversational Agents for Sustainability and can be used to guide the processes of requirements elicitation and design for this category of CAs.
{"title":"CANDY: a framework to design Conversational AgeNts for Domestic sustainabilitY","authors":"Mathyas Giudici, Pietro Crovari, F. Garzotto","doi":"10.1145/3543829.3544515","DOIUrl":"https://doi.org/10.1145/3543829.3544515","url":null,"abstract":"In the 2020s, world countries are called to take action to solve global issues, as defined in the Sustainable Development Goals (SDG). In our research, we are interested in exploring how Conversational Agents can be exploited to pursue the above goals, particularly in domestic spaces where CAs are becoming more and more popular. As a preliminary step in this research work, we organized a focus group with seven participants aimed at: i) investigating the potential of Conversational Agents - integrated with digital devices - to promote a more sustainable behavior at home; ii) eliciting the requirements on conversational interaction that such CAs should meet for this purpose. From the experience and findings of the focus group, we distilled a conceptual framework called CANDY, which highlights the core design dimension of Conversational Agents for Sustainability, and can be used to guide the processes of requirements elicitation and design for this category of CAs.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"127 1-2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132879640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conversational Agents Trust Calibration: A User-Centred Perspective to Design
Mateusz Dubiel, Sylvain Daronnat, Luis A. Leiva
Previous work has identified trust as one of the key requirements for the adoption and continued use of conversational agents (CAs). Given recent advances in natural language processing and deep learning, it is currently possible to execute simple goal-oriented tasks by voice. As CAs start to provide a gateway for purchasing products and booking services online, the question of trust and its impact on users’ reliance and agency becomes ever more pertinent. This paper collates trust-related literature and proposes four design suggestions, illustrated through example conversations. Our goal is to encourage discussion of ethical design practices for developing CAs that are capable of employing trust-calibration techniques which should, when relevant, reduce the user’s trust in the agent. We hope that our reflections, based on a synthesis of insights from human-agent interaction, explainable AI, and information retrieval, can serve as a reminder of the dangers of excessive trust in automation and contribute to more user-centred CA design.
{"title":"Conversational Agents Trust Calibration: A User-Centred Perspective to Design","authors":"Mateusz Dubiel, Sylvain Daronnat, Luis A. Leiva","doi":"10.1145/3543829.3544518","DOIUrl":"https://doi.org/10.1145/3543829.3544518","url":null,"abstract":"Previous work identified trust as one of the key requirements for adoption and continued use of conversational agents (CAs). Given recent advances in natural language processing and deep learning, it is currently possible to execute simple goal-oriented tasks by using voice. As CAs start to provide a gateway for purchasing products and booking services online, the question of trust and its impact on users’ reliance and agency becomes ever-more pertinent. This paper collates trust-related literature and proposes four design suggestions that are illustrated through example conversations. Our goal is to encourage discussion on ethical design practices to develop CAs that are capable of employing trust-calibration techniques that should, when relevant, reduce the user’s trust in the agent. We hope that our reflections, based on the synthesis of insights from the fields of human-agent interaction, explainable ai, and information retrieval, can serve as a reminder of the dangers of excessive trust in automation and contribute to more user-centred CA design.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128851157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding Circumstances for Desirable Proactive Behaviour of Voice Assistants: The Proactivity Dilemma
Nima Zargham, Leon Reicherts, Michael Bonfert, Sarah Theres Voelkel, J. Schoening, R. Malaka, Y. Rogers
The next major evolutionary stage for voice assistants will be their capability to initiate interactions by themselves. However, to design proactive interactions, it is crucial to understand whether and when this behaviour is considered useful, and how desirable it is perceived to be across different social contexts and ongoing activities. To investigate people’s perspectives on proactivity and the circumstances appropriate for it, we designed a set of storyboards depicting a variety of proactive actions in everyday situations and social settings and presented them to 15 participants in interactive interviews. Our findings suggest that, although many participants see benefits in agent proactivity, such as for urgent or critical issues, there are concerns about interference with social activities in multi-party settings, potential loss of agency, and intrusiveness. We discuss the implications for designing voice assistants with desirable proactive features.
{"title":"Understanding Circumstances for Desirable Proactive Behaviour of Voice Assistants: The Proactivity Dilemma","authors":"Nima Zargham, Leon Reicherts, Michael Bonfert, Sarah Theres Voelkel, J. Schoening, R. Malaka, Y. Rogers","doi":"10.1145/3543829.3543834","DOIUrl":"https://doi.org/10.1145/3543829.3543834","url":null,"abstract":"The next major evolutionary stage for voice assistants will be their capability to initiate interactions by themselves. However, to design proactive interactions, it is crucial to understand whether and when this behaviour is considered useful and how desirable it is perceived for different social contexts or ongoing activities. To investigate people’s perspectives on proactivity and appropriate circumstances for it, we designed a set of storyboards depicting a variety of proactive actions in everyday situations and social settings and presented them to 15 participants in interactive interviews. Our findings suggest that, although many participants see benefits in agent proactivity, such as for urgent or critical issues, there are concerns about interference with social activities in multi-party settings, potential loss of agency, and intrusiveness. We discuss our implications for designing voice assistants with desirable proactive features.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125511683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assistant or Master: Envisioning the User Autonomy Implications of Virtual Assistants
Sanju Ahuja, Jyotish Kumar
Virtual assistants (VAs) such as Alexa, Siri and Google Assistant are becoming increasingly popular. Recent literature has argued that VAs may raise ethical concerns for users’ autonomy. However, there is a lack of frameworks which can unite the wide range of autonomy concerns discussed in the literature and help designers envision the ethical implications of emerging VA technologies. This paper argues that designers and policymakers need to be sensitive to the ethical dimensions of future virtual assistants, and that systematic frameworks are required to aid their moral imagination. The paper proposes a framework to help designers imagine potential ethical concerns pertaining to users’ autonomy. We demonstrate the usefulness of the proposed framework by showing how existing ethical concerns can be situated within it, and we use it to imagine ethical concerns with emerging VA technologies. The proposed framework can aid the systematic identification of autonomy-related ethical concerns in human-computer interaction.
{"title":"Assistant or Master: Envisioning the User Autonomy Implications of Virtual Assistants","authors":"Sanju Ahuja, Jyotish Kumar","doi":"10.1145/3543829.3544514","DOIUrl":"https://doi.org/10.1145/3543829.3544514","url":null,"abstract":"Virtual assistants (VA) such as Alexa, Siri and Google Assistant are becoming increasingly popular. Recent literature has argued that VAs may raise ethical concerns for users’ autonomy. However, there is a lack of frameworks which can unite the wide range of autonomy concerns discussed in literature, as well as help designers envision the ethical implications of emerging VA technologies. This paper argues that designers and policymakers need to be sensitive to the ethical side of the future of virtual assistants, and systematic frameworks are required to aid their moral imagination. The paper proposes a framework to help designers imagine potential ethical concerns pertaining to users’ autonomy. We demonstrate the usefulness of the proposed framework by showing how existing ethical concerns can be situated within the framework. We also use the framework to imagine ethical concerns with emerging VA technologies. The proposed framework can aid in systematic identification of autonomy related ethical concerns within human computer interactions.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114545535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Do We Still Need Human Assessors? Prompt-Based GPT-3 User Simulation in Conversational AI
Selina Meyer, David Elsweiler, Bernd Ludwig, Marcos Fernández-Pichel, D. Losada
Scarcity of user data continues to be a problem in research on conversational user interfaces and often hinders or slows down technical innovation. In the past, different ways of generating data synthetically, such as data augmentation techniques, have been explored. With the rise of ever-improving pre-trained language models, we ask whether we can go beyond such methods by simply providing appropriate prompts to these general-purpose models to generate data. We explore the feasibility and cost-benefit trade-offs of using synthetic data, generated without fine-tuning, to train classification algorithms for conversational agents. We compare this synthetically generated data with real user data and evaluate the performance of classifiers trained on different combinations of synthetic and real data. We conclude that, although classifiers trained on such synthetic data perform much better than random baselines, they do not match the performance of classifiers trained on even very small amounts of real user data, largely because synthetic data lacks much of the variability found in user-generated data. Nevertheless, we show that in situations where very little data and few resources are available, classifiers trained on such synthetically generated data might be preferable to the collection and annotation of naturalistic data.
{"title":"Do We Still Need Human Assessors? Prompt-Based GPT-3 User Simulation in Conversational AI","authors":"Selina Meyer, David Elsweiler, Bernd Ludwig, Marcos Fernández-Pichel, D. Losada","doi":"10.1145/3543829.3544529","DOIUrl":"https://doi.org/10.1145/3543829.3544529","url":null,"abstract":"Scarcity of user data continues to be a problem in research on conversational user interfaces and often hinders or slows down technical innovation. In the past, different ways of synthetically generating data, such as data augmentation techniques have been explored. With the rise of ever improving pre-trained language models, we ask if we can go beyond such methods by simply providing appropriate prompts to these general purpose models to generate data. We explore the feasibility and cost-benefit trade-offs of using non fine-tuned synthetic data to train classification algorithms for conversational agents. We compare this synthetically generated data with real user data and evaluate the performance of classifiers trained on different combinations of synthetic and real data. We come to the conclusion that, although classifiers trained on such synthetic data perform much better than random baselines, they do not compare to the performance of classifiers trained on even very small amounts of real user data, largely because such data is lacking much of the variability found in user generated data. Nevertheless, we show that in situations where very little data and resources are available, classifiers trained on such synthetically generated data might be preferable to the collection and annotation of naturalistic data.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124962666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keep on Smiling: An Investigation of the Influence of the Use of Emoticons by Chatbots on User Satisfaction
Marilena Wilhelm, Tabea Otten, Eva Schwaetzer, Kinga Schumacher
Most of us have chatted with a chatbot before and remember frustrating experiences. Research shows that friendly and empathetic communication can compensate for such communication problems. We therefore conducted a study to investigate whether the use of emoticons leads to higher user satisfaction when communicating with a chatbot. The use case was a chatbot recommending courses on a German e-learning platform. The results did not reach statistical significance but show a positive trend, suggesting directions for future research.
{"title":"Keep on Smiling: An Investigation of the Influence of the Use of Emoticons by Chatbots on User Satisfaction","authors":"Marilena Wilhelm, Tabea Otten, Eva Schwaetzer, Kinga Schumacher","doi":"10.1145/3543829.3544533","DOIUrl":"https://doi.org/10.1145/3543829.3544533","url":null,"abstract":"Most of us have chatted with a chatbot before and remember frustrating experiences. Research shows that friendly and empathetic communication can compensate for existing communication problems. We therefore conducted a study to investigate whether the use of emoticons leads to higher user satisfaction when communicating with a chatbot. The use case was a chatbot recommending courses on a German e-learning platform. The results did not reach significance, but show a positive trend, which leads to indications of what future research should investigate.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130157714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unifying Recommender Systems and Conversational User Interfaces
A. Starke, Minha Lee
This paper considers unifying research on conversational user interfaces and recommender systems. Studies on conversational user interfaces (CUIs) typically examine how conversations can be facilitated (i.e., optimizing the means). Recommender systems (RecSys) research aims to retrieve and present recommendations in a user’s session (i.e., optimizing the ends). Though these aims overlap, they can be better examined together, targeting both the means and the ends of what people can achieve with technology as conversational recommender systems (CRSs). We discuss the intersection of conversational user interfaces, recommender systems, and conversational recommender systems, and argue that conversations and recommendations can be designed holistically: recommendations can be a means to foster engaging conversational interaction, while conversations as ends can better sustain curated, long-term recommendations.
{"title":"Unifying Recommender Systems and Conversational User Interfaces","authors":"A. Starke, Minha Lee","doi":"10.1145/3543829.3544524","DOIUrl":"https://doi.org/10.1145/3543829.3544524","url":null,"abstract":"This paper considers unifying research on conversational user interfaces and recommender systems. Studies on conversational user interfaces (CUIs) typically examine how conversations can be facilitated (i.e., optimizing the means). Recommender systems research (RecSys) aims to retrieve and present recommendations in a user’s session (i.e., optimizing the ends). Though these aims are overlapping across both areas, they can be better examined together to target the means and ends of what people can achieve with technology as conversational recommender systems (CRSs). We discuss the intersection of conversational user interfaces, recommender systems, and conversational recommender systems. We argue how conversations and recommendations can be designed holistically, in which recommendations can also be a means to foster engaging conversational interaction, while conversations as ends can better sustain curated, long-term recommendations.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131241938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leakage of Sensitive Information to Third-Party Voice Applications
M. Bispham, Clara Zard, S. Sattar, Xavier Ferrer Aran, Guillermo Suarez-Tangil, J. Such
In this paper we investigate the issue of sensitive information leakage to third-party voice applications in voice assistant ecosystems, focusing specifically on leakage via the conversational interface. Using a bespoke testing infrastructure, we examine such leakage in the conversational interfaces of Google Actions and Alexa Skills. Our work augments prior work in this area by considering not only specific categories of personal data but also other types of potentially sensitive information that may be disclosed in voice-based interactions with third-party voice applications. Our findings indicate that current privacy and security measures for third-party voice applications are not sufficient to prevent leakage of all types of sensitive information via the conversational interface. We make key recommendations for redesigning voice assistant architectures to better prevent such leakage in the future.
{"title":"Leakage of Sensitive Information to Third-Party Voice Applications","authors":"M. Bispham, Clara Zard, S. Sattar, Xavier Ferrer Aran, Guillermo Suarez-Tangil, J. Such","doi":"10.1145/3543829.3544520","DOIUrl":"https://doi.org/10.1145/3543829.3544520","url":null,"abstract":"In this paper we investigate the issue of sensitive information leakage to third-party voice applications in voice assistant ecosystems. We focus specifically on leakage of sensitive information via the conversational interface. We use a bespoke testing infrastructure to investigate leakage of sensitive information via the conversational interface of Google Actions and Alexa Skills. Our work augments prior work in this area to consider not only specific categories of personal data, but also other types of potentially sensitive information that may be disclosed in voice-based interactions with third-party voice applications. Our findings indicate that current privacy and security measures for third-party voice applications are not sufficient to prevent leakage of all types of sensitive information via the conversational interface. We make key recommendations for the redesign of voice assistant architectures to better prevent leakage of sensitive information via the conversational interface of third-party voice applications in the future.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124267642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond Subservience: Using Joint Commitment to Enable Proactive CUIs for Mood Logging
Robert Bowman, Benjamin R. Cowan, Anja Thieme, Gavin Doherty
Conversational user interfaces (CUIs) are a promising interaction modality for engaging people with the self-report activities that are widely used to study people’s experiences and support them with their mental health. However, this potential is limited by the prevailing CUI interaction paradigm of subservience to the user, which constrains self-reporting to being user-initiated. A more effective approach would be for CUIs to proactively engage users with self-reporting, particularly at opportune moments. This paper proposes that joint action theory, specifically joint commitment, can be an effective framework to support designers in creating effective proactive CUI interactions. Using mood logging as a use case, we highlight three key areas where joint commitment can impact proactive CUI design. We also discuss open challenges and the further research needed to understand the opportunities and limitations of using joint commitment within proactive CUI research and development.
{"title":"Beyond Subservience: Using Joint Commitment to Enable Proactive CUIs for Mood Logging","authors":"Robert Bowman, Benjamin R. Cowan, Anja Thieme, Gavin Doherty","doi":"10.1145/3543829.3544512","DOIUrl":"https://doi.org/10.1145/3543829.3544512","url":null,"abstract":"Conversational user interfaces (CUIs) are a promising interaction modality to engage people with self-report activities that are widely used to study people’s experiences and support them with their mental health. However, this potential is limited by the prevailing CUI interaction paradigm being subservience to the user, which constrains self-reporting to being user initiated. A more effective approach would be for CUIs to proactively engage users with self-reporting, particularly at opportune moments. This paper proposes that joint action theory, specifically joint commitment, can be an effective framework to support designers in designing effective proactive CUI interactions. Using mood logging as a use case, we highlight three key areas where joint commitment can impact proactive CUI design. We also discuss wider challenges and future areas of research needed to identify the opportunities and challenges of using joint commitment within proactive CUI research and development.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"262 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123043774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}