Evaluating dialogue strategies and user behavior

M. Danieli
{"title":"Evaluating dialogue strategies and user behavior","authors":"M. Danieli","doi":"10.1109/ASRU.2001.1034630","DOIUrl":null,"url":null,"abstract":"Summary form only given. The need for accurate and flexible evaluation frameworks for spoken and multimodal dialogue systems has become crucial. In the early design phases of spoken dialogue systems, it is worthwhile evaluating the user's easiness in interacting with different dialogue strategies, rather than the efficiency of the dialogue system in providing the required information. The success of a task-oriented dialogue system greatly depends on the ability of providing a meaningful match between user's expectations and system capabilities, and a good trade-off improves the user's effectiveness. The evaluation methodology requires three steps. The first step has the goal of individuating the different tokens and relations that constitute the user mental model of the task. Once tokens and relations are considered for designing one or more dialogue strategies, the evaluation enters its second step which is constituted by a between-group experiment. Each strategy is tried by a representative set of experimental subjects. The third step includes measuring user effectiveness in providing the spoken dialogue system with the information it needs to solve the task. The paper argues that the application of the three-steps evaluation method may increase our understanding of the user mental model of a task during early stages of development of a spoken language agent. Experimental data supporting this claim are reported.","PeriodicalId":118671,"journal":{"name":"IEEE Workshop on Automatic Speech Recognition and Understanding, 2001. ASRU '01.","volume":"474 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Workshop on Automatic Speech Recognition and Understanding, 2001. ASRU '01.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASRU.2001.1034630","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Summary form only given. The need for accurate and flexible evaluation frameworks for spoken and multimodal dialogue systems has become crucial. In the early design phases of a spoken dialogue system, it is worthwhile to evaluate the ease with which users interact with different dialogue strategies, rather than the efficiency of the system in providing the required information. The success of a task-oriented dialogue system depends largely on its ability to provide a meaningful match between user expectations and system capabilities, and a good trade-off improves the user's effectiveness. The evaluation methodology comprises three steps. The first step aims to identify the tokens and relations that constitute the user's mental model of the task. Once these tokens and relations have been used to design one or more dialogue strategies, the evaluation enters its second step, a between-group experiment in which each strategy is tried by a representative set of experimental subjects. The third step measures user effectiveness in providing the spoken dialogue system with the information it needs to solve the task. The paper argues that applying this three-step evaluation method may increase our understanding of the user's mental model of a task during the early stages of development of a spoken language agent. Experimental data supporting this claim are reported.
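The abstract does not specify how "user effectiveness" is computed in the third step, but a minimal sketch can make the between-group comparison concrete. In the Python sketch below, all names and the metric itself (the share of required task concepts the user conveys, normalized by the number of user turns) are illustrative assumptions, not the paper's actual measure.

```python
# Hypothetical sketch of the third evaluation step: comparing "user effectiveness"
# across dialogue strategies in a between-group design. The metric and all field
# names are illustrative assumptions; the paper does not give a formula.

from collections import defaultdict
from dataclasses import dataclass
from statistics import mean


@dataclass
class SessionLog:
    strategy: str           # dialogue strategy tried by this subject's group
    concepts_needed: int    # task tokens/relations the system must acquire
    concepts_acquired: int  # tokens/relations correctly conveyed by the user
    user_turns: int         # user turns spent conveying them


def effectiveness(log: SessionLog) -> float:
    """Share of required task information conveyed, per user turn."""
    coverage = log.concepts_acquired / log.concepts_needed
    return coverage / log.user_turns


def compare_strategies(logs: list[SessionLog]) -> dict[str, float]:
    """Average effectiveness for each strategy over its group of subjects."""
    by_strategy: dict[str, list[float]] = defaultdict(list)
    for log in logs:
        by_strategy[log.strategy].append(effectiveness(log))
    return {name: mean(scores) for name, scores in by_strategy.items()}


# Toy between-group data: each subject interacts with exactly one strategy.
logs = [
    SessionLog("system-initiative", 5, 5, 8),
    SessionLog("system-initiative", 5, 4, 7),
    SessionLog("mixed-initiative", 5, 5, 5),
    SessionLog("mixed-initiative", 5, 4, 6),
]
print(compare_strategies(logs))
```

Because each subject sees only one strategy (between-group design), the per-strategy averages are independent samples and can be compared directly, for example with a two-sample significance test once enough subjects per group are available.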