Spyros Kousidis, C. Kennington, Timo Baumann, Hendrik Buschmeier, S. Kopp, David Schlangen
Holding non-co-located conversations while driving is dangerous (Horrey and Wickens, 2006; Strayer et al., 2006), much more so than conversations with physically present, “situated” interlocutors (Drews et al., 2004). In-car dialogue systems typically resemble non-co-located conversations more closely, and share their negative impact (Strayer et al., 2013). We implemented and tested a simple strategy for making in-car dialogue systems aware of the driving situation, by giving them the capability to interrupt themselves when a dangerous situation is detected, and to resume when it is over. We show that this improves both driving performance and recall of system-presented information, compared to a non-adaptive strategy.
Kousidis, S., Kennington, C., Baumann, T., Buschmeier, H., Kopp, S., & Schlangen, D. (2014). Situationally Aware In-Car Information Presentation Using Incremental Speech Generation: Safer, and More Effective. DM@EACL. doi:10.3115/v1/W14-0212
Navigation by blind people differs from navigation by sighted people, and it differs again when the blind person is recovering from getting lost. In this paper we focus on a qualitative analysis of dialogs, conducted over a mobile phone, between a lost blind person and a navigator. The research was carried out in two outdoor locations and one indoor location. The analysis revealed several areas where the dialog model must focus on detailed information, such as evaluating the instructions provided by the blind person and his/her ability to reliably locate navigation points.
Vystrcil, J., Maly, I., Balata, J., & Míkovec, Z. (2014). Navigation Dialog of Blind People: Recovery from Getting Lost. DM@EACL. doi:10.3115/v1/W14-0210
Mobile Internet access via smartphones puts demands on in-car infotainment systems, as more and more drivers want to access the Internet while driving. Spoken dialog systems support the user with less distracting interaction than visual/haptic-based dialog systems. To develop an intuitive and usable spoken dialog system, an extensive analysis of the interaction concept is necessary. We conducted a Wizard of Oz study to investigate how users carry out tasks that involve multiple applications in a speech-only, user-initiative infotainment system while driving. Results show that users are not aware of different applications and use anaphoric expressions in task switches. Speaking styles vary and depend on the type of task and the dialog state. Users interact efficiently and provide multiple semantic concepts in one utterance. This sets high demands for future spoken dialog systems.
Reichel, S., Ehrlich, U., Berton, A., & Weber, M. (2014). In-Car Multi-Domain Spoken Dialogs: A Wizard of Oz Study. DM@EACL. doi:10.3115/v1/W14-0201