Conversation-Based Assessments in Education: Design, Implementation, and Cognitive Walkthroughs for Usability Testing
S. Yildirim-Erbasli, O. Bulut, Carrie Demmans Epp, Yingqi Cui
Journal of Educational Technology Systems, 51(1). Published 2023-06-05. DOI: 10.1177/00472395231178943 (https://doi.org/10.1177/00472395231178943)
Abstract
Conversational agents have been widely used in education to support student learning, and there have been recent attempts to design and use them to conduct assessments (i.e., conversation-based assessments: CBA). In this study, we developed CBA with constructed-response and selected-response tests using Rasa, an artificial intelligence-based conversational framework. The CBA was deployed via Google Chat to support formative assessment. We evaluated (1) its performance in responding to students’ answers and (2) its usability through cognitive walkthroughs conducted by external evaluators. The CBA with constructed-response tests matched student responses to the appropriate conversation paths in most cases, whereas the CBA with selected-response tests showed a perfect match between the designed and implemented conversation behavior. A cognitive walkthrough of the CBA demonstrated its usability while also revealing several potential issues that could be improved. Although participating students did not experience these issues, we report them to help researchers, designers, and practitioners improve the assessment experience for students using CBA.
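The abstract does not include implementation details, but a CBA built with Rasa typically routes each student response to a conversation path via intent classification, with scoring and feedback handled in a custom action. The sketch below is a minimal, hypothetical illustration of that pattern using the Rasa SDK; the action name, intent names, and slot are assumptions for illustration, not the authors' actual code.

```python
# Hypothetical Rasa custom action showing how a constructed-response item
# could be scored and routed to a feedback path. The action, intent, and slot
# names are assumptions, not the implementation described in the paper.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher


class ActionCheckConstructedResponse(Action):
    def name(self) -> Text:
        return "action_check_constructed_response"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Rasa's NLU pipeline classifies the student's free-text answer into
        # an intent; the action branches on whether the "correct" intent matched.
        intent = tracker.latest_message.get("intent", {}).get("name")

        if intent == "answer_item1_correct":  # assumed intent name
            dispatcher.utter_message(text="That's right! Let's move on to the next question.")
            return [SlotSet("item1_score", 1)]

        # Any other intent (including fallback) is treated as incorrect and
        # triggers a formative-feedback conversation path instead.
        dispatcher.utter_message(
            text="Not quite. Take another look at the key concept and try again."
        )
        return [SlotSet("item1_score", 0)]
```

In a Rasa project, an action like this would be referenced from stories or rules that define the correct- and incorrect-answer conversation paths, which is the kind of response-to-path matching the study evaluated.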