{"title":"Assessing Post-hoc Explainability of the BKT Algorithm","authors":"Tongyu Zhou, Haoyu Sheng, I. Howley","doi":"10.1145/3375627.3375856","DOIUrl":null,"url":null,"abstract":"As machine intelligence is increasingly incorporated into educational technologies, it becomes imperative for instructors and students to understand the potential flaws of the algorithms on which their systems rely. This paper describes the design and implementation of an interactive post-hoc explanation of the Bayesian Knowledge Tracing algorithm which is implemented in learning analytics systems used across the United States. After a user-centered design process to smooth out interaction design difficulties, we ran a controlled experiment to evaluate whether the interactive or static version of the explainable led to increased learning. Our results reveal that learning about an algorithm through an explainable depends on users' educational background. For other contexts, designers of post-hoc explainables must consider their users' educational background to best determine how to empower more informed decision-making with AI-enhanced systems.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"43 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3375627.3375856","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7
Abstract
As machine intelligence is increasingly incorporated into educational technologies, it becomes imperative for instructors and students to understand the potential flaws of the algorithms on which their systems rely. This paper describes the design and implementation of an interactive post-hoc explanation of the Bayesian Knowledge Tracing algorithm, which is implemented in learning analytics systems used across the United States. After a user-centered design process to resolve interaction design difficulties, we ran a controlled experiment to evaluate whether the interactive or static version of the explainable led to increased learning. Our results reveal that how much users learn about an algorithm through an explainable depends on their educational background. In other contexts as well, designers of post-hoc explainables must consider their users' educational background to best determine how to empower more informed decision-making with AI-enhanced systems.
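For readers unfamiliar with the algorithm being explained, the following is a minimal sketch of the standard four-parameter Bayesian Knowledge Tracing update (per Corbett and Anderson's original formulation). The abstract does not specify the paper's exact implementation or parameter values, so the function name, parameter names, and numbers below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the textbook BKT update: a Bayesian posterior on
# skill mastery given one observed response, followed by the learning
# transition. All parameter values here are illustrative placeholders.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """Return the updated P(known) after one practice opportunity."""
    if correct:
        # P(L | correct) = P(L)(1 - slip) / [P(L)(1 - slip) + (1 - P(L)) * guess]
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # P(L | incorrect) = P(L) * slip / [P(L) * slip + (1 - P(L))(1 - guess)]
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Learning between opportunities: P(L') = posterior + (1 - posterior) * P(T)
    return posterior + (1 - posterior) * p_transit

# Example: trace estimated mastery across a sequence of responses.
p = 0.3  # illustrative prior P(L0)
for obs in [True, False, True, True]:
    p = bkt_update(p, obs)
    print(f"P(know) = {p:.3f}")
```

The slip and guess parameters are what make BKT's inferences non-obvious to end users (a correct answer does not imply mastery, nor an error its absence), which is the kind of behavior a post-hoc explainable of the algorithm would need to convey.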