EXAIT: Educational eXplainable Artificial Intelligent Tools for personalized learning

H. Ogata, B. Flanagan, Kyosuke Takami, Yiling Dai, Ryosuke Nakamoto, Kensuke Takii

Research and Practice in Technology Enhanced Learning. Published 2023-08-28. DOI: 10.58459/rptel.2024.19019
As artificial intelligence systems increasingly make high-stakes recommendations and decisions automatically in many facets of our lives, the use of explainable artificial intelligence to inform stakeholders of the reasons behind such systems' decisions has been gaining attention in a wide range of fields, including education. The field of education also has a long history of research into self-explanation, in which students explain the process by which they arrived at their answers. Self-explanation is recognized as a beneficial intervention for promoting metacognitive skills; however, it also has unexplored potential to give insight into the problems learners experience when they lack the required prerequisite knowledge and skills, or fail in applying them to the task at hand. While this aspect of self-explanation has been of interest to teachers, there is little research into using such information to inform educational AI systems. In this paper, we propose a system in which the student and the AI system explain to each other the reasons behind their decisions: the student self-explains their cognition during the answering process, and the system explains its recommendations in terms of its internal mechanisms and other abstract representations of its model algorithms.